1. Introduction
This paper proposes a self-contained design for a Standard C++ framework for managing asynchronous execution on generic execution contexts. It is based on the ideas in A Unified Executors Proposal for C++ and its companion papers.
1.1. Motivation
Today, C++ software is increasingly asynchronous and parallel, a trend that is likely to only continue going forward. Asynchrony and parallelism appear everywhere, from processor hardware interfaces, to networking, to file I/O, to GUIs, to accelerators. Every C++ domain and every platform needs to deal with asynchrony and parallelism, from scientific computing to video games to financial services, from the smallest mobile devices to your laptop to GPUs in the world’s fastest supercomputer.
While the C++ Standard Library has a rich set of concurrency primitives (std::jthread, std::mutex, std::barrier, etc.) and lower-level building blocks (std::atomic, etc.), we lack a Standard vocabulary and framework for asynchrony and parallelism. std::async/std::future/std::promise, C++11’s intended exposure for asynchrony, are inefficient, hard to use correctly, and lacking in genericity, making them unusable in many contexts. The C++17 parallel algorithms are an excellent start, but they are inherently synchronous and not composable.
This paper proposes a Standard C++ model for asynchrony, based around three key abstractions: schedulers, senders, and receivers, and a set of customizable asynchronous algorithms.
1.2. Priorities
- 
     Be composable and generic, allowing users to write code that can be used with many different types of execution contexts. 
- 
     Encapsulate common asynchronous patterns in customizable and reusable algorithms, so users don’t have to invent things themselves. 
- 
     Make it easy to be correct by construction. 
- 
     Support the diversity of execution contexts and execution agents, because not all execution agents are created equal; some are less capable than others, but not less important. 
- 
     Allow everything to be customized by an execution context, including transfer to other execution contexts, but don’t require that execution contexts customize everything. 
- 
     Care about all reasonable use cases, domains and platforms. 
- 
     Errors must be propagated, but error handling must not present a burden. 
- 
     Support cancellation, which is not an error. 
- 
     Have clear and concise answers for where things execute. 
- 
     Be able to manage and terminate the lifetimes of objects asynchronously. 
1.3. Examples: End User
In this section we demonstrate the end-user experience of asynchronous programming directly with the sender algorithms presented in this paper. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers for short explanations of the algorithms used in these code examples.
1.3.1. Hello world
using namespace std::execution;

scheduler auto sch = thread_pool.scheduler();                          // 1

sender auto begin = schedule(sch);                                     // 2
sender auto hi = then(begin, [] {                                      // 3
    std::cout << "Hello world! Have an int.";                          // 3
    return 13;                                                         // 3
});                                                                    // 3
sender auto add_42 = then(hi, [](int arg) { return arg + 42; });       // 4

auto [i] = this_thread::sync_wait(add_42).value();                     // 5
This example demonstrates the basics of schedulers, senders, and receivers:
- 
     First we need to get a scheduler from somewhere, such as a thread pool. A scheduler is a lightweight handle to an execution resource. 
- 
     To start a chain of work on a scheduler, we call § 4.20.1 execution::schedule, which returns a sender that completes on the scheduler. A sender describes asynchronous work and sends a signal (value, error, or stopped) to some recipient(s) when that work completes. 
- 
We use sender algorithms to produce senders and compose asynchronous work. § 4.21.2 execution::then is a sender adaptor that takes an input sender and a std::invocable, and calls the std::invocable on the signal sent by the input sender. The sender returned by then sends the result of that invocation. Here, the input sender came from schedule, so it sends no value (void); therefore our std::invocable takes no parameters. It returns an int, which will be sent onward.
- 
Now, we add another operation to the chain, again using § 4.21.2 execution::then. This time, we get sent a value - the int from the previous step. We add 42 to it, and the result is sent to the next recipient.
- 
Finally, we’re ready to submit the entire asynchronous pipeline and wait for its completion. Everything up until this point has been completely asynchronous; the work may not have even started yet. To ensure the work has started and then block pending its completion, we use § 4.22.2 this_thread::sync_wait, which returns a std::optional<std::tuple<...>> containing the sent values, returns an empty std::optional if the work was stopped, or throws if an error was sent.
1.3.2. Asynchronous inclusive scan
using namespace std::execution;

sender auto async_inclusive_scan(scheduler auto sch,                         // 2
                                 std::span<const double> input,             // 1
                                 std::span<double> output,                  // 1
                                 double init,                               // 1
                                 std::size_t tile_count)                    // 3
{
  std::size_t const tile_size = (input.size() + tile_count - 1) / tile_count;

  std::vector<double> partials(tile_count + 1);                             // 4
  partials[0] = init;                                                       // 4

  return transfer_just(sch, std::move(partials))                            // 5
       | bulk(tile_count,                                                   // 6
           [=](std::size_t i, std::vector<double>& partials) {              // 7
             auto start = i * tile_size;                                    // 8
             auto end   = std::min(input.size(), (i + 1) * tile_size);      // 8
             partials[i + 1] = *--std::inclusive_scan(begin(input) + start, // 9
                                                      begin(input) + end,   // 9
                                                      begin(output) + start); // 9
           })                                                               // 10
       | then(                                                              // 11
           [](std::vector<double>&& partials) {
             std::inclusive_scan(begin(partials), end(partials),            // 12
                                 begin(partials));                          // 12
             return std::move(partials);                                    // 13
           })
       | bulk(tile_count,                                                   // 14
           [=](std::size_t i, std::vector<double>& partials) {              // 14
             auto start = i * tile_size;                                    // 14
             auto end   = std::min(input.size(), (i + 1) * tile_size);      // 14
             std::for_each(begin(output) + start, begin(output) + end,      // 14
               [&](double& e) { e = partials[i] + e; });                    // 14
           })
       | then(                                                              // 15
           [=](std::vector<double>&& partials) {                            // 15
             return output;                                                 // 15
           });                                                              // 15
}
This example builds an asynchronous computation of an inclusive scan:
- 
It scans a sequence of doubles (represented as the std::span<const double> input) and stores the result in another sequence of doubles (represented as the std::span<double> output).
- 
     It takes a scheduler, which specifies what execution context the scan should be launched on. 
- 
It also takes a tile_count parameter that controls the number of execution agents that will be spawned to do the work.
- 
First we need to allocate temporary storage needed for the algorithm, which we’ll do with a std::vector, partials. We need one double of temporary storage for each execution agent we create.
- 
Next we’ll create our initial sender with § 4.20.3 execution::transfer_just. This sender will send the temporary storage, which we’ve moved into the sender. The sender has a completion scheduler of sch, meaning the work chained onto it will run on sch.
- 
Senders and sender adaptors support composition via operator|, similar to C++ ranges. We use operator| here to attach the next piece of work, which will spawn tile_count execution agents using § 4.21.9 execution::bulk.
- 
Each agent will call a std::invocable, passing it two arguments: the agent’s index i, a unique integer in [0, tile_count), and the value sent by the input sender - the temporary storage.
- 
     We start by computing the start and end of the range of input and output elements that this agent is responsible for, based on our agent index. 
- 
Then we do a sequential std::inclusive_scan over our subrange, storing the sum of this tile’s elements into its slot in partials.
- 
After all computation in that initial § 4.21.9 execution::bulk pass has completed, every one of the spawned execution agents will have written the sum of its elements into its slot in partials.
- 
Now we need to scan all of the values in partials so that we can use them as offsets for each tile. We do that next, on a single execution agent.
- 
§ 4.21.2 execution::then takes an input sender and a std::invocable, and calls the std::invocable with the value sent by the input sender. Inside our std::invocable, we call std::inclusive_scan over partials, which the input sender sends to us.
- 
Then we return partials, which the next phase will need.
- 
Finally we do another § 4.21.9 execution::bulk of the same shape as before. In this § 4.21.9 execution::bulk, we use the scanned values in partials to add each tile’s offset to its output elements.
- 
async_inclusive_scan returns a sender that sends the output std::span<double>. A consumer of the algorithm can chain further work onto that sender. At the point at which async_inclusive_scan returns, the computation may not have started; in fact, it may not even have been scheduled for execution yet.
1.3.3. Asynchronous dynamically-sized read
using namespace std::execution;

sender_of<std::size_t> auto async_read(                                   // 1
    sender_of<std::span<std::byte>> auto buffer,                          // 1
    auto handle);                                                         // 1

struct dynamic_buffer {                                                   // 3
  std::unique_ptr<std::byte[]> data;                                      // 3
  std::size_t size;                                                       // 3
};                                                                        // 3

sender_of<dynamic_buffer> auto async_read_array(auto handle) {            // 2
  return just(dynamic_buffer{})                                           // 4
       | let_value([handle](dynamic_buffer& buf) {                        // 5
           return just(std::as_writable_bytes(std::span(&buf.size, 1)))   // 6
                | async_read(handle)                                      // 7
                | then(                                                   // 8
                    [&buf](std::size_t bytes_read) {                      // 9
                      assert(bytes_read == sizeof(buf.size));             // 10
                      buf.data = std::make_unique<std::byte[]>(buf.size); // 11
                      return std::span(buf.data.get(), buf.size);         // 12
                    })
                | async_read(handle)                                      // 13
                | then(
                    [&buf](std::size_t bytes_read) {
                      assert(bytes_read == buf.size);                     // 14
                      return std::move(buf);                              // 15
                    });
         });
}
This example demonstrates a common asynchronous I/O pattern - reading a payload of a dynamic size by first reading the size, then reading the number of bytes specified by the size:
- 
async_read is a pipeable sender adaptor. It takes a sender that sends an input buffer in the form of a std::span<std::byte>, and an I/O handle; it asynchronously reads into the buffer, up to the size of the std::span, and returns a sender that sends the number of bytes read once the read completes.
- 
async_read_array takes an I/O handle and reads a size from it, followed by a buffer of that many bytes. It returns a sender that sends a dynamic_buffer object owning the data that was read.
- 
dynamic_buffer is an aggregate struct that contains a std::unique_ptr<std::byte[]> and a size.
- 
The first thing we do inside of async_read_array is create a sender that sends a new, empty dynamic_buffer object, and then attach more work to the pipeline using operator| composition.
- 
We need the lifetime of this dynamic_buffer object to last for the entire pipeline, so we use let_value, which takes an input sender and a std::invocable that must itself return a sender. let_value sends the value from the input sender to the std::invocable; critically, the lifetime of the sent object lasts until the sender returned by the std::invocable completes.
- 
Inside of the let_value std::invocable, we have the rest of our logic. First, we want to initiate an async_read of the buffer size. To do that, we need to send a std::span pointing to buf.size.
- 
We chain the async_read onto that sender with operator|.
- 
Next, we pipe a std::invocable that will be invoked after the async_read completes.
- 
That std::invocable gets sent the number of bytes read.
- 
     We need to check that the number of bytes read is what we expected. 
- 
     Now that we have read the size of the data, we can allocate storage for it. 
- 
We return a std::span<std::byte> over the newly allocated storage, which will be sent to the recipient of the sender returned by the std::invocable.
- 
And that recipient will be another async_read, which will read the data itself.
- 
     Once the data has been read, in another § 4.21.2 execution::then, we confirm that we read the right number of bytes. 
- 
Finally, we move out of and return our dynamic_buffer object. It will be sent by the sender returned by async_read_array, and the caller can attach further work to that sender to use the data in the buffer.
1.4. Asynchronous Windows socket recv 
To get a better feel for how this interface might be used by low-level operations, see this example implementation of a cancellable asynchronous Windows socket recv() operation built on overlapped I/O:
struct operation_base : WSAOVERLAPPED {
  using completion_fn = void(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept;

  // Assume IOCP event loop will call this when this OVERLAPPED structure is dequeued.
  completion_fn* completed;
};

template <typename Receiver>
struct recv_op : operation_base {
  recv_op(SOCKET s, void* data, size_t len, Receiver r)
    : receiver(std::move(r))
    , sock(s) {
    this->Internal = 0;
    this->InternalHigh = 0;
    this->Offset = 0;
    this->OffsetHigh = 0;
    this->hEvent = NULL;
    this->completed = &recv_op::on_complete;
    buffer.len = len;
    buffer.buf = static_cast<CHAR*>(data);
  }

  friend void tag_invoke(std::tag_t<std::execution::start>, recv_op& self) noexcept {
    // Avoid even calling WSARecv() if operation already cancelled
    auto st = std::execution::get_stop_token(std::execution::get_env(self.receiver));
    if (st.stop_requested()) {
      std::execution::set_stopped(std::move(self.receiver));
      return;
    }

    // Store and cache result here in case it changes during execution
    const bool stopPossible = st.stop_possible();
    if (!stopPossible) {
      self.ready.store(true, std::memory_order_relaxed);
    }

    // Launch the operation
    DWORD bytesTransferred = 0;
    DWORD flags = 0;
    int result = WSARecv(self.sock, &self.buffer, 1, &bytesTransferred, &flags,
                         static_cast<WSAOVERLAPPED*>(&self), NULL);
    if (result == SOCKET_ERROR) {
      int errorCode = WSAGetLastError();
      if (errorCode != WSA_IO_PENDING) {
        if (errorCode == WSA_OPERATION_ABORTED) {
          std::execution::set_stopped(std::move(self.receiver));
        } else {
          std::execution::set_error(std::move(self.receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
        return;
      }
    } else {
      // Completed synchronously (assuming FILE_SKIP_COMPLETION_PORT_ON_SUCCESS has been set)
      std::execution::set_value(std::move(self.receiver), bytesTransferred);
      return;
    }

    // If we get here then operation has launched successfully and will complete asynchronously.
    // May be completing concurrently on another thread already.
    if (stopPossible) {
      // Register the stop callback
      self.stopCallback.emplace(std::move(st), cancel_cb{self});

      // Mark start() as finished; if the I/O has already completed on another
      // thread in the meantime, we must complete the operation here ourselves.
      if (self.ready.load(std::memory_order_acquire) ||
          self.ready.exchange(true, std::memory_order_acq_rel)) {
        // Already completed on another thread
        self.stopCallback.reset();

        BOOL ok = WSAGetOverlappedResult(self.sock, (WSAOVERLAPPED*)&self,
                                         &bytesTransferred, FALSE, &flags);
        if (ok) {
          std::execution::set_value(std::move(self.receiver), bytesTransferred);
        } else {
          int errorCode = WSAGetLastError();
          std::execution::set_error(std::move(self.receiver),
                                    std::error_code(errorCode, std::system_category()));
        }
      }
    }
  }

  struct cancel_cb {
    recv_op& op;
    void operator()() noexcept {
      CancelIoEx((HANDLE)op.sock, (OVERLAPPED*)(WSAOVERLAPPED*)&op);
    }
  };

  static void on_complete(operation_base* op, DWORD bytesTransferred, int errorCode) noexcept {
    recv_op& self = *static_cast<recv_op*>(op);
    if (self.ready.load(std::memory_order_acquire) ||
        self.ready.exchange(true, std::memory_order_acq_rel)) {
      // Unsubscribe any stop-callback so we know that CancelIoEx() is not accessing 'op'
      // any more
      self.stopCallback.reset();

      if (errorCode == 0) {
        std::execution::set_value(std::move(self.receiver), bytesTransferred);
      } else {
        std::execution::set_error(std::move(self.receiver),
                                  std::error_code(errorCode, std::system_category()));
      }
    }
  }

  Receiver receiver;
  SOCKET sock;
  WSABUF buffer;
  std::optional<typename stop_callback_type_t<Receiver>::template callback_type<cancel_cb>> stopCallback;
  std::atomic<bool> ready{false};
};

struct recv_sender {
  SOCKET sock;
  void* data;
  size_t len;

  template <typename Receiver>
  friend recv_op<Receiver> tag_invoke(std::tag_t<std::execution::connect>,
                                      const recv_sender& s, Receiver r) {
    return recv_op<Receiver>{s.sock, s.data, s.len, std::move(r)};
  }
};

recv_sender async_recv(SOCKET s, void* data, size_t len) {
  return recv_sender{s, data, len};
}
1.4.1. More end-user examples
1.4.1.1. Sudoku solver
This example comes from Kirk Shoop, who ported an example from TBB’s documentation to sender/receiver in his fork of the libunifex repo. It is a Sudoku solver that uses a configurable number of threads to explore the search space for solutions.
The sender/receiver-based Sudoku solver can be found here. Some things that are worth noting about Kirk’s solution:
- 
Although it schedules asynchronous work onto a thread pool, and each unit of work will schedule more work, its use of structured concurrency patterns makes reference counting unnecessary. The solution does not make use of shared_ptr or any other form of reference counting.
- 
     In addition to eliminating the need for reference counting, the use of structured concurrency makes it easy to ensure that resources are cleaned up on all code paths. In contrast, the TBB example that inspired this one leaks memory. 
For comparison, the TBB-based Sudoku solver can be found here.
1.4.1.2. File copy
This example also comes from Kirk Shoop. It uses sender/receiver to recursively copy the files in a directory tree, and demonstrates how sender/receiver can be used to do I/O, using a scheduler that schedules work on Linux’s io_uring.
As with the Sudoku example, this example obviates the need for reference counting by employing structured concurrency. It uses iteration with an upper limit to avoid having too many open file handles.
You can find the example here.
1.4.1.3. Echo server
Dietmar Kuehl has a hobby project that implements networking APIs on top of sender/receiver. He recently implemented an echo server as a demo. His echo server code can be found here.
Below, I show part of the echo server code. This code is executed for each client that connects to the echo server. In a loop, it reads input from a socket and echoes the input back to the same socket. All of this, including the loop, is implemented with generic async algorithms.
outstanding.start(
    EX::repeat_effect_until(
        EX::let_value(
            NN::async_read_some(ptr->d_socket,
                                context.scheduler(),
                                NN::buffer(ptr->d_buffer))
          | EX::then([ptr](::std::size_t n) {
                ::std::cout << "read='" << ::std::string_view(ptr->d_buffer, n) << "'\n";
                ptr->d_done = n == 0;
                return n;
            }),
            [&context, ptr](::std::size_t n) {
                return NN::async_write_some(ptr->d_socket,
                                            context.scheduler(),
                                            NN::buffer(ptr->d_buffer, n));
            })
      | EX::then([](auto&&...) {}),
        [owner = ::std::move(owner)] { return owner->d_done; }));
In this code, EX is a namespace alias for std::execution and NN is an alias for the namespace of the networking library.
This is a good example of seamless composition of async I/O functions with non-I/O operations. And by composing the senders in this structured way, all the state for the composite operation -- the socket, the buffer, and the loop state -- is kept alive exactly as long as the operation needs it.
1.5. Examples: Algorithms
In this section we show a few simple sender/receiver-based algorithm implementations.
1.5.1. then 
namespace exec = std::execution;

template <class R, class F>
class _then_receiver : exec::receiver_adaptor<_then_receiver<R, F>, R> {
  friend exec::receiver_adaptor<_then_receiver, R>;
  F f_;

  // Customize set_value by invoking the callable and passing the result to the inner receiver
  template <class... As>
  void set_value(As&&... as) && noexcept try {
    exec::set_value(std::move(*this).base(), std::invoke((F&&) f_, (As&&) as...));
  } catch (...) {
    exec::set_error(std::move(*this).base(), std::current_exception());
  }

 public:
  _then_receiver(R r, F f)
    : exec::receiver_adaptor<_then_receiver, R>{std::move(r)}
    , f_(std::move(f)) {}
};

template <exec::sender S, class F>
struct _then_sender {
  S s_;
  F f_;

  template <class... Args>
  using _set_value_t =
      exec::completion_signatures<exec::set_value_t(std::invoke_result_t<F, Args...>)>;

  // Compute the completion signatures
  template <class Env>
  friend auto tag_invoke(exec::get_completion_signatures_t, _then_sender&&, Env)
    -> exec::make_completion_signatures<
         S, Env,
         exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
         _set_value_t>;

  // Connect:
  template <exec::receiver R>
  friend auto tag_invoke(exec::connect_t, _then_sender&& self, R r)
    -> exec::connect_result_t<S, _then_receiver<R, F>> {
    return exec::connect((S&&) self.s_, _then_receiver<R, F>{(R&&) r, (F&&) self.f_});
  }
};

template <exec::sender S, class F>
exec::sender auto then(S s, F f) {
  return _then_sender<S, F>{(S&&) s, (F&&) f};
}
This code builds a then algorithm that transforms the value(s) sent by the input sender with a transformation function; the result of that invocation becomes the new value. The other signals (error and stopped) are passed through unchanged.
In detail, it does the following:
- 
Defines a receiver in terms of execution::receiver_adaptor that aggregates another receiver and an invocable, and which:
Provides (via receiver_adaptor) a constrained tag_invoke customization for the value channel, which invokes the callable and forwards the result to the adapted receiver;
- 
Provides (again via receiver_adaptor) constrained overloads of tag_invoke that pass all the other signals through unchanged.
The tag_invoke overloads themselves are supplied by execution::receiver_adaptor, which dispatches the value channel to _then_receiver::set_value.
- 
Defines a sender that aggregates another sender and the invocable, which defines a tag_invoke customization of std::execution::connect that wraps the incoming receiver in the receiver above and passes it, along with the wrapped sender, to std::execution::connect, and a tag_invoke customization of get_completion_signatures that declares how this sender may complete.
1.5.2. retry 
using namespace std;
namespace exec = execution;

template <class From, class To>
using _decays_to = same_as<decay_t<From>, To>;

// _conv needed so we can emplace construct non-movable types into
// a std::optional.
template <invocable F>
  requires is_nothrow_move_constructible_v<F>
struct _conv {
  F f_;
  explicit _conv(F f) noexcept : f_((F&&) f) {}
  operator invoke_result_t<F>() && {
    return ((F&&) f_)();
  }
};

template <class S, class R>
struct _op;

// pass through all customizations except set_error, which retries the operation.
template <class S, class R>
struct _retry_receiver : exec::receiver_adaptor<_retry_receiver<S, R>> {
  _op<S, R>* o_;

  R&& base() && noexcept { return (R&&) o_->r_; }
  const R& base() const & noexcept { return o_->r_; }

  explicit _retry_receiver(_op<S, R>* o) : o_(o) {}

  void set_error(auto&&) && noexcept {
    o_->_retry(); // This causes the op to be retried
  }
};

// Hold the nested operation state in an optional so we can
// re-construct and re-start it if the operation fails.
template <class S, class R>
struct _op {
  S s_;
  R r_;
  optional<exec::connect_result_t<S&, _retry_receiver<S, R>>> o_;

  _op(S s, R r) : s_((S&&) s), r_((R&&) r), o_{_connect()} {}
  _op(_op&&) = delete;

  auto _connect() noexcept {
    return _conv{[this] {
      return exec::connect(s_, _retry_receiver<S, R>{this});
    }};
  }
  void _retry() noexcept try {
    o_.emplace(_connect()); // potentially throwing
    exec::start(*o_);
  } catch (...) {
    exec::set_error((R&&) r_, std::current_exception());
  }
  friend void tag_invoke(exec::start_t, _op& o) noexcept {
    exec::start(*o.o_);
  }
};

template <class S>
struct _retry_sender {
  S s_;
  explicit _retry_sender(S s) : s_((S&&) s) {}

  template <class... Ts>
  using _value_t = exec::completion_signatures<exec::set_value_t(Ts...)>;
  template <class>
  using _error_t = exec::completion_signatures<>;

  // Declare the signatures with which this sender can complete
  template <class Env>
  friend auto tag_invoke(exec::get_completion_signatures_t, const _retry_sender&, Env)
    -> exec::make_completion_signatures<
         S&, Env,
         exec::completion_signatures<exec::set_error_t(std::exception_ptr)>,
         _value_t, _error_t>;

  template <exec::receiver R>
  friend _op<S, R> tag_invoke(exec::connect_t, _retry_sender&& self, R r) {
    return {(S&&) self.s_, (R&&) r};
  }
};

template <exec::sender S>
exec::sender auto retry(S s) {
  return _retry_sender{(S&&) s};
}
The retry algorithm takes a multi-shot sender and causes it to be retried on error; value and stopped signals are passed through unchanged.
This example does the following:
- 
Defines a _conv utility that permits emplace construction of non-movable types into a std::optional.
- 
Defines a _retry_receiver that passes through every signal except set_error, which instead calls _retry() on the operation state.
- 
Defines an operation state that aggregates the input sender and receiver, and declares storage for the nested operation state in an optional, connected with a _retry_receiver so it can be reinitialized.
- 
Starting the operation state dispatches to start on the nested operation state.
- 
The _retry() member function reinitializes the inner operation state by connecting the input sender to a fresh _retry_receiver.
- 
After reinitializing the inner operation state, _retry() calls start on it, causing the operation to be retried.
- 
Defines a _retry_sender that customizes connect to return an object of the operation state type described above.
- 
_retry_sender also customizes get_completion_signatures to describe the ways this sender may complete when executed in a particular execution environment.
1.6. Examples: Schedulers
In this section we look at some schedulers of varying complexity.
1.6.1. Inline scheduler
class inline_scheduler {
  template <class R>
  struct _op {
    [[no_unique_address]] R rec_;

    friend void tag_invoke(std::execution::start_t, _op& op) noexcept try {
      std::execution::set_value((R&&) op.rec_);
    } catch (...) {
      std::execution::set_error((R&&) op.rec_, std::current_exception());
    }
  };

  struct _sender {
    using completion_signatures =
        std::execution::completion_signatures<
            std::execution::set_value_t(),
            std::execution::set_error_t(std::exception_ptr)>;

    template <class R>
    friend auto tag_invoke(std::execution::connect_t, _sender, R&& rec)
      noexcept(std::is_nothrow_constructible_v<std::remove_cvref_t<R>, R>)
      -> _op<std::remove_cvref_t<R>> {
      return {(R&&) rec};
    }
  };

  friend _sender tag_invoke(std::execution::schedule_t, const inline_scheduler&) noexcept {
    return {};
  }

 public:
  inline_scheduler() = default;
  bool operator==(const inline_scheduler&) const noexcept = default;
};
The inline scheduler is a trivial scheduler that completes immediately and synchronously on the thread that calls std::execution::start on the operation state produced by its sender.
Although not a particularly useful scheduler, it serves to illustrate the basics of implementing one. The inline_scheduler:
- 
Customizes execution::schedule to return an instance of the sender type _sender.
- 
The _sender type models the sender concept and declares that it sends no values, may send an exception_ptr as an error, and never calls set_stopped. This metadata is expressed with the execution::completion_signatures utility.
- 
The _sender type customizes execution::connect to accept a receiver of any type and return an instance of the operation state type _op holding the receiver by value.
- 
The operation state customizes std::execution::start so that it calls std::execution::set_value on the receiver; if that throws, the exception is forwarded to the receiver via std::execution::set_error as an exception_ptr. A usage sketch follows.
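For illustration only (not part of the proposed wording), a minimal usage sketch of the inline_scheduler above, assuming the proposed schedule, then, and this_thread::sync_wait facilities:

namespace ex = std::execution;

// Everything below runs synchronously on the calling thread.
inline_scheduler sch;
auto [v] = this_thread::sync_wait(
               ex::then(ex::schedule(sch), [] { return 42; }))
               .value();
// v == 42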
1.6.2. Single thread scheduler
This example shows how to create a scheduler for an execution context that consists of a single thread. It is implemented in terms of a lower-level execution context called std::execution::run_loop.
class single_thread_context {
  std::execution::run_loop loop_;
  std::thread thread_;

 public:
  single_thread_context()
    : loop_()
    , thread_([this] { loop_.run(); }) {}

  ~single_thread_context() {
    loop_.finish();
    thread_.join();
  }

  auto get_scheduler() noexcept {
    return loop_.get_scheduler();
  }

  std::thread::id get_thread_id() const noexcept {
    return thread_.get_id();
  }
};
The single_thread_context owns a run_loop and a thread that drives it. In its destructor, it tells the run_loop to finish and then joins the thread, blocking until the loop drains.
The interesting bits are in the execution::run_loop execution context, which is proposed as part of this paper.
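A brief usage sketch of the single_thread_context above (illustrative only):

namespace ex = std::execution;

single_thread_context ctx;
ex::scheduler auto sch = ctx.get_scheduler();
auto [tid] = this_thread::sync_wait(
                 ex::then(ex::schedule(sch),
                          [] { return std::this_thread::get_id(); }))
                 .value();
assert(tid == ctx.get_thread_id()); // the work ran on the context's single thread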
1.7. Examples: Server theme
In this section we look at some examples of how one would use senders to implement an HTTP server. The examples ignore the low-level details of the HTTP server and look at how senders can be combined to achieve the goals of the project.
General application context:
- 
     server application that processes images 
- 
     execution contexts: - 
       1 dedicated thread for network I/O 
- 
       N worker threads used for CPU-intensive work 
- 
       M threads for auxiliary I/O 
- 
       optional GPU context that may be used on some types of servers 
 
- 
     all parts of the applications can be asynchronous 
- 
     no locks shall be used in user code 
1.7.1. Composability with execution::let_*
   Example context:
- 
     we are looking at the flow of processing an HTTP request and sending back the response 
- 
show how one can break the (slightly complex) flow into steps with execution::let_*
- 
     different phases of processing HTTP requests are broken down into separate concerns 
- 
     each part of the processing might use different execution contexts (details not shown in this example) 
- 
error handling is generic, regardless of which component fails; we always send the right response to the clients
Goals:
- 
     show how one can break more complex flows into steps with let_* functions 
- 
exemplify the use of let_value, let_error, let_stopped, and just
The example shows how one can separate out the concerns for interpreting requests, validating requests, running the main logic for handling the request, generating error responses, handling cancellation and sending the response back to the client. They are all different phases in the application, and can be joined together with the let_* algorithms:

namespace ex = std::execution;

// Returns a sender that yields an http_request object for an incoming request
ex::sender auto schedule_request_start(read_requests_ctx ctx) {...}
// Sends a response back to the client; yields a void signal on success
ex::sender auto send_response(const http_response& resp) {...}
// Validate that the HTTP request is well-formed; forwards the request on success
ex::sender auto validate_request(const http_request& req) {...}

// Handle the request; main application logic
ex::sender auto handle_request(const http_request& req) {
  //...
  return ex::just(http_response{200, result_body});
}

// Transforms server errors into responses to be sent to the client
ex::sender auto error_to_response(std::exception_ptr err) {
  try {
    std::rethrow_exception(err);
  } catch (const std::invalid_argument& e) {
    return ex::just(http_response{404, e.what()});
  } catch (const std::exception& e) {
    return ex::just(http_response{500, e.what()});
  } catch (...) {
    return ex::just(http_response{500, "Unknown server error"});
  }
}
// Transforms cancellation of the server into responses to be sent to the client
ex::sender auto stopped_to_response() {
  return ex::just(http_response{503, "Service temporarily unavailable"});
}
//...
// The whole flow for transforming incoming requests into responses
ex::sender auto snd =
    // get a sender when a new request comes
    schedule_request_start(the_read_requests_ctx)
    // make sure the request is valid; throw if not
  | ex::let_value(validate_request)
    // process the request in a function that may be using a different execution context
  | ex::let_value(handle_request)
    // If there are errors transform them into proper responses
  | ex::let_error(error_to_response)
    // If the flow is cancelled, send back a proper response
  | ex::let_stopped(stopped_to_response)
    // write the result back to the client
  | ex::let_value(send_response)
    // done
  ;
// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The flow is composed with the let_* algorithms. All our functions return senders, so they can be passed directly to let_value, let_error, and let_stopped, which expect invocables that return senders.
Also, because of using let_*, error handling and cancellation handling appear in the main pipeline itself rather than being scattered throughout the code.
1.7.2. Moving between execution contexts with execution::on and execution::transfer
   Example context:
- 
     reading data from the socket before processing the request 
- 
     reading of the data is done on the I/O context 
- 
     no processing of the data needs to be done on the I/O context 
Goals:
- 
     show how one can change the execution context 
- 
exemplify the use of on and transfer
namespace ex = std::execution;

size_t legacy_read_from_socket(int sock, char* buffer, size_t buffer_len) {}
void process_read_data(const char* read_data, size_t read_len) {}
//...

// A sender that just calls the legacy read function
auto snd_read = ex::just(sock, buf, buf_len) | ex::then(legacy_read_from_socket);
// The entire flow
auto snd =
    // start by reading data on the I/O thread
    ex::on(io_sched, std::move(snd_read))
    // do the processing on the worker threads pool
  | ex::transfer(work_sched)
    // process the incoming data (on worker threads)
  | ex::then([buf](int read_len) { process_read_data(buf, read_len); })
    // done
  ;
// execute the whole flow asynchronously
ex::start_detached(std::move(snd));
The example assumes that we need to wrap some legacy code for reading from sockets, and handle the execution context switching. (This style of reading from a socket may not be the most efficient one, but it works for our purposes.) For performance reasons, the reading from the socket needs to be done on the I/O thread, and all the processing needs to happen on a work-specific execution context (i.e., a thread pool).
Calling ex::on(io_sched, std::move(snd_read)) ensures that the legacy read is started on the I/O scheduler.
The completion signal will be issued in the I/O execution context, so we have to move it to the work thread pool.
This is achieved with the help of the transfer algorithm: everything chained after the transfer(work_sched) step executes on the work thread pool.
The reader should notice the difference between on and transfer: on ensures that the sender given to it is started on the specified scheduler, while transfer switches execution to the specified scheduler for whatever work is chained after it.
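To make the distinction concrete, here is a minimal sketch (illustrative only; read_from_socket and process are hypothetical stand-ins for the functions used above):

namespace ex = std::execution;

// on: ensures the nested sender is *started* on io_sched.
ex::sender auto read = ex::on(io_sched, ex::just() | ex::then(read_from_socket));

// transfer: whatever is chained *after* it runs on work_sched.
ex::sender auto flow = std::move(read)
                     | ex::transfer(work_sched)
                     | ex::then(process);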
1.8. What this proposal is not
This paper is not a patch on top of A Unified Executors Proposal for C++; we are not asking to update the existing paper, we are asking to retire it in favor of this paper, which is already self-contained; any example code within this paper can be written in Standard C++, without the need to standardize any further facilities.
This paper is not an alternative design to A Unified Executors Proposal for C++; rather, we have taken the design in the current executors paper, and applied targeted fixes to allow it to fulfill the promises of the sender/receiver model, as well as provide all the facilities we consider essential when writing user code using standard execution concepts; we have also applied the guidance of removing one-way executors from the paper entirely, and instead provided an algorithm based around senders that serves the same purpose.
1.9. Design changes from P0443
- 
The executor concept has been removed, and the functionality it provided is now expressed in terms of schedulers and senders.
- 
     Properties are not included in this paper. We see them as a possible future extension, if the committee gets more comfortable with them. 
- 
     Senders now advertise what scheduler, if any, their evaluation will complete on. 
- 
     The places of execution of user code in P0443 weren’t precisely defined, whereas they are in this paper. See § 4.5 Senders can propagate completion schedulers. 
- 
     P0443 did not propose a suite of sender algorithms necessary for writing sender code; this paper does. See § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers. 
- 
     P0443 did not specify the semantics of variously qualified connect 
- 
     This paper extends the sender traits/typed sender design to support typed senders whose value/error types depend on type information provided late via the receiver. 
- 
Support for untyped senders is dropped; the typed_sender concept is renamed sender, and the sender_traits mechanism is replaced with completion_signatures_of_t.
- 
     Specific type erasure facilities are omitted, as per LEWG direction. Type erasure facilities can be built on top of this proposal, as discussed in § 5.9 Ranges-style CPOs vs tag_invoke. 
- 
     A specific thread pool implementation is omitted, as per LEWG direction. 
- 
     Some additional utilities are added: - 
       run_loop 
- 
       receiver_adaptor 
- 
completion_signatures and make_completion_signatures
1.10. Prior art
This proposal builds upon and learns from years of prior art with asynchronous and parallel programming frameworks in C++. In this section, we discuss async abstractions that have previously been suggested as a possible basis for asynchronous algorithms and why they fall short.
1.10.1. Futures
A future is a handle to work that has already been scheduled for execution. It is one end of a communication channel; the other end is a promise, used to receive the result from the concurrent operation and to communicate it to the future.
Futures, as traditionally realized, require the dynamic allocation and management of a shared state, synchronization, and typically type-erasure of work and continuation. Many of these costs are inherent in the nature of "future" as a handle to work that is already scheduled for execution. These expenses rule out the future abstraction for many uses and make it a poor choice for a basis of a generic mechanism.
1.10.2. Coroutines
C++20 coroutines are frequently suggested as a basis for asynchronous algorithms. It’s fair to ask why, if we added coroutines to C++, are we suggesting the addition of a library-based abstraction for asynchrony. Certainly, coroutines come with huge syntactic and semantic advantages over the alternatives.
Although coroutines are lighter weight than futures, coroutines suffer many of the same problems. Since they typically start suspended, they can avoid synchronizing the chaining of dependent work. However, in many cases, coroutine frames require an unavoidable dynamic allocation and indirect function calls. This is done to hide the layout of the coroutine frame from the C++ type system, which in turn makes possible the separate compilation of coroutines and certain compiler optimizations, such as optimization of the coroutine frame size.
Those advantages come at a cost, though. Because of the dynamic allocation of coroutine frames, coroutines in embedded or heterogeneous environments, which often lack support for dynamic allocation, require great attention to detail. And the allocations and indirections tend to complicate the job of the inliner, often resulting in sub-optimal codegen.
The coroutine language feature mitigates these shortcomings somewhat with the HALO optimization Halo: coroutine Heap Allocation eLision Optimization: the joint response, which leverages existing compiler optimizations such as allocation elision and devirtualization to inline the coroutine, completely eliminating the runtime overhead. However, HALO requires a sophisticated compiler, and a fair number of stars need to align for the optimization to kick in. In our experience, more often than not in real-world code today’s compilers are not able to inline the coroutine, resulting in allocations and indirections in the generated code.
In a suite of generic async algorithms that are expected to be callable from hot code paths, the extra allocations and indirections are a deal-breaker. It is for these reasons that we consider coroutines a poor choice for a basis of all standard async.
1.10.3. Callbacks
Callbacks are the oldest, simplest, most powerful, and most efficient mechanism for creating chains of work, but suffer problems of their own. Callbacks must propagate either errors or values. This simple requirement yields many different interface possibilities. The lack of a standard callback shape obstructs generic design.
Additionally, few of these possibilities accommodate cancellation signals when the user requests upstream work to stop and clean up.
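For illustration only, here are two hypothetical but equally reasonable callback shapes that cannot interoperate without glue code (the names and types below are invented for exposition):

#include <cstddef>
#include <exception>
#include <functional>
#include <system_error>

struct socket_t {}; // placeholder type for exposition

// Shape A: a single callback receiving an error code and a value.
void async_read_a(socket_t&, std::function<void(std::error_code, std::size_t)> done);

// Shape B: separate success and failure callbacks.
void async_read_b(socket_t&,
                  std::function<void(std::size_t)> on_value,
                  std::function<void(std::exception_ptr)> on_error);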
1.11. Field experience
1.11.1. libunifex
This proposal draws heavily from our field experience with libunifex. Libunifex implements all of the concepts and customization points defined in this paper (with slight variations -- the design of P2300 has evolved due to LEWG feedback), many of this paper’s algorithms (some under different names), and much more besides.
Libunifex has several concrete schedulers in addition to the 
In addition to the proposed interfaces and the additional schedulers, it has several important extensions to the facilities described in this paper, which demonstrate directions in which these abstractions may be evolved over time, including:
- 
     Timed schedulers, which permit scheduling work on an execution context at a particular time or after a particular duration has elapsed. In addition, it provides time-based algorithms. 
- 
     File I/O schedulers, which permit filesystem I/O to be scheduled. 
- 
     Two complementary abstractions for streams (asynchronous ranges), and a set of stream-based algorithms. 
Libunifex has seen heavy production use at Facebook. As of October 2021, it is currently used in production within the following applications and platforms:
- 
     Facebook Messenger on iOS, Android, Windows, and macOS 
- 
     Instagram on iOS and Android 
- 
     Facebook on iOS and Android 
- 
     Portal 
- 
     An internal Facebook product that runs on Linux 
All of these applications are making direct use of the sender/receiver abstraction as presented in this paper. One product (Instagram on iOS) is making use of the sender/coroutine integration as presented. The monthly active users of these products number in the billions.
1.11.2. Other implementations
The authors are aware of a number of other implementations of sender/receiver from this paper. These are presented here in perceived order of maturity and field experience.
- 
     HPX - The C++ Standard Library for Parallelism and Concurrency HPX is a general purpose C++ runtime system for parallel and distributed applications that has been under active development since 2007. HPX exposes a uniform, standards-oriented API, and keeps abreast of the latest standards and proposals. It is used in a wide variety of high-performance applications. The sender/receiver implementation in HPX has been under active development since May 2020. It is used to erase the overhead of futures and to make it possible to write efficient generic asynchronous algorithms that are agnostic to their execution context. In HPX, algorithms can migrate execution between execution contexts, even to GPUs and back, using a uniform standard interface with sender/receiver. Far and away, the HPX team has the greatest usage experience outside Facebook. Mikael Simberg summarizes the experience as follows: Summarizing, for us the major benefits of sender/receiver compared to the old model are: - 
        Proper hooks for transitioning between execution contexts. 
- 
        The adaptors. Things like let_value 
- 
        Separation of the error channel from the value channel (also cancellation, but we don’t have much use for it at the moment). Even from a teaching perspective having to explain that the future f2 f1 . then ([]( future < T > f2 ) {...}) 
- 
        For futures we have a thing called hpx :: dataflow when_all (...). then (...) when_all (...) | then (...) 
 
- 
     kuhllib by Dietmar Kuehl This is a prototype Standard Template Library with an implementation of sender/receiver that has been under development since May, 2021. It is significant mostly for its support for sender/receiver-based networking interfaces. Here, Dietmar Kuehl speaks about the perceived complexity of sender/receiver: ... and, also similar to STL: as I had tried to do things in that space before I recognize sender/receivers as being maybe complicated in one way but a huge simplification in another one: like with STL I think those who use it will benefit - if not from the algorithm from the clarity of abstraction: the separation of concerns of STL (the algorithm being detached from the details of the sequence representation) is a major leap. Here it is rather similar: the separation of the asynchronous algorithm from the details of execution. Sure, there is some glue to tie things back together but each of them is simpler than the combined result. Elsewhere, he said: ... to me it feels like sender/receivers are like iterators when STL emerged: they are different from what everybody did in that space. However, everything people are already doing in that space isn’t right. Kuehl also has experience teaching sender/receiver at Bloomberg. About that experience he says: When I asked [my students] specifically about how complex they consider the sender/receiver stuff the feedback was quite unanimous that the sender/receiver parts aren’t trivial but not what contributes to the complexity. 
- 
     
The reference implementation of this proposal, available at https://github.com/brycelelbach/wg21_p2300_std_execution. This is a partial implementation written from the specification in this paper. Its primary purpose is to help find specification bugs and to harden the wording of the proposal. When finished, it will be a minimal and complete implementation of this proposal, fit for broad use and for contribution to libc++. It will be finished before this proposal is approved. It currently lacks some of the proposed sender adaptors and utilities (execution::start_detached, execution::receiver_adaptor, execution::run_loop).
- 
Reference implementation for the Microsoft STL by Michael Schellenberger Costa This is another reference implementation of this proposal, this time in a fork of the Microsoft STL implementation. Michael Schellenberger Costa is not affiliated with Microsoft. He intends to contribute this implementation upstream when it is complete.
1.11.3. Inspirations
This proposal also draws heavily from our experience with Thrust and Agency. It is also inspired by the needs of countless other C++ frameworks for asynchrony, parallelism, and concurrency, including:
2. Revision history
2.1. R5
The changes since R4 are as follows:
Fixes:
- 
     start_detached void set_value 
Enhancements:
- 
     Receiver concepts refactored to no longer require an error channel for exception_ptr 
- 
     sender_of connect 
- 
     get_completion_signatures completion_signatures dependent_completion_signatures 
- 
     make_completion_signatures 
- 
     receiver_adaptor get_env set_ * receiver_adaptor get_env () get_env_t 
- 
     just just_error just_stopped into_variant 
2.2. R4
The changes since R3 are as follows:
Fixes:
- 
     Fix specification of get_completion_scheduler transfer schedule_from transfer_when_all set_error 
- 
The value of sends_stopped for awaitables is changed from false to true to acknowledge the fact that some coroutine types are generally awaitable and may implement the unhandled_stopped() protocol.
- 
Fix the incorrect use of inline namespaces in the <execution> header.
- 
     Shorten the stable names for the sections. 
- 
     sync_wait std :: error_code std :: system_error 
- 
Fix how ADL isolation from class template arguments is specified so it doesn’t constrain implementations.
- 
Properly expose the tag types in the <execution> header.
Enhancements:
- 
     Support for "dependently-typed" senders, where the completion signatures -- and thus the sender metadata -- depend on the type of the receiver connected to it. See the section dependently-typed senders below for more information. 
- 
     Add a read ( query ) 
- 
     Add completion_signatures make_completion_signatures 
- 
     Add make_completion_signatures 
- 
Drop support for untyped senders and rename typed_sender to sender.
- 
Rename set_done to set_stopped, and done to stopped, throughout.
- 
     Add customization points for controlling the forwarding of scheduler, sender, receiver, and environment queries through layers of adaptors; specify the behavior of the standard adaptors in terms of the new customization points. 
- 
     Add get_delegatee_scheduler 
- 
     Add schedule_result_t 
- 
     More precisely specify the sender algorithms, including precisely what their completion signatures are. 
- 
     stopped_as_error 
- 
     tag_invoke 
2.2.1. Dependently-typed senders
Background:
In the sender/receiver model, as with coroutines, contextual information about
the current execution is most naturally propagated from the consumer to the
producer. In coroutines, that means information like stop tokens, allocators and
schedulers are propagated from the calling coroutine to the callee. In
sender/receiver, that means that contextual information is associated with
the receiver and is queried by the sender and/or operation state after the
sender and the receiver are connected.
Problem:
The implication of the above is that the sender alone does not have all the
information about the async computation it will ultimately initiate; some of
that information is provided late via the receiver. However, the sender traits mechanism only takes the sender type into account, so a sender’s advertised metadata cannot reflect information that arrives late via the receiver.
Example:
To get concrete, consider the case of the get_scheduler() sender used in the example below: it completes by sending the scheduler it reads from the receiver, so what it sends cannot be known until the receiver is known.
This causes knock-on problems since some important algorithms require a typed
sender, such as this_thread::sync_wait.
namespace ex = std::execution;

ex::sender auto task =
    ex::let_value(
        ex::get_scheduler(),          // Fetches scheduler from receiver.
        [](auto current_sched) {
          // Launch some nested work on the current scheduler:
          return ex::on(current_sched, nested work...);
        });

std::this_thread::sync_wait(std::move(task));
The code above is attempting to schedule some work onto the scheduler that the receiver will eventually provide. With the old sender traits mechanism, task cannot advertise what it will send, so it fails to satisfy the requirements of sync_wait.
Solution:
The solution is conceptually quite simple: extend the sender traits mechanism so that it can take into account the context in which the sender will be used, not just the sender type.
Design:
Using the receiver type to compute the sender traits turns out to have pitfalls in practice. Many receivers make use of that type information in their implementation. It is very easy to create cycles in the type system, leading to inscrutable errors. The design pursued in R4 is to give receivers an associated environment object -- a bag of key/value pairs -- and to move the contextual information (schedulers, etc.) out of the receiver and into the environment. The sender metadata is then computed from the sender and the environment, which breaks the cycle.
A further refinement of this design would be to separate the receiver and the environment entirely, passing them as separate arguments along with the sender to execution::connect.
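As a rough illustration of the R4 design (the receiver and environment types below are invented for exposition), a receiver might expose its contextual information like this:

namespace ex = std::execution;

// Exposition-only environment carrying a stop token.
struct example_env {
  std::stop_token st;
  friend std::stop_token tag_invoke(ex::get_stop_token_t, const example_env& e) noexcept {
    return e.st;
  }
};

// Exposition-only receiver; contextual queries are answered by its environment.
struct example_receiver {
  std::stop_token st;
  friend void tag_invoke(ex::set_value_t, example_receiver&&) noexcept {}
  friend void tag_invoke(ex::set_error_t, example_receiver&&, std::exception_ptr) noexcept {}
  friend void tag_invoke(ex::set_stopped_t, example_receiver&&) noexcept {}
  friend example_env tag_invoke(ex::get_env_t, const example_receiver& r) noexcept {
    return {r.st};
  }
};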
Impact:
This change, apart from increasing the expressive power of the sender/receiver abstraction, has the following impact:
- 
Typed senders become moderately more challenging to write. (The new completion_signatures and make_completion_signatures utilities are provided to ease that burden.)
- 
Sender adaptor algorithms that previously constrained their sender arguments to satisfy the typed_sender concept must now defer such checks until connect time, when the receiver’s environment is known.
- 
     Operation states that own receivers that add to or change the environment are typically larger by one pointer. It comes with the benefit of far fewer indirections to evaluate queries. 
"Has it been implemented?"
Yes, the reference implementation, which can be found at https://github.com/brycelelbach/wg21_p2300_std_execution, has implemented this design as well as some dependently-typed senders to confirm that it works.
Implementation experience
Although this change has not yet been made in libunifex, the most widely adopted sender/receiver implementation, a similar design can be found in Folly’s coroutine support library. In Folly.Coro, it is possible to await a special awaitable to obtain the current coroutine’s associated scheduler (called an executor in Folly).
For instance, the following Folly code grabs the current executor, schedules a task for execution on that executor, and starts the resulting (scheduled) task by enqueueing it for execution.
// From Facebook’s Folly open source library: template < class T > folly :: coro :: Task < void > CancellableAsyncScope :: co_schedule ( folly :: coro :: Task < T >&& task ) { this -> add ( std :: move ( task ). scheduleOn ( co_await co_current_executor )); co_return ; } 
Facebook relies heavily on this pattern in its coroutine code. But as described
above, this pattern doesn’t work with R3 of this proposal.
Why now?
The authors are loathe to make any changes to the design, however small, at this
stage of the C++23 release cycle. But we feel that, for a relatively minor
design change -- adding an extra template parameter to 
One might wonder why this missing feature has not been added to sender/receiver before now. The designers of sender/receiver have long been aware of the need. What was missing was a clean, robust, and simple design for the change, which we now have.
Drive-by:
We took the opportunity to make an additional drive-by change: Rather than
providing the sender traits via a class template for users to specialize, we
changed it into a sender query: get_completion_signatures, a customization point that takes a sender and an environment.
Details:
Below are the salient parts of the new support for dependently-typed senders in R4:
- 
     Receiver queries have been moved from the receiver into a separate environment object. 
- 
Receivers have an associated environment. The new get_env customization point, given a receiver, returns the receiver’s environment; contextual queries are answered by the environment rather than by the receiver itself.
- 
sender_traits takes an additional Env template parameter, the type of the receiver’s environment.
- 
The primary sender_traits template is replaced by the completion_signatures_of_t alias, which is computed by invoking the new get_completion_signatures customization point; senders provide their metadata by customizing get_completion_signatures via tag_invoke.
- 
Support for untyped senders is dropped. The typed_sender concept is renamed sender.
- 
     The environment argument to the sender get_completion_signatures no_env no_env 
- 
     A type S sender < S > dependent_completion_signatures 
- 
     If a sender satisfies both sender < S > sender < S , Env > 
- 
     All of the algorithms and examples have been updated to work with dependently-typed senders. 
2.3. R3
The changes since R2 are as follows:
Fixes:
- 
     Fix specification of the on get_scheduler 
- 
Fix a memory safety bug in the implementation of connect-awaitable.
- 
     Fix recursive definition of the scheduler 
Enhancements:
- 
     Add run_loop 
- 
     Add receiver_adaptor 
- 
     Require a scheduler’s sender to model sender_of 
- 
     Specify the cancellation scope of the when_all 
- 
     Make as_awaitable 
- 
     Change connect as_awaitable 
- 
     Add value_types_of_t error_types_of_t stop_token_type_t stop_token_of_t 
- 
     Add a design rationale for the removal of the possibly eager algorithms. 
- 
     Expand the section on field experience. 
2.4. R2
The changes since R1 are as follows:
- 
     Remove the eagerly executing sender algorithms. 
- 
     Extend the execution :: connect sender_traits <> typed_sender 
- 
Add the utilities as_awaitable() and with_awaitable_senders<>.
- 
     Add a section describing the design of the sender/awaitable interactions. 
- 
     Add a section describing the design of the cancellation support in sender/receiver. 
- 
     Add a section showing examples of simple sender adaptor algorithms. 
- 
     Add a section showing examples of simple schedulers. 
- 
     Add a few more examples: a sudoku solver, a parallel recursive file copy, and an echo server. 
- 
     Refined the forward progress guarantees on the bulk 
- 
     Add a section describing how to use a range of senders to represent async sequences. 
- 
     Add a section showing how to use senders to represent partial success. 
- 
Add the sender factories execution::just_error and execution::just_stopped.
- 
Add the sender adaptors execution::stopped_as_optional and execution::stopped_as_error.
- 
     Document more production uses of sender/receiver at scale. 
- 
     Various fixes of typos and bugs. 
2.5. R1
The changes since R0 are as follows:
- 
     Added a new concept, sender_of 
- 
     Added a new scheduler query, this_thread :: execute_may_block_caller 
- 
     Added a new scheduler query, get_forward_progress_guarantee 
- 
     Removed the unschedule 
- 
     Various fixes of typos and bugs. 
2.6. R0
Initial revision.
3. Design - introduction
The following three sections describe the entirety of the proposed design.
- 
     § 3 Design - introduction describes the conventions used through the rest of the design sections, as well as an example illustrating how we envision code will be written using this proposal. 
- 
     § 4 Design - user side describes all the functionality from the perspective we intend for users: it describes the various concepts they will interact with, and what their programming model is. 
- 
     § 5 Design - implementer side describes the machinery that allows for that programming model to function, and the information contained there is necessary for people implementing senders and sender algorithms (including the standard library ones) - but is not necessary to use senders productively. 
3.1. Conventions
The following conventions are used throughout the design section:
- 
The namespace proposed in this paper is the same as in A Unified Executors Proposal for C++: std::execution. For brevity, names such as std::execution::foo are sometimes written as execution::foo or simply foo.
- 
Universal references and explicit calls to std::move / std::forward are omitted from code samples for brevity.
- 
     None of the names proposed here are names that we are particularly attached to; consider the names to be reasonable placeholders that can freely be changed, should the committee want to do so. 
3.2. Queries and algorithms
A query is a std::invocable that takes some set of objects (usually one) as parameters and returns facts about those objects without modifying them. Queries are used to request properties of an object.
An algorithm is a std::invocable that takes some set of objects as parameters and causes those objects to do something, such as create, transform, or consume senders.
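For example, using facilities proposed in this paper, get_completion_scheduler is a query and then is an algorithm (the thread_pool object is the same placeholder used in the earlier examples):

namespace ex = std::execution;

ex::scheduler auto sch = thread_pool.scheduler();
ex::sender auto snd = ex::schedule(sch);

// A query: asks a question about snd without consuming it.
ex::scheduler auto cs = ex::get_completion_scheduler<ex::set_value_t>(snd);

// An algorithm: consumes snd and produces a new sender.
ex::sender auto next = ex::then(std::move(snd), [] { return 42; });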
4. Design - user side
4.1. Execution contexts describe the place of execution
An execution context is a resource that represents the place where execution will happen. This could be a concrete resource - like a specific thread pool object, or a GPU - or a more abstract one, like the current thread of execution. Execution contexts don’t need to have a representation in code; they are simply a term describing certain properties of execution of a function.
4.2. Schedulers represent execution contexts
A scheduler is a lightweight handle that represents a strategy for scheduling work onto an execution context. Since execution contexts don’t necessarily manifest in C++ code, it’s not possible to program
directly against their API. A scheduler is a solution to that problem: the scheduler concept is defined by a single sender algorithm, schedule, which returns a sender that will complete on an execution context determined by the scheduler. Work to be performed on that context can then be chained onto the returned sender.
execution::scheduler auto sch = thread_pool.scheduler();
execution::sender auto snd = execution::schedule(sch);
// snd is a sender (see below) describing the creation of a new execution resource
// on the execution context associated with sch
Note that a particular scheduler type may provide other kinds of scheduling operations which are supported by its associated execution context. It is not limited to scheduling purely using the execution::schedule API.
Future papers will propose additional scheduler concepts that extend scheduler to add support for the following:
- 
A time_scheduler concept that extends scheduler with time-based operations such as schedule_after(sched, duration), schedule_at(sched, time_point), and now(sched) (see the hypothetical sketch after this list).
- 
     Concepts that extend scheduler 
- 
     Concepts that extend scheduler 
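As a purely hypothetical sketch of how such a future time_scheduler might be used (none of these names are proposed in this paper; timer_context is an invented placeholder):

// Hypothetical future API; time_scheduler, schedule_after, schedule_at and now are not proposed here.
namespace ex = std::execution;

time_scheduler auto tsched = timer_context.scheduler();

// Complete roughly 100ms from now, on the timer context:
ex::sender auto later = schedule_after(tsched, std::chrono::milliseconds(100));

// Complete at an absolute time point:
auto deadline = now(tsched) + std::chrono::seconds(1);
ex::sender auto at_deadline = schedule_at(tsched, deadline);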
4.3. Senders describe work
A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected (see § 5.3 execution::connect) to that sender will eventually receive said values.
The primary defining sender algorithm is § 5.3 execution::connect; this function, however, is not a user-facing API; it is used to facilitate communication between senders and various sender algorithms, but end user code is not expected to invoke it directly.
The way user code is expected to interact with senders is by using sender algorithms. This paper proposes an initial set of such sender algorithms, which are described in § 4.4 Senders are composable through sender algorithms, § 4.20 User-facing sender factories, § 4.21 User-facing sender adaptors, and § 4.22 User-facing sender consumers. For example, here is how a user can create a new sender on a scheduler, attach a continuation to it, and then wait for execution of the continuation to complete:
execution::scheduler auto sch = thread_pool.scheduler();

execution::sender auto snd = execution::schedule(sch);
execution::sender auto cont = execution::then(snd, [] {
  std::fstream file{"result.txt"};
  file << compute_result;
});

this_thread::sync_wait(cont);
// at this point, cont has completed execution
4.4. Senders are composable through sender algorithms
Asynchronous programming often departs from traditional code structure and control flow that we are familiar with. A successful asynchronous framework must provide an intuitive story for composition of asynchronous work: expressing dependencies, passing objects, managing object lifetimes, etc.
The true power and utility of senders is in their composability. With senders, users can describe generic execution pipelines and graphs, and then run them on and across a variety of different schedulers. Senders are composed using sender algorithms:
- 
     sender factories, algorithms that take no senders and return a sender. 
- 
     sender adaptors, algorithms that take (and potentially execution::connect) senders and return a sender.
- 
     sender consumers, algorithms that take (and potentially execution::connect) senders and return something that is not a sender. A short sketch combining the three categories follows this list.
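For instance, a minimal sketch of a pipeline that combines one algorithm from each category, using only the facilities proposed in this paper, might look like this:

using namespace std::execution;

sender auto produced = just(6, 7);                        // sender factory
sender auto adapted  = then(produced, [](int a, int b) {  // sender adaptor
  return a * b;
});
auto [result] = this_thread::sync_wait(adapted).value();  // sender consumer
// result == 42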
4.5. Senders can propagate completion schedulers
One of the goals of executors is to support a diverse set of execution contexts, including traditional thread pools, task and fiber frameworks (like HPX and Legion), and GPUs and other accelerators (managed by runtimes such as CUDA or SYCL). On many of these systems, not all execution agents are created equal and not all functions can be run on all execution agents. Having precise control over the execution context used for any given function call being submitted is important on such systems, and the users of standard execution facilities will expect to be able to express such requirements.
A Unified Executors Proposal for C++ was not always clear about the place of execution of any given piece of code. Precise control was present in the two-way execution API present in earlier executor designs, but it has so far been missing from the senders design. There has been a proposal (Towards C++23 executors: A proposal for an initial set of algorithms) to provide a number of sender algorithms that would enforce certain rules on the places of execution of the work described by a sender, but we have found those sender algorithms to be insufficient for achieving the best performance on all platforms that are of interest to us. The implementation strategies that we are aware of result in one of the following situations:
- 
     trying to submit work to one execution context (such as a CPU thread pool) from another execution context (such as a GPU or a task framework), which assumes that all execution agents are as capable as a std::thread (which they aren’t);
- 
     forcibly interleaving two adjacent execution graph nodes that are both executing on one execution context (such as a GPU) with glue code that runs on another execution context (such as a CPU), which is prohibitively expensive for some execution contexts (such as CUDA or SYCL). 
- 
     having to customise most or all sender algorithms to support an execution context, so that you can avoid problems described in 1. and 2, which we believe is impractical and brittle based on months of field experience attempting this in Agency. 
None of these implementation strategies are acceptable for many classes of parallel runtimes, such as task frameworks (like HPX) or accelerator runtimes (like CUDA or SYCL).
Therefore, in addition to the sender algorithms proposed in prior papers, we are proposing a way for senders to advertise what scheduler (and by extension what execution context) they will complete on. Any given sender may advertise completion schedulers for some or all of the signals (value, error, or stopped) it completes with; these can be queried with the facility described next.
4.5.1. execution :: get_completion_scheduler 
execution::get_completion_scheduler is a query, templated over a completion signal tag (such as execution::set_value_t), that takes a sender and returns the scheduler on which that sender will deliver the given signal, if the sender advertises one:
execution::scheduler auto cpu_sched = new_thread_scheduler{};
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto snd0 = execution::schedule(cpu_sched);
execution::scheduler auto completion_sch0 =
  execution::get_completion_scheduler<execution::set_value_t>(snd0);
// completion_sch0 is equivalent to cpu_sched

execution::sender auto snd1 = execution::then(snd0, []{
  std::cout << "I am running on cpu_sched!\n";
});
execution::scheduler auto completion_sch1 =
  execution::get_completion_scheduler<execution::set_value_t>(snd1);
// completion_sch1 is equivalent to cpu_sched

execution::sender auto snd2 = execution::transfer(snd1, gpu_sched);
execution::sender auto snd3 = execution::then(snd2, []{
  std::cout << "I am running on gpu_sched!\n";
});
execution::scheduler auto completion_sch3 =
  execution::get_completion_scheduler<execution::set_value_t>(snd3);
// completion_sch3 is equivalent to gpu_sched
4.6. Execution context transitions are explicit
A Unified Executors Proposal for C++ does not contain any mechanisms for performing an execution context transition. The only sender algorithm that can create a sender that will move execution to a specific execution context is execution::schedule, which does not take an input sender.
We propose that, for senders advertising their completion scheduler, all execution context transitions must be explicit; running user code anywhere but where they defined it to run must be considered a bug.
The execution::transfer sender adaptor that we propose makes the transition from one execution context to another explicit in the sender chain:
execution::scheduler auto sch1 = ...;
execution::scheduler auto sch2 = ...;

execution::sender auto snd1 = execution::schedule(sch1);
execution::sender auto then1 = execution::then(snd1, []{
  std::cout << "I am running on sch1!\n";
});

execution::sender auto snd2 = execution::transfer(then1, sch2);
execution::sender auto then2 = execution::then(snd2, []{
  std::cout << "I am running on sch2!\n";
});

this_thread::sync_wait(then2);
4.7. Senders can be either multi-shot or single-shot
Some senders may only support launching their operation a single time, while others may be repeatable and support being launched multiple times. Executing the operation may consume resources owned by the sender.
For example, a sender may contain a std::unique_ptr whose ownership it transfers to the operation state returned from execution::connect, so that the operation has access to that resource when it runs.
A single-shot sender can only be connected to a receiver at most once. Its implementation of execution::connect only has overloads for an rvalue-qualified sender; callers must pass the sender as an rvalue to the call, indicating that the call consumes the sender.
A multi-shot sender can be connected to multiple receivers and can be launched multiple
times. Multi-shot senders customise execution::connect to accept an lvalue reference to the sender.
If the user of a sender does not require the sender to remain valid after connecting it to a
receiver, then it can pass an rvalue reference to the sender to the call to execution::connect, allowing either a single-shot or a multi-shot sender to be used.
If the caller does wish for the sender to remain valid after the call, then it can pass an lvalue-qualified sender
to the call to execution::connect; such callers generally require a multi-shot sender.
Algorithms that accept senders will typically either decay-copy an input sender and store it somewhere
for later usage (for example as a data member of the returned sender) or will immediately call execution::connect on the input sender, as this_thread::sync_wait and execution::start_detached do.
Some multi-use sender algorithms may require that an input sender be copy-constructible, but will only call execution::connect on an rvalue of each copy; other multi-use sender algorithms may require that the input sender be connectable as an lvalue.
For a sender to be usable in both multi-use scenarios, it will generally be required to be both copy-constructible and lvalue-connectable.
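As an illustration, here is a minimal sketch of what a single-shot sender might look like. The resource type and the operation-state factory are hypothetical, and the completion-signature advertisement that real senders must provide is omitted; the point is only that connect is provided solely for rvalues:

struct widget;  // hypothetical resource type owned by the sender

struct single_shot_sender {
  std::unique_ptr<widget> resource_;  // consumed when the sender is connected

  // connect is only provided for rvalue senders, so connecting consumes the sender
  template <execution::receiver R>
  friend auto tag_invoke(execution::connect_t, single_shot_sender&& self, R rcv) {
    return make_operation_state(std::move(self.resource_), std::move(rcv));  // hypothetical factory
  }
  // no lvalue overload: connecting an lvalue single_shot_sender is ill-formed
};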
4.8. Senders are forkable
Any non-trivial program will eventually want to fork a chain of senders into independent streams of work, regardless of whether they are single-shot or multi-shot. For instance, an incoming event to a middleware system may be required to trigger events on more than one downstream system. This requires that we provide well defined mechanisms for making sure that connecting a sender multiple times is possible and correct.
The split sender adaptor facilitates connecting to a sender multiple times, regardless of whether it is single-shot or multi-shot:
auto some_algorithm(execution::sender auto&& input) {
  execution::sender auto multi_shot = split(input);
  // "multi_shot" is guaranteed to be multi-shot,
  // regardless of whether "input" was multi-shot or not

  return when_all(
    then(multi_shot, []{ std::cout << "First continuation\n"; }),
    then(multi_shot, []{ std::cout << "Second continuation\n"; })
  );
}
4.9. Senders are joinable
Just as any non-trivial program will eventually want to fork a chain of senders into independent streams, it will also eventually want to create join nodes, where multiple independent streams of execution are merged into a single one in an asynchronous fashion. The when_all sender adaptor (§ 4.21.11 execution::when_all) serves this purpose.
4.10. Senders support cancellation
Senders are often used in scenarios where the application may be concurrently executing multiple strategies for achieving some program goal. When one of these strategies succeeds (or fails) it may not make sense to continue pursuing the other strategies as their results are no longer useful.
For example, we may want to try to simultaneously connect to multiple network servers and use whichever server responds first. Once the first server responds we no longer need to continue trying to connect to the other servers.
Ideally, in these scenarios, we would somehow be able to request that those other strategies stop executing promptly so that their resources (e.g. cpu, memory, I/O bandwidth) can be released and used for other work.
While the design of senders has support for cancelling an operation before it starts,
by simply destroying the sender or the operation state returned from execution::connect before calling execution::start, there also needs to be a standard, generic mechanism to ask for an already started operation to complete early.
The ability to be able to cancel in-flight operations is fundamental to supporting some kinds of generic concurrency algorithms.
For example:
- 
     a when_all(ops...) algorithm should cancel the other operations as soon as one operation fails; 
- 
     a first_successful(ops...) algorithm should cancel the other operations as soon as one operation completes successfully; 
- 
     a generic timeout(src, duration) algorithm needs to be able to cancel the src operation after the timeout duration has elapsed; and 
- 
     a stop_when(src, trigger) algorithm should cancel src if trigger completes first, and cancel trigger if src completes first. 
The mechanism used for communicating cancellation requests, or stop-requests, needs to have a uniform interface, so that generic algorithms that compose sender-based operations, such as the ones listed above, are able to communicate these cancellation requests to senders that they don’t know anything about.
The design is intended to be composable so that cancellation of higher-level operations can propagate those cancellation requests through intermediate layers to lower-level operations that need to actually respond to the cancellation requests.
For example, we can compose the algorithms mentioned above so that child operations are cancelled when any one of the multiple cancellation conditions occurs:
sender auto composed_cancellation_example(auto query) {
  return stop_when(
    timeout(
      when_all(
        first_successful(query_server_a(query), query_server_b(query)),
        load_file("some_file.jpg")),
      5s),
    cancelButton.on_click());
}
In this example, if we take the operation returned by composed_cancellation_example(query), connect it and start it, a stop-request will be propagated to whichever child operations are still running whenever any of the following occurs:
- 
     the first_successful algorithm observes that one of query_server_a(query) or query_server_b(query) has completed successfully, in which case the other query is cancelled; 
- 
     the when_all algorithm observes that one of its children has failed, for example if load_file("some_file.jpg") completes with an error, in which case its remaining children are cancelled; 
- 
     the timeout algorithm observes that the 5s duration has elapsed before the wrapped work has completed; 
- 
     the stop_when algorithm observes that its trigger, cancelButton.on_click(), has completed; or 
- 
     The parent operation consuming the result of composed_cancellation_example() sends a stop-request of its own. 
Note that within this code there is no explicit mention of cancellation, stop-tokens, callbacks, etc. yet the example fully supports and responds to the various cancellation sources.
The intent of the design is that the common usage of cancellation in sender/receiver-based code is primarily through use of concurrency algorithms that manage the detailed plumbing of cancellation for you. Much like algorithms that compose senders relieve the user from having to write their own receiver types, algorithms that introduce concurrency and provide higher-level cancellation semantics relieve the user from having to deal with low-level details of cancellation.
4.10.1. Cancellation design summary
The design of cancellation described in this paper is built on top of, and extends, the std::stop_token-based cancellation facilities added in C++20.
At a high-level, the facilities proposed by this paper for supporting cancellation include:
- 
     Add std::stoppable_token and std::stoppable_token_for concepts that generalise the interface of the std::stop_token type to allow other stop token types with different implementation strategies. 
- 
     Add a std::unstoppable_token concept for detecting whether a stoppable_token can never receive a stop-request. 
- 
     Add std::in_place_stop_token, std::in_place_stop_source and std::in_place_stop_callback<CB> types that provide a more efficient implementation of a stop token for use in structured-concurrency situations (a usage sketch follows this list). 
- 
     Add std::never_stop_token for use in places where a stop-request can never be issued. 
- 
     Add a std::execution::get_stop_token() CPO for querying the stop token to use for an operation from its receiver’s execution environment. 
- 
     Add std::execution::stop_token_of_t<T> for querying the type of the stop token returned from get_stop_token(). 
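As an isolated sketch of the in_place_* facilities listed above (independent of any particular sender), assuming their interface mirrors that of std::stop_source and std::stop_callback:

std::in_place_stop_source source;

struct on_stop {
  void operator()() noexcept { std::cout << "stop requested\n"; }
};

// subscribe a callback that runs if and when a stop-request is made
std::in_place_stop_callback<on_stop> callback{source.get_token(), on_stop{}};

// ... later, some other part of the program decides the result is no longer needed:
source.request_stop();  // invokes the registered callback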
In addition, there are requirements added to some of the algorithms to specify what their cancellation behaviour is and what the requirements of customisations of those algorithms are with respect to cancellation.
The key component that enables generic cancellation within sender-based operations is the std::execution::get_stop_token() CPO. It is invoked on the execution environment of the receiver passed to execution::connect, and returns a std::stoppable_token that the operation can use to check for, or subscribe to, stop-requests for that operation.
As the caller of execution::connect typically has control over the receiver type it passes in, it is able to customise the get_stop_token() CPO for that receiver’s environment to return a stop token that the caller can later use to communicate a stop-request to the launched operation.
4.10.2. Support for cancellation is optional
Support for cancellation is optional, both on part of the author of the receiver and on part of the author of the sender.
If the receiver’s execution environment does not customise std::execution::get_stop_token(), the default implementation returns std::never_stop_token, a special stoppable_token type that statically guarantees a return value of false from the stop_possible() method.
Sender code that tries to use this stop-token will in general result in code that handles stop-requests being compiled out and having little to no run-time overhead.
If the sender doesn’t call get_stop_token(), for example because the operation it performs does not support cancellation, then a stop-request made by the caller will simply have no effect on that operation.
Note that stop-requests are generally racy in nature, as there is often a race between an operation completing naturally and the stop-request being made. If the operation has already completed, or is past the point at which it can be cancelled, when the stop-request is sent, then the stop-request may just be ignored. An application will typically need to be able to cope with senders that might ignore a stop-request anyway.
4.10.3. Cancellation is inherently racy
Usually, an operation will attach a stop-callback at some point inside the call to execution::start() so that a subsequent stop-request will interrupt the logic.
A stop-request can be issued concurrently from another thread. This means the implementation of execution::start() needs to be careful to ensure that, once a stop-callback has been registered, there is no data race between a potentially concurrently executing stop-callback and the rest of the execution::start() implementation.
An implementation of execution::start() that supports cancellation will generally need to perform (at least) two steps: launch the operation, and subscribe a stop-callback to the receiver’s stop token. The order in which these two steps are performed matters.
If the stop-callback is subscribed first and then the operation is launched, care needs to be taken to ensure that a stop-request that invokes the stop-callback on another thread after the stop-callback is registered, but before the operation finishes launching, does not result in either a missed cancellation request or a data race, e.g. by performing an atomic write after the launch has finished executing.
If the operation is launched first and then the stop-callback is subscribed, care needs to be taken to ensure
that, if the launched operation completes concurrently on another thread, it does not destroy the operation state
until after the stop-callback has been registered, e.g. by deferring destruction of the operation state until the execution::start() call has finished registering the stop-callback.
For an example of an implementation strategy for solving these data-races see § 1.4 Asynchronous Windows socket recv.
4.10.4. Cancellation design status
This paper currently includes the design for cancellation as proposed in Composable cancellation for sender-based async operations (P2175R0). P2175R0 contains more details on the background, motivation, prior art and design rationale of this design.
It is important to note, however, that initial review of this design in the SG1 concurrency subgroup raised some concerns related to runtime overhead of the design in single-threaded scenarios and these concerns are still being investigated.
The design of P2175R0 has been included in this paper for now, despite its potential to change, as we believe that support for cancellation is a fundamental requirement for an async model and is required in some form to be able to talk about the semantics of some of the algorithms proposed in this paper.
This paper will be updated in the future with any changes that arise from the investigations into P2175R0.
4.11. Sender factories and adaptors are lazy
In an earlier revision of this paper, some of the proposed algorithms supported executing their logic eagerly; i.e., before the returned sender has been connected to a receiver and started. These algorithms were removed because eager execution has a number of negative semantic and performance implications.
We originally included this functionality in the paper because of a long-standing belief that eager execution is a mandatory feature of a standard Executors facility if that facility is to be acceptable for accelerator vendors. A particular concern was that we must be able to write generic algorithms that can run either eagerly or lazily, depending on the kind of input sender or scheduler that has been passed into them as an argument. We considered this a requirement because the latency of launching work on an accelerator can sometimes be considerable.
However, in the process of working on this paper and on implementations of the features
proposed within, our set of requirements has shifted: we came to better understand the different
implementation strategies that are available for the feature set of this paper,
and, after weighing the earlier concerns against the points presented below, we
have arrived at the conclusion that a purely lazy model is enough for most algorithms,
and users who intend to launch work early may use an algorithm such as ensure_started to do so explicitly.
4.11.1. Eager execution leads to detached work or worse
One of the questions that arises with APIs that can potentially return
eagerly-executing senders is "What happens when those senders are destructed
without a call to execution::connect, or when their operation states are destructed without a call to execution::start?"
In these cases, the operation represented by the sender is potentially executing concurrently in another thread at the time that the destructor of the sender and/or operation-state is running. In the case that the operation has not completed executing by the time that the destructor is run we need to decide what the semantics of the destructor is.
There are three main strategies that can be adopted here, none of which is particularly satisfactory:
- 
     Make this undefined-behaviour - the caller must ensure that any eagerly-executing sender is always joined by connecting and starting that sender. This approach is generally pretty hostile to programmers, particularly in the presence of exceptions, since it complicates the ability to compose these operations. Eager operations typically need to acquire resources when they are first called in order to start the operation early. This makes eager algorithms prone to failure. Consider, then, what might happen in an expression such as when_all(eager_op_1(), eager_op_2()) if the call to eager_op_2() throws after eager_op_1() has already launched its work: the exception propagates before when_all is ever invoked, so when_all cannot join the already-running work. It then becomes the responsibility, not of the algorithm, but of the end user to handle the exception and ensure that eager_op_1() is joined before its result is abandoned. 
- 
     Detach from the computation - let the operation continue in the background - like an implicit call to std::thread::detach(). This approach can work, but it makes it difficult to reason about the lifetimes of the resources used by the detached work (often forcing shared ownership via std::shared_ptr or similar), and it makes clean shutdown of an application hard, since there is no way to wait for detached work to finish. 
- 
     Block in the destructor until the operation completes. This approach is probably the safest to use as it preserves the structured nature of the concurrent operations, but also introduces the potential for deadlocking the application if the completion of the operation depends on the current thread making forward progress. The risk of deadlock might occur, for example, if a thread-pool with a small number of threads is executing code that creates a sender representing an eagerly-executing operation and then calls the destructor of that sender without joining it (e.g. because an exception was thrown). If the current thread blocks waiting for that eager operation to complete and that eager operation cannot complete until some entry enqueued to the thread-pool’s queue of work is run, then the thread may wait for an indefinite amount of time. If all threads of the thread-pool are simultaneously performing such blocking operations, then deadlock can result. 
There are also minor variations on each of these choices. For example:
- 
     A variation of (1): Call std::terminate if an eagerly-executing sender is destructed without having been joined. This is the approach that std::thread takes if it is destructed before it has been joined or detached. 
- 
     A variation of (2): Request cancellation of the operation before detaching. This reduces the chances of operations continuing to run indefinitely in the background once they have been detached but does not solve the lifetime- or shutdown-related challenges. 
- 
     A variation of (3): Request cancellation of the operation before blocking on its completion. This is the strategy that std::jthread uses in its destructor. It reduces the time spent blocking, but does not eliminate the potential for deadlock. 
4.11.2. Eager senders complicate algorithm implementations
Algorithms that can assume they are operating on senders with strictly lazy
semantics are able to make certain optimizations that are not available if
senders can be potentially eager. With lazy senders, an algorithm can safely
assume that the work described by an input sender cannot start, and therefore cannot complete, before the algorithm’s own operation state is started, so the algorithm does not need to synchronize its setup against a completion that might be racing with it from another thread.
When an algorithm needs to deal with potentially eager senders, the potential race conditions can be resolved one of two ways, neither of which is desirable:
- 
     Assume the worst and implement the algorithm defensively, assuming all senders are eager. This obviously has overheads both at runtime and in algorithm complexity. Resolving race conditions is hard. 
- 
     Require senders to declare whether they are eager or not with a query. Algorithms can then implement two different implementation strategies, one for strictly lazy senders and one for potentially eager senders. This addresses the performance problem of (1) while compounding the complexity problem. 
4.11.3. Eager senders incur cancellation-related overhead
Another implication of the use of eager operations is with regards to cancellation. The eagerly executing operation will not have access to the caller’s stop token until the sender is connected to a receiver. If we still want to be able to cancel the eager operation then it will need to create a new stop source and pass its associated stop token down to child operations. Then when the returned sender is eventually connected it will register a stop callback with the receiver’s stop token that will request stop on the eager sender’s stop source.
As the eager operation does not know, at the time it is launched, what the
type of the receiver is going to be, and thus whether or not the stop token
returned from get_stop_token() can ever receive a stop-request, it must pessimistically assume that it can, and therefore pay the cost of constructing the stop source and registering stop callbacks even if cancellation is never requested.
The eager operation will also need to do this to support sending a stop-request to the eager operation in the case that the sender representing the eager work is destroyed before it has been joined (assuming one of the cancellation-requesting variations of (2) or (3) listed above is chosen).
4.11.4. Eager senders cannot access execution context from the receiver
In sender/receiver, contextual information is passed from parent operations to their children by way of receivers. Information like stop tokens, allocators, current scheduler, priority, and deadline are propagated to child operations with custom receivers at the time the operation is connected. That way, each operation has the contextual information it needs before it is started.
But if the operation is started before it is connected to a receiver, then there isn’t a way for a parent operation to communicate contextual information to its child operations, which may complete before a receiver is ever attached.
4.12. Schedulers advertise their forward progress guarantees
To decide whether a scheduler (and its associated execution context) is sufficient for a specific task, it may be necessary to know what kind of forward progress guarantees it provides for the execution agents it creates. The C++ Standard defines the following forward progress guarantees:
- 
     concurrent, which requires that a thread makes progress eventually; 
- 
     parallel, which requires that a thread makes progress once it executes a step; and 
- 
     weakly parallel, which does not require that the thread makes progress. 
This paper introduces a scheduler query, get_forward_progress_guarantee, which returns one of the enumerators of a new enum type, execution::forward_progress_guarantee. Each enumerator corresponds to one of the aforementioned guarantees.
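A usage sketch, assuming the names introduced above and the thread_pool from earlier examples:

execution::scheduler auto sch = thread_pool.scheduler();

if (execution::get_forward_progress_guarantee(sch) ==
    execution::forward_progress_guarantee::parallel) {
  // the scheduler's agents make progress once they start executing a step
}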
4.13. Most sender adaptors are pipeable
To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped) together with operator|, similarly to C++20 range adaptors. Pipeable sender adaptors take a sender as their first parameter, and the following three forms are equivalent:
execution::bulk(snd, N, [](std::size_t i, auto d) {});
execution::bulk(N, [](std::size_t i, auto d) {})(snd);
snd | execution::bulk(N, [](std::size_t i, auto d) {});
Piping enables you to compose together senders with a linear syntax. Without it, you’d have to use either nested function call syntax, which would cause a syntactic inversion of the direction of control flow, or you’d have to introduce a temporary variable for each stage of the pipeline. Consider the following example where we want to execute first on a CPU thread pool, then on a CUDA GPU, then back on the CPU thread pool:
The same pipeline can be expressed in three styles: as nested function calls, as function calls with named temporaries, or as a pipeline using operator|; the sketch below compares the three.
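The following sketch (not the paper’s original table entries) shows one possible such pipeline written in each of the three styles, reusing the thread_pool and cuda::scheduler from earlier examples and arbitrary placeholder computations:

// Function call (nested):
auto snd_nested =
  execution::then(
    execution::transfer(
      execution::then(
        execution::transfer(
          execution::then(
            execution::schedule(thread_pool.scheduler()),
            []{ return 123; }),
          cuda::scheduler()),
        [](int i){ return i * 5; }),
      thread_pool.scheduler()),
    [](int i){ return i - 5; });

// Function call (named temporaries):
auto begin     = execution::schedule(thread_pool.scheduler());
auto on_cpu    = execution::then(begin, []{ return 123; });
auto to_gpu    = execution::transfer(on_cpu, cuda::scheduler());
auto on_gpu    = execution::then(to_gpu, [](int i){ return i * 5; });
auto back_cpu  = execution::transfer(on_gpu, thread_pool.scheduler());
auto snd_named = execution::then(back_cpu, [](int i){ return i - 5; });

// Pipe:
auto snd_piped = execution::schedule(thread_pool.scheduler())
               | execution::then([]{ return 123; })
               | execution::transfer(cuda::scheduler())
               | execution::then([](int i){ return i * 5; })
               | execution::transfer(thread_pool.scheduler())
               | execution::then([](int i){ return i - 5; });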
Certain sender adaptors are not pipeable, because using the pipeline syntax for them could result in confusion about the semantics of the adaptors involved. Specifically, the following sender adaptors are not pipeable.
- 
     execution::when_all and execution::when_all_with_variant: these algorithms take a variadic pack of senders, so a piped-in sender would be only one of several inputs and the syntax would obscure which sender is which. 
- 
     execution::on: piping a sender into on could easily be misread as requesting a transition of that sender’s work onto the given scheduler, which is the meaning of transfer, not of on. 
Sender consumers could be made pipeable, but we have chosen not to do so, because they are terminal nodes in a pipeline: nothing can be piped after them and consumers cannot be chained, so a pipe syntax would be confusing as well as unnecessary. We believe sender consumers read better with function call syntax.
4.14. A range of senders represents an async sequence of data
Senders represent a single unit of asynchronous work. In many cases, though, what is being modelled is a sequence of data arriving asynchronously, and you want computation to happen on demand, when each element arrives. This requires nothing more than what is in this paper and the range support in C++20. A range of senders would allow you to model such inputs as keystrokes, mouse movements, sensor readings, or network requests.
Given some expression R that is a range of senders, consider the following in a coroutine that returns an async generator type:
for (auto snd : R) {
  if (auto opt = co_await execution::stopped_as_optional(std::move(snd)))
    co_yield fn(*std::move(opt));
  else
    break;
}
This transforms each element of the asynchronous sequence R with the function fn on demand, as the data arrives. The result is a new asynchronous sequence of the transformed values.
Now imagine that R is the simple expression views::iota(0) | views::transform(execution::just). This creates a lazy range of senders, each of which completes immediately with monotonically increasing integers. The above code churns through the range on demand, producing a new asynchronous range of values fn(0), fn(1), fn(2), and so on.
Far more interesting would be if R were a range of senders representing, say, user actions in a UI. The above code gives a simple way to respond to user actions on demand.
4.15. Senders can represent partial success
Receivers have three ways they can complete: with success, failure, or cancellation. This raises the question of how they can be used to represent async operations that partially succeed. For example, consider an API that reads from a socket. The connection could drop after the API has filled in some of the buffer. In cases like that, it makes sense to want to report both that the connection dropped and that some data has been successfully read.
Often in the case of partial success, the error condition is not fatal nor does it mean the API has failed to satisfy its post-conditions. It is merely an extra piece of information about the nature of the completion. In those cases, "partial success" is another way of saying "success". As a result, it is sensible to pass both the error code and the result (if any) through the value channel, as shown below:
// Capture a buffer for read_socket_async to fill in
execution::just(array<byte, 1024>{})
  | execution::let_value([socket](array<byte, 1024>& buff) {
      // read_socket_async completes with two values: an error_code and
      // a count of bytes:
      return read_socket_async(socket, span{buff})
        // For success (partial and full), specify the next action:
        | execution::let_value([](error_code err, size_t bytes_read) {
            if (err != 0) {
              // OK, partial success. Decide how to deal with the partial results
            } else {
              // OK, full success here.
            }
          });
    })
In other cases, the partial success is more of a partial failure. That happens when the error condition indicates that in some way the function failed to satisfy its post-conditions. In those cases, sending the error through the value channel loses valuable contextual information. It’s possible that bundling the error and the incomplete results into an object and passing it through the error channel makes more sense. In that way, generic algorithms will not miss the fact that a post-condition has not been met and react inappropriately.
Another possibility is for an async API to return a range of senders: if the API completes with full success, full error, or cancellation, the returned range contains just one sender with the result. Otherwise, if the API partially fails (doesn’t satisfy its post-conditions, but some incomplete result is available), the returned range would have two senders: the first containing the partial result, and the second containing the error. Such an API might be used in a coroutine as follows:
// Declare a buffer for read_socket_async to fill in
array<byte, 1024> buff;

for (auto snd : read_socket_async(socket, span{buff})) {
  try {
    if (optional<size_t> bytes_read =
          co_await execution::stopped_as_optional(std::move(snd))) {
      // OK, we read some bytes into buff. Process them here....
    } else {
      // The socket read was cancelled and returned no data. React
      // appropriately.
    }
  } catch (...) {
    // read_socket_async failed to meet its post-conditions.
    // Do some cleanup and propagate the error...
  }
}
Finally, it’s possible to combine these two approaches when the API can both partially succeed (meeting its post-conditions) and partially fail (not meeting its post-conditions).
4.16. All awaitables are senders
Since C++20 added coroutines to the standard, we expect that coroutines and awaitables will be how a great many will choose to express their asynchronous code. However, in this paper, we are proposing to add a suite of asynchronous algorithms that accept senders, not awaitables. One might wonder whether and how these algorithms will be accessible to those who choose coroutines instead of senders.
In truth there will be no problem, because all generally awaitable types
automatically model the sender concept. The adaptation is transparent and happens in the sender customization points, which are aware of awaitables.
For an example, imagine a coroutine type called task<T> that knows nothing about senders. It can still be used with sender consumers such as sync_wait:
task<int> doSomeAsyncWork();

int main() {
  // OK, awaitable types satisfy the requirements for senders:
  auto o = this_thread::sync_wait(doSomeAsyncWork());
}
Since awaitables are senders, writing a sender-based asynchronous algorithm is trivial if you have a coroutine task type: implement the algorithm as a coroutine. If you are not bothered by the possibility of allocations and indirections as a result of using coroutines, then there is no need to ever write a sender, a receiver, or an operation state.
4.17. Many senders can be trivially made awaitable
If you choose to implement your sender-based algorithms as coroutines, you’ll run into the issue of how to retrieve results from a passed-in sender. This is not a problem. If the coroutine type opts in to sender support -- trivial with the execution::with_awaitable_senders utility -- then a large class of senders can simply be awaited with co_await.
For example, consider the following trivial implementation of the sender-based retry algorithm:
template <class S>
  requires single-sender<S&> // See [exec.as_awaitable]
task<single-sender-value-type<S>> retry(S s) {
  for (;;) {
    try {
      co_return co_await s;
    } catch (...) {
    }
  }
}
Only some senders can be made awaitable directly, because callbacks are more expressive than coroutines. An awaitable expression has a single type: the result value of the async operation. In contrast, a callback can accept multiple arguments as the result of an operation. What’s more, the callback can have overloaded function call signatures that take different sets of arguments. There is no way to automatically map such senders into awaitables. The single-sender constraint used in the retry example above captures exactly those senders that send a single set of values and can therefore be awaited directly.
4.18. Cancellation of a sender can unwind a stack of coroutines
When looking at the sender-based retry algorithm above, we can see that values and errors are handled naturally by co_await and the catch block. But what happens to a coroutine that is suspended, awaiting a sender that completes with the "stopped" signal?
When your task type’s promise inherits from with_awaitable_senders, the stopped signal is treated as if an uncatchable exception were thrown from the co_await expression: it unwinds the stack of calling coroutines, bypassing catch(...) clauses, until it reaches a coroutine that is prepared to handle it.
In order to "catch" this uncatchable stopped exception, one of the calling coroutines in the stack would have to await a sender that maps the stopped channel into either a value or an error. That is achievable with the 
if (auto opt = co_await execution::stopped_as_optional(some_sender)) {
  // OK, some_sender completed successfully, and opt contains the result.
} else {
  // some_sender completed with a cancellation signal.
}
As described in the section "All awaitables are senders", the sender customization points recognize awaitables and adapt them transparently to model the sender concept. When an awaitable is connected to a receiver, the adaptation layer ensures that a stopped signal that escapes the awaitable results in a call to execution::set_stopped on that receiver.
Obviously, this behaviour relies on the coroutine promise types in the stack opting in to the library extension described above; when the stopped signal reaches a coroutine whose promise does not support it, std::terminate is called, consistent with the treatment of unhandled exceptions.
4.19. Composition with parallel algorithms
The C++ Standard Library provides a large number of algorithms that offer the potential for non-sequential execution via the use of execution policies. The set of algorithms with execution policy overloads are often referred to as "parallel algorithms", although additional policies are available.
Existing policies, such as execution::par, give the implementation permission to execute the algorithm in parallel, but say nothing about where that execution must take place.
We will propose a customization point for combining schedulers with policies in order to provide control over where work will execute.
template < class ExecutionPolicy > implementation - defined executing_on ( execution :: scheduler auto scheduler , ExecutionPolicy && policy ); 
This function would return an object of an implementation-defined type which can be used in place of an execution policy as the first argument to one of the parallel algorithms. The overload selected by that object should execute its computation as requested by policy, while using scheduler to create the execution agents on which the work runs.
The existing parallel algorithms are synchronous; all of the effects performed by the computation are complete before the algorithm returns to its caller. This remains unchanged with the executing_on customization point.
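A usage sketch, assuming the executing_on customization point proposed above and the thread_pool from earlier examples:

std::vector<int> data(1'000, 1);

std::for_each(
  execution::executing_on(thread_pool.scheduler(), std::execution::par),
  data.begin(), data.end(),
  [](int& x) { x *= 2; });  // runs on agents created by the thread pool's scheduler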
In the future, we expect additional papers will propose asynchronous forms of the parallel algorithms which (1) return senders rather than values or void and (2) can therefore compose with the other sender algorithms proposed in this paper.
4.20. User-facing sender factories
A sender factory is an algorithm that takes no senders as parameters and returns a sender.
4.20.1. execution :: schedule 
execution :: sender auto schedule ( execution :: scheduler auto scheduler ); 
Returns a sender describing the start of a task graph on the provided scheduler. See § 4.2 Schedulers represent execution contexts.
execution::scheduler auto sch1 = get_system_thread_pool().scheduler();
execution::sender auto snd1 = execution::schedule(sch1);
// snd1 describes the creation of a new task on the system thread pool
4.20.2. execution :: just 
execution::sender auto just(auto&&... values);
Returns a sender with no completion schedulers, which sends the provided values. The input values are decay-copied into the returned sender. When the returned sender is connected to a receiver, the values are moved into the operation state if the sender is an rvalue; otherwise, they are copied. Then xvalues referencing the values in the operation state are passed to the receiver’s set_value.
execution::sender auto snd1 = execution::just(3.14);
execution::sender auto then1 = execution::then(snd1, [](double d) {
  std::cout << d << "\n";
});

execution::sender auto snd2 = execution::just(3.14, 42);
execution::sender auto then2 = execution::then(snd2, [](double d, int i) {
  std::cout << d << ", " << i << "\n";
});

std::vector v3{1, 2, 3, 4, 5};
execution::sender auto snd3 = execution::just(v3);
execution::sender auto then3 = execution::then(snd3, [](std::vector<int>&& v3copy) {
  for (auto&& e : v3copy) { e *= 2; }
  return std::move(v3copy);
});
auto&& [v3copy] = this_thread::sync_wait(then3).value();
// v3 contains {1, 2, 3, 4, 5}; v3copy will contain {2, 4, 6, 8, 10}.

execution::sender auto snd4 = execution::just(std::vector{1, 2, 3, 4, 5});
execution::sender auto then4 = execution::then(std::move(snd4), [](std::vector<int>&& v4) {
  for (auto&& e : v4) { e *= 2; }
  return std::move(v4);
});
auto&& [v4] = this_thread::sync_wait(std::move(then4)).value();
// v4 contains {2, 4, 6, 8, 10}. No vectors were copied in this example.
4.20.3. execution :: transfer_just 
execution::sender auto transfer_just(
  execution::scheduler auto scheduler,
  auto&&... values);
Returns a sender whose value completion scheduler is the provided scheduler, and which sends the provided values in the same manner as execution::just.
execution::sender auto vals = execution::transfer_just(
  get_system_thread_pool().scheduler(),
  1, 2, 3);
execution::sender auto snd = execution::then(vals, [](auto... args) {
  std::print(args...);
});
// when snd is executed, it will print "123"
This adaptor is included as it greatly simplifies lifting values into senders.
4.20.4. execution :: just_error 
execution :: sender auto just_error ( auto && error ); 
Returns a sender with no completion schedulers, which completes with the specified error. If the provided error is an lvalue reference, a copy is made inside the returned sender and a non-const lvalue reference to the copy is sent to the receiver’s set_error.
4.20.5. execution :: just_stopped 
execution :: sender auto just_stopped (); 
Returns a sender with no completion schedulers, which completes immediately by calling the receiver’s set_stopped.
4.20.6. execution :: read 
execution::sender auto read(auto tag);

execution::sender auto get_scheduler() {
  return read(execution::get_scheduler);
}
execution::sender auto get_delegatee_scheduler() {
  return read(execution::get_delegatee_scheduler);
}
execution::sender auto get_allocator() {
  return read(execution::get_allocator);
}
execution::sender auto get_stop_token() {
  return read(execution::get_stop_token);
}
Returns a sender that reaches into a receiver’s environment, pulls out the current value associated with the customization point denoted by tag, and sends that value back to the receiver through the value channel. For example, get_scheduler() (with no arguments) is a sender that asks the receiver for the currently suggested scheduler and passes it on through its value channel.
This can be useful when scheduling nested dependent work. The following sender pulls the current scheduler into the value channel and then schedules more work onto it.
execution::sender auto task =
  execution::get_scheduler()
    | execution::let_value([](auto sched) {
        return execution::on(sched, some nested work here);
      });

this_thread::sync_wait(std::move(task)); // wait for it to finish
This code uses the fact that sync_wait associates a scheduler with the receiver that it connects to task: get_scheduler() reads that scheduler out of the receiver’s environment and sends it through the value channel, where let_value picks it up and uses it as the target for the nested work.
4.21. User-facing sender adaptors
A sender adaptor is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and returns a sender whose completion is related to the sender arguments it has received.
Sender adaptors are lazy; that is, they are never allowed to submit any work for execution prior to the returned sender being started later on, and they are also guaranteed not to start any input senders passed into them. The only algorithms proposed here that start senders are the § 4.21.13 execution::ensure_started adaptor and the sender consumers § 4.22.1 execution::start_detached and § 4.22.2 this_thread::sync_wait.
For a more implementer-centric description of starting senders, see § 5.5 Sender adaptors are lazy.
4.21.1. execution :: transfer 
execution :: sender auto transfer ( execution :: sender auto input , execution :: scheduler auto scheduler ); 
Returns a sender describing the transition from the execution agent of the input sender to the execution agent of the target scheduler. See § 4.6 Execution context transitions are explicit.
execution::scheduler auto cpu_sched = get_system_thread_pool().scheduler();
execution::scheduler auto gpu_sched = cuda::scheduler();

execution::sender auto cpu_task = execution::schedule(cpu_sched);
// cpu_task describes the creation of a new task on the system thread pool

execution::sender auto gpu_task = execution::transfer(cpu_task, gpu_sched);
// gpu_task describes the transition of the task graph described by cpu_task to the gpu
4.21.2. execution :: then 
execution::sender auto then(
  execution::sender auto input,
  std::invocable<values-sent-by(input)...> function);

Returns a sender describing the task graph described by the input sender, with an added node that invokes the provided function with the values sent by the input sender as arguments.

execution::sender auto input = get_input();
execution::sender auto snd = execution::then(input, [](auto... args) {
  std::print(args...);
});
// snd describes the work described by input
// followed by printing all of the values sent by input
This adaptor is included as it is necessary for writing any sender code that actually performs a useful function.
4.21.3. execution :: upon_ * 
execution::sender auto upon_error(
  execution::sender auto input,
  std::invocable<errors-sent-by(input)...> function);

execution::sender auto upon_stopped(
  execution::sender auto input,
  std::invocable auto function);

upon_error and upon_stopped are similar to then, but where then works with the values sent by the input sender, upon_error works with its errors, and upon_stopped is invoked when the input sender completes with the "stopped" signal.
4.21.4. execution :: let_ * 
execution::sender auto let_value(
  execution::sender auto input,
  std::invocable<values-sent-by(input)...> function);

execution::sender auto let_error(
  execution::sender auto input,
  std::invocable<errors-sent-by(input)...> function);

execution::sender auto let_stopped(
  execution::sender auto input,
  std::invocable auto function);

let_value is similar to then, except that the provided function must return a sender; the sender returned by let_value sends whatever that nested sender sends, and the values sent by the input sender are kept alive for the duration of the nested operation. let_error and let_stopped do the same for the error and stopped channels, respectively.
4.21.5. execution :: on 
execution :: sender auto on ( execution :: scheduler auto sched , execution :: sender auto snd ); 
Returns a sender which, when started, will start the provided sender on an execution agent belonging to the execution context associated with the provided scheduler. This returned sender has no completion schedulers.
4.21.6. execution :: into_variant 
execution :: sender auto into_variant ( execution :: sender auto snd ); 
Returns a sender which sends a variant of tuples of all the possible sets of types sent by the input sender. Senders can send multiple sets of values depending on runtime conditions; this is a helper function that turns them into a single variant value.
4.21.7. execution :: stopped_as_optional 
execution :: sender auto stopped_as_optional ( single - sender auto snd ); 
Returns a sender that maps the value channel from a T to an optional<decay_t<T>>, and maps the stopped channel to a value of an empty optional<decay_t<T>>.
4.21.8. execution :: stopped_as_error 
template < move_constructible Error > execution :: sender auto stopped_as_error ( execution :: sender auto snd , Error err ); 
Returns a sender that maps the stopped channel to an error of err.
4.21.9. execution :: bulk 
execution::sender auto bulk(
  execution::sender auto input,
  std::integral auto size,
  invocable<decltype(size), values-sent-by(input)...> function);
Returns a sender describing the task of invoking the provided function with every index in the provided shape along with the values sent by the input sender. The returned sender completes once all invocations have completed, or an error has occurred. If it completes by sending values, they are equivalent to those sent by the input sender.
No instance of function will begin executing until the returned sender is started. Each invocation of function runs in an execution agent whose forward progress guarantees are determined by the scheduler on which it is run; all agents created by a single use of bulk execute with the same guarantee.
The bulk algorithm is intended to be used where the number of execution agents to be created is known up front and provided via the size parameter; for computations where the shape depends on input data or runtime conditions, bulk can be combined with other adaptors, such as let_value, that deliver the shape at the appropriate time.
In this proposal, only integral types are used to specify the shape of the bulk section. We expect that future papers may wish to explore extensions of the interface to explore additional kinds of shapes, such as multi-dimensional grids, that are commonly used for parallel computing tasks.
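A minimal sketch of bulk in use, assuming the thread_pool from earlier examples; each index writes one element of a locally owned array, and sync_wait keeps the array alive until all invocations complete:

std::array<double, 128> data{};

execution::sender auto work =
    execution::schedule(thread_pool.scheduler())
  | execution::bulk(data.size(), [&data](std::size_t i) {
      data[i] = static_cast<double>(i) * 0.5;  // each index is processed by one invocation
    });

this_thread::sync_wait(std::move(work));  // data stays alive until all invocations finish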
4.21.10. execution :: split 
execution :: sender auto split ( execution :: sender auto sender ); 
If the provided sender is a multi-shot sender, returns that sender. Otherwise, returns a multi-shot sender which sends values equivalent to the values sent by the provided sender. See § 4.7 Senders can be either multi-shot or single-shot.
4.21.11. execution :: when_all 
execution::sender auto when_all(execution::sender auto... inputs);

execution::sender auto when_all_with_variant(execution::sender auto... inputs);
when_all returns a sender that completes once all of the input senders have completed, sending all of the values sent by the inputs, in the order in which the input senders were passed; it accepts only input senders that each send a single possible set of values. when_all_with_variant additionally accepts senders with multiple possible sets of sent values by first applying into_variant to each input. The returned sender has no completion schedulers.
See § 4.9 Senders are joinable.
execution::sender auto sends_1 = ...;
execution::sender auto sends_abc = ...;

execution::sender auto both = execution::when_all(sends_1, sends_abc);

execution::sender auto final = execution::then(both, [](auto... args) {
  std::cout << std::format("the two args: {}, {}", args...);
});
// when final executes, it will print "the two args: 1, abc"
4.21.12. execution :: transfer_when_all 
execution::sender auto transfer_when_all(
  execution::scheduler auto sched,
  execution::sender auto... inputs);

execution::sender auto transfer_when_all_with_variant(
  execution::scheduler auto sched,
  execution::sender auto... inputs);
Similar to § 4.21.11 execution::when_all, but returns a sender whose value completion scheduler is the provided scheduler.
See § 4.9 Senders are joinable.
4.21.13. execution :: ensure_started 
execution :: sender auto ensure_started ( execution :: sender auto sender ); 
Once ensure_started returns, it is known that the provided sender has been connected and start has been called on the resulting operation state (see § 5.2 Operation states represent work); in other words, the work described by the provided sender has been submitted for execution on the appropriate execution contexts. Returns a sender which completes when the provided sender completes, sending values equivalent to those of the provided sender.
If the returned sender is destroyed before execution::connect() is called, or if execution::connect() is called but the returned operation state is destroyed before execution::start() is called, then the already-running operation is detached: it continues running in the background and its eventual result is discarded.
Note that the application will need to make sure that resources are kept alive in the case that the operation detaches,
e.g. by holding a std::shared_ptr to those resources or otherwise ensuring that their lifetime extends until the operation has completed.
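A sketch of the intended usage, where fetch_from_network() stands for a hypothetical sender that completes with a single value and do_other_work() is a hypothetical function:

execution::sender auto started =
  execution::ensure_started(fetch_from_network());  // hypothetical sender; starts immediately

do_other_work();  // runs concurrently with the network operation

auto [response] = this_thread::sync_wait(std::move(started)).value();  // join and get the result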
4.22. User-facing sender consumers
A sender consumer is an algorithm that takes one or more senders, which it may execution::connect, as parameters, and whose return value is something other than a sender.
4.22.1. execution :: start_detached 
void start_detached ( execution :: sender auto sender ); 
Like ensure_started, but does not return a value; if the provided sender sends an error instead of values, std::terminate is called.
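A sketch of fire-and-forget usage, with write_log_entry standing for a hypothetical function and reusing the thread_pool from earlier examples:

execution::start_detached(
  execution::on(thread_pool.scheduler(),
                execution::just() | execution::then([] {
                  write_log_entry();  // hypothetical fire-and-forget work
                })));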
4.22.2. this_thread :: sync_wait 
auto sync_wait(execution::sender auto sender)
  requires (always-sends-same-values(sender))
  -> std::optional<std::tuple<values-sent-by(sender)>>;
this_thread::sync_wait blocks the current thread until the provided sender completes, and returns an optional tuple of the values it sent. If the provided sender sends an error instead of values, sync_wait throws that error as an exception, or rethrows the original exception if the error is of type std::exception_ptr.
If the provided sender sends the "stopped" signal instead of values, sync_wait returns an empty optional.
For an explanation of the requires clause, see § 5.8 All senders are typed. That section also describes a variant of this algorithm, sync_wait_with_variant, for senders that may send more than one set of values.
Note: This function is specified inside std::this_thread, and not inside execution, because it is a blocking operation tied to the calling thread rather than a description of asynchronous work.
4.23. execution :: execute 
   In addition to the three categories of functions presented above, we also propose to include a convenience function for fire-and-forget eager one-way submission of an invocable to a scheduler, to fulfil the role of one-way executors from P0443.
void execution::execute(execution::scheduler auto sched, std::invocable auto fn);
Submits the provided function for execution on the provided scheduler, as-if by:
auto snd = execution::schedule(sched);
auto work = execution::then(snd, fn);
execution::start_detached(work);
5. Design - implementer side
5.1. Receivers serve as glue between senders
A receiver is a callback that supports more than one channel. In fact, it supports three of them:
- 
     set_value, which is the moral equivalent of an operator() or a function call, and which expresses successful completion of the operation the sender represents; 
- 
     set_error, which expresses that an error has happened while scheduling the current work, executing the current work, or at some earlier point in the sender chain; and 
- 
     set_stopped, which expresses that the operation completed without succeeding (set_value) and without failing (set_error); this is typically used to signal that the operation stopped early because its result was no longer needed. 
Exactly one of these channels must be successfully (i.e. without an exception being thrown) invoked on a receiver before it is destroyed; if a call to set_value fails with an exception, either set_error or set_stopped must still be called on the same receiver. These requirements are collectively known as the receiver contract.
While the receiver interface may look novel, it is in fact very similar to the interface of std::promise, which provides the first two signals in the form of set_value and set_exception; the third channel is the main addition.
Receivers are not a part of the end-user-facing API of this proposal; they are necessary to allow unrelated senders to communicate with each other, but the only users who will interact with receivers directly are authors of senders.
Receivers are what is passed as the second argument to § 5.3 execution::connect.
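For illustration only, a hand-written receiver might look like the following sketch; it assumes the tag_invoke customization mechanism described in § 5.9 Ranges-style CPOs vs tag_invoke and the set_value_t, set_error_t and set_stopped_t tag types used elsewhere in this paper, and it handles just one value signature:

struct print_receiver {
  friend void tag_invoke(execution::set_value_t, print_receiver&&, int v) noexcept {
    std::cout << "value: " << v << '\n';   // success channel
  }
  friend void tag_invoke(execution::set_error_t, print_receiver&&, std::exception_ptr) noexcept {
    std::terminate();                       // this sketch does not handle errors
  }
  friend void tag_invoke(execution::set_stopped_t, print_receiver&&) noexcept {
    // the operation stopped before completing; nothing to do here
  }
};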
5.2. Operation states represent work
An operation state is an object that represents work. Unlike senders, it is not a chaining mechanism; instead, it is a concrete object that packages the work described by a full sender chain, ready to be executed. An operation state is neither movable nor
copyable, and its interface consists of a single algorithm: start, which serves as the submission point of the work represented by a given operation state.
Operation states are not a part of the user-facing API of this proposal; they are necessary for implementing sender consumers like this_thread::sync_wait and execution::start_detached, and knowledge of them is necessary to implement senders, so the only users who will interact with operation states directly are authors of senders and of sender algorithms.
The return value of § 5.3 execution::connect must satisfy the operation state concept.
5.3. execution :: connect 
execution::connect is a customization point which connects senders with receivers, resulting in an operation state that, once started, will ensure that the receiver contract of the receiver passed to connect is fulfilled:
execution::sender auto snd = some input sender;
execution::receiver auto rcv = some receiver;
execution::operation_state auto state = execution::connect(snd, rcv);

execution::start(state);
// at this point, it is guaranteed that the work represented by state has been submitted
// to an execution context, and that execution context will eventually fulfill the
// receiver contract of rcv

// operation states are not movable, and therefore this operation state object must be
// kept alive until the operation finishes
5.4. Sender algorithms are customizable
Senders being able to advertise what their completion schedulers are fulfills one of the promises of senders: that of being able to customize an implementation of a sender algorithm based on what scheduler any work it depends on will complete on.
The simple way to provide customizations for functions like then, that is, for sender adaptors and sender consumers, is to follow the customization scheme adopted for the C++20 ranges library: dispatch, in order, to
- 
     sender.then(invocable), if that expression is well-formed; otherwise 
- 
     then(sender, invocable), found by argument-dependent lookup, if that expression is well-formed; otherwise 
- 
     a default implementation of then, which returns a standard-library sender adaptor. 
However, this definition is problematic. Imagine another sender adaptor, bulk, whose default implementation is a simple loop, but which a GPU runtime would like to customize so that it launches a kernel of many threads instead. With the scheme above, such a customization is found based on the type of the sender that bulk is called on, which leads to trouble in examples like the following:
execution::scheduler auto cuda_sch = cuda_scheduler{};

execution::sender auto initial = execution::schedule(cuda_sch);
// the type of initial is a type defined by the cuda_scheduler
// let's call it cuda::schedule_sender<>

execution::sender auto next = execution::then(initial, []{ return 1; });
// the type of next is a standard-library implementation-defined sender adaptor
// that wraps the cuda sender
// let's call it execution::then_sender_adaptor<cuda::schedule_sender<>>

execution::sender auto kernel_sender = execution::bulk(next, shape, [](int i){ ... });
How can we specialize the bulk sender algorithm for our cuda::schedule_sender<>? With the scheme above, we could provide a free function found by argument-dependent lookup:
namespace cuda::for_adl_purposes {
template<typename... SentValues>
class schedule_sender {
  execution::operation_state auto connect(execution::receiver auto rcv);
  execution::scheduler auto get_completion_scheduler() const;
};

execution::sender auto bulk(
  execution::sender auto&& input,
  execution::shape auto&& shape,
  invocable<sender-values(input)> auto&& fn)
{
  // return a cuda sender representing a bulk kernel launch
}
} // namespace cuda::for_adl_purposes
However, if the input sender is not simply a cuda::for_adl_purposes::schedule_sender, but a sender adaptor from some other library that merely wraps one (such as the execution::then_sender_adaptor<cuda::schedule_sender<>> in the example above), this customization need not be found, because the wrapping sender’s type need not be associated with namespace cuda::for_adl_purposes.
This means that well-meant specialization of sender algorithms that are entirely scheduler-agnostic can have negative consequences. The scheduler-specific specialization - which is essential for good performance on platforms providing specialized ways to launch certain sender algorithms - would not be selected in such cases. But it’s really the scheduler that should control the behavior of sender algorithms when a non-default implementation exists, not the sender. Senders merely describe work; schedulers, however, are the handle to the runtime that will eventually execute said work, and should thus have the final say in how the work is going to be executed.
Therefore, we are proposing the following customization scheme (also modified to take § 5.9 Ranges-style CPOs vs tag_invoke into account): the expression execution::<sender-algorithm>(sender, args...), for any sender algorithm that accepts a sender as its first argument, should be equivalent to the first well-formed expression among:
- 
     tag_invoke(<sender-algorithm>, get_completion_scheduler<Signal>(sender), sender, args...), if that expression is well-formed; otherwise 
- 
     tag_invoke(<sender-algorithm>, sender, args...), if that expression is well-formed; otherwise 
- 
     a default implementation, if there exists a default implementation of the given sender algorithm. 
where <sender-algorithm> is the tag (customization point object) of the sender algorithm being invoked, and Signal is the completion signal the algorithm is concerned with: set_error_t for algorithms operating on the error channel, set_stopped_t for algorithms operating on the stopped channel, and set_value_t otherwise.
For sender algorithms which accept concepts other than sender as their first argument (for instance execution::schedule, which accepts a scheduler), we propose that the customization scheme remain a plain tag_invoke customization on that first argument.
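As a sketch of the first bullet above, a scheduler author might provide a customization of bulk that is selected when the input sender’s value completion scheduler is their scheduler type; the types shown here are hypothetical and assume the bulk_t tag type follows the set_value_t naming convention used earlier:

struct my_gpu_scheduler { /* ... */ };  // hypothetical scheduler type

template <execution::sender S, std::integral Shape, class Fn>
auto tag_invoke(execution::bulk_t, my_gpu_scheduler sch, S&& snd, Shape size, Fn fn) {
  // Selected when get_completion_scheduler<set_value_t>(snd) is a my_gpu_scheduler:
  // return a sender that appends a kernel launch of `size` agents running `fn`
  // to the work described by `snd`.
  return my_gpu_bulk_sender{sch, std::forward<S>(snd), size, std::move(fn)};  // hypothetical sender type
}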
5.5. Sender adaptors are lazy
Contrary to early revisions of this paper, we propose to make all sender adaptors perform strictly lazy submission, unless specified otherwise (the one notable exception in this paper is § 4.21.13 execution::ensure_started, whose sole purpose is to start an input sender).
Strictly lazy submission means that there is a guarantee that no work is submitted to an execution context before a receiver is connected to a sender, and execution::start is called on the resulting operation state.
5.6. Lazy senders provide optimization opportunities
Because lazy senders fundamentally describe work, instead of describing or representing the submission of said work to an execution context, and thanks to the flexibility of the customization of most sender algorithms, they provide an opportunity for fusing multiple algorithms in a sender chain together, into a single function that can later be submitted for execution by an execution context. There are two ways this can happen.
The first (and most common) way for such optimizations to happen is thanks to the structure of the implementation: because all the work is done within callbacks invoked on the completion of an earlier sender, recursively up to the original source of computation, the compiler is able to see a chain of work described using senders as a tree of tail calls, allowing for inlining and removal of most of the sender machinery. In fact, when work is not submitted to execution contexts outside of the current thread of execution, compilers are capable of removing the senders abstraction entirely, while still allowing for composition of functions across different parts of a program.
The second way for this to occur is when a sender algorithm is specialized for a specific set of arguments. For instance, we expect that, for senders which are known to have been started already, § 4.21.13 execution::ensure_started will be an identity transformation, because the sender algorithm will be specialized for such senders. Similarly, an implementation could recognize two subsequent § 4.21.9 execution::bulks of compatible shapes, and merge them together into a single submission of a GPU kernel.
5.7. Execution context transitions are two-step
Because execution::transfer takes a sender as its first argument, its customization (per § 5.4 Sender algorithms are customizable) is controlled by the scheduler that the input sender completes on. That scheduler therefore gets a say in how the transition away from its execution context is performed, which matters for execution contexts, such as GPUs, where leaving the context requires specific runtime calls.
This, however, is a problem: because customization of sender algorithms must be controlled by the scheduler they will run on (see § 5.4 Sender algorithms are customizable), the type of the sender returned from transfer must also be controllable by the target scheduler, which may itself need to perform specialized work (such as runtime submission calls or data transfers) to move execution onto its own execution context.
To allow for such customization from both ends, we propose the inclusion of a secondary transitioning sender adaptor, called schedule_from. This adaptor is not meant to be used by end users directly; it is customized by the target scheduler, while transfer itself is customized by the scheduler the input sender completes on.
The default implementation of transfer(snd, sched) is schedule_from(sched, snd).
5.8. All senders are typed
All senders must advertise the types they will send when they complete.
This is necessary for a number of features, and writing code in a way that is
agnostic of whether an input sender is typed or not in common sender adaptors
such as execution::then is hard.
The mechanism for this advertisement is similar to the one in A Unified Executors Proposal for C++; the
way to query the types is through the sender’s traits: value_types, error_types, and sends_stopped.
There’s a choice made in the specification of § 4.22.2 this_thread::sync_wait: it returns a tuple of values sent by the
sender passed to it, wrapped in std::optional to handle the case of the sender completing with the stopped signal. However, this assumes that those values can, in fact, be represented as a tuple, like here:
execution::sender auto sends_1 = ...;
execution::sender auto sends_2 = ...;
execution::sender auto sends_3 = ...;

auto [a, b, c] = this_thread::sync_wait(
  execution::transfer_when_all(
    execution::get_completion_scheduler<execution::set_value_t>(sends_1),
    sends_1,
    sends_2,
    sends_3)).value();
// a == 1
// b == 2
// c == 3
This works well for senders that always send the same set of arguments. If we ignore the possibility of having a sender that sends different sets of arguments into a receiver, we can specify the "canonical" (i.e. required to be followed by all senders) form of value_types of a sender which sends Types... as follows:
template<template<typename ...> typename TupleLike>
using value_types = TupleLike<Types...>;
If senders could only ever send one specific set of values, this would probably need to be the required form of value_types; however, senders are permitted to send multiple different sets of values, depending on runtime conditions, the data they consumed, and so on. To accommodate this, value_types takes an additional template parameter, a variant-like template, into which the alternative sets of values are placed.
This matter is somewhat complicated by the fact that (1) set_value on receivers can be overloaded to accept different sets of arguments, and (2) senders are allowed to send multiple different sets of values. The canonical form of value_types for a sender that may send Types1..., Types2..., ..., or Typesn... is therefore as follows:
template<template<typename ...> typename TupleLike,
         template<typename ...> typename VariantLike>
using value_types = VariantLike<
  TupleLike<Types1...>,
  TupleLike<Types2...>,
  ...,
  TupleLike<Typesn...>
>;
This, however, introduces a couple of complications:
- 
     A simple sender that always sends exactly one set of values, such as just(1), must still advertise its value_types wrapped in the variant-like template, i.e. as std::variant<std::tuple<int>>, even though the variant layer carries no information for such a sender. 
- 
     As a consequence of (1): because sync_wait is specified to return a std::optional of a single tuple, such as std::tuple<int> for just(1), rather than std::variant<std::tuple<int>>, sync_wait cannot, as specified, support senders that may send more than one set of values. 
One possible solution to (2) above is to place a requirement on sync_wait that the sender passed to it must always send the same set of values (the always-sends-same-values constraint seen in its signature), and to provide an additional consumer, sync_wait_with_variant, which accepts senders with multiple possible sets of values and returns a variant of tuples:
auto sync_wait_with_variant(execution::sender auto sender)
  -> std::optional<std::variant<
       std::tuple<values0-sent-by(sender)>,
       std::tuple<values1-sent-by(sender)>,
       ...,
       std::tuple<valuesn-sent-by(sender)>
     >>;

auto sync_wait(execution::sender auto sender)
  requires (always-sends-same-values(sender))
  -> std::optional<std::tuple<values-sent-by(sender)>>;
5.9. Ranges-style CPOs vs tag_invoke 
The contemporary technique for customization in the Standard Library is customization point objects. A customization point object, when invoked, looks for member functions and then for non-member functions with the same name as the customization point, and calls those if they match. This is the technique used by the C++20 ranges library, and previous executors proposals (A Unified Executors Proposal for C++ and Towards C++23 executors: A proposal for an initial set of algorithms) intended to use it as well. However, it has several unfortunate consequences:
- 
     It does not allow for easy propagation of customization points unknown to the adaptor to a wrapped object, which makes writing universal adapter types much harder - and this proposal uses quite a lot of those. 
- 
     It effectively reserves names globally. Because neither member names nor ADL-found functions can be qualified with a namespace, every customization point object that uses the ranges scheme reserves the name for all types in all namespaces. This is unfortunate due to the sheer number of customization points already in the paper, but also ones that we are envisioning for the future. It’s also a big problem for one of the operations being proposed already: sync_wait. If C++ were to gain fibers, for example, we would want a std::this_fiber::sync_wait in addition to std::this_thread::sync_wait, but with the ranges scheme both would have to share a single globally reserved customization name. 
This paper proposes to instead use the mechanism described in tag_invoke: A general pattern for supporting customisable functions: the tag_invoke customization scheme, whose proposed wording is incorporated into this paper (see § 8 General utilities library [utilities]).
In short, instead of using globally reserved names, tag_invoke uses the type of the customization point object itself as the mandatory first argument to all of its customizations; in other words, the name of every customization function is tag_invoke, and the specific customization point being customized is identified by the type of its first argument.
Using tag_invoke has the following benefits:
- 
     It reserves only a single global name, instead of reserving a global name for every customization point object we define. 
- 
     It is possible to propagate customizations to a subobject, because the information of which customization point is being resolved is in the type of an argument, and not in the name of the function: 

// forward most customizations to a subobject
template <typename Tag, typename... Args>
friend auto tag_invoke(Tag&& tag, wrapper& self, Args&&... args) {
  return std::forward<Tag>(tag)(self.subobject, std::forward<Args>(args)...);
}

// but override one of them with a specific value
friend auto tag_invoke(specific_customization_point_t, wrapper& self) {
  return self.some_value;
}
- 
     It is possible to pass customization point objects as template arguments to types, because the information of which customization point is being resolved is in the type. Similarly to how A Unified Executors Proposal for C++ defines a polymorphic executor wrapper which accepts a list of properties it supports, we can imagine scheduler and sender wrappers that accept a list of queries and operations they support. That list can contain the types of the customization point objects, and the polymorphic wrappers can then specialize those customization points on themselves using tag_invoke; the unifex::any_unique type-erasing wrapper in libunifex is an example of this technique. 
6. Specification
Much of this wording follows the wording of A Unified Executors Proposal for C++.
§ 7 Library introduction [library] is meant to be a diff relative to the wording of the [library] clause of Working Draft, Standard for Programming Language C++.
§ 8 General utilities library [utilities] is meant to be a diff relative to the wording of the [utilities] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from tag_invoke: A general pattern for supporting customisable functions.
§ 9 Thread support library [thread] is meant to be a diff relative to the wording of the [thread] clause of Working Draft, Standard for Programming Language C++. This diff applies changes from Composable cancellation for sender-based async operations.
§ 10 Execution control library [exec] is meant to be added as a new library clause to the working draft of C++.
7. Library introduction [library]
[Editorial: Add the header <execution> to the list of C++ library headers.] 
In subclause [conforming], after [lib.types.movedfrom], add the following new subclause with suggested stable name [lib.tmpl-heads].
16.4.6.17 Class template-heads
If a class template’s template-head is marked with "arguments are not associated entities", any template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. In such a case, the class template may be implemented as an alias template referring to a templated class, or as a class template where the template arguments themselves are templated classes.
[Example:
     template<class T>   // arguments are not associated entities
       struct S {};

     namespace N {
       int f(auto);
       struct A {};
     }

     int x = f(S<N::A>{});   // error: N::f not a candidate

The template S specified above may be implemented as

     template<class T>
       struct s-impl {
         struct type {};
       };

     template<class T>
       using S = typename s-impl<T>::type;

or as

     template<class T>
       struct hidden {
         using type = struct _ {
           using type = T;
         };
       };

     template<class HiddenT>
       struct s-impl {
         using T = typename HiddenT::type;
       };

     template<class T>
       using S = s-impl<typename hidden<T>::type>;

-- end example]
8. General utilities library [utilities]
8.1. Function objects [function.objects]
8.1.1. Header < functional > 
At the end of this subclause, insert the following declarations into the <functional> synopsis, within namespace std: 
// Exposition-only:
template<class Fn, class... Args>
  concept callable =
    requires (Fn&& fn, Args&&... args) {
      std::forward<Fn>(fn)(std::forward<Args>(args)...);
    };

template<class Fn, class... Args>
  concept nothrow-callable =
    callable<Fn, Args...> &&
    requires (Fn&& fn, Args&&... args) {
      { std::forward<Fn>(fn)(std::forward<Args>(args)...) } noexcept;
    };

template<class Fn, class... Args>
  using call-result-t = decltype(declval<Fn>()(declval<Args>()...));

// [func.tag_invoke], tag_invoke
namespace tag-invoke { // exposition only
  void tag_invoke();

  template<class Tag, class... Args>
    concept tag_invocable =
      requires (Tag&& tag, Args&&... args) {
        tag_invoke(std::forward<Tag>(tag), std::forward<Args>(args)...);
      };

  template<class Tag, class... Args>
    concept nothrow_tag_invocable =
      tag_invocable<Tag, Args...> &&
      requires (Tag&& tag, Args&&... args) {
        { tag_invoke(std::forward<Tag>(tag), std::forward<Args>(args)...) } noexcept;
      };

  template<class Tag, class... Args>
    using tag_invoke_result_t = decltype(tag_invoke(declval<Tag>(), declval<Args>()...));

  template<class Tag, class... Args>
    struct tag_invoke_result {};

  template<class Tag, class... Args>
      requires tag_invocable<Tag, Args...>
    struct tag_invoke_result<Tag, Args...> {
      using type = tag_invoke_result_t<Tag, Args...>;
    };

  struct tag; // exposition only
}

inline constexpr tag-invoke::tag tag_invoke {};

using tag-invoke::tag_invocable;
using tag-invoke::nothrow_tag_invocable;
using tag-invoke::tag_invoke_result_t;
using tag-invoke::tag_invoke_result;

template<auto& Tag>
  using tag_t = decay_t<decltype(Tag)>;
8.1.2. tag_invoke 
Insert this section as a new subclause, between Searchers [func.search] and Class template hash [unord.hash]. 
The name std::tag_invoke denotes a customization point object ([customization.point.object]). Given subexpressions T and A..., the expression std::tag_invoke(T, A...) is expression-equivalent ([defns.expression-equivalent]) to tag_invoke(T, A...) if that expression is well-formed, with overload resolution performed in a context in which unqualified lookup for tag_invoke finds only the declaration

     void tag_invoke();

Otherwise, std::tag_invoke(T, A...) is ill-formed.
[Note: Diagnosable ill-formed cases above result in substitution failure when std::tag_invoke(T, A...) appears in the immediate context of a template instantiation. —end note]
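As a non-normative illustration of the facilities specified above: a hypothetical customization point object my_query dispatches through std::tag_invoke, and an illustrative type widget customizes it with a hidden-friend tag_invoke overload. The sketch assumes the <functional> additions from § 8.1.1.

     #include <functional>   // assumed to provide std::tag_invoke et al. per [func.tag_invoke]

     struct my_query_t {
       template <class T>
         requires std::tag_invocable<my_query_t, const T&>
       auto operator()(const T& t) const
           noexcept(std::nothrow_tag_invocable<my_query_t, const T&>)
           -> std::tag_invoke_result_t<my_query_t, const T&> {
         return std::tag_invoke(*this, t);
       }
     };
     inline constexpr my_query_t my_query{};

     struct widget {
       int id = 7;
       // the customization: an ADL-findable tag_invoke overload taking the CPO's type
       friend int tag_invoke(my_query_t, const widget& w) noexcept { return w.id; }
     };

     static_assert(std::tag_invocable<my_query_t, const widget&>);
     // my_query(widget{}) == 7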
9. Thread support library [thread]
9.1. Stop tokens [thread.stoptoken]
9.1.1. Header < stop_token > 
At the beginning of this subclause, insert the following declarations into the <stop_token> synopsis, within namespace std: 
template<template<typename> class>
  struct check-type-alias-exists; // exposition only

template<typename T>
  concept stoppable_token = see below;

template<typename T, typename CB, typename Initializer = CB>
  concept stoppable_token_for = see below;

template<typename T>
  concept unstoppable_token = see below;
At the end of this subclause, insert the following declarations into the <stop_token> synopsis, within namespace std: 
// [stoptoken.never], class never_stop_token
class never_stop_token;

// [stoptoken.inplace], class in_place_stop_token
class in_place_stop_token;

// [stopsource.inplace], class in_place_stop_source
class in_place_stop_source;

// [stopcallback.inplace], class template in_place_stop_callback
template<typename Callback>
  class in_place_stop_callback;
9.1.2. Stop token concepts [thread.stoptoken.concepts]
Insert this section as a new subclause between Header <stop_token> synopsis [thread.stoptoken.syn] and Class stop_token [stoptoken]. 
The stoppable_token concept checks for the basic interface of a “stop token”, which is copyable and allows polling to see if stop has been requested and also whether a stop request is possible. It also requires an associated nested template-type-alias, T::callback_type<CB>, that identifies the stop-callback type to use to register a callback to be executed if a stop-request is ever made on a stoppable_token of type T. The stoppable_token_for concept checks for a stop token type compatible with a given callback type. The unstoppable_token concept checks for a stop token type that does not allow stopping.

     template<typename T>
       concept stoppable_token =
         copy_constructible<T> &&
         move_constructible<T> &&
         is_nothrow_copy_constructible_v<T> &&
         is_nothrow_move_constructible_v<T> &&
         equality_comparable<T> &&
         requires (const T& token) {
           { token.stop_requested() } noexcept -> boolean-testable;
           { token.stop_possible() } noexcept -> boolean-testable;
           typename check-type-alias-exists<T::template callback_type>;
         };

     template<typename T, typename CB, typename Initializer = CB>
       concept stoppable_token_for =
         stoppable_token<T> &&
         invocable<CB> &&
         requires {
           typename T::template callback_type<CB>;
         } &&
         constructible_from<CB, Initializer> &&
         constructible_from<typename T::template callback_type<CB>, T, Initializer> &&
         constructible_from<typename T::template callback_type<CB>, T&, Initializer> &&
         constructible_from<typename T::template callback_type<CB>, const T, Initializer> &&
         constructible_from<typename T::template callback_type<CB>, const T&, Initializer>;

     template<typename T>
       concept unstoppable_token =
         stoppable_token<T> &&
         requires {
           { T::stop_possible() } -> boolean-testable;
         } &&
         (!T::stop_possible());
Let t and u be distinct objects of type T. The type T models stoppable_token only if:

- All copies of a stoppable_token reference the same logical shared stop state and shall report values consistent with each other.
- If t.stop_possible() evaluates to false then, if u references the same logical shared stop state, u.stop_possible() shall also subsequently evaluate to false and u.stop_requested() shall also subsequently evaluate to false.
- If t.stop_requested() evaluates to true then, if u references the same logical shared stop state, u.stop_requested() shall also subsequently evaluate to true and u.stop_possible() shall also subsequently evaluate to true.
- Given a callback-type, CB, and a callback-initializer argument, init, of type Initializer, then constructing an instance, cb, of type T::callback_type<CB>, passing t as the first argument and init as the second argument to the constructor, shall, if t.stop_possible() is true, construct an instance, callback, of type CB, direct-initialized with init, and register callback with t’s shared stop state such that callback will be invoked with an empty argument list if a stop request is made on the shared stop state.
- If t.stop_requested() is true at the time callback is registered then callback may be invoked immediately inline inside the call to cb’s constructor.
- If callback is invoked then, if u references the same shared stop state as t, an evaluation of u.stop_requested() will be true if the beginning of the invocation of callback strongly-happens-before the evaluation of u.stop_requested().
- If t.stop_possible() evaluates to false then the construction of cb is not required to construct and initialize callback.
- Construction of a T::callback_type<CB> instance shall only throw exceptions thrown by the initialization of the CB instance from the value of type Initializer.
- Destruction of the T::callback_type<CB> object, cb, removes callback from the shared stop state such that callback will not be invoked after the destructor returns.
- If callback is currently being invoked on another thread then the destructor of cb will block until the invocation of callback returns, such that the return from the invocation of callback strongly-happens-before the destruction of callback.
- Destruction of a callback cb shall not block on the completion of the invocation of some other callback registered with the same shared stop state.
9.1.3. Class stop_token 
   9.1.3.1. General [stoptoken.general]
Modify the synopsis of class stop_token in [stoptoken.general] as follows: 
namespace std {
  class stop_token {
  public:
    template<class T>
      using callback_type = stop_callback<T>;

    // [stoptoken.cons], constructors, copy, and assignment
    stop_token() noexcept;
    // ...
9.1.4. Class never_stop_token 
Insert a new subclause, Class never_stop_token [stoptoken.never]: 
9.1.4.1. General [stoptoken.never.general]
- The class never_stop_token models the unstoppable_token concept. It provides a stop token interface on which a stop request can never be made: its stop_requested() and stop_possible() member functions always return false.
namespace std {
  class never_stop_token {
    // exposition only
    struct callback {
      explicit callback(never_stop_token, auto&&) noexcept {}
    };

  public:
    template<class>
      using callback_type = callback;

    static constexpr bool stop_requested() noexcept { return false; }
    static constexpr bool stop_possible() noexcept { return false; }
  };
}
9.1.5. Class in_place_stop_token 
Insert a new subclause, Class in_place_stop_token [stoptoken.inplace]: 
9.1.5.1. General [stoptoken.inplace.general]
- The class in_place_stop_token provides an interface for querying whether a stop request has been made (stop_requested) or can ever be made (stop_possible) on an associated in_place_stop_source object ([stopsource.inplace]). An in_place_stop_token can be passed to an in_place_stop_callback ([stopcallback.inplace]) to register a callback to be invoked when a stop request is made on the associated in_place_stop_source.
namespace std {
  class in_place_stop_token {
  public:
    template<class CB>
      using callback_type = in_place_stop_callback<CB>;

    // [stoptoken.inplace.cons], constructors, copy, and assignment
    in_place_stop_token() noexcept;
    ~in_place_stop_token();

    void swap(in_place_stop_token&) noexcept;

    // [stoptoken.inplace.mem], stop handling
    [[nodiscard]] bool stop_requested() const noexcept;
    [[nodiscard]] bool stop_possible() const noexcept;

    [[nodiscard]] bool operator==(const in_place_stop_token&) const noexcept = default;
    friend void swap(in_place_stop_token& lhs, in_place_stop_token& rhs) noexcept;

  private:
    friend class in_place_stop_source;
    const in_place_stop_source* source_; // exposition only
    explicit in_place_stop_token(const in_place_stop_source* source) noexcept;
  };
}
9.1.5.2. Constructors, copy, and assignment [stoptoken.inplace.cons]
in_place_stop_token() noexcept;

- Effects: Initializes source_ with nullptr.

explicit in_place_stop_token(const in_place_stop_source* source) noexcept;

- Effects: Initializes source_ with source.

void swap(in_place_stop_token& rhs) noexcept;

- Effects: Exchanges the values of source_ and rhs.source_.
9.1.5.3. Members [stoptoken.inplace.mem]
[[ nodiscard ]] bool stop_requested () const noexcept ; 
- 
     Returns: source_ != nullptr && source_ -> stop_requested () 
- 
     Remarks: If source_ != nullptr * source_ 
[[ nodiscard ]] bool stop_possible () const noexcept ; 
- 
     Returns: source_ != nullptr && source_ -> stop_possible () 
- 
     Remarks: If source_ != nullptr * source_ 
9.1.5.4. Non-member functions [stoptoken.inplace.nonmembers]
friend void swap ( in_place_stop_token & x , in_place_stop_token & y ) noexcept ; 
- 
     Effects: Equivalent to: x . swap ( y ) 
9.1.6. Class in_place_stop_source 
Insert a new subclause, Class in_place_stop_source [stopsource.inplace]: 
9.1.6.1. General [stopsource.inplace.general]
- The class in_place_stop_source implements the semantics of making a stop request without dynamically allocating a shared stop state: the stop state is owned by the in_place_stop_source object itself. Associated in_place_stop_token objects are obtained by calling get_token(); a stop request made on an in_place_stop_source object is visible to all in_place_stop_token objects obtained from it, and such tokens must not outlive the in_place_stop_source object they were obtained from.
namespace std {
  class in_place_stop_source {
  public:
    // [stopsource.inplace.cons], constructors, copy, and assignment
    in_place_stop_source() noexcept;
    in_place_stop_source(in_place_stop_source&&) noexcept = delete;
    ~in_place_stop_source();

    // [stopsource.inplace.mem], stop handling
    [[nodiscard]] in_place_stop_token get_token() const noexcept;
    [[nodiscard]] bool stop_possible() const noexcept;
    [[nodiscard]] bool stop_requested() const noexcept;
    bool request_stop() noexcept;
  };
}
9.1.6.2. Constructors, copy, and assignment [stopsource.inplace.cons]
in_place_stop_source () noexcept ; 
- 
     Effects: Initializes a new stop state inside * this 
- 
Postconditions: stop_possible() is true and stop_requested() is false.
9.1.6.3. Members [stopsource.inplace.mem]
[[ nodiscard ]] in_place_stop_token get_token () const noexcept ; 
- 
     Returns: in_place_stop_token { this } 
[[ nodiscard ]] bool stop_possible () const noexcept ; 
- 
     Returns: trueif the stop state inside* this false.
[[ nodiscard ]] bool stop_requested () const noexcept ; 
- 
     Returns: trueif the stop state inside* this false.
bool request_stop () noexcept ; 
- 
     Effects: Atomically determines whether the stop state inside * this in_place_stop_callback terminate 
- 
     Postconditions: stop_possible () falseandstop_requested () 
- 
     Returns: trueif this call made a stop request; otherwisefalse.
9.1.7. Class template in_place_stop_callback 
Insert a new subclause, Class template in_place_stop_callback [stopcallback.inplace]: 
9.1.7.1. General [stopcallback.inplace.general]
- 
namespace std {
  template<class Callback>
    class in_place_stop_callback {
    public:
      using callback_type = Callback;

      // [stopcallback.inplace.cons], constructors and destructor
      template<class C>
        explicit in_place_stop_callback(in_place_stop_token st, C&& cb)
          noexcept(is_nothrow_constructible_v<Callback, C>);
      ~in_place_stop_callback();

      in_place_stop_callback(in_place_stop_callback&&) = delete;

    private:
      Callback callback_; // exposition only
    };

  template<class Callback>
    in_place_stop_callback(in_place_stop_token, Callback)
      -> in_place_stop_callback<Callback>;
}
- Mandates: in_place_stop_callback is instantiated with an argument for the template parameter Callback that satisfies both invocable and destructible.
- Preconditions: in_place_stop_callback is instantiated with an argument for the template parameter Callback that models both invocable and destructible.
- Recommended practice: Implementations should use the storage of the in_place_stop_callback object itself to store any state needed to register it with the associated in_place_stop_source, so that registration does not allocate.
9.1.7.2. Constructors and destructor [stopcallback.inplace.cons]
template<class C>
  explicit in_place_stop_callback(in_place_stop_token st, C&& cb)
    noexcept(is_nothrow_constructible_v<Callback, C>);
- Constraints: constructible_from<Callback, C> is satisfied.
- Preconditions: Callback and C model constructible_from<Callback, C>.
- 
     Effects: Initializes callback_ std :: forward < C > ( cb ) st . stop_requested () true, thenstd :: forward < Callback > ( callback_ )() st in_place_stop_source in_place_stop_source st std :: forward < Callback > ( callback_ )() request_stop () in_place_stop_source in_place_stop_callback in_place_stop_source st 
- 
     Throws: Any exception thrown by the initialization of callback_ 
- 
Remarks: If evaluating std::forward<Callback>(callback_)() exits via an exception, terminate is invoked ([except.terminate]).
~ in_place_stop_callback (); 
- 
     Effects: Unregisters the callback from the stop state of the associated in_place_stop_source callback_ callback_ callback_ callback_ callback_ 
- 
Remarks: A program has undefined behavior if the invocation of this function does not strongly happen before the beginning of the invocation of the destructor of the associated in_place_stop_source object.
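A brief, non-normative usage sketch of the facilities specified in this subclause; it assumes the <stop_token> additions above.

     #include <stop_token>   // assumed to provide the in_place_* facilities proposed above
     #include <cassert>

     int main() {
       std::in_place_stop_source src;                  // owns the stop state
       std::in_place_stop_token tok = src.get_token();
       assert(tok.stop_possible() && !tok.stop_requested());

       bool observed = false;
       // registers the callback with src's stop state (uses the deduction guide above)
       std::in_place_stop_callback cb(tok, [&]() noexcept { observed = true; });

       src.request_stop();                             // invokes the registered callback
       assert(tok.stop_requested() && observed);
     }                                                 // cb is destroyed before src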
10. Execution control library [exec]
- 
     This Clause describes components supporting execution of function objects [function.objects]. 
- 
     The following subclauses describe the requirements, concepts, and components for execution control primitives as summarized in Table 1. 
| Subclause | Header |
|---|---|
| [exec.execute] One-way execution | <execution> |
- 
[Note: A large number of execution control primitives are customization point objects. For an object, one might define multiple types of customization point objects, for which different rules apply. Table 2 shows the types of customization point objects used in the execution control library: 
| Customization point object type | Purpose | Examples |
|---|---|---|
| core | provide core execution functionality, and connection between core components | schedule, connect, start |
| completion signals | called by senders to announce the completion of the work (success, error, or cancellation) | set_value, set_error, set_stopped |
| senders | allow the specialization of the provided sender algorithms | |
| general queries | allow querying different properties of execution objects | get_scheduler, get_delegatee_scheduler, get_allocator, get_stop_token |
| scheduler queries | allow querying schedulers' properties | get_forward_progress_guarantee, execute_may_block_caller |
| sender queries | allow querying senders' properties | get_completion_scheduler |
-- end note]
10.1. Header < execution > 
namespace std :: execution { // [exec.helpers], helper concepts template < class T > concept movable - value = see - below ; // exposition only template < class From , class To > concept decays - to = same_as < decay_t < From > , To > ; // exposition only template < class T > concept class - type = decays - to < T , T > && is_class_v < T > ; // exposition only // [exec.queries], general queries namespace general - queries { // exposition only struct get_scheduler_t ; struct get_delegatee_scheduler_t ; struct get_allocator_t ; struct get_stop_token_t ; } using general - queries :: get_scheduler_t ; using general - queries :: get_delegatee_scheduler_t ; using general - queries :: get_allocator_t ; using general - queries :: get_stop_token_t ; inline constexpr get_scheduler_t get_scheduler {}; inline constexpr get_delegatee_scheduler_t get_delegatee_scheduler {}; inline constexpr get_allocator_t get_allocator {}; inline constexpr get_stop_token_t get_stop_token {}; template < class T > using stop_token_of_t = remove_cvref_t < decltype ( get_stop_token ( declval < T > ())) > ; // [exec.env], execution environments namespace exec - envs { // exposition only struct no_env ; struct empty - env {}; // exposition only struct get_env_t ; struct forwarding_env_query_t ; } using exec - envs :: no_env ; using exec - envs :: empty - env ; using exec - envs :: get_env_t ; using exec - envs :: forwarding_env_query_t ; inline constexpr get_env_t get_env {}; inline constexpr forwarding_env_query_t forwarding_env_query {}; template < class T > concept forwarding - env - query = // exposition only forwarding_env_query ( T {}); template < class T > using env_of_t = decltype ( get_env ( declval < T > ())); // [exec.sched], schedulers template < class S > concept scheduler = see - below ; // [exec.sched_queries], scheduler queries enum class forward_progress_guarantee ; namespace schedulers - queries { // exposition only struct forwarding_scheduler_query_t ; struct get_forward_progress_guarantee_t ; } using schedulers - queries :: forwarding_scheduler_query_t ; using schedulers - queries :: get_forward_progress_guarantee_t ; inline constexpr forwarding_scheduler_query_t forwarding_scheduler_query {}; inline constexpr get_forward_progress_guarantee_t get_forward_progress_guarantee {}; } namespace std :: this_thread { namespace this - thread - queries { // exposition only struct execute_may_block_caller_t ; } using this - thread - queries :: execute_may_block_caller_t ; inline constexpr execute_may_block_caller_t execute_may_block_caller {}; } namespace std :: execution { // [exec.recv], receivers template < class T > concept receiver = see - below ; template < class T , class Completions > concept receiver_of = see - below ; namespace receivers { // exposition only struct set_value_t ; struct set_error_t ; struct set_stopped_t ; } using receivers :: set_value_t ; using receivers :: set_error_t ; using receivers :: set_stopped_t ; inline constexpr set_value_t set_value {}; inline constexpr set_error_t set_error {}; inline constexpr set_stopped_t set_stopped {}; // [exec.recv_queries], receiver queries namespace receivers - queries { // exposition only struct forwarding_receiver_query_t ; } using receivers - queries :: forwarding_receiver_query_t ; inline constexpr forwarding_receiver_query_t forwarding_receiver_query {}; template < class T > concept forwarding - receiver - query = // exposition only forwarding_receiver_query ( T {}); // [exec.op_state], operation states template < class O > concept operation_state 
= see - below ; namespace op - state { // exposition only struct start_t ; } using op - state :: start_t ; inline constexpr start_t start {}; // [exec.snd], senders template < class S , class E = no_env > concept sender = see - below ; template < class S , class R > concept sender_to = see - below ; template < class S , class E = no_env , class ... Ts > concept sender_of = see below ; template < class ... Ts > struct type - list ; // exposition only template < class S , class E = no_env > using single - sender - value - type = see below ; // exposition only template < class S , class E = no_env > concept single - sender = see below ; // exposition only // [exec.sndtraits], completion signatures namespace completion - signatures { // exposition only struct get_completion_signatures_t ; } using completion - signatures :: get_completion_signatures_t ; inline constexpr get_completion_signatures_t get_completion_signatures {}; template < class S , class E = no_env > requires sender < S , E > using completion_signatures_of_t = see below ; template < class E > // arguments are not associated entities ([lib.tmpl-heads]) struct dependent_completion_signatures ; template < class ... Ts > using decayed - tuple = tuple < decay_t < Ts > ... > ; // exposition only template < class ... Ts > using variant - or - empty = see below ; // exposition only template < class S , class E = no_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender < S , E > using value_types_of_t = see below ; template < class S , class E = no_env , template < class ... > class Variant = variant - or - empty > requires sender < S , E > using error_types_of_t = see below ; template < class S , class E = no_env > requries sender < S , E > inline constexpr bool sends_stopped = see below ; // [exec.connect], the connect sender algorithm namespace senders - connect { // exposition only struct connect_t ; } using senders - connect :: connect_t ; inline constexpr connect_t connect {}; template < class S , class R > using connect_result_t = decltype ( connect ( declval < S > (), declval < R > ())); // [exec.snd_queries], sender queries namespace senders - queries { // exposition only struct forwarding_sender_query_t ; template < class CPO > struct get_completion_scheduler_t ; } using senders - queries :: forwarding_sender_query_t ; using senders - queries :: get_completion_scheduler_t ; inline constexpr forwarding_sender_query_t forwarding_sender_query {}; namespace senders - queries { // exposition only template < class T > concept forwarding - sender - query = // exposition only forwarding_sender_query ( T {}); } template < class CPO > inline constexpr get_completion_scheduler_t < CPO > get_completion_scheduler {}; // [exec.factories], sender factories namespace senders - factories { // exposition only struct schedule_t ; struct transfer_just_t ; } inline constexpr unspecified just {}; inline constexpr unspecified just_error {}; inline constexpr unspecified just_stopped {}; using senders - factories :: schedule_t ; using senders - factories :: transfer_just_t ; inline constexpr schedule_t schedule {}; inline constexpr transfer_just_t transfer_just {}; inline constexpr unspecified read {}; template < scheduler S > using schedule_result_t = decltype ( schedule ( declval < S > ())); // [exec.adapt], sender adaptors namespace sender - adaptor - closure { // exposition only template < class - type D > struct sender_adaptor_closure { }; } using sender - adaptor 
- closure :: sender_adaptor_closure ; namespace sender - adaptors { // exposition only struct on_t ; struct transfer_t ; struct schedule_from_t ; struct then_t ; struct upon_error_t ; struct upon_stopped_t ; struct let_value_t ; struct let_error_t ; struct let_stopped_t ; struct bulk_t ; struct split_t ; struct when_all_t ; struct when_all_with_variant_t ; struct transfer_when_all_t ; struct transfer_when_all_with_variant_t ; struct into_variant_t ; struct stopped_as_optional_t ; struct stopped_as_error_t ; struct ensure_started_t ; } using sender - adaptors :: on_t ; using sender - adaptors :: transfer_t ; using sender - adaptors :: schedule_from_t ; using sender - adaptors :: then_t ; using sender - adaptors :: upon_error_t ; using sender - adaptors :: upon_stopped_t ; using sender - adaptors :: let_value_t ; using sender - adaptors :: let_error_t ; using sender - adaptors :: let_stopped_t ; using sender - adaptors :: bulk_t ; using sender - adaptors :: split_t ; using sender - adaptors :: when_all_t ; using sender - adaptors :: when_all_with_variant_t ; using sender - adaptors :: transfer_when_all_t ; using sender - adaptors :: transfer_when_all_with_variant_t ; using sender - adaptors :: into_variant_t ; using sender - adaptors :: stopped_as_optional_t ; using sender - adaptors :: stopped_as_error_t ; using sender - adaptors :: ensure_started_t ; inline constexpr on_t on {}; inline constexpr transfer_t transfer {}; inline constexpr schedule_from_t schedule_from {}; inline constexpr then_t then {}; inline constexpr upon_error_t upon_error {}; inline constexpr upon_stopped_t upon_stopped {}; inline constexpr let_value_t let_value {}; inline constexpr let_error_t let_error {}; inline constexpr let_stopped_t let_stopped {}; inline constexpr bulk_t bulk {}; inline constexpr split_t split {}; inline constexpr when_all_t when_all {}; inline constexpr when_all_with_variant_t when_all_with_variant {}; inline constexpr transfer_when_all_t transfer_when_all {}; inline constexpr transfer_when_all_with_variant_t transfer_when_all_with_variant {}; inline constexpr into_variant_t into_variant {}; inline constexpr stopped_as_optional_t stopped_as_optional ; inline constexpr stopped_as_error_t stopped_as_error ; inline constexpr ensure_started_t ensure_started {}; // [exec.consumers], sender consumers namespace sender - consumers { // exposition only struct start_detached_t ; } using sender - consumers :: start_detached_t ; inline constexpr start_detached_t start_detached {}; // [exec.utils], sender and receiver utilities // [exec.utils.rcvr_adptr] template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor ; template < class Fn > concept completion - signature = // exposition only see below ; // [exec.utils.cmplsigs] template < completion - signature ... Fns > struct completion_signatures {}; template < class ... Args > // exposition only using default - set - value = completion_signatures < set_value_t ( Args ...) > ; template < class Err > // exposition only using default - set - error = completion_signatures < set_error_t ( Err ) > ; template < class Sigs , class E > // exposition only concept valid - completion - signatures = see below ; // [exec.utils.mkcmplsigs] template < sender Sndr , class Env = no_env , valid - completion - signatures < Env > AddlSigs = completion_signatures <> , template < class ... 
> class SetValue = /* see below */ , template < class > class SetError = /* see below */ , valid - completion - signatures < Env > SetStopped = completion_signatures < set_stopped_t () >> requires sender < Sndr , Env > using make_completion_signatures = completion_signatures < /* see below */ > ; // [exec.ctx], execution contexts class run_loop ; } namespace std :: this_thread { namespace this - thread { // exposition only struct sync - wait - env ; // exposition only template < class S > requires sender < S , sync - wait - env > using sync - wait - type = see - below ; // exposition-only template < class S > using sync - wait - with - variant - type = see - below ; // exposition-only struct sync_wait_t ; struct sync_wait_with_variant_t ; } using this - thread :: sync_wait_t ; using this - thread :: sync_wait_with_variant_t ; inline constexpr sync_wait_t sync_wait {}; inline constexpr sync_wait_with_variant_t sync_wait_with_variant {}; } namespace std :: execution { // [exec.execute], one-way execution namespace execute { // exposition only struct execute_t ; } using execute :: execute_t ; inline constexpr execute_t execute {}; // [exec.as_awaitable] namespace coro - utils { // exposition only struct as_awaitable_t ; } using coro - utils :: as_awaitable_t ; inline constexpr as_awaitable_t as_awaitable ; // [exec.with_awaitable_senders] template < class - type Promise > struct with_awaitable_senders ; } 
10.2. Helper concepts [exec.helpers]
template < class T > concept movable - value = // exposition only move_constructible < decay_t < T >> && constructible_from < decay_t < T > , T > ; 
10.3. General queries [exec.queries]
10.3.1. execution :: get_scheduler 
   - 
     execution :: get_scheduler 
- 
     The name execution :: get_scheduler r r no_env execution :: get_scheduler ( r ) - 
       tag_invoke ( execution :: get_scheduler , as_const ( r )) - 
         Mandates: The tag_invoke execution :: scheduler 
 
- 
         
- 
       Otherwise, execution :: get_scheduler ( r ) 
 
- 
       
- 
     execution :: get_scheduler () execution :: read ( execution :: get_scheduler ) 
10.3.2. execution :: get_delegatee_scheduler 
   - 
     execution :: get_delegatee_scheduler 
- 
     The name execution :: get_delegatee_scheduler r r no_env execution :: get_delegatee_scheduler ( r ) - 
       tag_invoke ( execution :: get_delegatee_scheduler , as_const ( r )) - 
         Mandates: The tag_invoke execution :: scheduler 
 
- 
         
- 
       Otherwise, execution :: get_delegatee_scheduler ( r ) 
 
- 
       
- 
     execution :: get_delegatee_scheduler () execution :: read ( execution :: get_delegatee_scheduler ) 
10.3.3. execution :: get_allocator 
   - 
     execution :: get_allocator 
- 
     The name execution :: get_allocator r r no_env execution :: get_allocator ( r ) - 
       tag_invoke ( execution :: get_allocator , as_const ( r )) - 
         Mandates: The tag_invoke 
 
- 
         
- 
       Otherwise, execution :: get_allocator ( r ) 
 
- 
       
- 
     execution :: get_allocator () execution :: read ( execution :: get_allocator ) 
10.3.4. execution :: get_stop_token 
   - 
     execution :: get_stop_token 
- 
     The name execution :: get_stop_token r r no_env execution :: get_stop_token ( r ) - 
       tag_invoke ( execution :: get_stop_token , as_const ( r )) - 
         Mandates: The tag_invoke stoppable_token 
 
- 
         
- 
       Otherwise, never_stop_token {} 
 
- 
       
- 
     execution :: get_stop_token () execution :: read ( execution :: get_stop_token ) 
10.4. Execution environments [exec.env]
- 
An execution environment contains state associated with the completion of an asynchronous operation. Every receiver has an associated execution environment, accessible with the get_env customization point object. 
- 
     An environment query is a customization point object that accepts as its first argument an execution environment. For an environment query EQ e no_env EQ ( e ) 
10.4.1. execution :: no_env 
namespace exec - envs { // exposition only struct no_env { friend void tag_invoke ( auto , same_as < no_env > auto , auto && ...) = delete ; }; } 
- 
     no_env sender get_completion_signatures 
10.4.2. execution :: get_env 
namespace exec - envs { // exposition only struct get_env_t ; } inline constexpr exec - envs :: get_env_t get_env {}; 
- 
     get_env r get_env ( r ) - 
       tag_invoke ( execution :: get_env , r ) - 
         Mandates: The decayed type of the above expression is not no_env 
 
- 
         
- 
       Otherwise, get_env ( r ) 
 
- 
       
- 
     If get_env ( r ) r 
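For illustration, a non-normative sketch (my_env and my_receiver are illustrative names) of a receiver exposing an execution environment through get_env, and of a query answered through that environment; it assumes the <execution> and <stop_token> declarations proposed in this paper.

     struct my_env {
       std::in_place_stop_token tok;

       // answer the get_stop_token query
       friend std::in_place_stop_token
       tag_invoke(std::execution::get_stop_token_t, const my_env& e) noexcept {
         return e.tok;
       }
     };

     struct my_receiver {
       std::in_place_stop_token tok;
       // completion-signal customizations omitted for brevity

       friend my_env tag_invoke(std::execution::get_env_t, const my_receiver& r) noexcept {
         return {r.tok};
       }
     };

     // A connected operation state can then ask:
     //   auto st = std::execution::get_stop_token(std::execution::get_env(r));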
10.4.3. execution :: forwarding_env_query 
   - 
     execution :: forwarding_env_query 
- 
     The name execution :: forwarding_env_query t execution :: forwarding_env_query ( t ) - 
       tag_invoke ( execution :: forwarding_env_query , t ) bool tag_invoke - 
         Mandates: The tag_invoke bool t 
 
- 
         
- 
       Otherwise, true.
 
- 
       
10.5. Schedulers [exec.sched]
- The scheduler concept defines the requirements of a scheduler type:

     template<class S>
       concept scheduler =
         copy_constructible<remove_cvref_t<S>> &&
         equality_comparable<remove_cvref_t<S>> &&
         requires (S&& s, const get_completion_scheduler_t<set_value_t> tag) {
           { execution::schedule(std::forward<S>(s)) } -> sender;
           { tag_invoke(tag, execution::schedule(std::forward<S>(s))) }
             -> same_as<remove_cvref_t<S>>;
         };
- Let S be the type of a scheduler and let E be the type of an execution environment for which sender<schedule_result_t<S>, E> is true. Then sender_of<schedule_result_t<S>, E> shall also be true.
- None of a scheduler’s copy constructor, destructor, equality comparison, or swap member functions shall exit via an exception.
- None of these member functions, nor a scheduler type’s schedule function, shall introduce data races as a result of concurrent invocations of those functions from different threads.
- For any two (possibly const) values s1 and s2 of some scheduler type S, s1 == s2 shall return true only if both s1 and s2 refer to the same execution context.
- For a given scheduler expression s, the expression execution::get_completion_scheduler<set_value_t>(execution::schedule(s)) shall compare equal to s.
- A scheduler type’s destructor shall not block pending completion of any receivers connected to the sender objects returned from schedule.
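To make these requirements concrete, here is a minimal, non-normative sketch (inline_scheduler is illustrative, not proposed; the sketch assumes the <execution> declarations in § 10.1) of a scheduler whose sender completes immediately on the calling thread:

     // assumes #include <execution>, <type_traits>, <utility>
     namespace ex = std::execution;

     class inline_scheduler {
       template <class R>
       struct op_state {
         R r_;
         // start: complete inline with set_value()
         friend void tag_invoke(ex::start_t, op_state& self) noexcept {
           ex::set_value(std::move(self.r_));
         }
       };

       struct sndr {
         using completion_signatures =
             ex::completion_signatures<ex::set_value_t()>;

         template <ex::receiver R>
         friend op_state<std::remove_cvref_t<R>>
         tag_invoke(ex::connect_t, sndr, R&& r) {
           return {std::forward<R>(r)};
         }

         // the sender completes on the scheduler it came from
         friend inline_scheduler
         tag_invoke(ex::get_completion_scheduler_t<ex::set_value_t>, sndr) noexcept {
           return {};
         }
       };

      public:
       friend sndr tag_invoke(ex::schedule_t, inline_scheduler) noexcept { return {}; }
       bool operator==(const inline_scheduler&) const = default;
     };

     static_assert(ex::scheduler<inline_scheduler>);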
10.5.1. Scheduler queries [exec.sched_queries]
10.5.1.1. execution :: forwarding_scheduler_query 
   - 
     execution :: forwarding_scheduler_query 
- 
     The name execution :: forwarding_scheduler_query t execution :: forwarding_scheduler_query ( t ) - 
       tag_invoke ( execution :: forwarding_scheduler_query , t ) bool tag_invoke - 
         Mandates: The tag_invoke bool t 
 
- 
         
- 
       Otherwise, false.
 
- 
       
10.5.1.2. execution :: get_forward_progress_guarantee 
enum class forward_progress_guarantee { concurrent , parallel , weakly_parallel }; 
- 
     execution :: get_forward_progress_guarantee 
- 
     The name execution :: get_forward_progress_guarantee s S decltype (( s )) S execution :: scheduler execution :: get_forward_progress_guarantee execution :: get_forward_progress_guarantee ( s ) - 
       tag_invoke ( execution :: get_forward_progress_guarantee , as_const ( s )) - 
         Mandates: The tag_invoke execution :: forward_progress_guarantee 
 
- 
         
- 
       Otherwise, execution :: forward_progress_guarantee :: weakly_parallel 
 
- 
       
- 
     If execution :: get_forward_progress_guarantee ( s ) s execution :: forward_progress_guarantee :: concurrent execution :: forward_progress_guarantee :: parallel 
10.5.1.3. this_thread :: execute_may_block_caller 
   - 
     this_thread :: execute_may_block_caller s execution :: execute ( s , f ) f 
- 
     The name this_thread :: execute_may_block_caller s S decltype (( s )) S execution :: scheduler this_thread :: execute_may_block_caller this_thread :: execute_may_block_caller ( s ) - 
       tag_invoke ( this_thread :: execute_may_block_caller , as_const ( s )) - 
         Mandates: The tag_invoke bool 
 
- 
         
- 
       Otherwise, true.
 
- 
       
- 
     If this_thread :: execute_may_block_caller ( s ) s false, noexecution :: execute ( s , f ) f 
10.6. Receivers [exec.recv]
- 
     A receiver represents the continuation of an asynchronous operation. An asynchronous operation may complete with a (possibly empty) set of values, an error, or it may be cancelled. A receiver has three principal operations corresponding to the three ways an asynchronous operation may complete: set_value set_error set_stopped 
- The receiver and receiver_of concepts define the requirements for a receiver type:

     template<class T>
       concept receiver =
         move_constructible<remove_cvref_t<T>> &&
         constructible_from<remove_cvref_t<T>, T> &&
         requires (const remove_cvref_t<T>& t) {
           execution::get_env(t);
         };

     template<class Signature, class T>
       concept valid-completion-for = // exposition only
         requires (Signature* sig) {
           []<class Ret, class... Args>(Ret(*)(Args...))
             requires nothrow_tag_invocable<Ret, remove_cvref_t<T>, Args...>
           {}(sig);
         };

     template<class T, class Completions>
       concept receiver_of =
         receiver<T> &&
         requires (Completions* completions) {
           []<valid-completion-for<T>... Sigs>(completion_signatures<Sigs...>*)
           {}(completions);
         };
- 
     The receiver’s completion-signal operations have semantic requirements that are collectively known as the receiver contract, described below: - 
       None of a receiver’s completion-signal operations shall be invoked before execution :: start execution :: connect 
- 
       Once execution :: start 
- 
       If execution :: set_value execution :: set_error execution :: set_stopped execution :: set_value 
 
- 
       
- 
     Once one of a receiver’s completion-signal operations has completed non-exceptionally, the receiver contract has been satisfied. 
- 
     Receivers have an associated execution environment that is accessible by passing the receiver to execution :: get_env - 
       execution :: get_scheduler 
- 
       execution :: get_delegatee_scheduler 
- 
       execution :: get_allocator 
- 
       execution :: get_stop_token 
 
- 
       
- 
     Let r s op_state execution :: connect ( s , r ) token execution :: get_stop_token ( execution :: get_env ( r )) token r r op_state token r token op_state s r 
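For illustration, a minimal, non-normative receiver satisfying the contract above; print_receiver and empty_env are illustrative names, and the sketch assumes the <execution> declarations in § 10.1.

     // assumes #include <execution>, <exception>, <cstdio>
     namespace ex = std::execution;

     struct empty_env {};   // illustrative environment that answers no queries

     struct print_receiver {
       // exactly one of the three completion signals will be invoked
       friend void tag_invoke(ex::set_value_t, print_receiver&&, int v) noexcept {
         std::printf("value: %d\n", v);
       }
       friend void tag_invoke(ex::set_error_t, print_receiver&&, std::exception_ptr) noexcept {
         std::printf("error\n");
       }
       friend void tag_invoke(ex::set_stopped_t, print_receiver&&) noexcept {
         std::printf("stopped\n");
       }
       friend empty_env tag_invoke(ex::get_env_t, const print_receiver&) noexcept {
         return {};
       }
     };

     // e.g.:  auto op = ex::connect(ex::just(42), print_receiver{});
     //        ex::start(op);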
10.6.1. execution :: set_value 
   - 
     execution :: set_value 
- 
     The name execution :: set_value execution :: set_value ( R , Vs ...) R Vs ... - 
       tag_invoke ( execution :: set_value , R , Vs ...) tag_invoke Vs ... R execution :: set_value ( R , Vs ...) - 
         Mandates: The tag_invoke 
 
- 
         
- 
       Otherwise, execution :: set_value ( R , Vs ...) 
 
- 
       
10.6.2. execution :: set_error 
   - 
     execution :: set_error 
- 
     The name execution :: set_error execution :: set_error ( R , E ) R E - 
       tag_invoke ( execution :: set_error , R , E ) tag_invoke E R execution :: set_error ( R , E ) - 
         Mandates: The tag_invoke 
 
- 
         
- 
       Otherwise, execution :: set_error ( R , E ) 
 
- 
       
10.6.3. execution :: set_stopped 
   - 
     execution :: set_stopped 
- 
     The name execution :: set_stopped execution :: set_stopped ( R ) R - 
       tag_invoke ( execution :: set_stopped , R ) tag_invoke R execution :: set_stopped ( R ) - 
         Mandates: The tag_invoke 
 
- 
         
- 
       Otherwise, execution :: set_stopped ( R ) 
 
- 
       
10.6.4. Receiver queries [exec.recv_queries]
10.6.4.1. execution :: forwarding_receiver_query 
   - 
     execution :: forwarding_receiver_query 
- 
     The name execution :: forwarding_receiver_query t execution :: forwarding_receiver_query ( t ) - 
       tag_invoke ( execution :: forwarding_receiver_query , t ) bool tag_invoke - 
         Mandates: The tag_invoke bool t 
 
- 
         
- 
       Otherwise, falseif the type oft set_value_t set_error_t set_stopped_t 
- 
       Otherwise, true.
 
- 
       
- 
     [Note: Currently the only standard receiver query is execution :: get_env 
10.7. Operation states [exec.op_state]
- The operation_state concept defines the requirements of an operation state type:

     template<class O>
       concept operation_state =
         destructible<O> &&
         is_object_v<O> &&
         requires (O& o) {
           { execution::start(o) } noexcept;
         };
- 
     Any operation state types defined by the implementation are non-movable types. 
10.7.1. execution :: start 
   - 
     execution :: start 
- 
     The name execution :: start execution :: start ( O ) O - 
       tag_invoke ( execution :: start , O ) tag_invoke O execution :: start ( O ) - 
         Mandates: The tag_invoke 
 
- 
         
- 
       Otherwise, execution :: start ( O ) 
 
- 
       
- 
     The caller of execution :: start ( O ) O R execution :: connect O 
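A short, non-normative usage sketch: given some sender snd and a receiver rcvr (both hypothetical), an asynchronous operation is driven by connect ([exec.connect], below) followed by start, and the resulting operation state must outlive the operation.

     auto op = std::execution::connect(std::move(snd), std::move(rcvr));
     std::execution::start(op);   // op must stay alive until a completion signal is delivered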
10.8. Senders [exec.snd]
- 
     A sender describes a potentially asynchronous operation. A sender’s responsibility is to fulfill the receiver contract of a connected receiver by delivering one of the receiver completion-signals. 
- The sender and sender_to concepts define the requirements for a sender type and for a sender type that can be connected to a particular receiver type:

     template<class T, template<class...> class C>
       inline constexpr bool is-instance-of = false; // exposition only
     template<class... Ts, template<class...> class C>
       inline constexpr bool is-instance-of<C<Ts...>, C> = true;

     template<class Sigs, class E>
       concept valid-completion-signatures = // exposition only
         is-instance-of<Sigs, completion_signatures> ||
         (same_as<Sigs, dependent_completion_signatures<no_env>> && same_as<E, no_env>);

     template<class S, class E>
       concept sender-base = // exposition only
         requires (S&& s, E&& e) {
           { get_completion_signatures(std::forward<S>(s), std::forward<E>(e)) }
             -> valid-completion-signatures<E>;
         };

     template<class S, class E = no_env>
       concept sender =
         sender-base<S, E> &&
         sender-base<S, no_env> &&
         move_constructible<remove_cvref_t<S>>;

     template<class S, class R>
       concept sender_to =
         sender<S, env_of_t<R>> &&
         receiver_of<R, completion_signatures_of_t<S, env_of_t<R>>> &&
         requires (S&& s, R&& r) {
           execution::connect(std::forward<S>(s), std::forward<R>(r));
         };
- The sender_of concept defines the requirements for a sender type whose successful completions send exactly the value types Ts...:

     template<class S, class E = no_env, class... Ts>
       concept sender_of =
         sender<S, E> &&
         same_as<
           type-list<Ts...>,
           value_types_of_t<S, E, type-list, type_identity_t>>;
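A brief, non-normative sketch of a type satisfying these concepts by declaring its completions directly (just_int_sender is illustrative); the get_completion_signatures customization point, specified in the next subclause, picks up the nested alias.

     // assumes #include <execution>, <exception>
     struct just_int_sender {
       using completion_signatures = std::execution::completion_signatures<
           std::execution::set_value_t(int),
           std::execution::set_error_t(std::exception_ptr)>;
       // connect customization omitted for brevity
     };

     static_assert(std::execution::sender<just_int_sender>);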
10.8.1. Completion signatures [exec.sndtraits]
- 
     This clause makes use of the following implementation-defined entities: struct no - completion - signatures {}; 
10.8.1.1. execution :: completion_signatures_of_t 
   - 
     The alias template completion_signatures_of_t 
- 
     completion_signatures_of_t - 
       An awaitable is an expression that would be well-formed as the operand of a co_await 
- 
       For any type T is - awaitable < T > trueif and only if an expression of that type is an awaitable as described above within the context of a coroutine whose promise type does not define a memberawait_transform P is - awaitable < T , P > trueif and only if an expression of that type is an awaitable as described above within the context of a coroutine whose promise type isP 
- 
       For an awaitable a decltype (( a )) A await - result - type < A > decltype ( e ) e a await_transform P await - result - type < A , P > decltype ( e ) e a P 
 
- 
       
- 
     For types S E completion_signatures_of_t < S , E > decltype ( get_completion_signatures ( declval < S > (), declval < E > ())) no - completion - signatures 
- 
     execution :: get_completion_signatures s decltype (( s )) S e decltype (( e )) E get_completion_signatures ( s ) get_completion_signatures ( s , no_env {}) get_completion_signatures ( s , e ) - 
       tag_invoke_result_t < get_completion_signatures_t , S , E > {} - 
         Mandates: is - instance - of < Sigs , completion_signatures > is - instance - of < Sigs , dependent_completion_signatures > Sigs tag_invoke_result_t < get_completion_signatures_t , S , E > 
 
- 
         
- 
       Otherwise, if remove_cvref_t < S >:: completion_signatures remove_cvref_t < S >:: completion_signatures - 
         Mandates: is - instance - of < Sigs , completion_signatures > is - instance - of < Sigs , dependent_completion_signatures > Sigs remove_cvref_t < S >:: completion_signatures 
 
- 
         
- 
       Otherwise, if is - awaitable < S > true, then- 
         If await - result - type < S > cv void completion_signatures < set_value_t (), set_error_t ( exception_ptr ), set_stopped_t () > 
- 
         Otherwise, a prvalue of a type equivalent to: completion_signatures < set_value_t ( await - result - type < S > ), set_error_t ( exception_ptr ), set_stopped_t () > 
 
- 
         
- 
       Otherwise, no - completion - signatures {} 
 
- 
       
- 
     The exposition-only type variant - or - empty < Ts ... > - 
       If sizeof ...( Ts ) variant - or - empty < Ts ... > variant < Us ... > Us ... decay_t < Ts > ... 
- 
       Otherwise, variant - or - empty < Ts ... > struct empty - variant { empty - variant () = delete ; }; 
 
- 
       
- 
     Let r R S value_types_of_t < S , env_of_t < R > , Tuple , Variant > Variant < Tuple < Args 0 ... > , Tuple < Args 1 ... > , ..., Tuple < Args N ... >>> Args 0 Args N S execution :: set_value S execution :: set_value ( r , args ...) decltype ( args )... Args 0 ... Args N ... 
- 
     Let r R S error_types_of_t < S , env_of_t < R > , Variant > Variant < E 0 , E 1 , ..., E N > E 0 E N S execution :: set_error S execution :: set_error ( r , e ) decltype ( e ) E 0 E N 
- 
     Let r R S completion_signatures_of_t < S , env_of_t < R >>:: sends_stopped false, such a senderS execution :: set_stopped ( r ) 
- 
     Let S E execution :: no_env sender < S , E > true. LetTuple Variant1 Variant2 - 
       value_types_of_t < S , no_env , Tuple , Variant1 > 
- 
       error_types_of_t < S , no_env , Variant2 > 
 then the following shall also be true:- 
       value_types_of_t < S , E , Tuple , Variant1 > value_types_of_t < S , no_env , Tuple , Variant1 > 
- 
       error_types_of_t < S , E , Variant2 > error_types_of_t < S , no_env , Variant2 > 
- 
       completion_signatures_of_t < S , E >:: sends_stopped completion_signatures_of_t < S , no_env >:: sends_stopped 
 
- 
       
- 
     [Note: The types Args i ... E i ... value_types error_types exception_ptr error_code error_types Variant < exception_ptr , error_code > Variant < error_code , exception_ptr > 
10.8.1.2. dependent_completion_signatures 
template < class E > // arguments are not associated entities ([lib.tmpl-heads]) struct dependent_completion_signatures {}; 
- 
     dependent_completion_signatures get_completion_signatures 
- 
     When used as the return type of a customization of get_completion_signatures E 
10.8.2. execution :: connect 
   - 
     execution :: connect 
- 
     The name execution :: connect s r S decltype (( s )) R decltype (( r )) S 'R 'S R R execution :: receiver execution :: connect ( s , r ) execution :: connect ( s , r ) - 
       tag_invoke ( execution :: connect , s , r ) tag_invoke execution :: start s execution :: connect ( s , r ) - 
         Constraints: sender < S , env_of_t < R >> && receiver_of < R , completion_signatures_of_t < S , env_of_t < R >>> && tag_invocable < connect_t , S , R > 
- 
         Mandates: The type of the tag_invoke operation_state 
 
- 
         
- 
       Otherwise, connect - awaitable ( s , r ) is - awaitable < S , connect - awaitable - promise > trueand that expression is valid, whereconnect - awaitable operation - state - task connect - awaitable ( S 's , R 'r ) requires see - below { exception_ptr ep ; try { set - value - expr } catch (...) { ep = current_exception (); } set - error - expr } where connect - awaitable - promise connect - awaitable connect - awaitable - 
         set-value-expr first evaluates co_await std :: move ( s ) execution :: set_value ( std :: move ( r )) await - result - type < S , connect - awaitable - promise > cv void auto && res = co_await std :: move ( s ) execution :: set_value ( std :: move ( r ), std :: forward < decltype ( res ) > ( res )) If the call to execution :: set_value [Note: If the call to execution :: set_value connect - awaitable 
- 
         set-error-expr first suspends the coroutine and then executes execution :: set_error ( std :: move ( r ), std :: move ( ep )) [Note: The connect - awaitable execution :: set_error 
- 
         operation - state - task operation_state execution :: start connect - awaitable 
- 
         Let p connect - awaitable b const r tag_invoke ( tag , p , as ...) tag ( b , as ...) as ... tag forwarding - receiver - query 
- 
         The expression p . unhandled_stopped () ( execution :: set_stopped ( std :: move ( r )), noop_coroutine ()) 
- 
         For some expression e p . await_transform ( e ) tag_invoke ( as_awaitable , e , p ) e 
 Let Res await - result - type < S , connect - awaitable - promise > Vs ... Res cv void Res connect - awaitable receiver_of < R , Sigs > Sigs completion_signatures < set_value_t ( Vs ...), set_error_t ( exception_ptr ), set_stopped_t () > 
- 
         
- 
       Otherwise, execution :: connect ( s , r ) 
 
- 
       
- 
     Standard sender types shall always expose an rvalue-qualified overload of a customization of execution :: connect execution :: connect 
10.8.3. Sender queries [exec.snd_queries]
10.8.3.1. execution :: forwarding_sender_query 
   - 
     execution :: forwarding_sender_query 
- 
     The name execution :: forwarding_sender_query t execution :: forwarding_sender_query ( t ) - 
       tag_invoke ( execution :: forwarding_sender_query , t ) bool tag_invoke - 
         Mandates: The tag_invoke bool t 
 
- 
         
- 
       Otherwise, false.
 
- 
       
10.8.3.2. execution :: get_completion_scheduler 
   - 
     execution :: get_completion_scheduler 
- 
     The name execution :: get_completion_scheduler s S decltype (( s )) S execution :: sender execution :: get_completion_scheduler < CPO > ( s ) CPO CPO execution :: get_completion_scheduler < CPO > execution :: set_value_t execution :: set_error_t execution :: set_stopped_t execution :: get_completion_scheduler < CPO > execution :: get_completion_scheduler < CPO > ( s ) - 
       tag_invoke ( execution :: get_completion_scheduler < CPO > , as_const ( s )) - 
         Mandates: The tag_invoke execution :: scheduler 
 
- 
         
- 
       Otherwise, execution :: get_completion_scheduler < CPO > ( s ) 
 
- 
       
- 
     If, for some sender s CPO execution :: get_completion_scheduler < decltype ( CPO ) > ( s ) sch s CPO ( r , args ...) r s args ... sch 
- 
     The expression execution :: forwarding_sender_query ( get_completion_scheduler < CPO > ) bool true. It shall not be potentially-throwing.CPO set_value_t set_error_t set_stopped_t 
10.8.4. Sender factories [exec.factories]
10.8.4.1. General [exec.factories.general]
- 
     Subclause [exec.factories] defines sender factories, which are utilities that return senders without accepting senders as arguments. 
10.8.4.2. execution :: schedule 
   - 
     execution :: schedule 
- 
     The name execution :: schedule s execution :: schedule ( s ) - 
       tag_invoke ( execution :: schedule , s ) tag_invoke set_value s execution :: schedule ( s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, execution :: schedule ( s ) 
 
- 
       
10.8.4.3. execution :: just execution :: just_error execution :: just_stopped 
   - 
     execution :: just execution :: just_error execution :: just_stopped 
- Let just-sender denote the following exposition-only class template:

     template<class CPO, class... Ts>
       struct just-sender { // exposition only
         using completion_signatures = execution::completion_signatures<CPO(Ts...)>;

         [[no_unique_address]] tuple<Ts...> vs_; // exposition only

         template<class R>
           struct operation_state { // exposition only
             [[no_unique_address]] tuple<Ts...> vs_; // exposition only
             R r_; // exposition only

             friend void tag_invoke(start_t, operation_state& s) noexcept {
               apply([&s](Ts&... values_) {
                 CPO{}(std::move(s.r_), std::move(values_)...);
               }, s.vs_);
             }
           };

         template<receiver_of<completion_signatures> R>
             requires (copy_constructible<Ts> && ...)
           friend operation_state<decay_t<R>> tag_invoke(connect_t, const just-sender& s, R&& r) {
             return { s.vs_, std::forward<R>(r) };
           }

         template<receiver_of<completion_signatures> R>
           friend operation_state<decay_t<R>> tag_invoke(connect_t, just-sender&& s, R&& r) {
             return { std::move(s.vs_), std::forward<R>(r) };
           }
       };
- The name execution::just denotes a customization point object. For some subexpressions vs..., let Vs... be the pack decltype((vs)).... If any type V in Vs... does not satisfy movable-value, execution::just(vs...) is ill-formed. Otherwise, execution::just(vs...) is expression-equivalent to just-sender<set_value_t, decay_t<Vs>...>(vs...).
- The name execution::just_error denotes a customization point object. For some subexpression err, let Err be decltype((err)). If Err does not satisfy movable-value, execution::just_error(err) is ill-formed. Otherwise, execution::just_error(err) is expression-equivalent to just-sender<set_error_t, decay_t<Err>>(err).
- The name execution::just_stopped denotes a customization point object. The expression execution::just_stopped() is expression-equivalent to just-sender<set_stopped_t>().
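A non-normative usage sketch of the factories above, combined with then and sync_wait (specified elsewhere in this paper); assumes the <execution> declarations and <stdexcept>.

     namespace ex = std::execution;

     int example() {
       auto [sum] = std::this_thread::sync_wait(
                        ex::then(ex::just(40, 2), [](int a, int b) { return a + b; }))
                        .value();                      // sum == 42

       auto err = ex::just_error(std::make_exception_ptr(std::runtime_error("oops")));
       auto stp = ex::just_stopped();                   // completes with set_stopped
       return sum;
     }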
10.8.4.4. execution :: transfer_just 
   - 
     execution :: transfer_just 
- 
     The name execution :: transfer_just s vs ... S decltype (( s )) Vs ... decltype (( vs )) S execution :: scheduler V Vs movable - value execution :: transfer_just ( s , vs ...) execution :: transfer_just ( s , vs ...) - 
       tag_invoke ( execution :: transfer_just , s , vs ...) tag_invoke set_value s auto ( vs )... execution :: transfer_just ( s , vs ...) - 
         Mandates: execution :: sender_of < R , no_env , decltype ( auto ( vs ))... > R tag_invoke 
 
- 
         
- 
       Otherwise, execution :: transfer ( execution :: just ( vs ...), s ) 
 
- 
       
10.8.4.5. execution :: read 
- execution::read is used to create a sender that delivers, to the connected receiver, a value read off the receiver’s associated execution environment by invoking the given query object on it.
- execution::read is a customization point object of an exposition-only class type equivalent to:

     template<class Tag>
       struct read-sender; // exposition only

     struct read-t { // exposition only
       template<class Tag>
         read-sender<Tag> operator()(Tag) const noexcept {
           return {};
         }
     };

- read-sender is an exposition-only class template equivalent to:

     template<class Tag>
       struct read-sender { // exposition only
         template<class R>
           struct operation-state { // exposition only
             R r_; // exposition only

             friend void tag_invoke(start_t, operation-state& s) noexcept {
               TRY-SET-VALUE(std::move(s.r_), auto(Tag{}(get_env(s.r_))));
             }
           };

         template<receiver R>
           friend operation-state<decay_t<R>> tag_invoke(connect_t, read-sender, R&& r) {
             return { std::forward<R>(r) };
           }

         template<class Env>
           friend auto tag_invoke(get_completion_signatures_t, read-sender, Env)
             -> dependent_completion_signatures<Env>; // not defined

         template<class Env>
             requires callable<Tag, Env>
           friend auto tag_invoke(get_completion_signatures_t, read-sender, Env)
             -> completion_signatures<set_value_t(call-result-t<Tag, Env>),
                                      set_error_t(exception_ptr)>; // not defined

         template<class Env>
             requires nothrow-callable<Tag, Env>
           friend auto tag_invoke(get_completion_signatures_t, read-sender, Env)
             -> completion_signatures<set_value_t(call-result-t<Tag, Env>)>; // not defined
       };

  where TRY-SET-VALUE(r, e) is equivalent to:

     try {
       execution::set_value(r, e);
     } catch (...) {
       execution::set_error(r, current_exception());
     }

  if the expression e is potentially-throwing; and to execution::set_value(r, e) otherwise.
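A non-normative usage sketch: read(get_scheduler) is a sender that, when started, queries the receiver's environment and sends the result back. Here it is combined with let_value (a sender adaptor specified later in this clause) to obtain, and then schedule onto, the receiver's scheduler.

     namespace ex = std::execution;

     auto snd = ex::let_value(ex::read(ex::get_scheduler),
                              [](auto sched) { return ex::schedule(sched); });
     // When connected to a receiver whose environment answers get_scheduler,
     // snd completes on that receiver's scheduler.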
10.8.5. Sender adaptors [exec.adapt]
10.8.5.1. General [exec.adapt.general]
- 
     Subclause [exec.adapt] defines sender adaptors, which are utilities that transform one or more senders into a sender with custom behaviors. When they accept a single sender argument, they can be chained to create sender chains. 
- 
     The bitwise OR operator is overloaded for the purpose of creating sender chains. The adaptors also support function call syntax with equivalent semantics. 
- 
Unless otherwise specified, a sender adaptor is required to not begin executing any functions which would observe or modify any of the arguments of the adaptor before the returned sender is connected with a receiver using execution::connect, and execution::start is called on the resulting operation state. 
- 
     A type T forwarding - sender - query sender 
- 
     A type T forwarding - receiver - query execution :: connect 
- 
For any sender type, receiver type, operation state type, execution environment type, or coroutine promise type that is part of the implementation of any sender adaptor in this subclause and that is a class template, the template arguments do not contribute to the associated entities ([basic.lookup.argdep]) of a function call where a specialization of the class template is an associated entity. [Example:

     namespace sender-adaptors { // exposition only
       template<class Sch, class S> // arguments are not associated entities ([lib.tmpl-heads])
         class on-sender {
           // ...
         };

       struct on_t {
         template<scheduler Sch, sender S>
           on-sender<Sch, S> operator()(Sch&& sch, S&& s) const {
             // ...
           }
       };
     }
     inline constexpr sender-adaptors::on_t on {};

-- end example] 
- 
     If the specification of a sender adaptor requires that the implementation of the get_completion_signatures set_error_t ( exception_ptr ) exception_ptr set_error set_error_t ( exception_ptr ) get_completion_signatures 
10.8.5.2. Sender adaptor closure objects [exec.adapt.objects]
- 
     A pipeable sender adaptor closure object is a function object that accepts one or more sender sender C S decltype (( S )) sender sender C ( S ) S | C Given an additional pipeable sender adaptor closure object D C | D E E - 
       Its target object is an object d decay_t < decltype (( D )) > D 
- 
       It has one bound argument entity, an object c decay_t < decltype (( C )) > C 
- 
       Its call pattern is d ( c ( arg )) arg E 
 The expression C | D E 
- 
       
- 
     An object t T T derived_from < sender_adaptor_closure < T >> T sender_adaptor_closure < U > U T sender 
- 
     The template parameter D sender_adaptor_closure cv D | D derived_from < sender_adaptor_closure < D >> cv D | operator | 
- 
     A pipeable sender adaptor object is a customization point object that accepts a sender sender 
- 
     If a pipeable sender adaptor object accepts only one argument, then it is a pipeable sender adaptor closure object. 
- 
     If a pipeable sender adaptor object adaptor s decltype (( s )) sender args ... adaptor ( s , args ...) BoundArgs decay_t < decltype (( args )) > ... adaptor ( args ...) f - 
       Its target object is a copy of adaptor 
- 
       Its bound argument entities bound_args BoundArgs ... std :: forward < decltype (( args )) > ( args )... 
- 
       Its call pattern is adaptor ( r , bound_args ...) r f 
 The expression adaptor ( args ...) 
- 
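The following non-normative sketch illustrates pipeable sender adaptor closure objects, assuming an implementation of this proposal.

using namespace std::execution;

// then(f), with only the function argument, yields a pipeable sender
// adaptor closure object that can be applied or piped later.
auto add_one = then([](int i) { return i + 1; });

sender auto a = add_one(just(41));        // function call syntax
sender auto b = just(41) | add_one;       // pipeline syntax; same semantics

// Closures compose: (C | D) applied to a sender s is d(c(s)).
auto pipeline = then([](int i) { return i * 2; }) | add_one;
sender auto c = just(20) | pipeline;      // completes with 41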
       
10.8.5.3. execution :: on 
   - 
     execution :: on 
- 
     Let replace - scheduler ( e , sch ) e 'execution :: get_scheduler ( e ) sch tag_invoke ( tag , e ', args ...) tag ( e , args ...) args ... tag forwarding - env - query execution :: get_scheduler_t 
- 
     The name execution :: on sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: on execution :: on ( sch , s ) - 
       tag_invoke ( execution :: on , sch , s ) s sch execution :: on ( sch , s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s1 s1 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r ) execution :: connect ( s , r2 ) r2 op_state3 execution :: start ( op_state3 ) execution :: set_error out_r current_exception () 
- 
           execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           execution :: set_stopped ( r ) execution :: set_stopped ( out_r ) 
- 
           execution :: get_env ( r ) execution :: get_env ( out_r ) 
 
- 
           
- 
         Calls execution :: schedule ( sch ) s2 execution :: connect ( s2 , r ) op_state2 
- 
         op_state2 op_state1 
- 
         r2 out_r execution :: get_env ( r2 ) replace - scheduler ( e , sch ) 
- 
         When execution :: start op_state1 execution :: start op_state2 
- 
         The lifetime of op_state2 op_state3 op_state1 op_state3 op_state1 
 
- 
         
- 
       Given subexpressions s1 e s1 on S1 decltype (( s1 )) E 'decltype (( replace - scheduler ( e , sch ))) tag_invoke ( get_completion_signatures , s1 , e ) make_completion_signatures < copy_cvref_t < S1 , S > , E ', make_completion_signatures < schedule_result_t < Sch > , E , completion_signatures < set_error_t ( exception_ptr ) > , no - value - completions >> ; where no - value - completions < As ... > completion_signatures <> As ... 
 
- 
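A non-normative usage sketch for execution :: on, assuming an implementation of this proposal and a hypothetical get_some_scheduler() returning a scheduler for some execution context:

using namespace std::execution;

scheduler auto sch = get_some_scheduler();   // hypothetical

// on(sch, s) starts s on an execution agent owned by sch, so the
// work described by s runs wherever sch schedules it.
sender auto work = on(sch, just(3) | then([](int i) { return i * i; }));

auto [nine] = std::this_thread::sync_wait(std::move(work)).value();   // nine == 9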
       
10.8.5.4. execution :: transfer 
   - 
     execution :: transfer set_value 
- 
     The name execution :: transfer sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: transfer execution :: transfer ( s , sch ) - 
       tag_invoke ( execution :: transfer , get_completion_scheduler < set_value_t > ( s ), s , sch ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: transfer , s , sch ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, schedule_from ( sch , s ) 
 If the function selected above does not return a sender which is a result of a call to execution :: schedule_from ( sch , s2 ) s2 s execution :: transfer ( s , sch ) 
- 
       
- 
     Senders returned from execution :: transfer get_completion_scheduler < CPO > get_completion_scheduler < CPO > CPO set_value_t set_stopped_t sch get_completion_scheduler < set_error_t > 
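A non-normative usage sketch for execution :: transfer, assuming an implementation of this proposal and two hypothetical schedulers:

using namespace std::execution;

scheduler auto cpu = get_cpu_scheduler();    // hypothetical
scheduler auto io  = get_io_scheduler();     // hypothetical

sender auto s =
    schedule(cpu)                            // start on the cpu context
  | then([] { return 42; })                  // runs on cpu
  | transfer(io)                             // subsequent completion moves to io
  | then([](int i) { return i + 1; });       // runs on io

auto [v] = std::this_thread::sync_wait(std::move(s)).value();   // v == 43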
10.8.5.5. execution :: schedule_from 
   - 
     execution :: schedule_from schedule_from transfer 
- 
     The name execution :: schedule_from sch s Sch decltype (( sch )) S decltype (( s )) Sch execution :: scheduler S execution :: sender execution :: schedule_from execution :: schedule_from ( sch , s ) - 
       tag_invoke ( execution :: schedule_from , sch , s ) tag_invoke sch s execution :: schedule_from ( sch , s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r Signal ( r , args ...) args ... op_state args '... r2 - 
           When execution :: set_value ( r2 ) Signal ( out_r , std :: move ( args ')...) 
- 
           execution :: set_error ( r2 , e ) execution :: set_error ( out_r , e ) 
- 
           execution :: set_stopped ( r2 ) execution :: set_stopped ( out_r ) 
 It then calls execution :: schedule ( sch ) s3 execution :: connect ( s3 , r2 ) op_state3 execution :: start ( op_state3 ) execution :: set_error ( out_r , current_exception ()) Signal ( r , args ...) 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 execution :: connect ( s2 , out_r ) 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) op_state3 op_state 
 
- 
         
- 
       Given subexpressions s2 e s2 schedule_from S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , make_completion_signatures < schedule_result_t < Sch > , E , completion_signatures < set_error_t ( exception_ptr ) > , no - value - completions >> ; where no - value - completions < As ... > completion_signatures <> As ... 
 
- 
       
- 
     Senders returned from execution :: schedule_from get_completion_scheduler < CPO > get_completion_scheduler < CPO > CPO set_value_t set_stopped_t sch get_completion_scheduler < set_error_t > 
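A non-normative usage sketch for execution :: schedule_from, assuming an implementation of this proposal and a hypothetical scheduler; schedule_from is the customization point to which transfer lowers when it is not otherwise customized.

using namespace std::execution;

scheduler auto sch = get_some_scheduler();   // hypothetical

// Complete the results of just(1, 2, 3) on an agent owned by sch.
sender auto s = schedule_from(sch, just(1, 2, 3));

auto [a, b, c] = std::this_thread::sync_wait(std::move(s)).value();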
10.8.5.6. execution :: then 
   - 
     execution :: then 
- 
     The name execution :: then s f S decltype (( s )) F f f 'f S execution :: sender F movable - value execution :: then execution :: then ( s , f ) - 
       tag_invoke ( execution :: then , get_completion_scheduler < set_value_t > ( s ), s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: then , s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) v invoke ( f ', args ...) decltype ( v ) void execution :: set_value ( out_r ) execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) execution :: set_value ( r , args ...) 
- 
           execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           execution :: set_stopped ( r ) execution :: set_stopped ( out_r ) 
 
- 
           
- 
         Returns an expression equivalent to execution :: connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 then S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , set - value - completions > ; where set - value - completions template < class ... As > set - value - completions = completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F , As ... >>> and set - error - signature completion_signatures < set_error_t ( exception_ptr ) > type - list value_types_of_t < copy_cvref_t < S2 , S > , E , potentially - throwing , type - list > true_type completion_signatures <> potentially - throwing template < class ... As > using potentially - throwing = bool_constant <! is_nothrow_invocable_v < F , As ... >> ; 
 
- 
         
 If the function selected above does not return a sender that invokes f set_value s s execution :: then ( s , f ) 
- 
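A non-normative sketch showing the error behavior of execution :: then, assuming an implementation of this proposal: if the callable exits with an exception, that exception is delivered downstream as set_error(exception_ptr).

using namespace std::execution;

sender auto s =
    just(std::string("file.txt"))
  | then([](std::string name) {
      if (name.empty())
        throw std::runtime_error("no file");   // surfaces as set_error(exception_ptr)
      return name.size();
    });

auto [n] = std::this_thread::sync_wait(std::move(s)).value();   // n == 8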
       
10.8.5.7. execution :: upon_error 
   - 
     execution :: upon_error 
- 
     The name execution :: upon_error s f S decltype (( s )) F f f 'f S execution :: sender F movable - value execution :: upon_error execution :: upon_error ( s , f ) - 
       tag_invoke ( execution :: upon_error , get_completion_scheduler < set_error_t > ( s ), s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: upon_error , s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           When execution :: set_error ( r , e ) v invoke ( f ', e ) decltype ( v ) void execution :: set_value ( out_r ) execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) execution :: set_error ( r , e ) 
- 
           execution :: set_stopped ( r ) execution :: set_stopped ( out_r ) 
 
- 
           
- 
         Returns an expression equivalent to execution :: connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 upon_error S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , default - set - value , set - error - completion > ; where set - error - completion template < class E > set - error - completion = completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F , E >>> and set - error - signature completion_signatures < set_error_t ( exception_ptr ) > type - list error_types_of_t < copy_cvref_t < S2 , S > , E , potentially - throwing > true_type completion_signatures <> potentially - throwing template < class ... Es > using potentially - throwing = type - list <! bool_constant < is_nothrow_invocable_v < F , Es >> ... > ; 
 
- 
         
 If the function selected above does not return a sender which invokes f set_error s s execution :: upon_error ( s , f ) 
- 
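A non-normative sketch for execution :: upon_error, assuming an implementation of this proposal; the callable consumes an error completion and turns it back into a value completion.

using namespace std::execution;

sender auto s =
    just(10)
  | then([](int i) -> int {
      if (i > 5) throw std::runtime_error("too big");
      return i;
    })
  | upon_error([](std::exception_ptr) noexcept {
      return -1;                               // recover with a fallback value
    });

auto [v] = std::this_thread::sync_wait(std::move(s)).value();   // v == -1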
       
10.8.5.8. execution :: upon_stopped 
   - 
     execution :: upon_stopped 
- 
     The name execution :: upon_stopped s f S decltype (( s )) F f f 'f S execution :: sender F movable - value invocable execution :: upon_stopped execution :: upon_stopped ( s , f ) - 
       tag_invoke ( execution :: upon_stopped , get_completion_scheduler < set_stopped_t > ( s ), s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: upon_stopped , s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           execution :: set_value ( r , args ...) execution :: set_value ( out_r , args ...) 
- 
           execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
           When execution :: set_stopped ( r ) v invoke ( f ') v void execution :: set_value ( out_r ) execution :: set_value ( out_r , v ) execution :: set_error ( out_r , current_exception ()) execution :: set_stopped ( r ) 
 
- 
           
- 
         Returns an expression equivalent to execution :: connect ( s , r ) 
- 
         Let compl - sig - t < Tag , Args ... > Tag () Args ... void Tag ( Args ...) s2 e s2 upon_stopped S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , set - error - signature , default - set - value , default - set - error , set - stopped - completions > ; where set - stopped - completions completion_signatures < compl - sig - t < set_value_t , invoke_result_t < F >> set - error - signature completion_signatures < set_error_t ( exception_ptr ) > is_nothrow_invocable_v < F > true, orcompletion_signatures <> 
 
- 
         
 If the function selected above does not return a sender which invokes f s set_stopped s execution :: upon_stopped ( s , f ) 
- 
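A non-normative sketch for execution :: upon_stopped, assuming an implementation of this proposal and a hypothetical cancellable_int() sender that may complete with either an int value or set_stopped:

using namespace std::execution;

sender auto s =
    cancellable_int()                          // hypothetical: sends int or stopped
  | upon_stopped([]() noexcept {
      return 0;                                // substitute a default on cancellation
    });

auto [v] = std::this_thread::sync_wait(std::move(s)).value();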
       
10.8.5.9. execution :: let_value execution :: let_error execution :: let_stopped 
   - 
     execution :: let_value execution :: let_error execution :: let_stopped 
- 
     The names execution :: let_value execution :: let_error execution :: let_stopped let - cpo execution :: let_value execution :: let_error execution :: let_stopped s f S decltype (( s )) F f f 'f S execution :: sender let - cpo ( s , f ) F invocable execution :: let_stopped ( s , f ) let - cpo ( s , f ) - 
       tag_invoke ( let - cpo , get_completion_scheduler < set_value_t > ( s ), s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( let - cpo , s , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, given a receiver out_r out_r 'out_r - 
         For execution :: let_value set - cpo execution :: set_value execution :: let_error set - cpo execution :: set_error execution :: let_stopped set - cpo execution :: set_stopped signal execution :: set_value execution :: set_error execution :: set_stopped 
- 
         Let r R - 
           When set - cpo ( r , args ...) r args ... op_state2 args '... invoke ( f ', args '...) s3 execution :: connect ( s3 , std :: move ( out_r ')) op_state3 op_state3 op_state2 execution :: start ( op_state3 ) execution :: set_error ( std :: move ( out_r '), current_exception ()) set - cpo ( r , args ...) 
- 
           signal ( r , args ...) signal ( std :: move ( out_r '), args ...) signal set - cpo 
 
- 
           
- 
         let - cpo ( s , f ) s2 - 
           If the expression execution :: connect ( s , r ) execution :: connect ( s2 , out_r ) 
- 
           Otherwise, let op_state2 execution :: connect ( s , r ) execution :: connect ( s2 , out_r ) op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
 
- 
           
- 
         Given subexpressions s2 e s2 let - cpo ( s , f ) S2 decltype (( s2 )) E decltype (( e )) S 'copy_cvref_t < S2 , S > tag_invoke ( get_completion_signatures , s2 , e ) - 
If sender < S ', E > is false, the type of tag_invoke ( get_completion_signatures , s2 , e ) is dependent_completion_signatures < E > 
- 
           Otherwise, let Sigs ... completion_signatures completion_signatures_of_t < S ', E > Sigs2 ... Sigs ... set - cpo Rest ... Sigs ... Sigs2 ... 
- 
           For each Sig2 i Sigs2 ... Vs i ... Sig2 i S3 i invoke_result_t < F , decay_t < Vs i >& ... > S3 i sender < S3 i , E > tag_invoke ( get_completion_signatures , s2 , e ) dependent_completion_signatures < E > 
- 
           Otherwise, let Sigs3 i ... completion_signatures completion_signatures_of_t < S3 i , E > tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Sigs3 0 ..., Sigs3 1 ..., ... Sigs3 n -1 . .., Rest ..., set_error_t ( exception_ptr ) > n sizeof ...( Sigs2 ) 
 
- 
           
 
- 
         
 If let - cpo ( s , f ) f set - cpo f s let - cpo ( s , f ) 
- 
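A non-normative sketch for execution :: let_value, assuming an implementation of this proposal and a hypothetical async_lookup(query) sender factory; the values produced by the input sender remain alive for as long as the sender returned by the callable is running, so that sender may refer to them.

using namespace std::execution;

sender auto s =
    just(std::string("query"))
  | let_value([](std::string& q) {
      // q lives in let_value's operation state until the returned
      // sender completes, so it is safe to refer to it by reference.
      return async_lookup(q)                           // hypothetical
           | then([&q](int rows) { return q + ": " + std::to_string(rows); });
    });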
       
10.8.5.10. execution :: bulk 
   - 
     execution :: bulk 
- 
     The name execution :: bulk s shape f S decltype (( s )) Shape decltype (( shape )) F decltype (( f )) S execution :: sender Shape integral execution :: bulk execution :: bulk ( s , shape , f ) - 
       tag_invoke ( execution :: bulk , get_completion_scheduler < set_value_t > ( s ), s , shape , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: bulk , s , shape , f ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 s2 out_r - 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) f ( i , args ...) i Shape 0 shape execution :: set_value ( out_r , args ...) execution :: set_error ( out_r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
When execution :: set_stopped ( r ) execution :: set_stopped ( out_r ) 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 
- 
         Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
- 
         Given subexpressions s2 e s2 bulk S2 decltype (( s2 )) E decltype (( e )) S 'copy_cvref_t < S2 , S > nothrow - callable template < class ... As > using nothrow - callable = bool_constant < is_nothrow_invocable_v < decay_t < F >& , As ... >> ; - 
           If any of the types in the type - list value_types_of_t < S ', E , nothrow - callable , type - list > false_type tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < S ', E , completion_signatures < set_error_t ( exception_ptr ) >> 
- 
           Otherwise, the type of tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures_of_t < S ', E > 
 
- 
           
 
- 
         
- 
       If the function selected above does not return a sender which invokes f ( i , args ...) i Shape 0 shape args ... execution :: bulk ( s , shape , f ) 
 
- 
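A non-normative sketch for execution :: bulk, assuming an implementation of this proposal and a hypothetical scheduler; the callable is invoked once for every index in [0, shape).

using namespace std::execution;

std::vector<int> v(8);

sender auto s =
    schedule(get_some_scheduler())                     // hypothetical scheduler
  | bulk(v.size(), [&v](std::size_t i) { v[i] = int(i * i); });

std::this_thread::sync_wait(std::move(s));             // afterwards, v[i] == i*i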
       
10.8.5.11. execution :: split 
   - 
     execution :: split 
- 
     Let split - env e get_stop_token ( e ) stop_token 
- 
The name execution :: split s S decltype (( s )) execution :: sender < S , split - env > is false, execution :: split execution :: split ( s ) - 
       tag_invoke ( execution :: split , get_completion_scheduler < set_value_t > ( s ), s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: split , s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 - 
         Creates an object sh_state stop_source s - 
           the operation state that results from connecting s r 
- 
           the sets of values and errors with which s exception_ptr 
 
- 
           
- 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) args ... sh_state sh_state execution :: set_error ( r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) e sh_state sh_state 
- 
           When execution :: set_stopped ( r ) sh_state 
- 
           get_env ( r ) e split - env execution :: get_stop_token ( e ) get_token () sh_state 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 op_state2 sh_state 
- 
         When s2 out_r OutR op_state - 
           An object out_r 'OutR out_r 
- 
           A reference to sh_state 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 
- 
           
- 
         When execution :: start ( op_state ) - 
           If r Signal Signal ( out_r ', args2 ...) args2 ... sh_state Signal ( r , args ...) 
- 
           Otherwise, it emplace constructs the stop callback optional with the arguments execution :: get_stop_token ( get_env ( out_r ')) stop - callback - fn { stop - src } stop - src sh_state 
- 
Then, it checks to see if stop - src . stop_requested () is true. If so, it calls execution :: set_stopped ( out_r ') 
- 
           Otherwise, it adds a pointer to op_state sh_state execution :: start ( op_state2 ) 
 
- 
           
- 
         When r op_state Signal r op_state Signal ( std :: move ( out_r '), args2 ...) args2 ... sh_state Signal ( r , args ...) 
- 
         Ownership of sh_state s2 op_state s2 
- 
         Given subexpressions s2 e s2 split S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , E , completion_signatures < set_error_t ( exception_ptr ) > , value - signatures , error - signatures > ; where value - signatures template < class ... Ts > using value - signatures = completion_signatures < set_value_t ( const decay_t < Ts >& ...) > ; and error - signatures template < class E > using error - signatures = completion_signatures < set_error_t ( const decay_t < E >& ) > ; 
- 
         Does not expose the sender queries get_completion_scheduler . 
 
- 
         
- 
       If the function selected above does not return a sender which sends references to values sent by s execution :: split ( s ) 
 
- 
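A non-normative sketch for execution :: split, assuming an implementation of this proposal and a hypothetical expensive_computation() sender of int; split turns a sender that may only be connected once into one that can be connected many times, delivering its results as const lvalue references.

using namespace std::execution;

sender auto once  = expensive_computation();           // hypothetical, single-shot
sender auto multi = split(std::move(once));            // now connectable many times

sender auto a = multi | then([](const int& i) { return i + 1; });
sender auto b = multi | then([](const int& i) { return i * 2; });

auto [x] = std::this_thread::sync_wait(std::move(a)).value();
auto [y] = std::this_thread::sync_wait(std::move(b)).value();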
       
10.8.5.12. execution :: when_all 
   - 
     execution :: when_all execution :: when_all_with_variant 
- 
     The name execution :: when_all s i ... S i ... decltype (( s i ))... execution :: when_all ( s i ...) - 
       If the number of subexpressions s i ... 
- 
       If any type S i execution :: sender 
 Otherwise, the expression execution :: when_all ( s i ...) - 
       tag_invoke ( execution :: when_all , s i ...) tag_invoke s i ... set_value execution :: when_all ( s i ...) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender w W w out_r OutR op_state - 
         For each sender s i r i - 
           If execution :: set_value ( r i , t i ...) r i op_state execution :: set_value ( out_r , t 0 ..., t 1 ..., ..., t n -1 . ..) n s i ... 
- 
           Otherwise, execution :: set_error execution :: set_stopped r i execution :: set_error ( r i , e ) request_stop op_state op_state execution :: set_error ( out_r , e ) 
- 
           Otherwise, request_stop op_state op_state execution :: set_stopped ( out_r ) 
- 
           For each receiver r i get_env ( r i ) e execution :: get_stop_token ( e ) get_token () op_state tag_invoke ( tag , e , args ...) tag ( get_env ( out_r ), args ...) args ... tag forwarding - env - query get_stop_token_t 
 
- 
           
- 
         For each sender s i execution :: connect ( s i , r i ) child_op i 
- 
         Returns an operation state op_state - 
           Each operation state child_op i 
- 
           A stop source of type in_place_stop_source 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { in_place_stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 
- 
           
- 
         When execution :: start ( op_state ) - 
           Emplace constructs the stop callback optional with the arguments execution :: get_stop_token ( get_env ( out_r )) stop - callback - fn { stop - src } stop - src op_state 
- 
Then, it checks to see if stop - src . stop_requested () is true. If so, it calls execution :: set_stopped ( out_r ) 
- 
           Otherwise, calls execution :: start ( child_op i ) child_op i 
 
- 
           
- 
         Given subexpressions s2 e s2 when_all S2 decltype (( s2 )) E decltype (( e )) Ss ... when_all s2 e no_env WE no_env WE stop_token_of_t < WE > in_place_stop_token tag_invoke_result_t < Tag , WE , As ... > call - result - t < Tag , E , As ... > As ... Tag get_stop_token_t tag_invoke ( get_completion_signatures , s2 , e ) - 
           For each type S i Ss ... S 'i copy_cvref_t < S2 , S i > S 'i completion_signatures_of_t < S 'i , WE > completion_signatures tag_invoke ( get_completion_signatures , s2 , e ) dependent_completion_signatures < E > 
- 
           Otherwise, for each type S 'i Sigs i ... completion_signatures completion_signatures_of_t < S 'i , WE > C i Sigs i ... set_value_t C i tag_invoke ( get_completion_signatures , s2 , e ) dependent_completion_signatures < E > 
- 
           Otherwise, let Sigs2 i ... Sigs i ... set_value_t Ws ... [ Sigs2 0 ..., Sigs2 1 ..., ... Sigs2 n -1 . .., set_stopped_t ()] n sizeof ...( Ss ) C i 0 tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Ws ... > 
- 
           Otherwise, let V i ... Sigs i ... set_value_t tag_invoke ( get_completion_signatures , s2 , e ) completion_signatures < Ws ..., set_value_t ( decay_t < V 0 >&& ..., decay_t < V 1 >&& ..., ... decay_t < V n -1 >&& ...) > 
 
- 
           
 
- 
         
 
- 
       
- 
     The name execution :: when_all_with_variant s ... S decltype (( s )) S i S ... execution :: sender execution :: when_all_with_variant execution :: when_all_with_variant ( s ...) - 
       tag_invoke ( execution :: when_all_with_variant , s ...) tag_invoke R into - variant - type < S , env_of_t < R >> ... set_value execution :: when_all ( s i ...) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, execution :: when_all ( execution :: into_variant ( s )...) 
 
- 
       
- 
     Senders returned from adaptors defined in this subclause shall not expose the sender queries get_completion_scheduler < CPO > 
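A non-normative sketch for execution :: when_all, assuming an implementation of this proposal; each input sender must have exactly one value completion signature, and the values of all inputs are concatenated into a single value completion.

using namespace std::execution;

sender auto s = when_all(
    just(1),
    just(2.5),
    just(std::string("three")));

auto [i, d, str] = std::this_thread::sync_wait(std::move(s)).value();
// i == 1, d == 2.5, str == "three"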
10.8.5.13. execution :: transfer_when_all 
   - 
     execution :: transfer_when_all execution :: transfer_when_all_with_variant 
- 
     The name execution :: transfer_when_all sch s ... Sch decltype ( sch ) S decltype (( s )) Sch scheduler S i S ... execution :: sender execution :: transfer_when_all execution :: transfer_when_all ( sch , s ...) - 
       tag_invoke ( execution :: transfer_when_all , sch , s ...) tag_invoke s ... set_value sch execution :: transfer_when_all ( sch , s ...) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, execution :: transfer ( execution :: when_all ( s ...), sch ) 
 
- 
       
- 
     The name execution :: transfer_when_all_with_variant sch s ... Sch decltype (( sch )) S decltype (( s )) S i S ... execution :: sender execution :: transfer_when_all_with_variant execution :: transfer_when_all_with_variant ( sch , s ...) - 
       tag_invoke ( execution :: transfer_when_all_with_variant , s ...) tag_invoke R into - variant - type < S , env_of_t < R >> ... set_value execution :: transfer_when_all_with_variant ( sch , s ...) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, execution :: transfer_when_all ( sch , execution :: into_variant ( s )...) 
 
- 
       
- 
     Senders returned from execution :: transfer_when_all get_completion_scheduler < CPO > get_completion_scheduler < CPO > CPO set_value_t set_stopped_t sch get_completion_scheduler < set_error_t > 
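A non-normative sketch for execution :: transfer_when_all, assuming an implementation of this proposal and a hypothetical scheduler; unless customized, it has the effect of transfer(when_all(s...), sch).

using namespace std::execution;

scheduler auto sch = get_some_scheduler();             // hypothetical

sender auto s = transfer_when_all(sch, just(1), just(2));

auto [a, b] = std::this_thread::sync_wait(std::move(s)).value();   // a == 1, b == 2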
10.8.5.14. execution :: into_variant 
   - 
     execution :: into_variant 
- 
     The template into - variant - type execution :: into_variant template < class S , class E > requires sender < S , E > using into - variant - type = value_types_of_t < S , E > ; 
- 
     execution :: into_variant s S decltype (( s )) S execution :: sender execution :: into_variant ( s ) execution :: into_variant ( s ) s2 s2 out_r - 
       Constructs a receiver r - 
         If execution :: set_value ( r , ts ...) execution :: set_value ( out_r , into - variant - type < S , env_of_t < decltype (( r )) >> ( decayed - tuple < decltype ( ts )... > ( ts ...))) execution :: set_error ( out_r , current_exception ()) 
- 
         execution :: set_error ( r , e ) execution :: set_error ( out_r , e ) 
- 
         execution :: set_stopped ( r ) execution :: set_stopped ( out_r ) 
 
- 
         
- 
       Calls execution :: connect ( s , r ) op_state2 
- 
       Returns an operation state op_state op_state2 execution :: start ( op_state ) execution :: start ( op_state2 ) 
- 
       Given subexpressions s2 e s2 into_variant S2 decltype (( s2 )) E decltype (( e )) into - variant - set - value template < class S , class E > struct into - variant - set - value { template < class ... Args > using apply = set_value_t ( into - variant - type < S , E > ); }; Let into - variant - is - nothrow template < class S , class E > struct into - variant - is - nothrow { template < class ... Args > requires constructible_from < decayed - tuple < Args ... > , Args ... > using apply = bool_constant < noexcept ( into - variant - type < S , E > ( decayed - tuple < Args ... > ( declval < Args > ()...))) > ; }; Let INTO - VARIANT - ERROR - SIGNATURES ( S , E ) completion_signatures < set_error_t ( exception_ptr ) > type - list value_types_of_t < S , E , into - variant - is - nothrow < S , E >:: template apply , type - list > false_type completion_signatures <> The type of tag_invoke ( get_completion_signatures_t {}, s2 , e )) make_completion_signatures < S2 , E , INTO - VARIANT - ERROR - SIGNATURES ( S , E ), into - variant - set - value < S2 , E >:: template apply > 
 
- 
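A non-normative sketch for execution :: into_variant, assuming an implementation of this proposal and a hypothetical sender with more than one value completion; into_variant collapses all value completions into a single value whose type is a variant of decayed tuples.

using namespace std::execution;

// Hypothetical sender that completes with either (int) or (int, float).
sender auto s = make_multi_completion_sender();

sender auto v = into_variant(std::move(s));
// v completes with a single value of type
// std::variant<std::tuple<int>, std::tuple<int, float>>.
auto [result] = std::this_thread::sync_wait(std::move(v)).value();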
       
10.8.5.15. execution :: stopped_as_optional 
   - 
     execution :: stopped_as_optional 
- 
     The name execution :: stopped_as_optional s S decltype (( s )) get - env - sender connect r start execution :: set_value ( r , get_env ( r )) execution :: stopped_as_optional ( s ) execution :: let_value ( get - env - sender , [] < class E > ( const E & ) requires single - sender < S , E > { return execution :: let_stopped ( execution :: then ( s , [] < class T > ( T && t ) { return optional < decay_t < single - sender - value - type < S , E >>> { static_cast < T &&> ( t ) }; } ), [] () noexcept { return execution :: just ( optional < decay_t < single - sender - value - type < S , E >>> {}); } ); } ) 
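A non-normative sketch for execution :: stopped_as_optional, assuming an implementation of this proposal and a hypothetical maybe_cancelled_int() sender of a single int that may instead complete with set_stopped:

using namespace std::execution;

sender auto s = stopped_as_optional(maybe_cancelled_int());   // hypothetical input

// s completes with optional<int>: engaged on success, empty if the
// input completed with set_stopped.
auto [opt] = std::this_thread::sync_wait(std::move(s)).value();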
10.8.5.16. execution :: stopped_as_error 
   - 
     execution :: stopped_as_error 
- 
     The name execution :: stopped_as_error s e S decltype (( s )) E decltype (( e )) S sender E movable - value execution :: stopped_as_error ( s , e ) execution :: stopped_as_error ( s , e ) execution :: let_stopped ( s , [] { return execution :: just_error ( e ); }) 
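A non-normative sketch for execution :: stopped_as_error, assuming an implementation of this proposal and the same hypothetical maybe_cancelled_int() sender; a stopped completion is mapped to an error of the user's choosing.

using namespace std::execution;

struct cancelled_error {};

sender auto s =
    stopped_as_error(maybe_cancelled_int(),            // hypothetical input
                     cancelled_error{});               // stopped becomes this error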
10.8.5.17. execution :: ensure_started 
   - 
     execution :: ensure_started 
- 
     Let ensure - started - env e get_stop_token ( e ) stop_token 
- 
The name execution :: ensure_started s S decltype (( s )) execution :: sender < S , ensure - started - env > is false, execution :: ensure_started ( s ) execution :: ensure_started ( s ) - 
       tag_invoke ( execution :: ensure_started , get_completion_scheduler < set_value_t > ( s ), s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: ensure_started , s ) - 
         Mandates: The type of the tag_invoke execution :: sender 
 
- 
         
- 
       Otherwise, constructs a sender s2 - 
         Creates an object sh_state stop_source - 
           the operation state that results from connecting s r 
- 
           the sets of values and errors with which s exception_ptr 
 s2 sh_state r 
- 
           
- 
         Constructs a receiver r - 
           When execution :: set_value ( r , args ...) args ... sh_state sh_state execution :: set_error ( r , current_exception ()) 
- 
           When execution :: set_error ( r , e ) e sh_state 
- 
           When execution :: set_stopped ( r ) 
- 
           get_env ( r ) e ensure - started - env execution :: get_stop_token ( e ) get_token () sh_state 
- 
           r sh_state s2 r sh_state 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state2 op_state2 sh_state execution :: start ( op_state2 ) 
- 
         When s2 out_r OutR op_state - 
           An object out_r 'OutR out_r 
- 
           A reference to sh_state 
- 
           A stop callback of type optional < stop_token_of_t < env_of_t < OutR >>:: callback_type < stop - callback - fn >> stop - callback - fn struct stop - callback - fn { stop_source & stop_src_ ; void operator ()() noexcept { stop_src_ . request_stop (); } }; 
 s2 sh_state op_state 
- 
           
- 
         When execution :: start ( op_state ) - 
           If r Signal r Signal ( out_r ', args2 ...) args2 ... sh_state Signal ( r , args ...) 
- 
           Otherwise, it emplace constructs the stop callback optional with the arguments execution :: get_stop_token ( get_env ( out_r ')) stop - callback - fn { stop - src } stop - src sh_state 
- 
Then, it checks to see if stop - src . stop_requested () is true. If so, it calls execution :: set_stopped ( out_r ') 
- 
           Otherwise, it sets sh_state op_state r 
 
- 
           
- 
         When r op_state Signal r op_state Signal ( std :: move ( out_r '), args2 ...) args2 ... sh_state Signal ( r , args ...) 
- 
         [Note: If sender s2 r sh_state sh_state 
 
- 
         
- 
       Given subexpressions s2 e s2 ensure_started S2 decltype (( s2 )) E decltype (( e )) tag_invoke ( get_completion_signatures , s2 , e ) make_completion_signatures < copy_cvref_t < S2 , S > , ensure - started - env , completion_signatures < set_error_t ( exception_ptr && ) > , set - value - signature , error - types > where set - value - signature template < class ... Ts > using set - value - signature = completion_signatures < set_value_t ( decay_t < Ts >&& ...) > ; and error - types template < class E > using error - types = completion_signatures < set_error_t ( decay_t < E >&& ) > ; 
 If the function selected above does not return a sender that sends xvalue references to values sent by s execution :: ensure_started ( s ) 
- 
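A non-normative sketch for execution :: ensure_started, assuming an implementation of this proposal and a hypothetical scheduler; the work is connected and started eagerly, and the returned sender observes the already-running operation.

using namespace std::execution;

sender auto eager = ensure_started(
    on(get_some_scheduler(),                           // hypothetical scheduler
       just(21) | then([](int i) { return i * 2; })));

// ... other work can proceed here while the computation runs ...

auto [v] = std::this_thread::sync_wait(std::move(eager)).value();   // v == 42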
       
10.8.6. Sender consumers [exec.consumers]
10.8.6.1. execution :: start_detached 
   - 
     execution :: start_detached 
- 
     The name execution :: start_detached s S decltype (( s )) S execution :: sender execution :: start_detached execution :: start_detached ( s ) - 
       tag_invoke ( execution :: start_detached , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise, tag_invoke ( execution :: start_detached , s ) - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise: - 
         Let R r R cr const R - 
           The expression set_value ( r ) 
- 
           For any subexpression e set_error ( r , e ) terminate () 
- 
           The expression set_stopped ( r ) 
- 
           The expression get_env ( cr ) empty - env {} 
 
- 
           
- 
         Calls execution :: connect ( s , r ) op_state execution :: start ( op_state ) op_state r 
 
- 
         
 If the function selected above does not eagerly start the sender s set_value set_stopped terminate () set_error execution :: start_detached ( s ) 
- 
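A non-normative sketch for execution :: start_detached, assuming an implementation of this proposal, a hypothetical scheduler, and a hypothetical log() function; the work is started eagerly, its result is discarded, and an error completion terminates the program.

using namespace std::execution;

start_detached(
    on(get_some_scheduler(),                             // hypothetical scheduler
       just() | then([] { log("background tick"); })));  // hypothetical log()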
       
10.8.6.2. this_thread :: sync_wait 
   - 
     this_thread :: sync_wait this_thread :: sync_wait_with_variant 
- 
     For any receiver r sync_wait sync_wait_with_variant get_scheduler ( get_env ( r )) get_delegatee_scheduler ( get_env ( r )) this_thread :: sync_wait sync_wait execution :: run_loop sync_wait 
- 
     The templates sync - wait - type sync - wait - with - variant - type this_thread :: sync_wait this_thread :: sync_wait_with_variant sync - wait - env get_env ( r ) r sync_wait template < sender < sync - wait - env > S > using sync - wait - type = optional < execution :: value_types_of_t < S , sync - wait - env , decayed - tuple , type_identity_t >> ; template < sender < sync - wait - env > S > using sync - wait - with - variant - type = optional < execution :: into - variant - type < S , sync - wait - env >> ; 
- 
The name this_thread :: sync_wait s S decltype (( s )) execution :: sender < S , sync - wait - env > is false, or the number of the arguments completion_signatures_of_t < S , sync - wait - env >:: value_types Variant this_thread :: sync_wait this_thread :: sync_wait - 
       tag_invoke ( this_thread :: sync_wait , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) - 
         Mandates: The type of the tag_invoke sync - wait - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait , s ) - 
         Mandates: The type of the tag_invoke sync - wait - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise: - 
         Constructs a receiver r 
- 
         Calls execution :: connect ( s , r ) op_state execution :: start ( op_state ) 
- 
         Blocks the current thread until a receiver completion-signal of r - 
           If execution :: set_value ( r , ts ...) sync - wait - type < S , sync - wait - env > { decayed - tuple < decltype ( ts )... > { ts ...}} sync_wait 
- 
           If execution :: set_error ( r , e ) E e E exception_ptr std :: rethrow_exception ( e ) E error_code system_error ( e ) e 
- 
           If execution :: set_stopped ( r ) sync - wait - type < S , sync - wait - env > {} 
 
- 
           
 
- 
         
 
- 
       
- 
The name this_thread :: sync_wait_with_variant s S execution :: into_variant ( s ) execution :: sender < S , sync - wait - env > is false, this_thread :: sync_wait_with_variant this_thread :: sync_wait_with_variant - 
       tag_invoke ( this_thread :: sync_wait_with_variant , execution :: get_completion_scheduler < execution :: set_value_t > ( s ), s ) - 
         Mandates: The type of the tag_invoke sync - wait - with - variant - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, tag_invoke ( this_thread :: sync_wait_with_variant , s ) - 
         Mandates: The type of the tag_invoke sync - wait - with - variant - type < S , sync - wait - env > 
 
- 
         
- 
       Otherwise, this_thread :: sync_wait ( execution :: into_variant ( s )) 
 
- 
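A non-normative sketch for this_thread :: sync_wait and this_thread :: sync_wait_with_variant, assuming an implementation of this proposal; sync_wait blocks until the sender completes, returning an optional tuple of the values, an empty optional for a stopped completion, and rethrowing (or wrapping) errors.

using namespace std::execution;

if (auto r = std::this_thread::sync_wait(just(1, 2))) {
  auto [a, b] = *r;                                    // a == 1, b == 2
}

// For senders with more than one value completion, use the variant form;
// make_multi_completion_sender() is hypothetical.
auto v = std::this_thread::sync_wait_with_variant(make_multi_completion_sender());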
       
10.9. execution :: execute 
   - 
     execution :: execute 
- 
     The name execution :: execute sch f Sch decltype (( sch )) F decltype (( f )) Sch execution :: scheduler F invocable execution :: execute execution :: execute - 
       tag_invoke ( execution :: execute , sch , f ) tag_invoke f f sch std :: terminate execution :: execute - 
         Mandates: The type of the tag_invoke void 
 
- 
         
- 
       Otherwise, execution :: start_detached ( execution :: then ( execution :: schedule ( sch ), f )) 
 
- 
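A non-normative sketch for execution :: execute, assuming an implementation of this proposal, a hypothetical scheduler, and a hypothetical do_background_cleanup() function; execute is a one-way, fire-and-forget submission that, by default, lowers to start_detached(then(schedule(sch), f)).

using namespace std::execution;

execute(get_some_scheduler(),                          // hypothetical scheduler
        [] { do_background_cleanup(); });              // hypothetical work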
       
10.10. Sender/receiver utilities [exec.utils]
- 
     This section makes use of the following exposition-only entities: // [ Editorial note: copy_cvref_t as in [[P1450R3]] -- end note ] // Mandates: is_base_of_v<T, remove_reference_t<U>> is true template < class T , class U > copy_cvref_t < U && , T > c - style - cast ( U && u ) noexcept requires decays - to < T , T > { return ( copy_cvref_t < U && , T > ) std :: forward < U > ( u ); } 
- 
     [Note: The C-style cast in c-style-cast is to disable accessibility checks. -- end note] 
10.10.1. execution :: receiver_adaptor 
template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor ; 
- 
     receiver_adaptor tag_invoke 
- 
     If Base - 
Let HAS - BASE be false, and
- 
       Let GET - BASE ( d ) d . base () 
 otherwise, let: - 
Let HAS - BASE be true, and
- 
       Let GET - BASE ( d ) c - style - cast < receiver_adaptor < Derived , Base >> ( d ). base () 
 Let BASE - TYPE ( D ) GET - BASE ( declval < D > ()) 
- 
       
- 
     receiver_adaptor < Derived , Base > template < class - type Derived , receiver Base = unspecified > // arguments are not associated entities ([lib.tmpl-heads]) class receiver_adaptor { friend Derived ; public : // Constructors receiver_adaptor () = default ; template < class B > requires HAS - BASE && constructible_from < Base , B > explicit receiver_adaptor ( B && base ) : base_ ( std :: forward < B > ( base )) {} private : using set_value = unspecified ; using set_error = unspecified ; using set_stopped = unspecified ; using get_env = unspecified ; // Member functions template < class Self > requires HAS - BASE decltype ( auto ) base ( this Self && self ) noexcept { return ( std :: forward < Self > ( self ). base_ ); } // [exec.utils.rcvr_adptr.nonmembers] Non-member functions template < class ... As > friend void tag_invoke ( set_value_t , Derived && self , As && ... as ) noexcept ; template < class E > friend void tag_invoke ( set_error_t , Derived && self , E && e ) noexcept ; friend void tag_invoke ( set_stopped_t , Derived && self ) noexcept ; friend decltype ( auto ) tag_invoke ( get_env_t , const Derived & self ) noexcept ( see below ); template < forwarding - receiver - query Tag , class ... As > requires callable < Tag , BASE - TYPE ( const Derived & ), As ... > friend auto tag_invoke ( Tag tag , const Derived & self , As && ... as ) noexcept ( nothrow - callable < Tag , BASE - TYPE ( const Derived & ), As ... > ) -> call - result - t < Tag , BASE - TYPE ( const Derived & ), As ... > { return std :: move ( tag )( GET - BASE ( self ), std :: forward < As > ( as )...); } [[ no_unique_address ]] Base base_ ; // present if and only if HAS-BASE is true }; 
- 
     [Note: receiver_adaptor tag_invoke Derived receiver_adaptor 
- 
     [Example: using _int_completion = execution :: completion_signatures < execution :: set_value_t ( int ) > ; template < execution :: receiver_of < _int_completion > R > class my_receiver : execution :: receiver_adaptor < my_receiver < R > , R > { friend execution :: receiver_adaptor < my_receiver , R > ; void set_value () && { execution :: set_value ( std :: move ( * this ). base (), 42 ); } public : using execution :: receiver_adaptor < my_receiver , R >:: receiver_adaptor ; }; -- end example] 
10.10.1.1. Non-member functions [exec.utils.rcvr_adptr.nonmembers]
template < class ... As > friend void tag_invoke ( set_value_t , Derived && self , As && ... as ) noexcept ; 
- 
     Let SET - VALUE std :: move ( self ). set_value ( std :: forward < As > ( as )...) 
- 
     Constraints: Either SET - VALUE typename Derived :: set_value callable < set_value_t , BASE - TYPE ( Derived ), As ... > true.
- 
     Mandates: SET - VALUE 
- 
     Effects: Equivalent to: - 
       If SET - VALUE SET - VALUE 
- 
       Otherwise, execution :: set_value ( GET - BASE ( std :: move ( self )), std :: forward < As > ( as )...) 
 
- 
       
template < class E > friend void tag_invoke ( set_error_t , Derived && self , E && e ) noexcept ; 
- 
     Let SET - ERROR std :: move ( self ). set_error ( std :: forward < E > ( e )) 
- 
     Constraints: Either SET - ERROR typename Derived :: set_error callable < set_error_t , BASE - TYPE ( Derived ), E > true.
- 
     Mandates: SET - ERROR 
- 
     Effects: Equivalent to: - 
       If SET - ERROR SET - ERROR 
- 
       Otherwise, execution :: set_error ( GET - BASE ( std :: move ( self )), std :: forward < E > ( e )) 
 
- 
       
friend void tag_invoke ( set_stopped_t , Derived && self ) noexcept ; 
- 
     Let SET - STOPPED std :: move ( self ). set_stopped () 
- 
     Constraints: Either SET - STOPPED typename Derived :: set_stopped callable < set_stopped_t , BASE - TYPE ( Derived ) > true.
- 
     Mandates: SET - STOPPED 
- 
     Effects: Equivalent to: - 
       If SET - STOPPED SET - STOPPED 
- 
       Otherwise, execution :: set_stopped ( GET - BASE ( std :: move ( self ))) 
 
- 
       
friend decltype ( auto ) tag_invoke ( get_env_t , const Derived & self ) noexcept ( see below ); 
- 
     Constraints: Either self . get_env () typename Derived :: get_env callable < get_env_t , BASE - TYPE ( const Derived & ) > true.
- 
     Effects: Equivalent to: - 
       If self . get_env () self . get_env () 
- 
       Otherwise, execution :: get_env ( GET - BASE ( self )) 
 
- 
       
- 
     Remarks: The expression in the noexcept - 
       If self . get_env () noexcept ( self . get_env ()) 
- 
       Otherwise, noexcept ( execution :: get_env ( GET - BASE ( self ))) 
 
- 
       
10.10.2. execution :: completion_signatures 
   - 
     completion_signatures 
- 
     [Example: class my_sender { using completion_signatures = execution :: completion_signatures < execution :: set_value_t (), execution :: set_value_t ( int , float ), execution :: set_error_t ( exception_ptr ), execution :: set_error_t ( error_code ), execution :: set_stopped_t () > ; }; // Declares my_sender to be a sender that can complete by calling // one of the following for a receiver expression R: // execution::set_value(R) // execution::set_value(R, int{...}, float{...}) // execution::set_error(R, exception_ptr{...}) // execution::set_error(R, error_code{...}) // execution::set_stopped(R) -- end example] 
- 
     This section makes use of the following exposition-only concept: template < class Fn > concept completion - signature = see below ; - 
       A type Fn completion - signature - 
         set_value_t ( Vs ...) Vs 
- 
         set_error_t ( E ) E 
- 
         set_stopped_t () 
 
- 
         
- 
       Otherwise, Fn completion - signature 
 
- 
       
- 
template < completion - signature ... Fns > struct completion_signatures {}; 
- 
template < class S , class E = no_env , template < class ... > class Tuple = decayed - tuple , template < class ... > class Variant = variant - or - empty > requires sender < S , E > using value_types_of_t = see below ; - 
       Let Fns ... completion_signatures completion_signatures_of_t < S , E > ValueFns Fns execution :: set_value_t Values n n ValueFns Tuple Variant value_types_of_t < S , E , Tuple , Variant > Variant < Tuple < Values 0 ... > , Tuple < Values 1 ... > , ... Tuple < Values m -1 . .. >> m ValueFns 
 
- 
       
- 
template < class S , class E = no_env , template < class ... > class Variant = variant - or - empty > requires sender < S , E > using error_types_of_t = see below ; - 
       Let Fns ... completion_signatures completion_signatures_of_t < S , E > ErrorFns Fns execution :: set_error_t Error n n ErrorFns Variant error_types_of_t < S , E , Variant > Variant < Error 0 , Error 1 , ... Error m -1 > m ErrorFns 
 
- 
       
- 
template < class S , class E = no_env > requires sender < S , E > inline constexpr bool sends_stopped = see below ; - 
Let Fns ... completion_signatures completion_signatures_of_t < S , E > sends_stopped < S , E > is true if at least one of the types in Fns ... is execution :: set_stopped_t (); otherwise, it is false.
 
- 
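A non-normative sketch of the transcription aliases above, assuming an implementation of this proposal; two_value_sender is a hypothetical type that declares its completion signatures directly, and the expected results follow from the definitions above.

namespace ex = std::execution;

struct two_value_sender {                              // hypothetical
  using completion_signatures = ex::completion_signatures<
      ex::set_value_t(int),
      ex::set_value_t(int, float),
      ex::set_error_t(std::exception_ptr)>;
};

static_assert(std::same_as<
    ex::value_types_of_t<two_value_sender, ex::no_env, std::tuple, std::variant>,
    std::variant<std::tuple<int>, std::tuple<int, float>>>);

static_assert(std::same_as<
    ex::error_types_of_t<two_value_sender, ex::no_env, std::variant>,
    std::variant<std::exception_ptr>>);

static_assert(!ex::sends_stopped<two_value_sender, ex::no_env>);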
       
10.10.3. execution :: make_completion_signatures 
   - 
     make_completion_signatures execution :: completion_signatures 
- 
     [Example: // Given a sender S and an environment Env, adapt a S’s completion // signatures by lvalue-ref qualifying the values, adding an additional // exception_ptr error completion if its not already there, and leaving the // other signals alone. template < class ... Args > using my_set_value_t = execution :: completion_signatures < execution :: set_value_t ( add_lvalue_reference_t < Args > ...) > ; using my_completion_signals = execution :: make_completion_signatures < S , Env , execution :: completion_signatures < execution :: set_error_t ( exception_ptr ) > , my_set_value_t > ; -- end example] 
- 
     This section makes use of the following exposition-only entities: template < class ... As > using default - set - value = execution :: completion_signatures < execution :: set_value_t ( As ...) > ; template < class Err > using default - set - error = execution :: completion_signatures < execution :: set_error_t ( Err ) > ; 
- 
template < execution :: sender Sndr , class Env = execution :: no_env , valid - completion - signatures < Env > AddlSigs = execution :: completion_signatures <> , template < class ... > class SetValue = default - set - value , template < class > class SetError = default - set - error , valid - completion - signatures < Env > SetStopped = execution :: completion_signatures < set_stopped_t () >> requires sender < Sndr , Env > using make_completion_signatures = execution :: completion_signatures < /* see below */ > ; - 
       SetValue As ... SetValue < As ... > valid - completion - signatures < SetValue < As ... > , E > 
- 
       SetError Err SetError < Err > valid - completion - signatures < SetError < Err > , E > 
 Then: - 
       Let Vs ... type - list value_types_of_t < Sndr , Env , SetValue , type - list > 
- 
       Let Es ... type - list error_types_of_t < Sndr , Env , error - list > error - list error - list < Ts ... > type - list < SetError < Ts > ... > 
- 
Let Ss be completion_signatures <> if sends_stopped < Sndr , Env > is false; otherwise, SetStopped 
 Then: - 
       If any of the above types are ill-formed, then make_completion_signatures < Sndr , Env , AddlSigs , SetValue , SetError , SetStopped > 
- 
       Otherwise, if any type in [ AddlSigs , Vs ..., Es ..., Ss ] completion_signatures make_completion_signatures < Sndr , Env , AddlSigs , SetValue , SetError , SetStopped > dependent_completion_signatures < no_env > 
- 
       Otherwise, make_completion_signatures < Sndr , Env , AddlSigs , SetValue , SetError , SetStopped > completion_signatures < Sigs ... > Sigs ... completion_signatures [ AddlSigs , Vs ..., Es ..., Ss ] 
 
- 
       
10.11. Execution contexts [exec.ctx]
- 
     This section specifies some execution contexts on which work can be scheduled. 
10.11.1. run_loop 
   - 
     A run_loop run () run () 
- 
     A run_loop run_loop 
- 
     Concurrent invocations of the member functions of run_loop run pop_front push_back finish 
- 
     [Note: Implementations are encouraged to use an intrusive queue of operation states to hold the work units to make scheduling allocation-free. — end note] class run_loop { // [exec.run_loop.types] Associated types class run - loop - scheduler ; // exposition only class run - loop - sender ; // exposition only struct run - loop - opstate - base { // exposition only virtual void execute () = 0 ; run_loop * loop_ ; run - loop - opstate - base * next_ ; }; template < receiver_of R > using run - loop - opstate = unspecified ; // exposition only // [exec.run_loop.members] Member functions: run - loop - opstate - base * pop_front (); // exposition only void push_back ( run - loop - opstate - base * ); // exposition only public : // [exec.run_loop.ctor] construct/copy/destroy run_loop () noexcept ; run_loop ( run_loop && ) = delete ; ~ run_loop (); // [exec.run_loop.members] Member functions: run - loop - scheduler get_scheduler (); void run (); void finish (); }; 
10.11.1.1. Associated types [exec.run_loop.types]
class run - loop - scheduler ; 
- 
     run - loop - scheduler scheduler 
- 
     Instances of run - loop - scheduler run_loop 
- 
     Two instances of run - loop - scheduler run_loop 
- 
     Let sch run - loop - scheduler execution :: schedule ( sch ) run - loop - sender 
class run - loop - sender ; 
- 
run - loop - sender sender_of sender_of < run - loop - sender > is true. Additionally, the types reported by its error_types exception_ptr sends_stopped true.
- 
     An instance of run - loop - sender execution :: run_loop 
- 
     Let s run - loop - sender r decltype ( r ) receiver_of C set_value_t set_stopped_t - 
       The expression execution :: connect ( s , r ) run - loop - opstate < decay_t < decltype ( r ) >> decay_t < decltype ( r ) > r 
- 
       The expression get_completion_scheduler < C > ( s ) run - loop - scheduler run - loop - scheduler s 
 
- 
       
template < receiver_of R > // arguments are not associated entities ([lib.tmpl-heads]) struct run - loop - opstate ; 
- 
     run - loop - opstate < R > run - loop - opstate - base 
- 
     Let o const run - loop - opstate < R > REC ( o ) const R r execution :: connect o - 
       The object to which REC ( o ) o 
- 
       The type run - loop - opstate < R > run - loop - opstate - base :: execute () o . execute () if ( execution :: get_stop_token ( REC ( o )). stop_requested ()) { execution :: set_stopped ( std :: move ( REC ( o ))); } else { execution :: set_value ( std :: move ( REC ( o ))); } 
- 
       The expression execution :: start ( o ) try { o . loop_ -> push_back ( & o ); } catch (...) { execution :: set_error ( std :: move ( REC ( o )), current_exception ()); } 
 
- 
       
10.11.1.2. Constructor and destructor [exec.run_loop.ctor]
run_loop :: run_loop () noexcept ; 
- 
     Postconditions: count is 0 
run_loop ::~ run_loop (); 
- 
     Effects: If count is not 0 terminate () 
10.11.1.3. Member functions [exec.run_loop.members]
run - loop - opstate - base * run_loop :: pop_front (); 
- 
     Effects: Blocks ([defns.block]) until one of the following conditions is true:- 
       count is 0 pop_front nullptr 
- 
       count is greater than 0 1 
 
- 
       
void run_loop :: push_back ( run - loop - opstate - base * item ); 
- 
     Effects: Adds item 1 
- 
     Synchronization: This operation synchronizes with the pop_front item 
run - loop - scheduler run_loop :: get_scheduler (); 
- 
     Returns: an instance of run - loop - scheduler run_loop 
void run_loop :: run (); 
- 
     Effects: Equivalent to: while ( auto * op = pop_front ()) { op -> execute (); } 
- 
     Precondition: state is starting. 
- 
     Postcondition: state is finishing. 
- 
     Remarks: While the loop is executing, state is running. When state changes, it does so without introducing data races. 
void run_loop :: finish (); 
- 
     Effects: Changes state to finishing. 
- 
     Synchronization: This operation synchronizes with all pop_front 
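A non-normative sketch of driving a run_loop, assuming an implementation of this proposal; work is queued through the loop's scheduler, and run() executes queued items on the calling thread until finish() has been called and the queue drains.

using namespace std::execution;

run_loop loop;
scheduler auto sch = loop.get_scheduler();

// Queue one work item; it will execute on the thread that calls run().
start_detached(schedule(sch) | then([&] {
  // ... do some work on the loop's thread ...
  loop.finish();                                       // allow run() to return
}));

loop.run();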
10.12. Coroutine utilities [exec.coro_utils]
10.12.1. execution :: as_awaitable 
   - 
     as_awaitable template < class S , class E > using single - sender - value - type = see below ; template < class S , class E > concept single - sender = sender < S , E > && requires { typename single - sender - value - type < S , E > ; }; template < class S , class P > concept awaitable - sender = single - sender < S , env_of_t < P >> && sender_to < S , awaitable - receiver > && // see below requires ( P & p ) { { p . unhandled_stopped () } -> convertible_to < coroutine_handle <>> ; }; template < class S , class P > class sender - awaitable ; - 
       Alias template single-sender-value-type is defined as follows: - 
         If value_types_of_t < S , E , Tuple , Variant > Variant < Tuple < T >> single - sender - value - type < S , E > T 
- 
         Otherwise, if value_types_of_t < S , E , Tuple , Variant > Variant < Tuple <>> Variant <> single - sender - value - type < S , E > void 
- 
         Otherwise, single - sender - value - type < S , E > 
 
- 
         
- 
       The type sender - awaitable < S , P > template < class S , class P > // arguments are not associated entities ([lib.tmpl-heads]) class sender - awaitable { struct unit {}; using value_t = single - sender - value - type < S , env_of_t < P >> ; using result_t = conditional_t < is_void_v < value_t > , unit , value_t > ; struct awaitable - receiver ; variant < monostate , result_t , exception_ptr > result_ {}; connect_result_t < S , awaitable - receiver > state_ ; public : sender - awaitable ( S && s , P & p ); bool await_ready () const noexcept { return false; } void await_suspend ( coroutine_handle < P > ) noexcept { start ( state_ ); } value_t await_resume (); }; - 
         awaitable - receiver struct awaitable - receiver { variant < monostate , result_t , exception_ptr >* result_ptr_ ; coroutine_handle < P > continuation_ ; // ... see below }; Let r awaitable - receiver cr const r vs ... Vs ... err Err - 
           If constructible_from < result_t , Vs ... > execution :: set_value ( r , vs ...) try { r . result_ptr_ -> emplace < 1 > ( vs ...); } catch (...) { r . result_ptr_ -> emplace < 2 > ( current_exception ()); } r . continuation_ . resume (); Otherwise, execution :: set_value ( r , vs ...) 
- 
           The expression execution :: set_error ( r , err ) r . result_ptr_ -> emplace < 2 > ( AS_EXCEPT_PTR ( err )); r . continuation_ . resume (); where AS_EXCEPT_PTR ( err ) - 
             err decay_t < Err > exception_ptr 
- 
             Otherwise, make_exception_ptr ( system_error ( err )) decay_t < Err > error_code 
- 
             Otherwise, make_exception_ptr ( err ) 
 
- 
             
- 
           The expression execution :: set_stopped ( r ) static_cast < coroutine_handle <>> ( r . continuation_ . promise (). unhandled_stopped ()). resume () 
- 
           tag_invoke ( tag , cr , as ...) tag ( as_const ( cr . continuation_ . promise ()), as ...) tag forwarding - receiver - query as ... 
 
- 
           
- 
         sender - awaitable :: sender - awaitable ( S && s , P & p ) - 
           Effects: initializes state_ connect ( std :: forward < S > ( s ), awaitable - receiver { & result_ , coroutine_handle < P >:: from_promise ( p )}) 
 
- 
           
- 
         value_t sender - awaitable :: await_resume () - 
Effects: equivalent to: if ( result_ . index () == 2 ) rethrow_exception ( get < 2 > ( result_ )); if constexpr ( ! is_void_v < value_t > ) return static_cast < value_t &&> ( get < 1 > ( result_ )); 
 
- 
           
 
- 
         
 
- 
       
- 
     as_awaitable e p p E decltype (( e )) P decltype (( p )) as_awaitable ( e , p ) - 
       tag_invoke ( as_awaitable , e , p ) - 
         Mandates: is - awaitable < A > true, whereA tag_invoke 
 
- 
         
- 
       Otherwise, e is - awaitable < E > true.
- 
       Otherwise, sender - awaitable { e , p } awaitable - sender < E , P > true.
- 
       Otherwise, e 
 
- 
       
10.12.2. execution :: with_awaitable_senders 
   - 
     with_awaitable_senders In addition, it provides a default implementation of unhandled_stopped () execution :: set_stopped unhandled_stopped template < class - type Promise > struct with_awaitable_senders { template < OtherPromise > requires ( ! same_as < OtherPromise , void > ) void set_continuation ( coroutine_handle < OtherPromise > h ) noexcept ; coroutine_handle <> continuation () const noexcept { return continuation_ ; } coroutine_handle <> unhandled_stopped () noexcept { return stopped_handler_ ( continuation_ . address ()); } template < class Value > see - below await_transform ( Value && value ); private : // exposition only [[ noreturn ]] static coroutine_handle <> default_unhandled_stopped ( void * ) noexcept { terminate (); } coroutine_handle <> continuation_ {}; // exposition only // exposition only coroutine_handle <> ( * stopped_handler_ )( void * ) noexcept = & default_unhandled_stopped ; }; 
- 
     void set_continuation ( coroutine_handle < OtherPromise > h ) noexcept - 
       Effects: equivalent to: continuation_ = h ; if constexpr ( requires ( OtherPromise & other ) { other . unhandled_stopped (); } ) { stopped_handler_ = []( void * p ) noexcept -> coroutine_handle <> { return coroutine_handle < OtherPromise >:: from_address ( p ) . promise (). unhandled_stopped (); }; } else { stopped_handler_ = default_unhandled_stopped ; } 
 
- 
       
- 
     call - result - t < as_awaitable_t , Value , Promise &> await_transform ( Value && value ) - 
       Effects: equivalent to: return as_awaitable ( static_cast < Value &&> ( value ), static_cast < Promise &> ( * this )); 
 
-
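A non-normative sketch of a coroutine task type whose promise uses with_awaitable_senders so that senders can be awaited directly, assuming an implementation of this proposal; the task type itself is hypothetical and result/environment plumbing is elided.

using namespace std::execution;

struct task {                                          // hypothetical coroutine type
  struct promise_type : with_awaitable_senders<promise_type> {
    task get_return_object() { return {}; }
    std::suspend_never initial_suspend() noexcept { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() { std::terminate(); }
  };
};

task example(scheduler auto sch) {
  // await_transform from with_awaitable_senders routes the sender
  // through as_awaitable; a stopped completion resumes via unhandled_stopped().
  int i = co_await (schedule(sch) | then([] { return 42; }));
  // i == 42
}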