| Document #: | P3552R3 | 
| Date: | 2025-06-20 | 
| Project: | Programming Language C++ | 
| Audience: | Concurrency Working Group (SG1), Library Evolution Working Group (LEWG), Library Working Group (LWG) | 
| Reply-to: | Dietmar Kühl (Bloomberg) <dkuhl@bloomberg.net>, Maikel Nadolski <maikel.nadolski@gmail.com> | 
C++20 added support for coroutines which can improve the experience of writing asynchronous code. C++26 added the sender/receiver model as a general interface to asynchronous operations. The expectation is that users will use the framework through some coroutine type. To support that, a suitable class needs to be defined, and this proposal provides such a definition.
To get an idea of what this proposal is about, here is a simple Hello, world program written using the proposed coroutine type:
#include <execution>
#include <iostream>
#include <task>
namespace ex = std::execution;
int main() {
    return std::get<0>(*ex::sync_wait([]->ex::task<int> {
        std::cout << "Hello, world!\n";
        co_return co_await ex::just(0);
    }()));
}

Changes:
- Renamed lazy to task based on SG1 feedback and dropped the section on why lazy was chosen.
- Renamed any_scheduler to task_scheduler.
- … the task specification.
- Removed decay_t from the specification.
- … exception_ptr when unavailable.
- Renamed Context to Environment to better reflect the argument’s use.

This proposal isn’t the first to propose a coroutine type. Prior proposals didn’t see any recent (post introduction of sender/receiver) update, although corresponding proposals were discussed informally on multiple occasions. There are also implementations of coroutine types based on a sender/receiver model in active use. This section provides an overview of this prior work and, where relevant, of corresponding discussions. It is primarily intended to motivate requirements and to describe some points in the design space.
The paper describes a
task/lazy
type (in P1056r0 the name was
task; the primary change for P1056r1 is changing the name to
lazy). The fundamental idea is to
have a coroutine type which can be
co_awaited:
the interface of lazy consists of
a move constructor, deliberately no move assignment, a destructor, and
operator co_await().
The proposals don’t go into much detail on how to eventually use a coroutine, but they mention that there could be functions like sync_await(task<To>)
to await completion of a task (similar to execution::sync_wait(sender))
or a few variations of that.
A fair part of the paper argues why future.then()
is not a good approach to model coroutines and their results.
Using future requires allocation,
synchronisation, reference counting, and scheduling which can all be
avoided when using coroutines in a structured way.
The paper also mentions support for symmetric transfer and allocator support. Both of these are details on how the coroutine is implemented.
- task isn’t move assignable because there are better approaches than using containers to hold them. It is move constructible as there are no issues with overwriting a potentially live task.
- … task type for different needs.
- co_awaiting something which may resume on a different thread is hazardous. Static analysers should be able to detect these cases.
- … task.

This paper is effectively restating what P1056 said with the primary change being more complete proposed wording. Although sender/receiver was being discussed when the paper was written, std::execution hadn’t made it into the working paper, and the proposal did not take a sender/receiver interface into account.
Although there were mails seemingly scheduling a discussion in LEWG, we didn’t manage to actually locate any discussion notes.
This library contains multiple coroutine types, algorithms, and some
facilities for asynchronous work. For the purpose of this discussion
only the task types are of interest. There are two task types cppcoro::task
and cppcoro::shared_task.
The key difference between task and
shared_task is that the latter can
be copied and awaited by multiple other coroutines. As a result
shared_task always produces an
lvalue and may have slightly higher costs due to the need to maintain a
reference count.
The types and algorithms are pre-sender/receiver and operate entirely
in terms of awaiters/awaitables. The interface of both task types is a
bit richer than that from P1056/P2506. Below
t is either a cppcoro::task<T>
or a cppcoro::shared_task<T>:
- Both task types can be move constructed and move assigned; a shared_task<T> object can also be copy constructed and copy assigned.
- Using t.is_ready() it can be queried whether t has completed.
- co_await t awaits completion of t, yielding the result. Obtaining the result may throw an exception if the coroutine completed by throwing.
- co_await t.when_ready() allows synchronising with the completion of t without actually getting the result. This form of synchronisation won’t throw any exception.
- cppcoro::shared_task<T> also supports equality comparisons.

In both cases, the task starts suspended and is resumed when it is
co_awaited. This
way a continuation is known when the task is resumed, which is similar
to start(op)ing
an operation state op. The coroutine
body needs to use
co_await or
co_return.
co_await
expects an awaitable or an awaiter as argument. Using
co_yield is
not supported. The implementation supports symmetric transfer but
doesn’t mention allocators.
The shared_task<T>
is similar to split(sender):
in both cases, the same result is produced for multiple consumers.
Correspondingly, there isn’t a need to support a separate shared_task<T>
in a sender/receiver world. Likewise, throwing of results can be avoided by suitably rewriting the result of the set_error channel, avoiding the need for an operation akin to when_ready().
unifex is an earlier
implementation of the sender/receiver ideas. Compared to std::execution it lacks some of the flexibility. For example, it doesn’t have a
concept of environments or domains. However, the fundamental idea of
three completion channels for success, failure, and cancellation and the
general shape of how these are used is present (even using the same
names for set_value and
set_error; the equivalent of
set_stopped is called
set_done).
unifex is in production use in
multiple places. The implementation includes a unifex::task<T>.
As unifex is
sender/receiver-based, its unifex::task<T>
is implemented such that
co_await can
deal with senders in addition to awaitables or awaiters. Also, unifex::task<T>
is scheduler affine: the coroutine code resumes on the same
scheduler even if a sender completed on a different scheduler. The
task’s scheduler is taken from the receiver it is
connected
to. The exception for rescheduling on the task’s scheduler is explicitly
awaiting the result of schedule(sched)
for some scheduler sched: the
operation changes the task’s scheduler to be
sched. The relevant treatment is in
the promise type’s await_transform():
when a sender sndr which is the result of schedule(sched) is co_awaited, the corresponding sched is installed as the task’s scheduler and the task resumes on the context completing sndr. Feedback from people working with unifex suggests that this choice for changing the scheduler is too subtle. While it is considered important to be able to explicitly change the scheduler a task executes on, doing so should be more explicit.

Other co_awaited senders are rescheduled onto the task’s scheduler, essentially using continues_on(sender, scheduler).
The rescheduling is avoided when the sender is tagged as not changing
scheduler (using a static constexpr
member named blocking which is
initialized to blocking_kind::always_inline).

When a sender is co_awaited it gets connected to a receiver provided by the task to form an awaiter holding an operation state. The operation state gets started by the awaiter’s await_suspend. The receiver arranges for a set_value completion to become a value returned from await_resume, a set_error completion to become an exception, and a set_done completion to resume a special “on done” coroutine handle rather than resuming the task itself, effectively behaving like an uncatchable exception (all relevant state is properly destroyed and the coroutine is never resumed).

When
co_awaiting
a sender sndr there can be at most
one set_value completion: if there
are more than one set_value
completions the promise type’s
await_transform will just return
sndr and the result cannot be
co_awaited
(unless it is also given an awaitable interface). The result type of
co_await sndr
depends on the number of arguments to
set_value:
- If there are no arguments to set_value then the type of co_await sndr will be void.
- If there is a single argument of type T to set_value then the type of co_await sndr will be T.
- If there are multiple arguments to set_value then the type of co_await sndr will be std::tuple<T1, T2, ...> with the corresponding argument types.

If a receiver doesn’t have a scheduler, it can’t be connect()ed
to a unifex::task<T>.
In particular, when using a unifex::async_scope scope
it isn’t possible to directly call scope.spawn(task)
with a unifex::task<T> task
as the unifex::async_scope
doesn’t provide a scheduler. The unifex::async_scope
provides a few variations of
spawn()
which take a scheduler as argument.
unifex provides some sender
algorithms to transform the sender result into something which may be
more suitable to be
co_awaited.
For example, unifex::done_as_optional(sender)
turns a successful completion for a type
T into an std::optional<T>
and the cancellation completion
set_done into a
set_value completion with a
disengaged std::optional<T>.
The unifex::task<T>
is itself a sender and can be used correspondingly. To deal with
scheduler affinity a type erased scheduler unifex::any_scheduler
is used.
The unifex::task<T>
doesn’t have allocator support. When creating a task multiple objects
are allocated on the heap: it seems there is a total of 6 allocations
for each unifex::task<T>
being created. After that, it seems the different
co_awaits
don’t use a separate allocation.
The unifex::task<T>
doesn’t directly guard against stack overflow. Due to rescheduling
continuations on a scheduler when the completion isn’t always inline,
the issue only arises when
co_awaiting
many senders with blocking_kind::always_inline
or when the scheduler resumes inline.
The exec::task
in stdexec is somewhat similar to the unifex task with some choices
being different, though:
- exec::task<T, C> is also scheduler affine. The chosen scheduler is unconditionally used for every co_await, i.e., there is no attempt made to avoid scheduling, e.g., when the co_awaited sender completes inline.
- It is possible to co_await just_error(e) and co_await just_stopped(), i.e., the sender isn’t required to have a set_value_t completion.

The exec::task<T, C>
also provides a context C.
An object of this type becomes the environment for receivers connect()ed
to
co_awaited
senders. The default context provides access to the task’s scheduler. In addition, an in_place_stop_token is provided which forwards stop requests from the environment of the receiver which is connected to the task.

Like the unifex task, exec::task<T, C> doesn’t provide any allocator support. When creating a task there are two allocations.
Also see sender/receiver issue 241.
Based on the prior work and discussions around corresponding coroutine support there are a number of required or desired features (listed in no particular order):
A coroutine task needs to be awaiter/awaitable friendly, i.e., it
should be possible to
co_await
awaitables which includes both library provided and user provided ones.
While that seems obvious, it is possible to create an
await_transform which is deleted for
awaiters and that should be prohibited.
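As a plain illustration of awaiter friendliness, the sketch below co_awaits both a library-provided sender and a hand-written awaiter from the same task (ready_value is a hypothetical user-provided awaiter which completes immediately):

#include <coroutine>
#include <execution>
#include <task>
namespace ex = std::execution;

// A trivial user-provided awaiter: never suspends, just yields its value.
struct ready_value {
    int value;
    bool await_ready() const noexcept { return true; }
    void await_suspend(std::coroutine_handle<>) const noexcept {}
    int await_resume() const noexcept { return value; }
};

ex::task<int> use_both() {
    int from_sender  = co_await ex::just(1);     // library-provided sender
    int from_awaiter = co_await ready_value{2};  // user-provided awaiter
    co_return from_sender + from_awaiter;
}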
When composing sender algorithms without using a coroutine it is
common to adapt the results using suitable algorithms and the
completions for sender algorithms are designed accordingly. On the other
hand, when awaiting senders in a coroutine it may be considered annoying
having to transform the result into a shape which is friendly to a
coroutine use. Thus, it may be reasonable to support rewriting certain
shapes of completion signatures into something different to make the use
of senders easier in a coroutine task. See the section on the result type for
co_await
for a discussion.
A coroutine task needs to be sender friendly: it is expected that
asynchronous code is often written using coroutines awaiting senders.
However, depending on how senders are treated by a coroutine some
senders may not be awaitable. For example neither unifex nor stdexec
support
co_awaiting
senders with more than one set_value
completion.
It is possibly confusing and problematic if coroutines resume on
a different execution context than the one they were suspended on: the
textual similarity to normal functions makes it look as if things are
executed sequentially. Experience also indicates that continuing a coroutine on whatever context a co_awaited operation completes on frequently leads to issues. Senders could, however,
complete on an entirely different scheduler than where they started.
When composing senders (not using coroutines) changing contexts is
probably OK because it is done deliberately, e.g., using
continues_on, and the way to express
things is new with fewer attached expectations.
To bring these two views together a coroutine task should be scheduler affine by default, i.e., it should normally resume on the same scheduler. There should probably also be an explicit way to opt out of scheduler affinity when the implications are well understood.
Note that scheduler affinity does not mean that a task is always continuing on the same thread: a scheduler may refer to a thread pool and the task will continue on one of the threads (which also means that thread local storage cannot be used to propagate contexts implicitly; see the discussion on environments below).
When using coroutines there will probably be an allocation at
least for the coroutine frame (the HALO optimisations can’t always
work). To support the use in environments where memory allocations using
new/delete
aren’t supported the coroutine task should support allocations using
allocators.
Receivers have associated environments which can support an open
set of queries. Normally, queries on an environment can be forwarded to
the environment of a connect()ed
receiver. Since the coroutine types are determined before the
coroutine’s receiver is known and the queries themselves don’t specify a
result type that isn’t possible when a coroutine provides a receiver to
a sender in a
co_await
expression. It should still be possible to provide a user-customisable
environment from the receiver used by
co_await
expressions. One aspect of this environment is to forward stop requests
to
co_awaited
child operations. Another is possibly changing the scheduler to be used
when a child operation queries
get_scheduler from the receiver’s
environment. Also, in non-asynchronous code it is quite common to pass
some form of context implicitly using thread local storage. In an
asynchronous world such contexts could be forwarded using the
environment.
The coroutine should be able to indicate that it was canceled,
i.e., to get set_stopped()
called on the task’s receiver. std::execution::with_awaitable_senders already provides this ability for senders being co_awaited but that doesn’t necessarily extend to the coroutine implementation.
Similar to indicating that a task got canceled it would be good if a task could indicate that an error occurred without throwing an exception which escapes from the coroutine.
In general a task has to assume that an exception escapes the
coroutine implementation. As a result, the task’s completion signatures
need to include set_error_t(std::exception_ptr).
If it can be indicated to the task that no exception will escape the
coroutine, this completion signature can be avoided.
When many
co_awaited
operations complete synchronously, there is a chance for stack overflow.
It may be reasonable to have the implementation prevent stack overflow
by using a suitable scheduler sometimes.
In some situations it can be useful to somehow schedule an asynchronous clean-up operation which is triggered upon coroutine exit. See the section on asynchronous clean-up below for more discussion.
The task coroutine provided
by the standard library may not always fit user’s needs although they
may need/want various of the facilities. To avoid having users implement
all functionality from scratch task
should use specified components which can be used by users when building
their own coroutine. The components
as_awaitable and
with_awaitable_senders are two parts
of achieving this objective but there are likely others.
The algorithm std::execution::as_awaitable
turns a sender into an awaitable and is expected to be used by
custom written coroutines. Likewise, it is intended that custom
coroutines use the CRTP class template std::execution::with_awaitable_senders.
It may be reasonable to adjust the functionality of these components
instead of defining the functionality specific to a task<...>
coroutine task.
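To illustrate how these building blocks compose, here is a minimal sketch of a user-defined coroutine type reusing with_awaitable_senders (the name my_task is illustrative; result propagation and the sender interface are elided):

#include <coroutine>
#include <exception>
#include <execution>
#include <utility>
namespace ex = std::execution;

template <class T>
struct my_task {
    struct promise_type : ex::with_awaitable_senders<promise_type> {
        T result{};
        std::exception_ptr error{};

        my_task get_return_object() noexcept {
            return {std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_value(T value) { result = std::move(value); }
        void unhandled_exception() { error = std::current_exception(); }
        // with_awaitable_senders supplies await_transform (via as_awaitable) so
        // senders can be co_awaited, and unhandled_stopped() for the stopped channel.
    };

    std::coroutine_handle<promise_type> handle;
};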
It is important to note that different coroutine task implementations can live side by side: not all functionality has to be implemented by the same coroutine task. The objective for this proposal is to select a set of features which provides a coroutine task suitable for most uses. It may also be reasonable to provide some variations as different names. A future revision of the standard or third party libraries can also provide additional variations.
This section discusses various design options for achieving the listed objectives. Most of the designs are independent of each other and can be left out if the consensus is that it shouldn’t be used for whatever reason.
task

Coroutines can use
co_return to
produce a value. The value returned can reasonably provide the argument
for the set_value_t completion of
the coroutines. As the type of a coroutine is defined even before the
coroutine body is given, there is no way to deduce the result type. The
result type is probably the primary customisation and should be the
first template parameter which gets defaulted to
void for
coroutines not producing any value. For example:
int main() {
    ex::sync_wait([]->ex::task<>{
        int result = co_await []->ex::task<int> { co_return 42; }();
        assert(result == 42);
    }());
}

The inner coroutine completes with set_value_t(int)
which gets translated to the value returned from
co_await
(see co_await
result type below for more details). The outer coroutine completes
with set_value_t().
Beyond the result type there are a number of features for a coroutine task which benefit from customisation or which it may be desirable to disable because they introduce a cost. Since many template parameters would become unwieldy, it makes sense to combine these into a [defaulted] context parameter. The aspects which benefit from customisation are at least:
The default context should be used such that any empty type provides the default behaviour instead of requiring a lot of boilerplate just to configure a particular aspect. For example, it should be possible to selectively enable allocator support using something like this:
struct allocator_aware_context {
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;
};
template <class T>
using my_task = ex::task<T, allocator_aware_context>;

Using various different types for task coroutines isn’t a problem as
the corresponding objects normally don’t show up in containers. Tasks
are mostly
co_awaited
by other tasks, used as child senders when composing work graphs, or
maintained until completed using something like a counting_scope.
When they are used in a container, e.g., to process data using a range
of coroutines, they are likely to use the same result type and context
types for configurations.
task Completion Signatures

The discussion above established that task<T, C>
can have a successful completion using set_value_t(T).
The coroutine completes accordingly when it is exited using a matching
co_return.
When T is
void the
coroutine also completes successfully using
set_value()
when flowing off the end of the coroutine or when using a
co_return
without an expression.
If a coroutine exits with an exception completing the corresponding
operation with set_error(std::exception_ptr)
is an obvious choice. Note that a
co_await
expression results in throwing an exception when the awaited operation
completes with set_error(E)
(see below), i.e., the coroutine itself doesn’t necessarily need to throw an exception.
Finally, a
co_await
expression completing with set_stopped()
results in aborting the coroutine immediately (see below) and causing the
coroutine itself to also complete with set_stopped().
The coroutine implementation cannot inspect the coroutine body to
determine how the different asynchronous operations may complete. As a
result, the default completion signatures for task<T>
are
ex::completion_signatures<
    ex::set_value_t(T),  // or ex::set_value_t() if T == void
    ex::set_error_t(std::exception_ptr),
    ex::set_stopped_t()
>;

Support for reporting an error without an exception may modify the completion signatures.
task constructors and assignments

Coroutines are created via a factory function which returns the coroutine type and whose body uses one of the co_* keywords, e.g.

task<> nothing() { co_return; }

The actual object is created via the promise type’s get_return_object function and how that actually works is a matter between the promise and coroutine types: this constructor is an implementation detail. To be valid senders the
coroutine type needs to be destructible and it needs to have a move
constructor. Other than that, constructors and assignments either don’t
make sense or enable dangerous practices:
Copy constructor and copy assignment don’t make sense because there is no way to copy the actual coroutine state.
Move assignment is rather questionable because it makes it easy to transport the coroutine away from referenced entities.
Previous papers P1056 and P2506 also argued against a move
assignment. However, one of the arguments doesn’t apply to the
task proposed here: There is no need
to deal with cancellation when assigning or destroying a
task object. Upon
start() of
task the coroutine handle is
transferred to an operation state and the original coroutine object
doesn’t have any reference to the object anymore.
If there is no assignment, a default constructed object doesn’t
make much sense, i.e., task also
doesn’t have a default constructor.
Based on experience with Folly the suggestion was
even stronger: task shouldn’t even
have move construction! That would mean that
task can’t be a sender or that there
would need to be some internal interface enabling the necessary
transfer. That direction isn’t pursued by this proposal.
The lack of move assignment doesn’t mean that
task can’t be held in a container:
it is perfectly fine to push_back
objects of this type into a container, e.g.:
std::vector<ex::task<>> cont;
cont.emplace_back([]->ex::task<> { co_return; }());
cont.push_back([]->ex::task<> { co_return; }());

The expectation is that most of the time coroutines don’t end up in normal containers. Instead, they’d be managed by a counting_scope or held on to by objects in a work graph composed of senders.
Technically there isn’t a problem adding a default constructor, move
assignment, and a
swap()
function. Based on experience with similar components it seems
task is better off not having
them.
co_await

When
co_awaiting
a sender sndr in a coroutine,
sndr needs to be transformed to an
awaitable. The existing approach is to use execution::as_awaitable(sndr) [exec.as.awaitable]
in the promise type’s
await_transform and
task uses that approach. The
awaitable returned from as_awaitable(sndr)
has the following behaviour (rcvr is
the receiver the sender sndr is
connected to):
When sndr completes with
set_stopped(std::move(rcvr))
the function unhandled_stopped()
on the promise type is called and the awaiting coroutine is never
resumed. The unhandled_stopped()
results in task itself also
completing with set_stopped_t().
When sndr completes with
set_error(std::move(rcvr), error)
the coroutine is resumed and the co_await sndr
expression results in error being thrown as an exception (with special treatment for std::error_code).
When sndr completes with
set_value(std::move(rcvr), a...)
the expression co_await sndr
produces a result corresponding to the arguments to set_value:
- If a... is empty, the result of co_await sndr is void.
- If a... consists of exactly one argument, the result of co_await sndr is a....
- If a... consists of more than one argument, the result of co_await sndr is std::tuple(a...).

Note that the sender sndr is
allowed to have no set_value_t
completion signatures. In this case the result type of the awaitable
returned from as_awaitable(sndr)
is declared to be
void but
co_await sndr
would never return normally: the only ways to complete without a
set_value_t completion is to
complete with set_stopped(std::move(rcvr)) or with set_error(std::move(rcvr), error), i.e., the expression either results in the coroutine never being resumed or in an exception being thrown.
Here is an example which summarises the different supported result types:
task<> fun() {
    co_await ex::just();                               // void
    auto v = co_await ex::just(0);                     // int
    auto[i, b, c] = co_await ex::just(0, true, 'c');   // tuple<int, bool, char>
    try { co_await ex::just_error(0); } catch (int) {} // exception
    co_await ex::just_stopped();                       // cancel: never resumed
}

The sender sndr can have at most
one set_value_t completion
signature: if there are more than one
set_value_t completion signatures
as_awaitable(sndr)
is invalid and fails to compile: users who want to
co_await a
sender with more than one
set_value_t completions need to use
co_await into_variant(s)
(or similar) to transform the completion signatures appropriately. It
would be possible to move this transformation into as_awaitable(sndr).
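For example, a sender with two set_value_t completions has to be adapted before it can be awaited; a short sketch (multi is assumed to be such a sender):

ex::task<> await_multi(auto multi) {
    // co_await std::move(multi);               // ill-formed: more than one set_value_t completion
    auto result = co_await ex::into_variant(std::move(multi));
    // result is a std::variant with one std::tuple alternative per set_value_t signature
    (void)result;
}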
Using effectively into_variant(s) isn’t the only possible transformation if there are multiple set_value_t completions. To
avoid creating a fairly hard to use result object, as_awaitable(sndr)
could detect certain usage patterns and rather create a result which is
easier to use when being
co_awaited. An
example for this situation is the queue.async_pop()
operation for concurrent queues:
this operation can complete successfully in two ways:
- set_value(std::move(rcvr), value).
- set_value(std::move(rcvr)).

Turning the result of queue.async_pop()
into an awaitable using the current as_awaitable(queue.async_pop())
([exec.as.awaitable])
fails because the function accepts only senders with at most one
set_value_t completion. Thus, it is
necessary to use something like the below:
task<> pop_demo(auto& queue) {
    // auto value = co_await queue.async_pop(); // doesn't work
    std::optional v0 = co_await (queue.async_pop() | into_optional);
    std::optional v1 = co_await into_optional(queue.async_pop());
}

The algorithm into_optional(sndr)
would determine that there is exactly one
set_value_t completion with
arguments and produce an std::optional<T>
if there is just one parameter of type
T and produce a std::optional<std::tuple<T...>>
if there are more than one parameter with types
T.... It
would be possible to apply this transformation when a corresponding set
of completions is detected. The proposal optional variants in sender/receiver
goes into this direction.
This proposal currently doesn’t propose a change to
as_awaitable ([exec.as.awaitable]).
The primary reason is that there are likely many different shapes of
completions each with a different desirable transformation. If these are
all absorbed into as_awaitable it is
likely fairly hard to reason what exact result is returned. Also, there
are likely different options of how a result could be transformed:
into_optional is just one example.
It could be preferable to turn the two results into an std::expected
instead. However, there should probably be some transformation
algorithms like into_optional,
into_expected, etc. similar to
into_variant.
Coroutines look very similar to synchronous code with a few
co-keywords sprinkled over the code.
When reading such code the expectation is typically that all code
executes on the same context despite some
co_await
expressions using senders which may explicitly change the scheduler.
There are various issues when using
co_await
naïvely:
- When continuing on the context where a co_awaited sender calls a completion function, code may execute some lengthy operation on a context which is expected to keep a UI responsive or which is meant to deal with I/O.
- co_awaiting some work may be seen as unproblematic but may actually easily cause a stack overflow if co_awaited work immediately completes (also see below).
- If co_awaiting some work completes on a different context and later a blocking call is made from the coroutine which also ends up co_awaiting some work from the same resource, there can be a deadlock.

Thus, the execution should normally be scheduled on the original scheduler: doing so can avoid the problems mentioned above (assuming a scheduler is used which doesn’t immediately complete without actually scheduling anything). This transfer of the execution with a coroutine is referred to as scheduler affinity. Note: a scheduler may execute on multiple threads, e.g., for a pool scheduler: execution would get to any of these threads, i.e., thread local storage is not guaranteed to access the same data even with scheduler affinity. Also, scheduling work has some cost even if this cost can often be fairly small.
The basic idea for scheduler affinity consists of a few parts:
A scheduler is determined when
starting an operation state which
resulted from
connecting a
coroutine to a receiver. This scheduler is used to resume execution of
the coroutine. The scheduler is determined based on the receiver
rcvr’s environment.
auto scheduler = get_scheduler(get_env(rcvr));

The type of scheduler is
unknown when the coroutine is created. Thus, the coroutine
implementation needs to operate in terms of a scheduler with a known
type which can be constructed from
scheduler. The used scheduler type
is determined based on the context parameter
C of the coroutine type task<T, C>
using typename C::scheduler_type
and defaults to task_scheduler if
this type isn’t defined.
task_scheduler uses type-erasure to
deal with arbitrary schedulers (and small object optimisations to avoid
allocations). The used scheduler type can be parameterised to allow use
of task contexts where the scheduler
type is known, e.g., to avoid the costs of type erasure.
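For example, a context could fix the scheduler type to avoid the type erasure of task_scheduler entirely; a sketch (thread_pool_scheduler stands in for any concrete scheduler type the user controls):

struct pool_context {
    using scheduler_type = thread_pool_scheduler; // assumed concrete scheduler type
};
template <class T>
using pool_task = ex::task<T, pool_context>;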
Originally task_scheduler was
called any_scheduler but there was
feedback from SG1 suggesting that a general
any_scheduler may need to cover
various additional properties. To avoid dealing with generalizing the
facility a different name is used. The name remains specified as it is
still a useful component, at least until an
any_scheduler is defined by the
standard library. If necessary, the type erased scheduler type used by
task can be unspecified.
When an operation which is
co_awaited
completes the execution is transferred to the held scheduler using
continues_on. Injecting this
operation into the graph can be done in the promise type’s
await_transform:
template <ex::sender Sender>
auto await_transform(Sender&& sndr) noexcept {
    return ex::as_awaitable_sender(
        ex::continues_on(std::forward<Sender>(sndr), this->scheduler)
    );
}

There are a few immediate issues with the basic idea:
- What if get_scheduler(get_env(rcvr)) doesn’t exist?
- What if the type of scheduler is incompatible with the coroutine’s scheduler?
- How to create a task without scheduler affinity?

All of these issues can be addressed although there are different choices in some of these cases.
In many cases the receiver can provide access to a scheduler via the
environment query. An example where no scheduler is available is when
starting a task on a counting_scope.
The scope doesn’t know about any schedulers and, thus, the receiver used
by counting_scope when
connecting
to a sender doesn’t support the
get_scheduler query, i.e., this
example doesn’t work:
ex::spawn([]->ex::task<void> { co_await ex::just(); }(), token);

Using
spawn() with
coroutines doing the actual work is expected to be quite common, i.e.,
it isn’t just a theoretical possibility that
task is used together with
counting_scope. The approach used by
unifex
is to fail compilation when trying to
connect a
Task to a receiver without a
scheduler. The approach taken by stdexec
is to keep executing inline in that case. Based on the experience that
silently changing contexts within a coroutine frequently causes bugs it
seems failing to compile is preferable.
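One possible way to make the spawn example work is to supply a scheduler through the environment so the task’s get_scheduler query can be satisfied; the sketch below assumes write_env and make_env behave as in the other examples in this paper and that pool is some scheduler available to the caller:

ex::spawn(
    ex::write_env(
        []() -> ex::task<void> { co_await ex::just(); }(),
        ex::make_env(ex::get_scheduler, pool)),   // provide get_scheduler to the task
    token);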
Failing to construct the scheduler used by a coroutine with the
scheduler obtained from the receiver
is likely an error and should be addressed by the user appropriately.
Failing to compile seems to be a reasonable approach in that case,
too.
It should be possible to avoid scheduler affinity explicitly to avoid
the cost of scheduling. Users should be very careful when pursuing this
direction but it can be a valid option. One way to achieve that is to
create an “inline scheduler” which immediately completes when it is
start()ed
and using this type for the coroutine. Explicitly providing a type
inline_scheduler implementing this
logic could allow creating suitable warnings. It would also allow
detecting that type in
await_transform and avoiding the use
of continues_on entirely.
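Opting out could then look like the following sketch: the context names inline_scheduler as the scheduler type, and every co_await in such a task resumes wherever the awaited operation completed (to be used with the care described above):

struct inline_context {
    using scheduler_type = ex::inline_scheduler;
};
template <class T>
using unscheduled_task = ex::task<T, inline_context>; // name is illustrative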
When operations actually don’t change the scheduler there shouldn’t
be a need to schedule them again. In these cases it would be great if
the continues_on could be avoided.
At the moment there is no way to tell whether a sender will complete
inline. Using a sender query which determines whether a sender always
completes inline could avoid the rescheduling. Something like that is
implemented for unifex:
senders define a property blocking
which can have the value blocking_kind::always_inline.
The proposal A sender query for
completion behaviour proposes a get_completion_behaviour(sndr, env)
customisation point to address this need. The result can indicate that
the sndr returns synchronously
(using completion_behaviour::synchronous
or completion_behaviour::inline_completion).
If sndr returns synchronously there
isn’t a need to reschedule it.
In some situations it is desirable to explicitly switch to a
different scheduler from within the coroutine and from then on carry on
using this scheduler. unifex
detects the use of co_await schedule(scheduler);
for this purpose. That is, however, somewhat subtle. It may be
reasonable to use a dedicated awaiter for this purpose and use, e.g.
auto previous = co_await co_continue_on(new_scheduler);

Using this statement replaces the coroutine’s scheduler with the
new_scheduler. When the
co_await
completes it is on new_scheduler and
further
co_await
operations complete on
new_scheduler. The result of
co_awaiting
co_continue_on is the previously
used scheduler to allow transfer back to this scheduler. In stdexec the corresponding
operation is called
reschedule_coroutine.
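A usage sketch of such an explicit switch, using the co_continue_on spelling from above (io is assumed to be a scheduler the coroutine has access to):

ex::task<void> stage(ex::task_scheduler io) {
    auto previous = co_await co_continue_on(io); // from here on the task resumes on io
    co_await ex::just();                         // this co_await completes on io
    co_await co_continue_on(previous);           // switch back to the original scheduler
}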
Another advantage of scheduling the operations on a scheduler instead of immediately continuing on the context where the operation completed is that it helps with stack overflows: when scheduling on a non-inline scheduler the call stack is unwound. Without that it may be necessary to inject scheduling just for the purpose of avoiding stack overflow when too many operations complete inline.
When using coroutines at least the coroutine frame may end up being
allocated on the heap: the HALO
optimisations aren’t always possible, e.g., when a coroutine becomes a
child of another sender. To control how this allocation is done and to
support environments where allocations aren’t possible
task should have allocator support.
The idea is to pick up on a pair of arguments of type std::allocator_arg_t
and an allocator type being passed and use the corresponding allocator
if present. For example:
struct allocator_aware_context {
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;
};
template <class...A>
ex::task<int, allocator_aware_context> fun(int value, A&&...) {
    co_return value;
}
int main() {
    // Use the coroutine without passing an allocator:
    ex::sync_wait(fun(17));
    // Use the coroutine with passing an allocator:
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;
    ex::sync_wait(fun(17, std::allocator_arg, allocator_type()));
}

The arguments passed when creating the coroutine are made available
to an operator new
of the promise type, i.e., this operator can extract the allocator, if
any, from the list of parameters and use that for the purpose of
allocation. The matching operator delete
gets passed only the pointer to release and the originally requested
size. To have access to the correct
allocator in operator delete
the allocator either needs to be stateless or a copy needs to be
accessible via the pointer passed to operator delete,
e.g., stored at the offset size.
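The following is a minimal sketch of this technique, under simplifying assumptions: the allocator is a std::pmr::polymorphic_allocator<std::byte>, the std::allocator_arg/allocator pair are the first coroutine parameters (a real implementation would locate them anywhere in the parameter list), and the default frame alignment suffices:

#include <cstddef>
#include <memory>
#include <memory_resource>
#include <new>

struct promise_allocator_base {
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;

    // Selected when the coroutine is invoked as fn(std::allocator_arg, alloc, ...).
    template <class... Rest>
    void* operator new(std::size_t size, std::allocator_arg_t,
                       const allocator_type& alloc, const Rest&...) {
        return allocate(size, alloc);
    }
    // Fallback when no allocator is passed: use a default constructed allocator.
    void* operator new(std::size_t size) { return allocate(size, allocator_type()); }

    void operator delete(void* ptr, std::size_t size) {
        // Retrieve the allocator copy stashed behind the frame and deallocate.
        auto* stored = std::launder(reinterpret_cast<allocator_type*>(
            static_cast<std::byte*>(ptr) + size));
        allocator_type alloc(*stored);
        stored->~allocator_type();
        alloc.deallocate_bytes(ptr, size + sizeof(allocator_type),
                               alignof(std::max_align_t));
    }

private:
    static void* allocate(std::size_t size, const allocator_type& alloc) {
        // Allocate the frame plus room to stash a copy of the allocator at offset size.
        // NB: a production implementation would round the offset up to alignof(allocator_type).
        allocator_type a(alloc);
        void* ptr = a.allocate_bytes(size + sizeof(allocator_type),
                                     alignof(std::max_align_t));
        ::new (static_cast<std::byte*>(ptr) + size) allocator_type(a);
        return ptr;
    }
};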
To avoid any cost introduced by type erasing an allocator type as
part of the task definition the
expected allocator type is obtained from the context argument
C of task<T, C>:
using allocator_type = ex::allocator_of_t<C>;

This
using alias
uses typename C::allocator_type
if present or defaults to std::allocator<std::byte>
otherwise. This allocator_type has
to be for the type
std::byte
(if necessary it is possible to relax that constraint).
The allocator used for the coroutine frame should also be used for any other allocations needed by the coroutine itself, e.g., when type erasing something needed for its operation (although in most cases a small object optimisation would be preferable and sufficient). Also, the allocator should be made available to child operations via the respective receiver’s environment using the get_allocator query. The arguments passed to the coroutine are also available to the constructor of the promise type (if there is a matching one) and the allocator can be obtained from there:
struct allocator_aware_context {
    using allocator_type = pmr::polymorphic_allocator<std::byte>;
};
fixed_resource<2048> resource;
ex::sync_wait([](auto&&, auto* resource)
        -> ex::task<void, allocator_aware_context> {
    auto alloc = co_await ex::read_env(ex::get_allocator);
    use(alloc);
}(std::allocator_arg, &resource));

When
co_awaiting
child operations these may want to access an environment. Ideally, the
coroutine would expose the environment from the receiver it gets
connected
to. Doing so isn’t directly possible because the coroutine type doesn’t
know about the receiver type which in turn determines the environment
type. Also, the queries don’t know the type they are going to return.
Thus, some extra mechanisms are needed to provide an environment.
A basic environment can be provided by some entities already known to the coroutine, though:
- The get_scheduler query should provide the scheduler maintained for scheduler affinity whose type is determined based on the coroutine’s context using ex::scheduler_of_t<C>.
- The get_allocator query should provide the coroutine’s allocator whose type is determined based on the coroutine’s context using ex::allocator_of_t<C>. The allocator gets initialized when constructing the promise type.
- The get_stop_token query should provide a stop token from a stop source which is linked to the stop token obtained from the receiver’s environment. The type of the stop source is determined from the coroutine’s context using ex::stop_source_of_t<C> and defaults to ex::inplace_stop_source. Linking the stop source can be delayed until the first stop token is requested or omitted entirely if stop_possible() returns false or if the stop token type of the coroutine’s receiver matches that of ex::stop_source_of_t<C>.

For any other environment query the context
C of task<T, C>
can be used. The coroutine can maintain an instance of type
C. In many cases queries from the
environment of the coroutine’s
receiver need to be forwarded. Let
env be get_env(receiver)
and Env be the type of
env.
C gets optionally constructed with
access to the environment:
- If C::env_type<Env> is a valid type the coroutine state will contain an object own_env of this type which is constructed with env. The object own_env will live at least as long as the maintained C object and C is constructed with a reference to own_env, allowing C to reference type-erased representations for query results it needs to forward.
- Otherwise, if C(env) is valid the C object is constructed with the result of get_env(receiver). Constructing the context with the receiver’s environment provides the opportunity to store whatever data is needed from the environment to later respond to queries as well.
- Otherwise, C is default constructed. This option typically applies if C doesn’t need to provide any environment queries.

Any query which isn’t provided by the coroutine but is available from the context C is forwarded. Any other query shouldn’t be part of the overload set.
For example:
struct context {
    int value{};
    int query(get_value_t const&) const noexcept { return this->value; }
    context(auto const& env): value(get_value(env)) {}
};
int main() {
    ex::sync_wait(
        ex::write_env(
            []->demo::task<void, context> {
                auto sched(co_await ex::read_env(get_scheduler));
                auto value(co_await ex::read_env(get_value));
                std::cout << "value=" << value << "\n";
                // ...
            }(),
            ex::make_env(get_value, 42)
        )
    );
}

When a coroutine task executes the actual work it may listen to a
stop token to recognise that it got canceled. Once it recognises that
its work should be stopped it should also complete with set_stopped(rcvr).
There is no special syntax needed as that is the result of using just_stopped():
co_await ex::just_stopped();

The sender just_stopped()
completes with set_stopped()
causing the coroutine to be canceled. Any other sender completing with
set_stopped() can
also be used.
The sender/receiver approach to error reporting is for operations to
complete with a call to set_error(rcvr, err)
for some receiver object rcvr and an
error value err. The details of the
completions are used by algorithms to decide how to proceed. For
example, if any of the senders of when_all(sndr...)
fails with a set_error_t completion
the other senders are stopped and the overall operation fails itself
forwarding the first error. Thus, it should be possible for coroutines
to complete with a set_error_t
completion. Using a set_value_t
completion using an error value isn’t quite the same as these are not
detected as errors by algorithms.
The error reporting used for unifex
and stdexec is to turn
an exception escaping from the coroutine into a set_error_t(std::exception_ptr)
completion: when unhandled_exception()
is called on the promise type the coroutine is suspended and the
function can just call set_error(std::move(r), std::current_exception()).
There are a few limitations with this approach:
- The only error completion is set_error_t(std::exception_ptr). While the thrown exception can represent any error type, and set_error_t completions from co_awaited operations result in the corresponding error being thrown, it is better if other error types can be reported, too.
- To access the error stored in a std::exception_ptr the exception has to be rethrown.
- The completion signatures of task<T, C> necessarily contain set_error_t(std::exception_ptr) which is problematic when exceptions are unavailable: std::exception_ptr may also be unavailable. Also, without exceptions it is impossible to decode the error. It can be desirable to have coroutines which don’t declare such a completion signature.

Before going into details on how errors can be reported it is
necessary to provide a way for task<T, C>
to control the error completion signatures. Similar to the return type
the error types cannot be deduced from the coroutine body. Instead, they
can be declared using the context type
C:
- If present, typename C::error_signatures is used to declare the error types. This type needs to be a specialisation of completion_signatures listing the valid set_error_t completions.
- Otherwise, completion_signatures<set_error_t(std::exception_ptr)> is used as a default.

The name can be adjusted and it would be possible to use a different type list template listing the error types. The basic idea would remain the same, i.e., the possible error types are declared via the context type.
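A sketch of what declaring the error completions through the context could look like (names are illustrative): a coroutine using this context can only complete with set_error_t(std::error_code), and consequently no set_error_t(std::exception_ptr) completion is advertised:

struct error_code_context {
    using error_signatures =
        ex::completion_signatures<ex::set_error_t(std::error_code)>;
};
template <class T>
using ec_task = ex::task<T, error_code_context>;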
Reporting an error by having an exception escape the coroutine is
still possible but it doesn’t necessarily result in a
set_error_t: If an exception escapes
the coroutine and set_error_t(std::exception_ptr)
isn’t one of the supported
set_error_t completions, std::terminate()
is called. If an error is explicitly reported somehow, e.g., using one
of the approaches described below, and the error type isn’t supported by
the context’s error_signatures, the
program is ill-formed.
The discussion below assumes the use of the class template with_error<E>
to indicate that the coroutine completed with an error. It can be as
simple as
template <class E> struct with_error{ E error; };

The name can be different although it shouldn’t collide with already used names (like error_code or
upon_error). Also, in some cases
there isn’t really a need to wrap the error into a recognisable class
template. Using a marker type probably helps with readability and
avoiding ambiguities in other cases.
Besides exceptions there are three possible ways how a coroutine can be exited:
The coroutine is exited when using
co_return,
optionally with an argument. Flowing off the end of a coroutine is
equivalent to explicitly using
co_return;
instead of flowing off. It would be possible to turn the use of
co_return with_error{err};

into a set_error(std::move(rcvr), err)
completion.
One restriction with this approach is that for a task<void, C>
the body can’t contain co_return with_error{e};:
the void
result requires that the promise type contains a function return_void() and
if that is present it isn’t possible to also have a return_value(T).
When a coroutine uses
co_await a;
the coroutine is in a suspended state when await_suspend(...)
of some awaiter is entered. While the coroutine is suspended it can be
safely destroyed. It is possible to complete the coroutine in that state
and have the coroutine be cleaned up. This approach is used when the
awaited operation completes with set_stopped(). It
is possible to call set_error(std::move(rcvr), err)
for some receiver rcvr and error
err obtained via the awaitable
a. Thus, using
co_await with_error{err};

could complete with set_error(std::move(rcvr), err). Using the same notation for awaiting outstanding operations and returning results from a coroutine is, however, somewhat surprising. The name of the awaiter may need to become more explicit like exit_coroutine_with_error if this
approach should be supported.
When a coroutine uses
co_yield v;
the promise member yield_value(T)
is called which can return an awaiter
a. When
a’s await_suspend() is
called, the coroutine is suspended and the operation can complete
accordingly. Thus, using
co_yield with_error{err};

could complete with set_error(std::move(rcvr), err).
Using
co_yield for
the purpose of returning from a coroutine with a specific result seems
more expected than using
co_await.
There are technically viable options for returning an error from a coroutine without requiring exceptions. Whether any of them is considered suitable from a readability point of view is a separate question.
One concern which was raised with just not resuming the coroutine is that the time of destruction of variables used by the coroutine is different. The promise object can be destroyed before completing which might address the concern.
Using
co_await or
co_yield to
propagate error results out of the coroutine has a possibly interesting
variation: in both of these cases the error result may be conditionally
produced, i.e., it is possible to complete with an error sometimes and
to produce a value at other times. That could allow a pattern (using
co_yield for
the potential error return):
auto value = co_yield when_error(co_await into_expected(sender));

The subexpression into_expected(sender)
could turn the set_value_t and
set_error_t into a suitable std::expected<V, std::variant<E...>>
always reported using a set_value_t
completion (so the
co_await
doesn’t throw). The corresponding std::expected
becomes the result of the
co_await.
Using
co_yield
with when_error(exp)
where exp is an expected can then
either produce exp.value()
as the result of the
co_yield
expression or it can result in the coroutine completing with the error
from exp.error().
Using this approach produces a fairly compact approach to propagating
the error retaining the type and without using exceptions.
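Spelled out, the pattern could look like the sketch below; into_expected and when_error are the hypothetical helpers discussed here, read_async() stands for a sender completing with either an int or an std::error_code, and error_context is assumed to declare std::error_code as a supported error completion:

ex::task<int, error_context> compute() {
    // The co_await never throws: both completions are folded into the expected.
    std::expected<int, std::error_code> r = co_await into_expected(read_async());

    // Either yields the contained value or completes the coroutine with
    // set_error(std::move(rcvr), r.error()) without ever resuming it.
    int value = co_yield when_error(std::move(r));
    co_return value * 2;
}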
It is easy to use a coroutine to accidentally create a stack overflow because loops don’t really execute like loops. For example, a coroutine like this can easily result in a stack overflow:
ex::sync_wait(ex::write_env(
    []() -> ex::task<void> {
        for (int i{}; i < 1000000; ++i)
            co_await ex::just(i);
    }(),
    ex::make_env(ex::get_scheduler, ex::inline_scheduler{})
));

The reason this innocent looking code creates a stack overflow is
that the use of
co_await
results in some function calls to suspend the coroutine and then further
function calls to resume the coroutine (for a proper explanation see,
e.g., Lewis Baker’s Understanding
Symmetric Transfer). As a result, the stack grows with each
iteration of the loop until it eventually overflows.
With senders it is also not possible to use symmetric transfer to
combat the problem: to achieve the full generality of composing senders, there are still multiple function calls used, e.g., when producing the completion signal. Using get_completion_behaviour from the proposal A sender query for completion behaviour could allow detecting senders which complete synchronously. In these cases the stack overflow could be avoided by relying on symmetric transfer.
When using scheduler affinity the transfer of control via a scheduler
which doesn’t complete immediately does avoid the risk of stack
overflow: even when the
co_awaited
work immediately completes as part of the
await_suspend call of the created
awaiter the coroutine isn’t immediately resumed. Instead, the work is
scheduled and the coroutine is suspended. The thread unwinds its stack
until it reaches its own scheduling and picks up the next entity to
execute.
When using sync_wait(sndr)
the run_loop’s scheduler is used and
it may very well just resume the just suspended coroutine: when there is
scheduling happening as part of scheduler affinity it doesn’t mean that
work gets scheduled on a different thread!
The problem with stack overflows does remain when the work resumes
immediately despite using scheduler affinity. That may be the case when
using an inline scheduler, i.e., a scheduler with an operation state
whose
start()
immediately completes: the scheduled work gets executed as soon as set_value(std::move(rcvr))
is called.
Another potential for stack overflows is when optimising the
behaviour for work which is known to not move to another scheduler: in
that case there isn’t really any need to use
continues_on to get back to the
scheduler where the operation was started! The execution remained on
that scheduler all along. However, not rescheduling the work means that
the stack isn’t unwound.
Since task uses scheduler
affinity by default, stack overflow shouldn’t be a problem and there is
no separate provision required to combat stack overflow. If the
implementation chooses to avoid rescheduling work it will need to make
sure that doing so doesn’t cause any problems, e.g., by rescheduling the
work sometimes. When using an inline scheduler the user will need to be
very careful to not overflow the stack or cause any of the various other
problems with executing immediately.
Asynchronous clean-up of objects is an important facility. Both unifex
and stdexec provide some
facilities for asynchronous clean-up in their respective coroutine task.
Based on the experience the recommendation is to do something
different!
The recommended direction is to support asynchronous resources
independent of a coroutine task. For example the async-object proposal is in this
direction. There is similar work ongoing in the context of Folly. Thus, there is
currently no plan to support asynchronous clean-up as part of the
task implementation. Instead, it can
be composed based on other facilities.
The use of coroutines introduces some issues which are entirely independent of how specific coroutines are defined. Some of these were brought up on prior discussions but they aren’t anything which can be solved as part of any particular coroutine implementation. In particular:
- Because co_awaiting the result of an operation (or co_yielding a value) may suspend a coroutine, there is a potential to introduce problems when resources which are meant to be held temporarily are held when suspending. For example, holding a lock to a mutex while suspending a coroutine can result in a different thread trying to release the lock when the coroutine is resumed (scheduler affinity will move the resumed coroutine to the same scheduler but not necessarily to the same thread).

While these issues are important this proposal isn’t the right place to discuss them. Discussion of these issues should be delegated to suitable proposals wanting to improve this situation in some form.
This section lists questions based on the design discussion above. Each one has a recommendation and a vote is only needed if there are opinions deviating from the recommendation.
- Should as_awaitable(sndr) be changed to support more than one set_value_t(T...) completion? Recommendation: no.
- Should there be transformation algorithms like into_optional, into_expected? Recommendation: no, different proposals.
- Should task support scheduler affinity? Recommendation: yes.
- Should task require a get_scheduler() query on the receiver’s environments? Recommendation: yes.
- Should there be an inline_scheduler (using whatever name) to support disabling scheduler affinity? Recommendation: yes.
- Should task support allocators (default std::allocator<std::byte>)? Recommendation: yes.
- Should co_yield when_error(expected) be supported? Recommendation: yes (although weakly).

An implementation of task as
proposed in this document is available from beman::task.
This implementation hasn’t received much use, yet, as it is fairly new.
It is set up to be buildable and provides some examples as a starting
point for experimentation.
Coroutine tasks very similar although not identical to the one proposed are used in multiple projects. In particular, there are three implementations in wide use:
The first one (Folly::Task)
isn’t based on sender/receiver. Usage experience from all three has
influenced the design of task.
We would like to thank Ian Petersen, Alexey Spiridonov, and Lee Howes for comments on drafts of this proposal and general guidance.
In [version.syn], add a row
#define __cpp_lib_task YYYYMML // also in <execution>

In [except.terminate] paragraph 1 add this bullet at the end of Note 1:
[ Note: These situations are
…
when unhandled_stopped is called on a with_awaitable_senders, or a std::execution::task which doesn’t support a std::execution::set_error_t(std::exception_ptr) completion.

In 33.4 [execution.syn] add declarations for the new classes:
namespace std::execution {
  ...
  // [exec.with.awaitable.senders]
  template<class-type Promise>
    struct with_awaitable_senders;

  // [exec.affine.on]
  struct affine_on_t { unspecified };
  inline constexpr affine_on_t affine_on;

  // [exec.inline.scheduler]
  class inline_scheduler;

  // [exec.task.scheduler]
  class task_scheduler;

  // [exec.task]
  template <class T, class Environment>
  class task;
}

Add new subsections for the different classes at the end of 33 [exec]:
[ Drafting note: Everything below is text meant to go to the end of the 33 [exec] section without any color highlight of what is being added. ]
execution::affine_on [exec.affine.on]

1
affine_on adapts a sender into one
that completes on the specified scheduler. If the algorithm determines
that the adapted sender already completes on the correct scheduler it
can avoid any scheduling operation.
2
The name affine_on denotes a
pipeable sender adaptor object. For subexpressions
sch and
sndr, if decltype((sch))
does not satisfy scheduler, or decltype((sndr))
does not satisfy sender, affine_on(sndr, sch)
is ill-formed.
3
Otherwise, the expression affine_on(sndr, sch)
is expression-equivalent to:
    transform_sender(get-domain-early(sndr), make-sender(affine_on, sch, sndr))

except that sndr is evaluated only once.
4
The exposition-only class template
impls-for is specialized
for affine_on_t as follows:
  namespace std::execution {
    template <>
    struct impls-for<affine_on_t>: default-impls {
      static constexpr auto get-attrs =
        [](const auto& data, const auto& child) noexcept -> decltype(auto) {
          return JOIN-ENV(SCHED-ATTRS(data), FWD-ENV(get_env(child)));
        };
    };
  }

5
Let out_sndr be a subexpression
denoting a sender returned from affine_on(sndr, sch)
or one equal to such, and let
OutSndr be the type decltype((out_sndr)).
Let out_rcvr be a subexpression
denoting a receiver that has an environment of type
Env such that sender_in<OutSndr, Env>
is true. Let
op be an lvalue referring to the
operation state that results from connecting
out_sndr to
out_rcvr. Calling start(op)
will start sndr on the current
execution agent and execute completion operations on
out_rcvr on an execution agent of
the execution resource associated with
sch. If the current execution
resource is the same as the execution resource associated with
sch, the completion operation on
out_rcvr may be called before start(op)
completes. If scheduling onto sch
fails, an error completion on
out_rcvr shall be executed on an
unspecified execution agent.
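For illustration only (not part of the proposed wording): a minimal sketch, assuming an implementation of std::execution plus the affine_on adaptor specified above, showing a sender whose value completion is constrained to a run_loop-driven scheduler.
#include <execution>
#include <thread>
namespace ex = std::execution;

int main() {
    ex::run_loop loop;
    std::thread driver([&loop]{ loop.run(); });

    // affine_on(just(17), sch): the value completion is delivered on an
    // execution agent owned by sch; scheduling is skipped if the work
    // already runs there.
    auto [value] = ex::sync_wait(
        ex::affine_on(ex::just(17), loop.get_scheduler())).value();

    loop.finish();
    driver.join();
    return value == 17 ? 0 : 1;
}
Because just(17) starts on the waiting thread, affine_on has to schedule onto the loop before the value completion is delivered.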
execution::inline_scheduler
[exec.inline.scheduler]
namespace std::execution {
  class inline_scheduler {
    class inline-sender; // exposition only
    template <receiver R>
    class inline-state;  // exposition only
  public:
    using scheduler_concept = scheduler_t;
    constexpr inline-sender schedule() noexcept { return {}; }
    constexpr bool operator== (const inline_scheduler&) const noexcept = default;
  };
}
1
inline_scheduler is a class that
models scheduler [exec.scheduler]. All
objects of type inline_scheduler are
equal.
2
inline-sender is an
exposition-only type that satisfies
sender. The type
 completion_signatures_of_t<inline-sender>
is completion_signatures<set_value_t()>.
3
Let sndr be an expression
of type inline-sender, let
rcvr be an expression such
that receiver_of<decltype((rcvr)), CS>
is true
where CS is completion_signatures<set_value_t()>,
then:
- connect(sndr, rcvr) has type inline-state<remove_cvref_t<decltype((rcvr))>> and is potentially-throwing if and only if ((void)sndr, auto(rcvr)) is potentially-throwing, and
- get_completion_scheduler<set_value_t>(get_env(sndr)) has type inline_scheduler and is potentially-throwing if and only if get_env(sndr) is potentially-throwing.
4
Let o be a
non-const
lvalue of type inline-state<Rcvr>,
and let REC(o)
be a
non-const
lvalue reference to an object of type
Rcvr that was initialized with the
expression rcvr passed to
an invocation of
connect that
returned o, then:
- REC(o) remains valid for the lifetime of the object to which o refers, and
- start(o) is equivalent to set_value(std::move(REC(o))).
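For illustration only (not part of the proposed wording): a minimal sketch of how inline_scheduler can be used to disable scheduler affinity; env_without_affinity is a hypothetical environment type, and the await_transform special case it relies on is specified in [exec.task] below.
#include <execution>
#include <task>
namespace ex = std::execution;

struct env_without_affinity {
    using scheduler_type = ex::inline_scheduler;
};

ex::task<int, env_without_affinity> compute() {
    // Because scheduler_type is inline_scheduler, awaited senders are not
    // wrapped in affine_on: the coroutine resumes wherever they complete.
    co_return co_await ex::just(42);
}
Callers of such a coroutine must be prepared to be resumed on whichever execution agent completed the awaited work.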
execution::task_scheduler
[exec.task.scheduler]
namespace std::execution {
  class task_scheduler {
    class sender; // exposition only
    template <receiver R>
    class state;  // exposition only
  public:
    using scheduler_concept = scheduler_t;
    template <class Sch, class Allocator = allocator<byte>>
      requires (!same_as<task_scheduler, remove_cvref_t<Sch>>)
        && scheduler<Sch>
    explicit task_scheduler(Sch&& sch, Allocator alloc = {});
    sender schedule();
    friend bool operator== (const task_scheduler& lhs, const task_scheduler& rhs)
        noexcept;
    template <class Sch>
      requires (!same_as<task_scheduler, Sch>)
      && scheduler<Sch>
    friend bool operator== (const task_scheduler& lhs, const Sch& rhs) noexcept;
    private:
      shared_ptr<void> sch_; // exposition only
  };
}
1
task_scheduler is a class that
models scheduler [exec.scheduler]. Given an
object s of type
task_scheduler, let SCHED(s)
be the object owned by
s.sch_.
template <class Sch, class Allocator = allocator<byte>>
  requires(!same_as<task_scheduler, remove_cvref_t<Sch>>) && scheduler<Sch>
explicit task_scheduler(Sch&& sch, Allocator alloc = {});
2
Effects: Initializes
sch_ with allocate_shared<remove_cvref_t<Sch>>(alloc, std::forward<Sch>(sch)).
3 Recommended practice: Implementations should avoid the use of dynamically allocated memory for small scheduler objects.
4
Remarks: Any allocations performed by construction of
sender or
state objects resulting
from calls on *this
are performed using a copy of
alloc.
sender schedule();
5
Effects: Returns an object of type
sender containing a sender
initialized with schedule(SCHED(*this)).
bool operator== (const task_scheduler& lhs, const task_scheduler& rhs) noexcept;
6
Effects: Equivalent to: return lhs == SCHED(rhs);
template <class Sch>
  requires (!same_as<task_scheduler, Sch>)
        && scheduler<Sch>
bool operator== (const task_scheduler& lhs, const Sch& rhs) noexcept;
7
Returns: false if the type of SCHED(lhs) is not Sch; otherwise SCHED(lhs) == rhs.
class task_scheduler::sender { // exposition only
public:
  using sender_concept = sender_t;
  template <receiver R>
  state<R> connect(R&& rcvr);
};
8
sender is an
exposition-only class that models
sender [exec.sender] and for which
completion_signatures_of_t<sender>
denotes:
completion_signatures<
  set_value_t(),
  set_error_t(error_code),
  set_error_t(exception_ptr),
  set_stopped_t()>
9
Let sch be an object of type
task_scheduler and let
sndr be an object of type
sender obtained from schedule(sch).
Then get_completion_scheduler<set_value_t>(get_env(sndr)) == sch
is true. The
object SENDER(sndr)
is the sender object contained by
sndr or an object move constructed
from it.
template<receiver Rcvr>
state<Rcvr> connect(Rcvr&& rcvr);
10
Effects: Let r be an object
of a type that models receiver and
whose completion handlers result in invoking the corresponding
completion handlers of rcvr or a copy
thereof. Returns an object of type state<Rcvr>
containing an operation state object initialized with connect(SENDER(*this), std::move(r)).
template <receiver R>
class task_scheduler::state { // exposition only
public:
  using operation_state_concept = operation_state_t;
  void start() & noexcept;
};
11
state is an exposition-only
class template whose specializations model
operation_state 33.8
[exec.opstate].
void start() & noexcept;
12
Effects: Equivalent to start(st)
where st is the operation state
object contained by *this.
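For illustration only (not part of the proposed wording): a minimal sketch, assuming an implementation of std::execution providing run_loop, that type-erases a concrete scheduler behind task_scheduler and supplies the allocator used for the resulting sender and operation-state objects.
#include <execution>
#include <memory_resource>
#include <thread>
namespace ex = std::execution;

int main() {
    ex::run_loop loop;
    std::thread driver([&loop]{ loop.run(); });

    // Type-erase the concrete run_loop scheduler; sender and operation-state
    // allocations performed through sch use a copy of the supplied allocator.
    ex::task_scheduler sch{loop.get_scheduler(),
                           std::pmr::polymorphic_allocator<>{}};

    // schedule(sch) forwards to the wrapped scheduler, so this work completes
    // on the thread running the loop.
    ex::sync_wait(ex::schedule(sch));

    loop.finish();
    driver.join();
}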
execution::task
[exec.task]
task Overview [task.overview]
1
The task class template represents a
sender that can be used as the return type of coroutines. The first
template parameter T defines the
type of the value completion datum (33.3
[exec.async.ops])
if T is not
void.
Otherwise, there are no value completion datums. Inside coroutines
returning task<T, E>
the operand of
co_return
(if any) becomes the argument of
set_value. The second template
parameter Environment is used to
customize the behavior of task.
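For illustration only (not part of the proposed wording): a minimal sketch of composing coroutines that return task; uses_defaults is a hypothetical empty environment, so all nested types fall back to the defaults described below.
#include <execution>
#include <task>
namespace ex = std::execution;

struct uses_defaults {};  // empty environment: all nested types use defaults

ex::task<int, uses_defaults> one() { co_return 1; }

ex::task<int, uses_defaults> two() {
    // Awaiting another task; the operand of co_return becomes the argument
    // of the set_value completion of two().
    int x = co_await one();
    co_return x + 1;
}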
namespace std::execution {
  template <class T, class Environment>
  class task {
    // [task.state]
    template <receiver R>
    class state; // exposition only
  public:
    using sender_concept = sender_t;
    using completion_signatures = see below;
    using allocator_type = see below;
    using scheduler_type = see below;
    using stop_source_type = see below;
    using stop_token_type = decltype(declval<stop_source_type>().get_token());
    using error_types = see below;
    // [task.promise]
    class promise_type;
    task(task&&) noexcept;
    ~task();
    template <receiver R>
    state<R> connect(R&& recv);
  private:
    coroutine_handle<promise_type> handle; // exposition only
  };
}
1
task<T, E>
models sender 33.9
[exec.snd] if
T is
void, a
reference type, or a cv-unqualified non-array object type and
E is a class type. Otherwise, a program
that instantiates the definition of task<T, E>
is ill-formed.
2
The nested types of task template
specializations are determined based on the
Environment parameter:
- allocator_type is Environment::allocator_type if that qualified-id is valid and denotes a type, allocator<byte> otherwise.
- scheduler_type is Environment::scheduler_type if that qualified-id is valid and denotes a type, task_scheduler otherwise.
- stop_source_type is Environment::stop_source_type if that qualified-id is valid and denotes a type, inplace_stop_source otherwise.
- error_types is Environment::error_types if that qualified-id is valid and denotes a type, completion_signatures<set_error_t(exception_ptr)> otherwise.
3
A program is ill-formed if
error_types is not a specialization
of completion_signatures<ErrorSigs...>
or ErrorSigs contains an element
which is not of the form set_error_t(E)
for some type E.
4
The type alias completion_signatures
is a specialization of execution::completion_signatures
with the template arguments (in unspecified order):
- set_value_t() if T is void, and set_value_t(T) otherwise;
- the template arguments of the specialization of execution::completion_signatures denoted by error_types; and
- set_stopped_t().
5
allocator_type shall meet the
Cpp17Allocator requirements.
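For illustration only (not part of the proposed wording): a minimal sketch of an environment that customizes some of the nested types determined in paragraph 2; my_env is a hypothetical type, and members that are omitted keep their defaults.
#include <cstddef>
#include <exception>
#include <execution>
#include <memory_resource>
#include <system_error>
#include <task>
namespace ex = std::execution;

struct my_env {
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;
    using scheduler_type = ex::task_scheduler;  // same as the default
    using error_types    = ex::completion_signatures<
        ex::set_error_t(std::error_code),
        ex::set_error_t(std::exception_ptr)>;
};

// The completion signatures of this task comprise set_value_t(int),
// set_error_t(std::error_code), set_error_t(std::exception_ptr),
// and set_stopped_t().
using my_task = ex::task<int, my_env>;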
task(task&& other) noexcept;
1
Effects: Initializes
handle with exchange(other.handle, {}).
~task();
2
Effects: Equivalent to:
    if (handle)
      handle.destroy();
template <receiver R>
state<R> connect(R&& recv);
3
Preconditions: bool(handle)
is true.
4
Effects: Equivalent to: return state<R>(exchange(handle, {}), std::forward<R>(recv));
namespace std::execution {
  template <class T, class Environment>
    template <receiver R>
  class task<T, Environment>::state { // exposition only
  public:
    using operation_state_concept = operation_state_t;
    template <class Rcvr>
    state(coroutine_handle<promise_type> h, Rcvr&& rr);
    ~state();
    void start() & noexcept;
private:
    using own-env-t = see below;     // exposition only
    coroutine_handle<promise_type> handle;  // exposition only
    remove_cvref_t<R>              rcvr;    // exposition only
    own-env-t                      own-env; // exposition only
    Environment                    environment; // exposition only
  };
}
1
The type own-env-t is Environment::template env_type<decltype(get_env(declval<R>()))>
if that qualified-id is valid and denotes a type,
env<>
otherwise.
template <class Rcvr>
state(coroutine_handle<promise_type> h, Rcvr&& rr);
2
Effects: Initializes
- handle with std::move(h);
- rcvr with std::forward<Rcvr>(rr);
- own-env with own-env-t(get_env(rcvr)) if that expression is valid and own-env-t() otherwise; if neither of these expressions is valid, the program is ill-formed;
- environment with Environment(own-env) if that expression is valid, otherwise Environment(get_env(rcvr)) if this expression is valid, otherwise Environment(). If none of these expressions is valid, the program is ill-formed.
~state();
3
Effects: Equivalent to:
    if (handle)
      handle.destroy();
void start() & noexcept;
4
Effects: Let prom be the
object handle.promise().
Associates STATE(prom),
RCVR(prom),
and SCHED(prom)
with *this
as follows:
- STATE(prom) is *this.
- RCVR(prom) is rcvr.
- SCHED(prom) is the object initialized with scheduler_type(get_scheduler(get_env(rcvr))) if that expression is valid and scheduler_type() otherwise. If neither of these expressions is valid, the program is ill-formed.
Let st be get_stop_token(get_env(rcvr)). Initializes prom.token and prom.source such that
- prom.token.stop_requested() returns st.stop_requested();
- prom.token.stop_possible() returns st.stop_possible(); and
- for all types Fn and Init such that both invocable<Fn> and constructible_from<Fn, Init> are modeled, stop_token_type::callback_type<Fn> models stoppable-callback-for<Fn, stop_token_type, Init>.
After that, invokes handle.resume().
namespace std::execution {
  template <class E>
  struct with_error {
    using type = remove_cvref_t<E>;
    type error;
  };
  template <class E>
  with_error(E) -> with_error<E>;
  template <scheduler Sch>
  struct change_coroutine_scheduler {
    using type = remove_cvref_t<Sch>;
    type scheduler;
  };
  template <scheduler Sch>
  change_coroutine_scheduler(Sch) -> change_coroutine_scheduler<Sch>;
  template <class T, class Environment>
  class task<T, Environment>::promise_type {
  public:
    template <class... Args>
    promise_type(const Args&... args);
    task get_return_object() noexcept;
    auto initial_suspend() noexcept;
    auto final_suspend() noexcept;
    void unhandled_exception();
    coroutine_handle<> unhandled_stopped();
    void return_void(); // present only if is_void_v<T> is true;
    template <class V>
    void return_value(V&& value); // present only if is_void_v<T> is false;
    template <class E>
    unspecified yield_value(with_error<E> error);
    template <class A>
    auto await_transform(A&& a);
    template <class Sch>
    auto await_transform(change_coroutine_scheduler<Sch> sch);
    unspecified get_env() const noexcept;
    template <class... Args>
    void* operator new(size_t size, Args&&... args);
    void operator delete(void* pointer, size_t size) noexcept;
  private:
    using error-variant = see below; // exposition only
    allocator_type    alloc;  // exposition only
    stop_source_type  source; // exposition only
    stop_token_type   token;  // exposition only
    optional<T>       result; // exposition only; present only if is_void_v<T> is false;
    error-variant      errors; // exposition only
  };
}
1
Let prom be an object of
promise_type and let
tsk be the
task object created by prom.get_return_object().
The description below refers to objects STATE(prom),
RCVR(prom),
and SCHED(prom)
associated with tsk during
evaluation of task::state<Rcvr>::start
for some receiver Rcvr [task.state].
2
error-variant is a variant<monostate, remove_cvref_t<E>...>,
with duplicate types removed, where
E... are
template arguments of the specialization of execution::completion_signatures
denoted by error_types.
template <class... Args>
promise_type(const Args&... args);
3
Mandates: The first parameter of type
allocator_arg_t (if any) is not the
last parameter.
4
Effects: If Args contains
an element of type allocator_arg_t
then alloc is initialized
with the element of args following the first allocator_arg_t element. Otherwise,
alloc is initialized with
allocator_type().
task get_return_object() noexcept;
5
Returns: A task object
whose member handle is
coroutine_handle<promise_type>::from_promise(*this).
auto initial_suspend() noexcept;
6
Returns: An awaitable object of unspecified type ([expr.await]) whose member functions arrange for the coroutine to be suspended and subsequently resumed on an execution agent of the execution resource associated with SCHED(*this).
auto final_suspend() noexcept;
7
Returns: An awaitable object of unspecified type ([expr.await])
whose member functions arrange for the completion of the asynchronous
operation associated with STATE(*this)
by invoking:
- set_error(std::move(RCVR(*this)), std::move(e)) if errors.index() is greater than zero and e is the value held by errors, otherwise
- set_value(std::move(RCVR(*this))) if is_void_v<T> is true, and otherwise
- set_value(std::move(RCVR(*this)), *result).
template <class Err>
auto yield_value(with_error<Err> err);
8
Mandates: std::move(err.error)
is convertible to exactly one of the
set_error_t argument types of
error_types. Let
Cerr be that type.
9
Returns: An awaitable object of unspecified type ([expr.await])
whose member functions arrange for the calling coroutine to be suspended
and then completes the asynchronous operation associated with STATE(*this)
by invoking set_error(std::move(RCVR(*this)), Cerr(std::move(err.error))).
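For illustration only (not part of the proposed wording): a minimal sketch of completing a task with an error via co_yield with_error{...}; error_env is a hypothetical environment whose error_types includes set_error_t(std::error_code).
#include <exception>
#include <execution>
#include <system_error>
#include <task>
namespace ex = std::execution;

struct error_env {
    using error_types = ex::completion_signatures<
        ex::set_error_t(std::error_code),
        ex::set_error_t(std::exception_ptr)>;
};

ex::task<int, error_env> parse(int value) {
    if (value < 0)
        // Completes the task with set_error(std::error_code{...}); control
        // does not return to the coroutine after this co_yield.
        co_yield ex::with_error{std::make_error_code(std::errc::invalid_argument)};
    co_return value;
}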
template <sender Sender>
auto await_transform(Sender&& sndr) noexcept;
10
Returns: as_awaitable(std::forward<Sender>(sndr), *this) if same_as<inline_scheduler, scheduler_type> is true;
otherwise as_awaitable(affine_on(std::forward<Sender>(sndr), SCHED(*this)), *this).
template <class Sch>
auto await_transform(change_coroutine_scheduler<Sch> sch) noexcept;
11
Effects: Equivalent to: return await_transform(just(exchange(SCHED(*this), scheduler_type(sch.scheduler))), *this);
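For illustration only (not part of the proposed wording): a minimal sketch of switching the scheduler used for affinity from within a coroutine; default_env is a hypothetical empty environment and the pool scheduler is assumed to be obtained elsewhere.
#include <execution>
#include <task>
namespace ex = std::execution;

struct default_env {};

ex::task<void, default_env> rescheduled(ex::task_scheduler pool_scheduler) {
    // Installs pool_scheduler as SCHED(*this); subsequent co_awaits resume
    // on it. The co_await yields the previously installed scheduler.
    auto previous = co_await ex::change_coroutine_scheduler{pool_scheduler};
    // ... work that should run on pool_scheduler ...
    co_await ex::change_coroutine_scheduler{previous};  // switch back
}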
void unhandled_exception();
12
Effects: If the signature set_error_t(exception_ptr)
is not an element of error_types,
calls
terminate()
(14.6.2
[except.terminate]).
Otherwise, stores current_exception()
into errors.
coroutine_handle<> unhandled_stopped();
13
Effects: Completes the asynchronous operation associated with
STATE(*this)
by invoking set_stopped(std::move(RCVR(*this))).
14
Returns: noop_coroutine().
unspecified get_env() const noexcept;
15
Returns: An object env such
that queries are forwarded as follows:
- env.query(get_scheduler) returns scheduler_type(SCHED(*this)).
- env.query(get_allocator) returns alloc.
- env.query(get_stop_token) returns token.
- For any other query q and arguments a..., a call to env.query(q, a...) returns STATE(*this).environment.query(q, a...) if this expression is well-formed and forwarding_query(q) is well-formed and is true. Otherwise env.query(q, a...) is ill-formed.
template <class... Args>
void* operator new(size_t size, const Args&... args);
16
If
there is no parameter with type
allocator_arg_t then let
alloc be
Allocator().
Let arg_next be the parameter
following the first allocator_arg_t
parameter (if any) and let alloc be
Allocator(arg_next).
Then PAlloc is allocator_traits<Allocator>::template rebind_alloc<U>
where U is an unspecified type whose
size and alignment are both __STDCPP_DEFAULT_NEW_ALIGNMENT__.
17
Mandates:
- A parameter of type allocator_arg_t (if any) is not the last parameter.
- Allocator(arg_next) is a valid expression if there is a parameter of type allocator_arg_t.
- allocator_traits<PAlloc>::pointer is a pointer type.
18
Effects: Initializes an allocator
palloc of type
PAlloc with
alloc. Uses
palloc to allocate storage for the
smallest array of U sufficient to
provide storage for a coroutine state of size
size, and unspecified additional
state necessary to ensure that operator delete
can later deallocate this memory block with an allocator equal to
palloc.
19 Returns: A pointer to the allocated storage.
void operator delete(void* pointer, size_t size) noexcept;
20
Preconditions: pointer was
returned from an invocation of the above overload of operator new
with a size argument equal to
size.
21
Effects: Deallocates the storage pointed to by
pointer using an allocator equal to
that used to allocate it.
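For illustration only (not part of the proposed wording): a minimal sketch of the allocator_arg protocol described above; alloc_env, the buffer, and the memory resource are hypothetical and chosen only to show that the coroutine state is allocated through the supplied allocator.
#include <array>
#include <cstddef>
#include <execution>
#include <memory>
#include <memory_resource>
#include <task>
namespace ex = std::execution;

struct alloc_env {
    using allocator_type = std::pmr::polymorphic_allocator<std::byte>;
};

// The argument following allocator_arg is used by operator new to allocate
// the coroutine state and initializes the promise's allocator.
ex::task<int, alloc_env> with_allocator(
    std::allocator_arg_t, std::pmr::polymorphic_allocator<std::byte>) {
    co_return 0;
}

int main() {
    std::array<std::byte, 4096> buffer;
    std::pmr::monotonic_buffer_resource resource(buffer.data(), buffer.size());
    auto t = with_allocator(std::allocator_arg,
                            std::pmr::polymorphic_allocator<std::byte>(&resource));
    return std::get<0>(*ex::sync_wait(std::move(t)));
}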