An execution context is a resource that represents the place where execution will happen. [...]

A scheduler is a lightweight handle that represents a strategy for scheduling work onto an execution context. [...] A scheduler is a solution to that problem: the scheduler concept is defined by a single sender algorithm, schedule, which returns a sender that will complete on an execution context determined by the scheduler. [...]

A sender is an object that describes work. Senders are similar to futures in existing asynchrony designs, but unlike futures, the work that is being done to arrive at the values they will send is also directly described by the sender object itself. A sender is said to send some values if a receiver connected to that sender will eventually receive said values.
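The connect/start handshake between a sender and a receiver can be sketched in plain C++. Everything below is a hypothetical, hand-rolled model for illustration only; the names mirror P2300's connect, start, and set_value, but this is not the real std::execution API, which is far more general.

```cpp
#include <cassert>

// A toy receiver: its set_value member is the completion channel
// through which it eventually receives the sent value.
struct store_receiver {
    int* slot;
    void set_value(int v) { *slot = v; }
};

// A toy sender that *describes* the work of producing 42. Nothing
// happens at construction; the description only runs once an
// operation state is started.
struct just_42_sender {
    template <class Receiver>
    struct operation {
        Receiver r;
        void start() { r.set_value(42); }  // launch the described work
    };
    template <class Receiver>
    operation<Receiver> connect(Receiver r) { return {r}; }
};

int run_demo() {
    int result = 0;
    auto op = just_42_sender{}.connect(store_receiver{&result});
    op.start();     // the work happens only here
    return result;  // the receiver has now received 42
}
```

Note that connecting alone does nothing: the sender is inert until the operation state is started, which is what distinguishes a sender from an eagerly-launched future.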

Senders are composed using sender algorithms:

  • sender factories, algorithms that take no senders and return a sender.
  • sender adaptors, algorithms that take senders and return a sender.
  • sender consumers, algorithms that take senders and do not return a sender.
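The three kinds of algorithm can be modeled with a toy framework, shown below. This is a deliberately simplified sketch specialized to synchronous int-producing work; the names just, then, and sync_wait are borrowed from P2300, but these definitions are illustrative stand-ins, not the proposal's specification.

```cpp
#include <cassert>

// A toy "sender" that wraps a callable describing the work.
template <class F> struct fn_sender { F work; };
template <class F> fn_sender(F) -> fn_sender<F>;

// Sender factory: takes no senders, returns a sender.
inline auto just(int v) {
    return fn_sender{[v] { return v; }};
}

// Sender adaptor: takes a sender (plus a transform), returns a sender.
template <class S, class F>
auto then(S s, F f) {
    return fn_sender{[s, f] { return f(s.work()); }};
}

// Sender consumer: takes a sender, runs it, does not return a sender.
template <class S>
int sync_wait(S s) { return s.work(); }
```

For example, `sync_wait(then(just(20), [](int v) { return v * 2; }))` builds a description of "produce 20, then double it" and only then runs it, yielding 40.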

[...] Any given sender may have completion schedulers for some or all of the signals (value, error, or done) it completes with.

A single-shot sender can only be connected to a receiver at most once. [...]
A multi-shot sender can be connected to multiple receivers and can be launched multiple times. [...]

The split sender adaptor facilitates connecting to a sender multiple times, regardless of whether it is single-shot or multi-shot: [...]
when_all is a sender adaptor that returns a sender that completes when the last of the input senders completes. [...]
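A toy model of when_all, continuing the same hand-rolled sender sketch, is below. This single-threaded stand-in runs its inputs in sequence, which trivially satisfies "completes when the last input completes"; the real when_all permits its inputs to run concurrently and is far more general. None of this is the P2300 specification.

```cpp
#include <cassert>
#include <utility>

// Toy "sender": a wrapped callable describing the work.
template <class F> struct fn_sender { F work; };
template <class F> fn_sender(F) -> fn_sender<F>;

inline auto just(int v) {
    return fn_sender{[v] { return v; }};
}

// Toy when_all for two int senders: the resulting sender completes
// only after both inputs have completed, sending both values.
template <class A, class B>
auto when_all(A a, B b) {
    return fn_sender{[a, b] { return std::pair{a.work(), b.work()}; }};
}

template <class S>
auto sync_wait(S s) { return s.work(); }
```

So `sync_wait(when_all(just(1), just(2)))` yields the pair (1, 2): the combined sender sends the values of all of its inputs together.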

To facilitate an intuitive syntax for composition, most sender adaptors are pipeable; they can be composed (piped) together with operator|. This mechanism is similar to the operator| composition that C++ range adaptors support and draws inspiration from piping in *nix shells. [...]
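The piping mechanism can be sketched in the same toy model: calling an adaptor without its sender argument yields a closure object, and operator| applies that closure to the sender on its left, exactly as range adaptor closures work. The types below are hypothetical illustrations, not P2300's pipeable machinery.

```cpp
#include <cassert>

// Toy "sender": a wrapped callable describing the work.
template <class F> struct fn_sender { F work; };
template <class F> fn_sender(F) -> fn_sender<F>;

inline auto just(int v) {
    return fn_sender{[v] { return v; }};
}

// then(f) without a sender argument returns a pipeable closure.
template <class F> struct then_closure { F f; };
template <class F> auto then(F f) { return then_closure<F>{f}; }

// operator| applies a pipeable closure to the sender on its left,
// producing a new sender, so adaptors chain left to right.
template <class S, class F>
auto operator|(S s, then_closure<F> c) {
    return fn_sender{[s, c] { return c.f(s.work()); }};
}

template <class S>
int sync_wait(S s) { return s.work(); }
```

With this in place, `just(3) | then([](int v) { return v + 1; }) | then([](int v) { return v * 10; })` reads as a left-to-right pipeline and describes the value 40 without running anything until it is consumed.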

2021-09-28 Joint Library Evolution and Concurrency Telecon

POLL: We must have a single async model for the C++ Standard Library.
Outcome: No consensus.
POLL: Knowing what we know today, we should continue considering shipping the Networking TS in C++23.
Outcome: No consensus.
POLL: Knowing what we know today, we should continue considering shipping P2300 std::execution in C++23.
Outcome: No consensus.


Over the summer of 2021, Library Evolution reviewed P2300R0 (std::execution), a new streamlined Executors proposal from some of the authors of P0443R14 (A Unified Executors Proposal for C++). The aim of P2300 was to simplify Executors down to a minimal core and make the proposal more accessible and easier to understand. The new proposal was well received by Library Evolution, and there was consensus to proceed with P2300 instead of P0443.

However, there is still disagreement between the proponents of P2300 and the proponents of P2444R0, the async model of the Networking TS. During our discussions, we have begun to question whether we need to pursue a single unified asynchronous model for both the Networking TS and structured concurrency.

It is unclear at this point whether P2300, or any other form of Executors, will be ready for C++23. While we have consensus on the design of P2300, there are still questions about field experience and maturity. Given that the window for C++23 closes at the end of 2021, the likelihood of including Executors in C++23 is rapidly diminishing.

The following is what we will work on delivering in the rest of the C++23 design cycle. This is by no means a guarantee that these facilities will make it into C++23.
We will commit significant time and resources to the following work, which has previously been identified as a priority:

P2300: std::execution

  • Plenary adopted priority (see P0592).
  • Library Evolution voted to continue working on this in the C++23 cycle.