Bit-precise integers
- Document number: P3666R4
- Date: 2026-05-12
- Audience: CWG, LEWG
- Project: ISO/IEC 14882 Programming Languages — C++, ISO/IEC JTC1/SC22/WG21
- Reply-to: Jan Schultke <janschultke@gmail.com>
- GitHub Issue: wg21.link/P3666/github
- Source: github.com/Eisenwave/cpp-proposals/blob/master/src/bitint.cow
Contents
Revision history
Changes since R3
Changes since R2
Changes since R1
Changes since R0
Introduction
C23
P3140R0 "std::int_least128_t"
P3639R0 "The _BitInt Debate"
Motivation
Computation beyond 64 bits
Cornerstone of standard library facilities
C ABI compatibility
Resolving issues with the current integer type system
Portable exact-width integers
Core design
Why not a class template?
LEWG is not convinced that a library type should be pursued
Full C compatibility requires fundamental types
Common spelling of unsigned _BitInt(N)
C compatibility would require an enormous amount of operator overloads etc.
Constructors cannot signal narrowing
Tiny integers are useful in C++
Special deduction rules
Special overload resolution rankings
Quality of implementation requires a fundamental type
Why the _BitInt keyword spelling?
Underlying type of enumerations
Should bit-precise integers be optional?
_BitInt(1)
Undefined behavior on signed integer overflow
Permissive implicit conversions
C compatibility
Difficulty of carving out exceptions in the language
Picking some low-hanging fruits
Conclusion on implicit conversions
Raising the BITINT_MAXWIDTH
Possible increased BITINT_MAXWIDTH values
Template argument deduction
No preprocessor changes, for better or worse
Padding in _BitInt
Library design
Preventing library support for _BitInt
Broadening is_integral
Reasons for making std::is_integral_v<_BitInt(N)> true
Conclusion
make_signed and make_unsigned
The problem of representing widths as int
Preventing ranges::iota_view ABI break
Preserving integer-class types
Bit-precise size_t, ptrdiff_t
Feature testing
Using bit-precise integers in <cmath> functions
Note on alias templates for _BitInt
Implementation experience
Impact on the standard
Impact on the core language
Impact on the standard library
Wording
Core
[lex.icon]
[basic.fundamental]
[conv.rank]
[conv.prom]
[dcl.type.general]
[dcl.type.simple]
[dcl.enum]
[temp.deduct.general]
[temp.deduct.type]
[cpp.predefined]
[diff.lex]
Library
[allocator.requirements.general]
[version.syn]
[support.types.byteops]
[cstdint.syn]
[climits.syn]
[intseq.intseq]
[meta.trans.sign]
[utility.intcmp]
[bit.byteswap]
[bit]
[stdbit.h.syn]
[container.reqmts]
[mdspan.extents.overview]
[mdspan.sub.overview]
[mdspan.sub.range.slices]
[iterator.concept.winc]
[iterator.iterators]
[common.iter.types]
[range.iota.view]
[alg.foreach]
[alg.search]
[alg.copy]
[alg.fill]
[alg.generate]
[numeric.ops.gcd]
[numeric.ops.lcm]
[numeric.ops.midpoint]
[numeric.sat.func]
[numeric.sat.cast]
[charconv.syn]
[format.formatter.spec]
[cmplx.over]
[rand.req.seedseq]
[rand.req.urng]
[rand.util.seedseq]
[cmath.syn]
[simd.expos]
[simd.expos.defn]
[simd.mask.overview]
[simd.mask.ctor]
[numerics.c.ckdint]
[time.duration.general]
[stream.types]
[atomics.ref.int]
Acknowledgements
References
1. Revision history
1.1. Changes since R3
During the 2026-03 meeting in Croydon, both EWG and LEWG saw the paper. EWG had only the following poll:
Forward P3666R3 to CWG and LEWG for C++29

| SF | F | N | A | SA |
|---|---|---|---|---|
| 17 | 29 | 3 | 2 | 0 |

Result: strong consensus in favor
The following changes were made based on feedback from both groups (where the design changes are all to the library part, since EWG forwarded with no changes):
- mentioned the `auto` deduction case in the example in §4.9. Template argument deduction (no design change, just more discussion that matches the wording)
- added discussion on §4.11. Padding in `_BitInt`
- included LEWG poll results from the 2026-03 Croydon meeting and called for relitigation in §5.2. Broadening `is_integral`
- removed the proposed `std::bit_int` and `std::bit_uint` alias templates
- removed all proposed library overloads for `_BitInt`, reducing library support to the bare minimum
- removed the section on Education and Teaching principles; it no longer added value because `_BitInt` is obviously a C compatibility feature, and this is evident from other sections of the paper (and the keyword) already
- mentioned that [N3705] has been adopted into C2y and rephrased parts of the paper accordingly
- rebased §8. Wording on [N5032] with the changes from the Croydon motions applied
- fixed various editorial problems
- massively overhauled wording in §8.2. Library to prevent `_BitInt` support
1.2. Changes since R2
- mentioned that [N3747] has been approved by WG14 for C2y
- removed `std::simd` support
- removed `std::atomic` support
- removed mentions of P3438R0 because `to_string` is now `constexpr`
- removed changes to [utility.intcmp] because these are no longer necessary after editorial pull request #8616 was merged
- rebased §8. Wording on [N5032]
1.3. Changes since R1
- added SG22 and SG6 poll results to §4. Core design
- changed the suggested alternative `BITINT_MAXWIDTH` in §4.8. Raising the `BITINT_MAXWIDTH` from 65'535 to 32'767
- expanded section on naming of the alias template after SG22 feedback
- added section on preserving integer class types and corresponding wording changes
- fixed missing return types in the proposed `abs` overload and throughout the paper
- expanded section on passing `_BitInt` in functions with an observation about return types
- added §5.4. The problem of representing widths as `int`
- added section on library policy for accepting bit-precise integers in templates, and applied the proposed policy to the `std::abs` overload for bit-precise integers
- updated reference from [N3699] to [N3747]
- also added `abs` overloads to `<cstdlib>`, instead of just `<cmath>`
- added changes to [utility.intcmp] (adding support to `cmp_less` was already intended, but potentially requires wording due to a weird choice of constraints)
1.4. Changes since R0
- updated §4.5. `_BitInt(1)` following the publication of [N3699]
- added §5.3. `make_signed` and `make_unsigned` and corresponding wording changes in § [meta.trans.sign]
- permitted bit-precise integers as underlying types of enumerations as standardized in C2y by [N3705]; see §4.3. Underlying type of enumerations and § [dcl.enum]
- further changed § [conv.prom] wording, taking promotion of enumerations with underlying bit-precise integer type into account
- added § [diff.lex] Annex C entry for the difference in the type of `0wb`
- various minor wording tweaks and added notes
- converted green notes into aqua "editor's notes" to more clearly distinguish them from wording changes
2. Introduction
In distant history, there have been various attempts at standardizing multi-precision integers in C++, such as [N1692] "A Proposal to add the Infinite Precision Integer to the C++ Standard Library", [N1744] "Big Integer Library Proposal for C++0x", and [N4038] "Proposal for Unbounded-Precision Integer Types", all of which have been abandoned by the authors. However, there has always been some enthusiasm in the committee for such a feature.
I am picking up where they have left off. This effort has now converged on a C compatibility design based on fundamental types.
2.1. C23
Recently, WG14's [N2763] introduced the set of `_BitInt` types to the C23 standard, and [N2775] further enhanced this feature with the `wb` and `uwb` literal suffixes.
For example, this feature may be used as follows:
In short, the behavior of these bit-precise integers is as follows:
- No integer promotion to `int` takes place.
- Mixed-signedness comparisons, implicit conversions, and other permissive features are supported.
- They have lower conversion rank than standard integers, so an operation between `_BitInt(8)` and `int` yields `int`, as does an operation with `_BitInt(N)` where `N` is the width of `int`. They only have greater conversion rank when their width is greater.
- Widths up to `BITINT_MAXWIDTH` are allowed, with padding bits being added if needed. `BITINT_MAXWIDTH` is at least 64.
2.2. P3140R0 "std::int_least128_t"
In parallel, I proposed [P3140R0], which would add 128-bit integers as `std::int_least128_t` to the C++ standard.
It became apparent to me that standardizing just a single width of 128
and not solving the C compatibility problem would be futile,
so I've stepped away from the proposal.
However, the feedback and experience gained from P3140
made it well worth the time spent.
2.3. P3639R0 "The _BitInt Debate"
I've subsequently proposed [P3639R0] "The `_BitInt` Debate", which attempts to answer whether the set of types corresponding to `_BitInt` should be a class template or a family of fundamental types.
P3639R0 received much feedback in 2025.
First, from SG22:
The WG14 delegation to SG22 believes that the C++ type family that deliberately corresponds to _BitInt (perhaps via compatibility macros) should be... (Fundamental/Library)

| Delegation | SF | F | N | L | SL |
|---|---|---|---|---|---|
| WG14 | 8 | 1 | 1 | 0 | 0 |
| WG21 | 4 | 5 | 0 | 0 | 0 |
The overall sentiment in SG22 was that a fundamental type is "inevitable". This is reflected in the polls. SG6 also saw the paper, but had no clear opinion on the fundamental/library problem. Last but not least, EWG also saw the paper in Sofia 2025, with the following two polls:
P3639R0: EWG prefers that _BitInt-like type be a FUNDAMENTAL TYPE (in some form) in C++.

| SF | F | N | A | SA |
|---|---|---|---|---|
| 13 | 9 | 9 | 5 | 4 |

Result: consensus
P3639R0: EWG prefers that _BitInt-like type be a LIBRARY TYPE (in some form) in C++.

| SF | F | N | A | SA |
|---|---|---|---|---|
| 8 | 9 | 14 | 8 | 3 |

Result: not consensus
3. Motivation
3.1. Computation beyond 64 bits
Computation beyond 64 bits, such as with 128-bit integers, is immensely useful. A large amount of motivation for 128-bit computation can be found in [P3140R0]. Computations in cryptography, such as RSA, require even 4096-bit integers.
Even when performing most operations using 64-bit integers,
there are certain use cases where temporarily, twice the width is needed.
For example, the implementation of `linear_congruential_engine<uint64_t>` requires the use of 128-bit arithmetic, as does arithmetic with 64-bit fixed-point numbers.
3.2. Cornerstone of standard library facilities
There are various existing and possible future library facilities that would greatly benefit from an N-bit integer type:
- As mentioned above, the implementation of `linear_congruential_engine<uint64_t>` requires the use of 128-bit integers.
- `bitset` has constructors taking `unsigned long long` and a `to_ullong` member function that converts from/to integers. This is clunky and limited considering that bitsets can be much larger than `unsigned long long`. Bit-precise integers would be a superior alternative to `unsigned long long` here.
- [P3161R4] proposes library features such as `add_carry` or `mul_wide` which produce a wider integer result than the operands. For example:

  ```cpp
  template<class T>
  struct mul_wide_result {
      T low_bits;
      T high_bits;
  };

  template<class T>
  constexpr mul_wide_result<T> mul_wide(T x, T y) noexcept;
  ```

  Proposals like these are arguably obsolete if the same operation can be expressed by simply casting the operands to an integer with double the width prior to the multiplication.
3.3. C ABI compatibility
C++ currently has no portable way to call C functions such as:
While one could rely on the ABI of a bit-precise integer and a standard integer of equal width being identical in the first overload, there certainly is no way to portably invoke the second overload.
This compatibility problem is not a hypothetical concern either; it is an urgent problem.
There are already targets where `_BitInt` is supported by major compilers and used by C developers:
| Compiler | Targets | Languages |
|---|---|---|
| clang 16+ | all | C & C++ |
| GCC 14+ | 64-bit only | C |
| MSVC | ❌ | ❌ |
3.4. Resolving issues with the current integer type system
`_BitInt` as standardized in C solves multiple issues that the standard integers (`int`, `long`, etc.) have. Among other problems, integer promotion can result in unexpected signedness changes. Assume in the following that `int` is a 32-bit signed integer (which it is on many platforms). When two `uint16_t` values are multiplied, each operand is promoted to `int`, and the result of the multiplication may not be representable as a 32-bit signed integer; for example, `0xFFFF * 0xFFFF` mathematically yields 4294836225, which exceeds `INT_MAX`. Therefore, signed integer overflow takes place given unsigned operands.
`uint16_t` is typically an alias for `unsigned short` and gets promoted to `int`. Surprisingly, bitwise operations on such promoted values then behave as if the operands were signed: complementing a `uint16_t` yields a negative `int`, so a subsequent right-shift of that value shifts set bits in from the left. Even more surprisingly, if we had used `uint32_t` instead of `uint16_t`, the result would be different, despite our code seemingly using only unsigned integers.
Overall, the current integer promotion semantics are extremely surprising
and make it hard to write correct code involving promotable unsigned integers.
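The signedness change is easy to demonstrate with standard integers alone (a sketch; it assumes the common case that `int` is wider than 16 bits):

```c
#include <stdint.h>

// uint16_t is promoted to (signed) int, so 0 - 1 really is -1
int is_negative_u16(uint16_t x) { return (x - 1) < 0; }

// uint32_t is not promoted (given 32-bit int), so 0 - 1 wraps to UINT32_MAX
int is_negative_u32(uint32_t x) { return (x - 1) < 0; }
```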
Promotion also makes it hard to expose small integers (e.g. a 10-bit unsigned integer) that exist in hardware (e.g. FPGAs) in the language, since all operations would be performed using `int`.
Unconventional hardware such as FPGAs is a pillar of the motivation for `_BitInt` laid out in [N2763].
3.5. Portable exact-width integers
There is no portable way to use an integer with exactly 32 bits in standard C++.
`unsigned int` and `unsigned long` may be wider, and `uint32_t` is an optional type alias which only exists if such an integer type has no padding bits.
Having additional non-padding bits may be undesirable when implementing serialization,
networking, etc. where the underlying file format or network protocol is specified
using exact widths.
While most platforms support 32-bit integers as `uint32_t`, their optionality is a problem for use in the standard library and other ultra-portable libraries.
There are many use cases where padding bits would be an acceptable sacrifice
in exchange for writing portable code,
and bit-precise integers fill that gap in the language.
4. Core design
The overall design strategy is as follows:
- The proposal is a C compatibility proposal first and foremost. Whenever possible, we match the behavior of the C type.
- The goal is to deliver a minimal viable product (MVP) which can be integrated into the standard as quickly as possible. This gives us plenty of time to add standard library support wherever desirable over time, as well as other convenience features surrounding `_BitInt`.
The first of these points was discussed in great detail in SG22 and SG6, and has unanimous support from both groups; feedback from SG22 was given 2025-10-09 during a telecon:
/Poll/: Do you agree with the author's position on fundamental types being better than a class template for _BitInt?
Any objections to unanimous consent? /None/

/Poll/: Do you agree with allowing 0wb = _BitInt(1) and enum E : _BitInt(N), assuming C adopts N3699 and N3705?
Any objections to unanimous consent? /None/

/Poll/: Do you agree with keeping UB on signed integer overflow for _BitInt?
Any objections to unanimous consent? /None/

/Poll/: Do you agree that WG21 keep all implicit conversions for _BitInt?
Any objections to unanimous consent? /None/

/Poll/: Do you agree that WG21 keep the lower limit on the value of BITINT_MAXWIDTH from C?
Any objections to unanimous consent? /None/

/Poll/: Do you agree that WG21 should add a _BitInt keyword?
Any objections to unanimous consent? /None/

[…]
Group agrees that we want to pursue compatibility between C and C++ with regards to _BitInt
Both directions mentioned in that poll have since been adopted by C2y, via [N3747] and [N3705].
SG6 had concerns regarding the standard library impact of bit-precise integers, but agreed with the core design strategy during the Kona 2025 meeting:
POLL: Let `_BitInt` have the exact same semantics as in C.

| SF | F | N | A | SA |
|---|---|---|---|---|
| 7 | 2 | 0 | 0 | 0 |

- Author Position: SF
- Outcome: Strong consensus in favor
`_BitInt(1)` and bit-precise underlying enumeration types were presented to SG6, and SG6 seemed to agree with the author's choices once it was clear that C2y is heading in this direction anyway. Overall, both SG22 and SG6 agree that `_BitInt` in C++ should match the C design, and keeping it in sync with C2y's changes since C23 is necessary for that.
EWG then reaffirmed every decision with resounding consensus during the 2026-03 Croydon meeting:
Forward P3666R3 to CWG and LEWG for C++29

| SF | F | N | A | SA |
|---|---|---|---|---|
| 17 | 29 | 3 | 2 | 0 |

Result: strong consensus in favor
4.1. Why not a class template?
[P3639R0] explored in detail whether to make `_BitInt` a fundamental type or a library type. Furthermore, feedback given by SG22 and EWG was to make it a fundamental type, not a library type. This boils down to two plausible designs (assuming `_BitInt` is already supported by the compiler), shown below.
| 𝔽 – Fundamental type | 𝕃 – Library type |
|---|---|
The reasons why we should prefer the left side are described in the following subsections.
4.1.1. LEWG is not convinced that a library type should be pursued
During the 2026-03 Croydon meeting, the following poll was taken:
POLL: We should provide `std::bit_int` and `std::bit_uint` as class templates (not necessarily as part of P3666)

| SF | F | N | A | SA |
|---|---|---|---|---|
| 2 | 4 | 8 | 4 | 0 |

- Author's Position: WA
- Outcome: No consensus
`_BitInt` was already accepted as a fundamental type by EWG, so this poll asked whether both a fundamental type and a library type should be pursued, rather than putting the options against each other.
4.1.2. Full C compatibility requires fundamental types
`_BitInt` in C can be used as the type of a bit-field, among other places:
Since C++ does not support the use of class types in bit-fields, such a `_BitInt` bit-field could not be passed from C++ to a C API. A developer would face severe difficulties when porting C code which makes use of these capabilities to C++ if bit-precise integers were a class type in C++.
4.1.3. Common spelling of unsigned _BitInt(N)
If bit-precise integers were class types in C++, this would cause a serious problem: there would be no common spelling of `unsigned _BitInt(N)` that can be used in both C and C++ headers, even if there was a compatibility macro.
There are some workarounds to the problem, but they all seem unattractive:
- Permitting `signed` and `unsigned` to be combined with class types in general, perhaps with the effect of applying `std::make_signed` and `std::make_unsigned`. This would lead to a bifurcation of the language where both a builtin feature and a type trait achieve the same effect.
- "Blessing" the `std::bit_int<N>` type-name so it can be combined with `unsigned`. This would be a highly unusual special case in the language.
- Making `_BitInt(...)` expand to an unspecified construct that can be combined with `signed` and `unsigned`. This means there needs to be a fundamental type, although that fundamental type only acts as a proxy for the `std::bit_int` class type. Once again, this comes off as an unusual special case.
- Introducing a `_BitUint(...)` macro for unsigned bit-precise integers, and insisting that both C and C++ developers use this for interoperability. This feels like an unnecessary burden for C developers considering that their spelling works perfectly fine and that we have other design options which keep C code intact.
4.1.4. C compatibility would require an enormous amount of operator overloads etc.
Integer types can be used in a large number of places within the language.
If we wanted a class type to be used in the same places
(which would be beneficial for C-interoperable code),
we would have to add a significant amount of operator overloads
and user-defined conversion functions:
- There are conversions to/from floating-point types and other integral types.
- There are conversions to/from enumeration types.
- There are conversions to/from pointers, at least for `_BitInt`s of the same width as `uintptr_t`.
- Integers can be used to add offsets onto pointers, and by proxy, in the subscript operator of builtin arrays.
- Arithmetic operators can be used to operate between any mixture of arithmetic types, such as `_BitInt(32) + float`.
Any discrepancies would lead to some code using bit-precise integers behaving differently in C and C++, which is undesirable.
Furthermore, the `wb` literal suffix is fairly complicated to implement as a library feature because the resulting type depends on the numeric value of the literal. This means it would presumably be implemented as a literal operator template which computes the required width from the literal's characters at compile time.
Seeing that properly emulating C's behavior for `_BitInt` (and its suffixes) requires a mountain of complicated operator overload sets, user-defined conversion functions, converting constructors, and user-defined literals, it seems unreasonable to go in this direction.
A major selling point of a library type is that library types have more teachable interfaces,
since the user simply needs to look at the declared members of the class
to understand how it works.
If the interface is a record-breaking convoluted mess,
this benefit is lost.
If we choose not to add all this functionality, then we lose a large portion of C compatibility. Either option is bad, and making `_BitInt` a fundamental type seems like the only way out.
4.1.5. Constructors cannot signal narrowing
Some C++ users prefer list initialization because it prevents narrowing conversions; for example, list-initializing an `int` from a wider integer type is ill-formed, which catches some mistakes and questionable code.
This would not be feasible if `_BitInt` was a library type because narrowing cannot be signaled by constructors.
Consider that `bit_int` and `bit_uint` should have a non-explicit constructor (template) accepting `int` (and other integral types) to enable compatibility in situations like passing a standard integer to a function taking a bit-precise integer. If such a constructor existed, list-initializing a small `bit_int` from a wider standard integer would not raise any errors: the code simply calls a constructor, and while the initialization is spiritually narrowing, no narrowing conversion actually takes place.
In conclusion, if `_BitInt` was a library type, C++ users who use this style would lose what they consider a valuable safety guarantee.
4.1.6. Tiny integers are useful in C++
In some cases, tiny types may be useful as the underlying type of an enumeration, e.g. an enumeration with underlying type `unsigned _BitInt(2)` and four enumerators. By using `unsigned _BitInt(2)` rather than a standard integer type, every possible value has an enumerator. If we used e.g. `unsigned char` instead, there would be 252 other possible values that simply have no name, and this may be detrimental to compiler optimization of `switch` statements etc.
4.1.7. Special deduction rules
While this proposal focuses on the minimal viable product (MVP), a possible future extension would be new deduction rules which allow a `_BitInt(N)` function parameter to deduce `N` from the width of a standard integer argument. Being able to make such a call is immensely useful because it would allow for defining a single function template which may be called with every possible signed integer type, while only producing a single template instantiation for `int`, `long`, and `long long`, as long as those three have the same width.
The prospect of being able to write bit manipulation utilities that simply accept any `_BitInt(N)` is quite appealing.
If `_BitInt` was instead surfaced only through a class type, this would not work because template argument deduction would fail, even if there existed an implicit conversion sequence from the standard integer type to that class type. Choosing a class type may thus be shutting the door on this mechanism forever.
4.1.8. Special overload resolution rankings
Yet another possible future extension would be rankings for overload resolution that take integer width into account.
For example, a call that is currently ambiguous between overloads taking different `_BitInt` widths could become valid if the narrower `_BitInt` was considered a better match on the basis that its width is closer to that of the argument type. Further disambiguation could be applied if two candidate parameter types had the same width.
These overload ranking rules would be difficult or impossible to define using a class type. Of course, they are not proposed, and it's not certain whether such rules are desirable to have, but it would be unfortunate to shut the door on these possible features forever.
4.1.9. Quality of implementation requires a fundamental type
While a library type gives the implementation
the option to provide no builtin support for bit-precise integers,
to achieve high-quality codegen,
a fundamental type is inevitably needed anyway.
For example, when a bit-precise integer is divided by a constant, the division can be optimized to a fixed-point multiplication, which is much cheaper: for such an operation, Clang emits a short sequence of multiply and shift instructions. Basically, the division is rewritten as a multiplication by a precomputed constant followed by a shift.
This optimization is called strength reduction
and may lead to dramatically faster code,
especially when the hardware has no direct support for integer division.
Similarly, multiplication can be strength-reduced to bit-shifting
when a factor is a power of two,
remainder operations can be reduced to bitwise AND when the divisor is a power of two, etc.
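The same transformation can be illustrated portably with standard integers; the constant below is the well-known fixed-point reciprocal of 10, i.e. the smallest integer no less than 2^35 / 10:

```c
#include <stdint.h>

// division by 10, strength-reduced by hand to a multiply and a shift;
// (x * 0xCCCCCCCD) >> 35 equals x / 10 for every 32-bit x
uint32_t div10(uint32_t x) {
    return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}
```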
Performing strength reduction requires the compiler to be aware that a division is taking place, and this fact is lost when division is implemented in software, as a loop which expands to hundreds of IR instructions when unrolled.
Furthermore, the compiler frontend needs to understand certain operations
to warn about obvious mistakes such as division by zero,
shifting by an overly large amount,
producing signed integer overflow unconditionally, etc.
Nor can contract assertions on the operators of a library type be used to achieve this, because numerics code needs to have no hardened preconditions and no contracts, for performance reasons.
Last but not least,
a fundamental type is needed to speed up constant evaluation.
Something like integer division between two `_BitInt(128)` operands may be much faster as a compiler-builtin operation compared to constant-evaluating a "software division" loop with the 128 iterations necessary to implement binary division.
If we accept the premise that a fundamental type is needed anyway (possibly as an implementation detail of a class template), then the class template is actively harmful bloat:
- Any arithmetic operation needs to go through overload resolution, competing with countless other `operator+`s (there are many in the standard library already). Even if implementers special-case these operations to circumvent the (usually awful) diagnostic quality of a failed call to `operator+`, there remains substantial cost: overload resolution is expensive.
- Every distinct `bit_int<N>` and `bit_uint<N>` would be a separate instantiation of a relatively large class template, which would undoubtedly add compilation cost.
- Invocations of member functions or operator overloads may add cost to debug builds and constant evaluation.
4.2. Why the _BitInt keyword spelling?
I also propose to standardize the keyword spellings `_BitInt(N)` and `unsigned _BitInt(N)`. When the alias template was still proposed, I considered this a "C compatibility spelling" rather than the preferred one which is taught to C++ developers. Now, it is the only spelling of bit-precise integers in this paper, which should be motivation enough on its own.
While a similar approach could be taken as with compatibility macros for other C keywords, macros cannot be exported from modules, and macros needlessly complicate the problem compared to a keyword.
Furthermore, to enable compiling shared C and C++ headers, all of the spellings `_BitInt(N)`, `signed _BitInt(N)`, and `unsigned _BitInt(N)` need to be valid. This goes far beyond the capabilities that a compatibility macro can provide without language support. If the macro simply expanded to a class template specialization, this may result in ill-formed code such as `unsigned _BitInt(N)`. The most plausible fix would be to create an exposition-only keyword spelling in the language to enable such a macro, which makes our users raise the question:
Why is there a compatibility macro for an exposition-only keyword spelling?! Why are we making everything more complicated by not just copying the keyword from C?! Why is this exposition-only when it's clearly useful for users to spell?!
The objections to a keyword spelling are that it's not really necessary, or that it "bifurcates" the language by having two spellings for the same thing, or that those ugly C keywords should not exist in C++. Ultimately, it's not the job of WG21 to police code style; the keyword spelling should be standardized for interoperability.
The `_BitInt` spelling is useful for writing C/C++-interoperable code, and C compatibility is an important design goal.
Even if compatibility macros exist in some code bases, the proposal itself should standardize the keyword spelling. Since there is no clear technical benefit to a macro, the keyword is the only logical choice.
Clang already supports the `_BitInt` keyword spelling in C++ as a compiler extension, so this is standardizing existing practice.
4.3. Underlying type of enumerations
C code declaring an enumeration whose underlying type is bit-precise, such as `enum E : _BitInt(8)`, is not valid C23, but is valid in C2y following the acceptance of [N3705]. There is no obvious reason why `_BitInt` must not be a valid underlying type, neither in C nor in C++. For C++, it seems better to simply allow bit-precise integers in this context because it is useful; see §4.1.6. Tiny integers are useful in C++.
Also note that as adopted in [N3705], bit-precise integers should only be the underlying types of enumerations when the user explicitly specifies this with an enum-base, as in `enum E : _BitInt(8)`.
As adopted in [N3705] and as in the case of bit-precise bit-fields, integer promotion should not take place for enumerations whose underlying type is bit-precise. If the implementation-defined underlying type of enumerations could be chosen to be bit-precise, this would make it implementation-defined whether integer promotion takes place, by proxy. It would also be a compatibility pitfall; C requires bit-precise underlying types to be specified explicitly, so any choice the implementation makes could interfere with future standardization.
4.4. Should bit-precise integers be optional?
As in C, an implementation is only required to support a `BITINT_MAXWIDTH` of at least `ULLONG_WIDTH`, which has a minimum of 64. This makes `_BitInt` a semi-optional feature, and it is reasonable to mandate its existence, even on freestanding platforms. Of course, this has the catch that `_BitInt` may be completely useless for tasks like 128-bit computation.
As unfortunate as that is, the MVP should include no more than C actually mandates.
Mandating a greater minimum width could be done in a future proposal.
4.5. _BitInt(1)
C23 does not permit `_BitInt(1)`, but does permit `unsigned _BitInt(1)`, mostly for historical reasons (C did not always require two's complement representation for signed integers). This is an irregularity that could make generic programming harder in C++.
However, this restriction is being lifted in C2y;
see [N3747] "Integer Sets, v5".
That proposal has been approved but not yet merged into the C2y draft at the time of writing.
It makes `_BitInt(1)` a valid type, and `0wb` is changed to be of type `_BitInt(1)` rather than `_BitInt(2)`. It also contains some practical motivation for why a single-bit `_BitInt` should be permitted. If `_BitInt(1)` was allowed, it would be able to represent the values 0 and -1, just like a signed single-bit bit-field.
4.6. Undefined behavior on signed integer overflow
I propose to perpetuate bit-precise integers having undefined behavior on signed integer overflow, just like `int`, `long`, etc.
This has a few reasons:
- bit-precise integers have undefined overflow in C, so this is what users are used to.
- "Solving" signed integer overflow for bit-precise integers is not part of the MVP. Undefined behavior can always be defined to do something else, so there is no urgent need for this paper to address this issue, rather than solving it in a follow-up paper.
- Signed integer overflow having undefined behavior is a much broader issue that should be looked at in general, for all integer types, not just bit-precise integer types. Perhaps hardened implementations could have wrapping overflow with erroneous behavior. In any case, the problem exceeds the scope of the paper.
- It is highly unusual that users would expect signed integer overflow to be well-behaved, such as having wrapping behavior. Adding two positive numbers and obtaining a negative number is not typically useful.
- The undefined behavior here is useful. It allows for optimizations such as converting `x + 3 < 0` into `x < -3`.
That being said, much of the feedback surrounding bit-precise integers revolved around signed integer overflow. If we were to make signed integer overflow not undefined for bit-precise integers, there are two options that may find consensus:
- Make signed integer overflow wrapping. In other words, most operations would be performed as if by casting to the corresponding unsigned type, performing the operation, and casting back.
- Make signed integer overflow wrapping and erroneous. This is mirroring Rust's behavior, and would typically be implemented by detecting overflow on debug builds and in constant evaluation, but ignoring it and letting it wrap in release builds.
4.7. Permissive implicit conversions
Just like any other integral type, the proposal makes bit-precise integers quite permissive when it comes to implicit conversions. This is disappointing to anyone who wants bit-precise integers to be a much "stricter" or "safer" alternative to standard integers, but it is arguably the better design for various reasons.
4.7.1. C compatibility
Firstly, the point of perpetuating implicit conversions is to mirror the C semantics as closely as possible, which leads to few or no surprises when porting code between the languages, or when writing C-interoperable headers.
If we look at how C users use _BitInt (e.g. via a GitHub code search for the keyword), we find that implicit conversions are relied upon pervasively:

```c
// mixing signed and unsigned bit-precise integers
unsigned _BitInt(128) max128s = 0x7FFF'FFFF'FFFF'FFFF'FFFF'FFFF'FFFF'FFFFwb;
// mixing bit-precise and standard integers
unsigned _BitInt(4) a = 1u;
// mixing bit-precise and standard integers of different signedness
unsigned _BitInt(total) bit = 1;
// ... including cases where initialization does not preserve values
unsigned _BitInt(3) max3u = -1;
```
If we were to make implicit conversions much more restrictive on the C++ side, it would become very easy to slip up and accidentally write a header that does not also compile in C++.
4.7.2. Difficulty of carving out exceptions in the language
Writing C++ code involving bit-precise integers would be quite annoying and would "flag" many harmless cases if the rules were too strict. Suppose that the conversion from int to a bit-precise integer type was unconditionally ill-formed. A plain literal like 1 is "incorrectly signed" for an unsigned _BitInt target, and the conversion from int to a narrower bit-precise type is not value-preserving in general, but writing code like this is perfectly reasonable. The workaround would be to use correctly typed literals, such as 1uwb. To combat this problem, it would be necessary to carve out various special cases. For example, permitting value-preserving conversions with constant expressions would prevent the example above from being flagged.
However, such special cases are insufficient to cover all harmless cases. Even though a runtime value is not a constant expression, initialization from it will often "just work" no matter what integer type the receiving code accepts. Existing C++ code bases that have not used conversion-related warning flags would also face an avalanche of new diagnostics.
Furthermore, discrepancies between the standard integers and bit-precise integers would make it much harder to write generic code: a function template can work for every standard integer type, but the instantiation would be ill-formed for a bit-precise integer type with restrictive implicit conversions. Literally every statement of such a template may fail to compile when the template parameter is a bit-precise integer type, depending on how strict implicit conversions are. I conjecture that there are vast amounts of templates like this. To accommodate bit-precise integers in such a function, a rewrite is necessary, spelling out each mixed-type operation with explicit casts or correctly typed constants. Even if the operand type is signed instead of unsigned, the computation produces a mathematically identical result.
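To illustrate the kind of template at risk, here is a hypothetical generic function, written with standard types only; under restrictive conversion rules, each commented line could become ill-formed when T is a bit-precise integer type:

```cpp
// Hypothetical generic code: harmless today for every standard integer type.
template <typename T>
T sum_halves(T x) {
    T half = x / 2;    // mixes T with the int literal 2
    T rest = x - half; // fine on its own
    T result = 0;      // int -> T conversion
    result = result + half + rest;
    return result;
}
```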
4.7.3. Picking some low-hanging fruits
While conversions between bit-precise integers and other signed or unsigned integer types could be difficult to restrict due to the reasons above, other conversions are much rarer and could more easily be restricted:
- Conversions between bit-precise integers and bool.
- Conversions between bit-precise integers and character types.
- Conversions between bit-precise integers and floating-point types.
It would be reasonable to ban these conversions unconditionally because they are likely to be category errors.
I was fixing a couple of minor bugs in a program I've been working on, when I made the mistake of typing cout << string('\n', 1); instead of cout << string(1, '\n');. I didn't get any compile errors and the program's reaction gave me a bit of a laugh. Instead of the blank line I wanted to put in, I got :):):):):):):):):):) (10 of them). It just made me wonder as a relative C++ beginner what other "easter eggs" are there that people might feel like sharing.
It turns out that this is not an "easter egg"; it just results in the Windows terminal displaying the character '\x01' as a smiley. The string(count, ch) overload is called, and since '\n' and 1 can be converted to the count and character parameter types without any change in value, compilers generally don't raise a warning, even with aggressive warning flags enabled.
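The mix-up is reproducible with entirely standard C++:

```cpp
#include <string>

// string('\n', 1): '\n' (value 10) converts to the count parameter and
// 1 converts to char, so the (count, ch) constructor builds ten '\x01'
// characters -- the glyph some terminals render as a smiley.
const std::string oops('\n', 1);  // ten '\x01' characters
const std::string meant(1, '\n'); // one newline, as intended
```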
The least harmful of these conversions is a value-preserving conversion from a bit-precise integer to a floating-point type. However, at best, these lack clarity of intent.
When the user calls a <cmath> function with an integer operand, are we sure that this decision was made intentionally? Is the author unaware that there is a separate function giving the integer result, or do they actually need the fractional part, and that is why they called the floating-point overload? Even if the author wrote the call deliberately, this could plausibly be done due to performance considerations. Similarly, calling a <cmath> function with an integer operand could be a major performance bug on a 32-bit platform with 32-bit float and 64-bit double, considering that the integer argument converts to double, making this equivalent to calling the more expensive double overload. Perhaps calling the float overload was intended. Conversely, if the author calls a rounding function with an integer operand, the integer → floating-point conversion may be value-preserving, but such a call is almost certainly a mistake; judging by the operand, the author likely expected an effect that the call does not have.
4.7.4. Conclusion on implicit conversions
In conclusion, discrepancies between the standard integers and bit-precise
integers are undesirable;
they introduce a lot of unnecessary problems.
There are many harmless operations where mixing signedness or mixing bit-precise and standard integers is okay, and not every user wants warnings, let alone errors, for these.
Especially errors would make it hard to write headers that compile both in C and in C++.
The final nail in the coffin is that if the user wants implicit conversions to be restricted, they have the freedom to add those restrictions via compiler warnings and linter checks. Having these restrictions standardized in the language robs the user of choice. If C++26 profiles make progress, it is likely that C++ will have profiles which restrict implicit conversions, giving users a standard way to opt into diagnostics.
This revision keeps implicit conversions permissive. If desired, conversions described in §4.7.3. Picking some low-hanging fruits can still be restricted in a follow-up paper.
4.8. Raising the BITINT_MAXWIDTH
The proposal currently does not seek to increase the minimum BITINT_MAXWIDTH beyond what C offers. That is, BITINT_MAXWIDTH may be as low as 64.
I do not consider an increase of the maximum to be part of the MVP.
It's something that can always be done later, if desirable,
without any breaking changes.
It also should be stated that increasing the minimum BITINT_MAXWIDTH is not really within the power of WG21, and not even within the power of compiler vendors. Clang supports _BitInt widths of up to 8388608, but only enables this for certain ABIs.
For example, the x86-64 psABI
defines an ABI for any bit-precise integer width,
so the full width is available.
However, the "Basic" C ABI for WebAssembly (which Clang uses at the time of writing) has the following limitation:
_BitInt(N) types are supported up to width 128 and are represented as the smallest same-signedness integer type with at least as many bits.
Consequently, BITINT_MAXWIDTH is set to 128 when compiling with Clang for WebAssembly.
WG21 can define the minimum BITINT_MAXWIDTH as whatever they want to; it is of no consequence because compiler vendors are not going to make a width available when there is no platform ABI for it.
If compiler vendors did that,
there would be a risk of a massive future ABI break in order to comply with the system ABI,
once defined.
Without a single platform ABI, there would also be no portable way for code generated
by different compilers to interoperate,
such as compiling a C library with GCC and using it from Clang-compiled C++ code.
An increase to the minimum BITINT_MAXWIDTH is political posturing.
That does not mean that it's entirely pointless.
If C++ defined a substantially greater minimum, this would motivate platforms to define an ABI for large bit-precise integers.
4.8.1. Possible increased BITINT_MAXWIDTH values
Firstly, it should be noted that [P3140R0] got substantial criticism
just for attempting to standardize 128-bit integers for embedded developers.
As a compromise, it may be reasonable to increase the minimum BITINT_MAXWIDTH only for hosted implementations, not for freestanding implementations.
That being said, there are two plausible increased minimums:
- 128. Many platform ABIs (see example above) already define an ABI for _BitInt(128). 128-bit integers have been provided by compilers for a long time now, at least by GCC and Clang (__int128). There are heaps of motivation (see [P3140R0]) for 128-bit computation. The calling conventions are also relatively obvious for 64-bit platforms: pass via a pair of 64-bit integers.
- 32767. Both GCC and Clang already support this width. Some cryptographic use cases like future-proof RSA computations need 8192 bits of key size, and at least double that for modular arithmetic. It is unlikely that a cryptographic library needs 4096 bits but does not need 8192 bits at any point, but likely that 32767 is sufficiently large, even in the next few years. Any more than 32767 becomes problematic for the standard library because int is no longer capable of representing the width on 16-bit platforms; this breaks functions such as std::popcount (which returns int). Major design adjustments would be needed to address this problem.
Beyond that, extreme widths may be tricky to use. When working with Clang's _BitInt(8388608), a single operation could result in stack overflow because the result is 1 MiB large. The user would have to carefully ensure that all objects (including temporaries) have static or dynamic storage duration (i.e. use dynamic allocation or global variables).
For these extreme sizes, a dynamically sized integer is more ergonomic.
Therefore, setting the minimum to millions feels unmotivated.
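The storage-duration concern can be illustrated with an ordinary 1 MiB object standing in for Clang's largest _BitInt (the names below are hypothetical):

```cpp
#include <array>
#include <memory>

// Stand-in for a _BitInt(8388608) object: 8388608 bits = 1 MiB.
using Huge = std::array<unsigned char, 8388608 / 8>;

// Declaring "Huge h;" inside a function risks stack overflow; heap
// allocation sidesteps the problem, as would a global variable.
std::unique_ptr<Huge> make_huge() {
    return std::make_unique<Huge>();
}
```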
4.9. Template argument deduction
Calling a function template that has a parameter of type _BitInt(N), where N is a non-type template parameter, should be valid with an argument of any bit-precise integer type. This would be a consequence of deduction of N from the type _BitInt(N) being valid.
This behavior is already implemented by Clang as a C++ compiler extension,
and makes deduction behave identically to deducing sizes of arrays.
In general, the aim is to make the deduction of widths
as similar as possible to arrays because users are already familiar with the latter.
It is also clearly useful because it allows writing templates that can accept a _BitInt of any width. This behavior is part of the core design, and it would be quite surprising to users if such deduction was not possible. If deducing N from an array type T[N] is possible, why would it not be possible to deduce N from _BitInt(N)?
One thing deliberately not allowed is deducing the width of a _BitInt object from its initializer. This shorthand construct (which is similar to class template argument deduction) is not part of the MVP and, if desired, should be proposed separately.
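The array analogy that the design leans on can be demonstrated with standard C++ today; the proposal makes deducing N from _BitInt(N) behave the same way as deducing the bound below:

```cpp
#include <cstddef>

// Deducing an array bound from a reference parameter; _BitInt width
// deduction (template <int N> void f(_BitInt(N));) is meant to mirror this.
template <std::size_t N>
constexpr std::size_t bound(const int (&)[N]) {
    return N;
}

constexpr int arr[5]{};
static_assert(bound(arr) == 5);
```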
4.10. No preprocessor changes, for better or worse
To my understanding, no changes to the preprocessor are required.
[N2763] did not make any changes to the C preprocessor either.
In most contexts, integer literals in the preprocessor are simply part of a pp-number, so nothing needs to change. However, within the controlling constant expression of an #if directive, all signed and unsigned integer types behave like std::intmax_t and std::uintmax_t, respectively ([cpp.cond]), which may be surprising.
Say std::intmax_t is a 64-bit signed integer (which it is on many platforms): an #if condition containing a wb-suffixed literal wider than 64 bits is then ill-formed, because the literal is of a bit-precise integer type, which behaves like std::intmax_t within #if. Since the value does not fit within std::intmax_t, the literal is ill-formed ([lex.icon] paragraph 4).
The current behavior could be seen as suboptimal because it makes bit-precise integers dysfunctional within the preprocessor. However, the preprocessor is largely "owned" by C, and any fix should go through WG14. In any case, fixing the C preprocessor is not part of the MVP.
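The [cpp.cond] rule can be observed with standard literals; this sketch assumes a 64-bit std::intmax_t, as the surrounding text does:

```cpp
// Within #if, integer arithmetic behaves as if performed in [u]intmax_t.
// Here -1 converts to the maximum uintmax_t value, so the comparison
// holds (assuming 64-bit intmax_t).
#if 0xFFFF'FFFF'FFFF'FFFF == -1
constexpr bool if_used_uintmax = true;
#else
constexpr bool if_used_uintmax = false;
#endif
```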
4.11. Padding in _BitInt
It is worth mentioning that _BitInt types may have padding bits,
which the implementation can avoid for standard integer types
by choosing padding-free widths for them.
This is a known fact,
and there is no desire to prohibit widths that would have padding bits.
A possible future direction could be to mandate sign extension for those padding bits. The standard currently does not mandate padding bits to have specific values in most cases, but that may be useful for _BitInt. A narrow bit-precise integer could then be correctly converted to a same-sized integer type using std::bit_cast if sign extension in the padding bits was guaranteed. That is, the conversion would have the same effect as an ordinary integral conversion.
The problem with such a guarantee is that C does not provide it, so when calling a C function that takes e.g. a narrow _BitInt as a parameter, it would be impossible for a C++ program to guarantee that the padding bits have been correctly set by the C program.
This guarantee can only be provided by C and C++ in tandem,
is fairly ambitious,
requires ABI changes,
and needs to be proposed separately.
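What "sign extension into the padding bits" buys can be sketched with a 7-bit value stored in an 8-bit object, using standard types only (the helper names are hypothetical):

```cpp
#include <cstdint>

// A 7-bit signed value stored in an 8-bit object leaves one padding bit.
// Without a guarantee about that bit, a reader must re-extend the sign:
std::uint8_t store7(std::int8_t v) {
    return static_cast<std::uint8_t>(v) & 0x7F; // padding bit cleared
}

std::int8_t load7(std::uint8_t raw) {
    // Shift the sign bit of the 7-bit field into the top, then shift
    // back down arithmetically (well-defined since C++20).
    return static_cast<std::int8_t>(static_cast<std::uint8_t>(raw << 1)) >> 1;
}
```

If sign extension in the padding bit were guaranteed, the load could be a plain bit copy instead.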
5. Library design
In summary, the design for the standard library is to support _BitInt where C already supports it, and in a handful of fundamental facilities. Anywhere else, support is prevented.
5.1. Preventing library support for _BitInt
When discussing library design,
it is important to understand that the vast majority of support for bit-precise integers
"sneaks" into the standard without any explicit design changes or wording changes.
This happens because bit-precise integers are proposed to be signed and unsigned integer types, so they would be supported by any facility that supports integer types generically.
Even if the wording effort to support bit-precise integers is minimal in some cases,
and even if the implementation effort boils down to adjusting a template constraint,
such implicit support results in an explosion of the test matrix.
For example, a numeric function template may be implemented generically for all integer types, but tests still need to be written to ensure that it works for any _BitInt width. Furthermore, R3 of this paper hit several design problems, like whether to still pass huge _BitInts by value, and how to extend functions that are not templates in the standard library, but overload sets of non-template functions. Trying to word and implement support for all of the standard library in one paper is simply too much.
LEWG addressed this concern as follows:
POLL: We should prevent library support for _BitInt
SF 5, F 9, N 2, A 3, SA 1. Outcome: consensus in favor
Since LEWG voted to prevent library support for bit-precise integers,
we must alter the constraints of various library components to disallow bit-precise integers.
To be clear, this does not mean that we prevent bit-precise integers from being stored in containers or used with anything else that works generically for copyable types, movable types, etc. Such a restriction would be totally toothless anyway because it can be bypassed by wrapping the _BitInt in a class type. Only numeric constraints are adjusted.
The interesting question is where we do support bit-precise integers despite the LEWG vote:
- std::is_integral_v<_BitInt(N)> is true, directly contradicting an LEWG poll
- std::make_signed and std::make_unsigned support bit-precise integers
- std::numeric_limits<_BitInt(N)> is specialized
- <cmath> functions support bit-precise integers as arguments
- <stdbit.h> functions support bit-precise integers in those cases where C23 supports them too
Support can still be extended throughout the standard library in the future; the strategy for C++29 can be to add facilities paper-by-paper.
5.2. Broadening is_integral
One controversial aspect of this paper is that std::is_integral_v<_BitInt(N)> is proposed to be true.
LEWG voted for the opposite during the 2026-03 Croydon meeting:
POLL: std::is_integral_v<_BitInt(N)> should be false
SF 7, F 4, N 5, A 2, SA 0. Outcome: consensus in favor
This decision was motivated by the fact that otherwise,
large amounts of existing code
(not just the standard library, but also user code)
would implicitly opt into supporting bit-precise integers,
despite never being written with that intent.
It is even theoretically possible that this results in correctness regressions due to instantiating templates with e.g. a small signed bit-precise integer type and running into signed integer overflow that would have been prevented by promotion to int.
Nonetheless, this poll result caused widespread backlash from
members of the committee, Clang implementers, and the C++ community at large.
Some described the decision as absurd.
There are also many practical problems not discussed during the LEWG session
which likely would have prevented consensus from being reached.
5.2.1. Reasons for making std::is_integral_v<_BitInt(N)> true
- First and foremost, it is intuitive for bit-precise integers to be integral types. This is what users expect.
- A substantial amount of committee members don't believe LEWG is entitled to a decision on this type trait in the first place. That is because the type trait arguably just exposes the fact that _BitInt is an integral type within the core language, so only EWG, not LEWG, can decide the result of std::is_integral; LEWG does not decide what is an integral type in the core language. I personally don't believe that LEWG is powerless in this regard, but the LEWG decision does cause procedural controversy.
- Classifying _BitInt as an integral type is symmetrical with the taxonomy of integer types in C, where bit-precise integer types are integer types.
- Any code exclusively constrained on std::is_integral_v is likely under-constrained anyway because it opts into char32_t, const volatile bool, and other types that might not match the user's expectation of integral types. _BitInt would make this problem worse, but it does not create a new problem.
- libc++ already makes std::is_integral_v<_BitInt(N)> true, so what LEWG wants is silently altering the behavior of existing traits. If LLVM implementers have already shipped this behavior in production code, despite the potential impact on code constrained on std::is_integral_v, why is LEWG effectively claiming that it's not feasible to ship due to user impact?
- For the most common use cases of bit-precise integers, such as _BitInt(128), there really isn't a problem caused by implicit support in the standard library or in third-party code. In fact, std::is_integral_v<__int128> has already been decided to be true by libstdc++ and libc++, despite the fact that this could break some third-party code that assumes that integers are no wider than 64 bits and no wider than std::intmax_t. The real problems arise only for small bit-precise integers such as _BitInt(1) (possibly unsigned) and for huge bit-precise integers such as _BitInt(32'768).
- Making std::is_integral_v<_BitInt(N)> false is ineffective when considering that, by design, the category of integral types has always been extendable using extended integer types. These are not compiler extensions, but rather an open set of optional types that can even be exposed using aliases such as std::int128_t. Even more extremely, the implementation can decide to add a set of extended integer types such as _ExtInt(N) with essentially identical behavior to _BitInt(N), and the standard would require std::is_integral_v<_ExtInt(N)> to be true. In fact, _ExtInt(N) is currently an alternative spelling for _BitInt(N) in Clang. It would be inconsistent to say that it's fine for an arbitrary open set of implementation-specific types to be integral while saying that this is not fine for _BitInt(N).
- If _BitInt(N) was to receive more standard library support in the future, to a point where std::is_integral_v<_BitInt(N)> being false becomes undesirable from LEWG's perspective, it would be a breaking change to make it true later. We can decide to make it whatever we want right now, but this decision is likely irreversible. In the long term, it would make the language design terribly confusing if _BitInt had extensive core language and standard library support, but was not considered an integral type. This is the worst possible design outcome.
- C++ has always reserved the right to extend type sets and continues to do so. For example, the set of floating-point types was extended to include extended floating-point types such as std::float128_t and std::bfloat16_t, which could also break user expectations regarding floating-point formats. It is also not unlikely that decimal floating-point types will receive a standard alias in the future, such as std::decimal64_t to match _Decimal64 in C. Are we then going to say that decimal floating-point types are not floating-point types?! Note that the implementation can already provide _Decimal64 as an extended floating-point type right now, so this situation is exactly the same as with providing _ExtInt(N) as an extended integer type: it can technically be done already without being classified as a compiler extension, and the type traits match the user expectations in that case.
5.2.2. Conclusion
Given the above reasons, I cannot in good conscience support the LEWG decision to make std::is_integral_v<_BitInt(N)> false. It requires relitigation, especially since much of the rationale above was not discussed in Croydon, such as the existing libc++ behavior.
5.3. make_signed and make_unsigned
To prevent breaking existing code, the behavior of std::make_signed and std::make_unsigned needs to be made future-proof. The rank of int is greater than the rank of a bit-precise integer type with the same width (see [conv.rank]). Therefore, under the current wording, make_unsigned_t<int> would need to be a bit-precise unsigned integer type, since [meta.trans.sign] produces the unsigned integer type with smallest rank ([conv.rank]) for which sizeof(T) == sizeof(type).
Furthermore, the current wording would give the user an implementation-defined type when the argument is itself a bit-precise integer type: the result could be either the corresponding bit-precise integer type or an extended integer type with lower conversion rank than it.
However, for simplicity, make_signed and make_unsigned should always produce a bit-precise integer type when they are fed a bit-precise integer type or an enumeration whose underlying type is a bit-precise integer.
Overall, make_signed can be made future-proof with the following set of rules:
- For signed integers and unsigned integers, it does the "obvious thing".
- For types whose underlying type is a bit-precise integer, it behaves like make_signed_t<underlying_type_t<T>>. This only affects enumeration types, since integral types like char32_t are currently specified not to have a bit-precise underlying type.
- For any other integral type (char32_t, other enumerations, etc.), it denotes the smallest standard or extended signed integer type that fits that integral type.
make_unsigned should behave correspondingly.
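A user-space sketch of these rules, using standard types only (the trait name is hypothetical and not proposed wording):

```cpp
#include <type_traits>

// make_signed-like trait: enumerations go through their underlying type
// first; everything else falls back to std::make_signed. Illustrative only.
template <typename T, typename = void>
struct make_signed_like : std::make_signed<T> {};

template <typename T>
struct make_signed_like<T, std::enable_if_t<std::is_enum_v<T>>>
    : std::make_signed<std::underlying_type_t<T>> {};

enum class Color : unsigned char { red };
static_assert(std::is_same_v<make_signed_like<Color>::type, signed char>);
static_assert(std::is_same_v<make_signed_like<unsigned long>::type, long>);
```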
5.4. The problem of representing widths as int
A pre-existing and prolific issue in the C++ standard library is the use of int to represent properties of integers, such as std::numeric_limits<T>::digits or the return type of std::popcount. This has never been a practical issue before, but it is now theoretically possible that an implementation may want to provide bit-precise widths of 32768 or greater. int is only guaranteed to have the range of a 16-bit signed integer, so it may not be able to represent such huge widths.
The easiest solution is to ignore the problem;
this is proposed.
It would require substantial design changes to these interfaces to fix the issue. Furthermore, the practical utility of such enormous widths is somewhat questionable, especially on 16-bit architectures (which are typically embedded architectures). On 32-bit architectures and above, int is typically 32-bit, so this problem doesn't exist.
5.5. Preventing ranges::iota_view ABI break
Due to the current wording in [range.iota.view] paragraph 1, adding bit-precise integers or extended integers of greater width potentially forces the implementation to redefine the difference type of ranges::iota_view. Changing that type would be an ABI break. This problem is similar to historical issues with std::intmax_t, where adding 128-bit integers would force the implementation to redefine the former type.
To prevent this, the proposal tweaks the wording in § [range.iota.view] so that new extended or bit-precise integers may be added. Dealing with extended integer types extends slightly beyond the scope of the MVP, but it would be silly to leave the wording in an undesirable state, where adding a 128-bit extended integer still forces an ABI break.
5.6. Preserving integer-class types
Another very similar wording issue to the one in the previous section
arises for the so-called "integer-class types"
in the standard library, in [iterator.concept.winc] paragraph 3.
Signed-integer-like types are either signed integral types,
or signed-integer-class types.
Integer-class types are required to be wider than every integral type of the same signedness, so introducing huge bit-precise integers means that e.g. Microsoft's 128-bit integer-class types are no longer integer-class types, and may no longer be used in ranges::iota_view.
5.7. Bit-precise size_t, ptrdiff_t
As in C, the proposal allows for size_t and ptrdiff_t to be bit-precise integers, which is a consequence of sizeof and pointer subtraction potentially yielding a bit-precise integer. We don't need to explicitly disallow this; it is effectively disallowed because the lack of support in the standard library would result in a dysfunctional implementation if size_t, ptrdiff_t, or similar aliases were bit-precise integers.
5.8. Feature testing
After consulting with some LWG and SG10 experts, I have opted to add only two feature-test macros: one for the core feature, and one for the standard library. While more granular feature-testing could be useful considering that the feature is quite large, there seems to be little enthusiasm for it.
5.9. Using bit-precise integers in <cmath> functions
The proposal adds support for using bit-precise integers in all <cmath> functions. This is done simply for consistency with C: after some consulting with WG14 members, I am under the impression that C's type-generic math macros deliberately support all integer types (including bit-precise integers), not just as the result of defective wording. Consequently, a bit-precise integer can be passed both to the C type-generic macro as well as to the regular C++ function.
5.10. Note on alias templates for _BitInt
LEWG decided against these templates, so they are no longer part of this paper:
POLL: We should provide alias templates for _BitInt(N) (i.e. std::bit_int and std::bit_uint, possibly under a different name, e.g. std::cbit_int)
SF 0, F 3, N 8, A 3, SA 5. Outcome: no consensus
6. Implementation experience
_BitInt, formerly known as _ExtInt, has been a compiler extension in Clang for several years now. The core language changes are essentially standardizing that compiler extension.
When compiling using Clang and libstdc++, one gets virtually the proposed behavior. That is, just the core language feature, with minimal standard library support.
is .
7. Impact on the standard
7.1. Impact on the core language
The core language changes essentially boil down to adding the _BitInt(N) types and the wb/uwb literal suffixes.
7.2. Impact on the standard library
As explained in §5. Library design, various constraints are added throughout the standard library to prevent support for bit-precise integers.
8. Wording
The following changes are relative to [N5032] with the changes from Croydon motions applied.
8.1. Core
It is an open question what the quoted (prose) spelling of bit-precise integer types should be. The current spelling is e.g. “_BitInt of width N”, which is fairly similar to other code-heavy spellings in the wording. However, this is questionable because _BitInt is not valid C++ in itself; _BitInt(N) is. An alternative would be a pure prose spelling, like “bit-precise unsigned integer of width N”, which is a bit more verbose. There is no strong author preference.
[lex.icon]
In [lex.icon], change the grammar as follows:
integer-suffix :
- unsigned-suffix long-suffixopt
- unsigned-suffix long-long-suffixopt
- unsigned-suffix size-suffixopt
- unsigned-suffix bit-precise-int-suffixopt
- long-suffix unsigned-suffixopt
- long-long-suffix unsigned-suffixopt
- size-suffix unsigned-suffixopt
- bit-precise-int-suffix unsigned-suffixopt
unsigned-suffix : one of u U
long-suffix : one of l L
long-long-suffix : one of ll LL
size-suffix : one of z Z
bit-precise-int-suffix : one of wb WB
Change table [tab:lex.icon.type] as follows:
| Suffix | Decimal literal | Integer literal other than decimal |
|---|---|---|
| none | int; long int; long long int | int; unsigned int; long int; unsigned long int; long long int; unsigned long long int |
| u or U | unsigned int; unsigned long int; unsigned long long int | unsigned int; unsigned long int; unsigned long long int |
| l or L | long int; long long int | long int; unsigned long int; long long int; unsigned long long int |
| Both u or U and l or L | unsigned long int; unsigned long long int | unsigned long int; unsigned long long int |
| ll or LL | long long int | long long int; unsigned long long int |
| Both u or U and ll or LL | unsigned long long int | unsigned long long int |
| z or Z | the signed integer type corresponding to the type named by std::size_t ([support.types.layout]) | the signed integer type corresponding to the type named by std::size_t; the type named by std::size_t |
| Both u or U and z or Z | the type named by std::size_t | the type named by std::size_t |
| wb or WB | “_BitInt” of width N, where N is the lowest integer so that the value of the literal can be represented by the type | “_BitInt” of width N, where N is the lowest integer so that the value of the literal can be represented by the type |
| Both u or U and wb or WB | “unsigned _BitInt” of width N, where N is the lowest integer so that the value of the literal can be represented by the type | “unsigned _BitInt” of width N, where N is the lowest integer so that the value of the literal can be represented by the type |
The table uses the quoted spelling of types, like “_BitInt” of width N, as used in the core wording, instead of the code spelling _BitInt(N).
Change [lex.icon] paragraph 4 as follows:
Except for […]
[Note: A literal with a wb or WB suffix, with or without an unsigned-suffix, is ill-formed if it cannot be represented by any bit-precise integer type because the necessary width would be greater than BITINT_MAXWIDTH ([climits.syn]). — end note]
[basic.fundamental]
Change [basic.fundamental] paragraph 1 as follows:
There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list. There is also a distinct bit-precise signed integer type “_BitInt” of width N for each valid width N ([climits.syn]).
There may also be implementation-defined
extended signed integer types.
The standard, bit-precise, and extended signed integer types are collectively called
signed integer types.
The range of representable values for a signed integer type is −2^(N−1) to 2^(N−1) − 1 (inclusive), where N is called the width of the type.
[Note:
Plain ints are intended to have
the natural width suggested by the architecture of the execution environment;
the other signed integer types are provided to meet special needs.
— end note]
C does not allow a bit-precise signed integer type of width 1, but may allow it following [N3699].
Change [basic.fundamental] paragraph 2 as follows:
For each of the standard signed integer types, there exists a corresponding (but different) standard unsigned integer type: unsigned char, unsigned short int, unsigned int, unsigned long int, and unsigned long long int. For each bit-precise signed integer type “_BitInt” of width N, there exists a corresponding bit-precise unsigned integer type “unsigned _BitInt” of width N.
Likewise, for each of the extended signed integer types,
there exists a corresponding extended unsigned integer type.
The standard, bit-precise, and extended unsigned integer types
are collectively called unsigned integer types.
An unsigned integer type has the same width
as the corresponding signed integer type.
The range of representable values for the unsigned type is 0 to 2^N − 1 (inclusive); arithmetic for the unsigned type is performed modulo 2^N.
[Note: Unsigned arithmetic does not overflow. Overflow for signed arithmetic yields undefined behavior ([expr.pre]). — end note]
Change [basic.fundamental] paragraph 5 as follows:
[…]
The standard signed integer types and standard unsigned integer types are collectively called the standard integer types. The bit-precise signed integer types and bit-precise unsigned integer types are collectively called the bit-precise integer types. The extended signed integer types and extended unsigned integer types are collectively called the extended integer types.
[conv.rank]
Change [conv.rank] paragraph 1 as follows:
Every integer type has an integer conversion rank defined as follows:
- No two signed integer types other than char and signed char (if char is signed) have the same rank, even if they have the same representation.
- The rank of a signed integer type is greater than the rank of any signed integer type with a smaller width.
- The rank of long long int is greater than the rank of long int, which is greater than the rank of int, which is greater than the rank of short int, which is greater than the rank of signed char.
- The rank of any unsigned integer type equals the rank of the corresponding signed integer type.
- The rank of any standard integer type is greater than the rank of any bit-precise integer type with the same width and of any extended integer type with the same width.
- The rank of char equals the rank of signed char and unsigned char.
- The rank of bool is less than the rank of all standard integer types.
- The ranks of char8_t, char16_t, char32_t, and wchar_t equal the ranks of their underlying types ([basic.fundamental]).
- The rank of any extended signed integer type relative to another extended signed integer type with the same width and relative to a bit-precise signed integer type with the same width is implementation-defined, but still subject to the other rules for determining the integer conversion rank.
- For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3.
[Note: The integer conversion rank is used in the definition of the integral promotions ([conv.prom]) and the usual arithmetic conversions ([expr.arith.conv]). — end note]
[conv.prom]
Change [conv.prom] paragraph 2 as follows:
A prvalue that
- is not a converted bit-field,
- has an integer type other than a bit-precise integer type, bool, char8_t, char16_t, char32_t, or wchar_t, and
- whose integer conversion rank ([conv.rank]) is less than the rank of int

can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.
Change [conv.prom] paragraph 3 as follows:
A prvalue of an unscoped enumeration type whose underlying type
is not fixed1
can be converted to a prvalue of the first of the following types
that can represent all the values of the enumeration ([dcl.enum]):
int, unsigned int, long int, unsigned long int, long long int, or unsigned long long int.
If none of the types in that list can represent all the values of the enumeration,
a prvalue of an unscoped enumeration type
whose underlying type is not a bit-precise integer type
can be converted
to a prvalue of the extended integer type with lowest integer conversion rank ([conv.rank])
greater than the rank of long long int
in which all the values of the enumeration can be represented.
If there are two such extended types, the signed one is chosen.
1) This promotion rule excludes bit-precise integers because the implementation cannot choose a bit-precise integer type as the underlying type of an enumeration with no fixed underlying type ([dcl.enum]).
Change [conv.prom] paragraph 4 as follows:
A prvalue of an unscoped enumeration type whose underlying type is fixed ([dcl.enum]) can be converted to a prvalue of its underlying type. Moreover, if integral promotion can be applied to its underlying type, a prvalue of an unscoped enumeration type whose underlying type is fixed can also be converted to a prvalue of the promoted underlying type.
[Note: A converted bit-field of enumeration type is treated as any other value of that type for promotion purposes. — end note]
[Note: If the underlying type is a bit-precise integer type, conversion to a prvalue of that type is possible, but integral promotion cannot be applied to the underlying type. — end note]
Change [conv.prom] paragraph 5 as follows:
A converted bit-field of integral type
other than a bit-precise integer type
can be converted to a prvalue of type int
if int can represent all the values of the bit-field;
otherwise, it can be converted to unsigned int
if unsigned int can represent all the values of the bit-field.
[dcl.type.general]
Change [dcl.type.general] paragraph 2 as follows:
As a general rule, at most one defining-type-specifier is allowed in the complete decl-specifier-seq of a declaration. The only exceptions to this rule are the following:
- const can be combined with any type specifier except itself.
- volatile can be combined with any type specifier except itself.
- signed or unsigned can be combined with char, long, short, int, or a bit-precise-int-type-specifier ([dcl.type.simple]).
- short or long can be combined with int.
- long can be combined with double.
- long can be combined with long.
[dcl.type.simple]
Change [dcl.type.simple] paragraph 1 as follows:
The simple type specifiers are
simple-type-specifier:
- nested-name-specifieropt type-name
- nested-name-specifier template simple-template-id
- computed-type-specifier
- placeholder-type-specifier
- bit-precise-int-type-specifier
- nested-name-specifieropt template-name
- char
- char8_t
- char16_t
- char32_t
- wchar_t
- bool
- short
- int
- long
- signed
- unsigned
- float
- double
- void
type-name:
- class-name
- enum-name
- typedef-name
computed-type-specifier:
- decltype-specifier
- pack-index-specifier
- splice-type-specifier
bit-precise-int-type-specifier:
- _BitInt ( constant-expression )
Change table [tab:dcl.type.simple] as follows:
| Specifier(s) | Type |
|---|---|
| type-name | the type named |
| simple-template-id | the type as defined in [temp.names] |
| decltype-specifier | the type as defined in [dcl.type.decltype] |
| pack-index-specifier | the type as defined in [dcl.type.pack.index] |
| placeholder-type-specifier | the type as defined in [dcl.spec.auto] |
| template-name | the type as defined in [dcl.type.class.deduct] |
| splice-type-specifier | the type as defined in [dcl.type.splice] |
| _BitInt(N) | “signed _BitInt of width N” |
| signed _BitInt(N) | “signed _BitInt of width N” |
| unsigned _BitInt(N) | “unsigned _BitInt of width N” |
| char | “char” |
| unsigned char | “unsigned char” |
| signed char | “signed char” |
| char8_t | “char8_t” |
| char16_t | “char16_t” |
| char32_t | “char32_t” |
| wchar_t | “wchar_t” |
| bool | “bool” |
| unsigned | “unsigned int” |
| unsigned int | “unsigned int” |
| signed | “int” |
| signed int | “int” |
| int | “int” |
| unsigned short int | “unsigned short int” |
| unsigned short | “unsigned short int” |
| unsigned long int | “unsigned long int” |
| unsigned long | “unsigned long int” |
| unsigned long long int | “unsigned long long int” |
| unsigned long long | “unsigned long long int” |
| signed long int | “long int” |
| signed long | “long int” |
| signed long long int | “long long int” |
| signed long long | “long long int” |
| long long int | “long long int” |
| long long | “long long int” |
| long int | “long int” |
| long | “long int” |
| signed short int | “short int” |
| signed short | “short int” |
| short int | “short int” |
| short | “short int” |
| float | “float” |
| double | “double” |
| long double | “long double” |
| void | “void” |
Immediately following [dcl.type.simple] paragraph 3, add a new paragraph as follows:
Within a bit-precise-int-type-specifier,
the constant-expression shall be
a converted constant expression ([expr.const]).
Its value specifies the width
of the bit-precise integer type ([basic.fundamental]).
The program is ill-formed unless
that value is at least 1 and at most BITINT_MAXWIDTH ([climits.syn]).
[dcl.enum]
The following changes prevent bit-precise integer types from implicitly
being the underlying type of enumerations,
matching the proposed restrictions in [N3705].
See §4.3. Underlying type of enumerations.
Change [dcl.enum] paragraph 5 as follows:
[…] If the underlying type is not fixed, the type of each enumerator prior to the closing brace is determined as follows:
- If an initializer is specified for an enumerator, the constant-expression shall be an integral constant expression ([expr.const]) whose type is not a bit-precise integer type. If the expression has unscoped enumeration type, the enumerator has the underlying type of that enumeration type, otherwise it has the same type as the expression.
- If no initializer is specified for the first enumerator, its type is an unspecified signed integer type other than a bit-precise integer type.
- Otherwise, the type of the enumerator is the same as that of the preceding enumerator, unless the incremented value is not representable in that type, in which case the type is an unspecified integral type other than a bit-precise integer type sufficient to contain the incremented value. If no such type exists, the program is ill-formed.
Change [dcl.enum] paragraph 7 as follows:
For an enumeration whose underlying type is not fixed, the underlying type is an integral type that can represent all the enumerator values defined in the enumeration. If no integral type can represent all the enumerator values, the enumeration is ill-formed. It is implementation-defined which integral type is used as the underlying type, except that
- the underlying type shall not be a bit-precise integer type and
- the underlying type shall not be larger than int unless the value of an enumerator cannot fit in an int or unsigned int.
If the
[temp.deduct.general]
Add a bullet to [temp.deduct.general] paragraph 11 as follows:
[Note: Type deduction can fail for the following reasons:
- Attempting to instantiate a pack expansion containing multiple packs of differing lengths.
- Attempting to create an array with an element type that is void, a function type, or a reference type, or attempting to create an array with a size that is zero or negative.
[Example:
template<class T> int f(T[5]);
int i = f<int>(0);
int j = f<void>(0); // invalid array
— end example]
- Attempting to create a bit-precise integer type of invalid width ([basic.fundamental]).
[Example:
template<int N> void f(_BitInt(N));
f<0>(0); // invalid bit-precise integer
— end example]
- […]
— end note]
[temp.deduct.type]
Change [temp.deduct.type] paragraph 2 as follows:
[…] The type of a type parameter is only deduced from an array bound or bit-precise integer width if it is not otherwise deduced.
Change [temp.deduct.type] paragraph 3 as follows:
A given type can be composed from a number of other types,
templates, and constant template argument values:
- A function type includes the types of each of the function parameters, the return type, and its exception specification.
- A pointer-to-member type includes the type of the class object pointed to and the type of the member pointed to.
- A type that is a specialization of a class template (e.g., A<int>) includes the types, templates, and constant template argument values referenced by the template argument list of the specialization.
- An array type includes the array element type and the value of the array bound.
- A bit-precise integer type includes the integer width.
Change [temp.deduct.type] paragraph 5 as follows:
The non-deduced contexts are:
- […]
- A constant template argument, an array bound, or a bit-precise integer width, in any of which a subexpression references a template parameter.
[Example:
template<size_t N> void f(_BitInt(N));
template<size_t N> void g(_BitInt(N + 1));
f(100wb); // OK, N = 8
g(100wb); // error: no argument for deduced N
— end example]
- […]
Change [temp.deduct.type] paragraph 8 as follows:
A type template argument ,
a constant template argument ,
a template template argument denoting a class template or an alias template,
or a template template argument denoting a variable template or a concept
can be deduced if P and A have one of the following forms:
where […]
Do not change [temp.deduct.type] paragraph 14; it is included here for reference.
The type of N in the type T[N] is std::size_t.
[Example:
template<typename T> struct S;
template<typename T, T n> struct S<int[n]> { using Q = T; };
using V = decltype(sizeof 0);
using V = S<int[42]>::Q; // OK; T was deduced as std::size_t from the type int[42]
— end example]
Immediately following [temp.deduct.type] paragraph 14, insert a new paragraph:
The type of N in the type _BitInt(N) is std::size_t.
[Example:
template<typename T> struct S;
template<typename T, T n> struct S<_BitInt(n)> { using Q = T; };
using V = S<_BitInt(1)>::Q; // OK; T was deduced as std::size_t from the type _BitInt(1)
— end example]
Change [temp.deduct.type] paragraph 20 as follows:
If P has a form that contains <i>,
and if the type of i differs from the type of the corresponding template parameter
of the template named by the enclosing simple-template-id, deduction fails.
If P has a form that contains [i]
or _BitInt(i),
and if the type of i is not an integral type, deduction fails.
If P has a form that includes noexcept(i)
and the type of i is not bool, deduction fails.
[cpp.predefined]
Add a feature-test macro to the table in [cpp.predefined] as follows:
[diff.lex]
In [diff.lex], add a new entry:
Affected subclause:
[lex.icon]
Change:
The type of 0wb is changed from _BitInt(2) to _BitInt(1).
Rationale:
It is expected that a future C standard makes the same change,
as part of making _BitInt(1) a valid type.
Effect on the original feature:
Change to semantics of well-defined feature.
Difficulty of converting:
Usually, no changes are required
because the type of 0wb is inconsequential.
How widely used:
Seldom.
8.2. Library
[allocator.requirements.general]
Change [allocator.requirements.general] as follows:
Result: A standard unsigned or extended unsigned integer type that can represent the size of the largest object in the allocation model.
Remarks:
Default:
Result: A standard signed or extended signed integer type that can represent the difference between any two pointers in the allocation model.
Remarks:
Default:
[version.syn]
Add the following feature-test macro to [version.syn]:
[support.types.byteops]
Change [support.types.byteops] as follows:
Constraints:
IntegerType is an integral type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Equivalent to:
Constraints:
IntegerType is an integral type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Equivalent to:
Constraints:
IntegerType is an integral type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Equivalent to:
Constraints:
IntegerType is an integral type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Equivalent to:
[…]
Constraints:
IntegerType is an integral type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Equivalent to:
[cstdint.syn]
Change [cstdint.syn] paragraph 2 as follows:
The header defines all types and macros the same as the C standard library header <stdint.h>, except that intmax_t and uintmax_t
are not required to be able to represent all values of
bit-precise integer types or of
extended integer types wider than
long long and unsigned long long,
respectively.
Change [cstdint.syn] paragraph 3 as follows:
All types that use the placeholder N
are optional when N
is not 8, 16, 32, or 64.
The exact-width types
intN_t and uintN_t
for N = 8, 16, 32, and 64
are also optional;
however, if an implementation defines integer types
other than bit-precise integer types
with the corresponding width and no padding bits,
it defines the corresponding typedef-names.
[Note:
The macros INTN_C and UINTN_C
correspond to the types int_leastN_t and uint_leastN_t,
respectively.
— end note]
[climits.syn]
In [climits.syn],
add a new line below the definition of :
Change the synopsis in [climits.syn] paragraph 1 as follows:
The header ,
except that it does not define the macro .
[intseq.intseq]
Change [intseq.intseq] as follows:
Mandates:
T is an integer type
other than a possibly cv-qualified bit-precise integer type.
[meta.trans.sign]
Change table [tab:meta.trans.sign] as follows:
| Template | Comments |
|---|---|
|
|
Specializations have an alias member determined as follows:
T is an integral or enumeration type other than cv bool.
|
|
|
Specializations have an alias member determined as follows:
T is an integral or enumeration type other than cv bool.
|
[utility.intcmp]
Change [utility.intcmp] as follows:
Mandates:
Each of and is a
signed or unsigned
standard or extended
integer type ([basic.fundamental]).
Effects: […]
Mandates:
Each of and is a
signed or unsigned
standard or extended
integer type ([basic.fundamental]).
Effects: […]
Mandates:
Each of and is a
signed or unsigned
standard or extended
integer type ([basic.fundamental]).
Effects: […]
[bit.byteswap]
Change [bit.byteswap] as follows:
Constraints:
T is an integral type other than a possibly cv-qualified bit-precise integer type.
[…]
[bit]
Change the Constraints element attached to each of the function templates
,
,
,
,
,
,
,
,
,
, and
as follows:
Constraints:
T is an unsigned integer type
other than a bit-precise integer type ([basic.fundamental]).
[stdbit.h.syn]
Change [stdbit.h.syn] paragraph 2 as follows:
Mandates:
T is
- a standard unsigned integer type,
- an extended unsigned integer type, or
- a bit-precise unsigned integer type whose width matches a standard or extended integer type.
[container.reqmts]
Change [container.reqmts] as follows:
Result:
A signed integer type
other than a possibly cv-qualified bit-precise integer type,
identical to the difference type of X::iterator and X::const_iterator.
Result:
An unsigned integer type
other than a possibly cv-qualified bit-precise integer type,
that can represent any non-negative value of X::difference_type.
[mdspan.extents.overview]
Change [mdspan.extents.overview] as follows:
Mandates:
- IndexType is a signed or unsigned standard or extended integer type, and
- each element of Extents is either equal to dynamic_extent, or is representable as a value of type IndexType.
[mdspan.sub.overview]
Change [mdspan.sub.overview] paragraph 2, [mdspan.sub.overview] paragraph 3, and [mdspan.sub.overview] paragraph 4 as follows:
Given a
signed or unsigned
standard or extended
integer type
[…]
[mdspan.sub.range.slices]
Change [mdspan.sub.range.slices] as follows:
[…]
[…]
Mandates:
,
,
,
, and
are
signed or unsigned
standard or extended
integer types, or
model .
[Note: […] — end note]
[iterator.concept.winc]
Change [iterator.concept.winc] as follows:
[…]
The width of an integer-class type is greater than
that of every standard integer type of the same signedness.
[iterator.iterators]
Change [iterator.iterators] paragraph 2 as follows:
A type meets the Cpp17Iterator requirements if
- […]
- iterator_traits<X>::difference_type is a signed integer type other than a bit-precise integer type or void, and
[common.iter.types]
Change [common.iter.types] paragraph 1 as follows:
The nested
of the specialization of for
is declared if and only if is
an integral type other than a possibly cv-qualified bit-precise integer type.
[range.iota.view]
Change [range.iota.view] paragraph 1 as follows:
Let IOTA-DIFF-T(W) be defined as follows:
- If W is not an integral type, or if it is an integral type and sizeof(iter_difference_t<W>) is greater than sizeof(W), then IOTA-DIFF-T(W) denotes iter_difference_t<W>.
- Otherwise, IOTA-DIFF-T(W) is a standard signed integer type of width greater than the width of W if such a type exists.
- Otherwise, IOTA-DIFF-T(W) is an unspecified signed-integer-like ([iterator.concept.winc]) type of width not less than the width of W.
[alg.foreach]
Change [alg.foreach] as follows:
Mandates:
The type Size is convertible to an integral type
other than a bit-precise integer type
([conv.integral], [class.conv]).
[…]
Mandates:
The type Size is convertible to an integral type
other than a bit-precise integer type
([conv.integral], [class.conv]).
[…]
It is not reasonable to expect millions of additional overloads, and a template that can handle bit-precise integers in bulk could not interoperate with user-defined conversion function templates.
[alg.search]
Change [alg.search] paragraph 5 as follows:
Mandates:
The type Size is convertible to an integral type
other than a bit-precise integer type
([conv.integral], [class.conv]).
[alg.copy]
Change [alg.copy] paragraph 15 as follows:
Mandates:
The type Size is convertible to an integral type
other than a bit-precise integer type
([conv.integral], [class.conv]).
[alg.fill]
Change [alg.fill] paragraph 2 as follows:
Mandates:
The expression value is writable ([iterator.requirements.general])
to the output iterator.
The type Size is convertible to an integral type
other than a bit-precise integer type
[alg.generate]
Change [alg.generate] paragraph 2 as follows:
Mandates:
Size is convertible to an integral type
other than a bit-precise integer type
([conv.integral], [class.conv]).
[numeric.ops.gcd]
Change [numeric.ops.gcd] as follows:
Mandates:
M and N both are integer types other than cv bool
or a possibly cv-qualified bit-precise integer type.
[…]
[numeric.ops.lcm]
Change [numeric.ops.lcm] as follows:
Mandates:
M and N both are integer types other than cv bool
or a possibly cv-qualified bit-precise integer type.
[…]
[numeric.ops.midpoint]
Change [numeric.ops.midpoint] as follows:
Constraints:
T is an arithmetic type other than bool
or a possibly cv-qualified bit-precise integer type.
[…]
[numeric.sat.func]
Change the Constraints element underneath all function templates in [numeric.sat.func] as follows:
Constraints:
T is a
signed or unsigned
standard or extended
integer type ([basic.fundamental]).
[numeric.sat.cast]
Change [numeric.sat.cast] as follows:
Constraints:
R and T are
signed or unsigned
standard or extended
integer types ([basic.fundamental]).
Returns: […]
[charconv.syn]
Change [charconv.syn] paragraph 1 as follows:
When a function is specified with a type placeholder of integer-type,
the implementation provides overloads for char and all
cv-unqualified signed and unsigned integer types
standard and extended integer types
in lieu of integer-type.
When a function is specified with a type placeholder of floating-point-type,
the implementation provides overloads for all
cv-unqualified floating-point types ([basic.fundamental])
in lieu of floating-point-type.
[format.formatter.spec]
Change [format.formatter.spec] paragraph 2, bullet 3 as follows:
For each charT,
for each ArithmeticT that is either a
signed or unsigned
standard or extended integer type or a floating-point type,
a constexpr-enabled specialization formatter<ArithmeticT, charT> is provided.
[cmplx.over]
Change [cmplx.over] paragraph 2 as follows:
The additional constexpr overloads are sufficient to ensure:
- If the argument has a floating-point type T, then it is effectively cast to complex<T>.
- Otherwise, if the argument has integral type other than a possibly cv-qualified bit-precise integer type, then it is effectively cast to complex<double>.
Change [cmplx.over] paragraph 3 as follows:
Function template pow has additional constexpr overloads sufficient to ensure,
for a call with one argument of type complex<T1> and
the other argument of type T2 or complex<T2>,
both arguments are effectively cast to complex<T3>,
where T3 is double if T2 is an integer type and common_type_t<T1, T2> otherwise.
If common_type_t<T1, T2> is not well-formed
or if T2 is a possibly cv-qualified bit-precise integer type,
then the program is ill-formed.
[rand.req.seedseq]
In [tab:rand.req.seedseq],
change the Pre-post-condition
column corresponding to
as follows:
is an unsigned integer type
other than a bit-precise integer type
of at least 32 bits.
[rand.req.urng]
Change [rand.req.urng] paragraph 3 as follows:
A class meets the uniform random bit generator requirements if
- G models uniform_random_bit_generator,
- invoke_result_t<G&> is an unsigned integer type other than a bit-precise integer type ([basic.fundamental]), and
- G provides a nested typedef-name result_type that denotes the same type as invoke_result_t<G&>.
[rand.util.seedseq]
Change [rand.util.seedseq] as follows:
[…]
Constraints:
T is an integer type
other than a possibly cv-qualified bit-precise integer type.
Effects:
Same as .
Mandates:
iterator_traits<InputIterator>::value_type is an integer type
other than a possibly cv-qualified bit-precise integer type.
[…]
Mandates:
iterator_traits<RandomAccessIterator>::value_type is an unsigned integer type
other than a possibly cv-qualified bit-precise integer type,
capable of accommodating 32-bit quantities.
[…]
[cmath.syn]
may be passed to e.g. .
See §5.9. Using bit-precise integers in <cmath> functions.
[simd.expos]
Change [simd.expos] as follows:
[simd.expos.defn]
Immediately following the declaration of ,
add the following declaration:
The concept is satisfied and modeled
if and only if is a bit-precise integer type ([basic.fundamental]).
[simd.mask.overview]
Change [simd.mask.overview] as follows:
[simd.mask.ctor]
Change [simd.mask.ctor] as follows:
Effects: […]
[numerics.c.ckdint]
Change [numerics.c.ckdint] as follows:
Mandates:
Each of the types
T1, T2, and T3
is a
cv-unqualified signed or unsigned
standard or extended
integer type.
Remarks: Each function template has the same semantics as the corresponding type-generic macro with the same name specified in ISO/IEC 9899:2024, 7.20.
[time.duration.general]
Change [time.duration.general] paragraph 2 as follows:
Rep shall be an arithmetic type
other than a bit-precise integer type
or a class emulating an arithmetic type.
If
a specialization of duration is instantiated with a cv-qualified type or
a specialization of duration as the argument for the template parameter Rep,
the program is ill-formed.
[stream.types]
Change [stream.types] as follows:
The type streamoff is a synonym for
one of the signed basic integral types
a standard signed integer type
of sufficient size to represent the maximum possible file size for the operating system.
The type streamsize is a synonym for
one of the signed basic integral types
a standard signed integer type.
It is used to represent the number of characters
transferred in an I/O operation, or the size of I/O buffers.
[atomics.ref.int]
Change [atomics.ref.int] paragraph 1 as follows:
There are specializations of the atomic_ref class template for all integral types
except for possibly cv-qualified bit-precise integer types and
except for cv bool.
[…]
9. Acknowledgements
I thank Jens Maurer and Christof Meerwald for reviewing and correcting the proposal's wording.
I thank Erich Keane and other LLVM contributors for implementing most of the proposed core changes in Clang's C++ frontend, giving this paper years' worth of implementation experience in a major compiler without any effort by the author.
I thank Erich Keane, Jiang An, Bill Seymour, Howard Hinnant, JeanHeyd Meneide, Lénárd Szolnoki, Brian Bi, Peter Dimov, Aaron Ballman, Pete Becker, Jens Maurer, Matthias Kretz, Jonathan Wakely, Jeff Garland, Ville Voutilainen, Luigi Ghiron, and many others for providing early feedback on this paper, prior papers such as [P3639R0], and the discussion surrounding bit-precise integers as a whole. The paper would not be where it is today without hundreds of messages' worth of valuable feedback.