Doc. No.: N1310
Date: 2008-05-01
Reply to: Clark Nelson
Phone: +1-503-712-8433
Email: clark.nelson@intel.com

Requiring signed char to have no padding bits

It is clear from the standard (specifically 6.2.6.2p1) that unsigned integer types in general are not allowed to have trap representations, and that unsigned char is not allowed to have any padding bits. From these requirements it follows that the object representation of any type can be copied, examined and modified by treating the object as an array of unsigned char. However, 6.2.6.2p2 says that any signed integer type can have padding bits — rather conspicuously not excluding signed char.

On the other hand, the effective type rules and the type-based aliasing rules (6.5p6,7) imply that any character type (including signed char) is a reasonable way to examine and even modify arbitrary memory. But if signed char has padding bits, that is at the very least a problematic way to access an object of unsigned char type.

Two additional data points:

Consequently, it would appear to make sense for the C standard to add the requirement (and the guarantee) that signed char has no padding bits either. Since char is required to have the same representation as either signed char or unsigned char, this will be sufficient to guarantee that no character type has any padding bits in its representation.

Therefore, change 6.2.6.2p2:

For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; signed char shall not have any padding bits. There shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be modified in one of the following ways: