CERT, C, and Swift

Swift does not do implicit type conversions between integers of different sizes and signedness. By contrast, Java does implicit conversion, but with less risk of unexpected effects because it has no unsigned integer types. C, that dinosaur, has a strange and secret show running behind every elementary-school math operation.

Why is type conversion confusing? I’ll use C as my example. CERT has some of the best low-level down-and-dirty descriptions of the language, so I’ll be borrowing heavily from their examples.

We’ll start with an easy one: promotion. During arithmetic, anything narrower than an int (char and short, signed or unsigned) is automatically promoted up to int or unsigned int (32 bits on most platforms), which keeps modest intermediate results from overflowing the small type. It is curious that they don’t promote to something bigger (long), but x86 code does most of its integer work in 32-bit registers, so it’s a natural fit.

        // Example Apple: Integer Promotions
        unsigned char appleCharOp1, appleCharOp2, appleCharOp3;
        unsigned char appleCharResult;
        unsigned char appleCharWrongResult;

        appleCharOp1 = 80;
        appleCharOp2 = 70;
        appleCharOp3 = 100;
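        // The operands are promoted to int, so the intermediate product 5600 is not lost.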
        appleCharResult = appleCharOp1 * appleCharOp2 / appleCharOp3;
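        // The cast below throws away all but the low 8 bits of the product before the division.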
        appleCharWrongResult = (unsigned char)(appleCharOp1 * appleCharOp2) / appleCharOp3;

If we run this in the debugger, using the lldb command type format add --format “unsigned dec” “unsigned char” for clarity, we see the following:

        (lldb) frame variable
        (unsigned char) appleCharOp1 = 80
        (unsigned char) appleCharOp2 = 70
        (unsigned char) appleCharOp3 = 100
        (unsigned char) appleCharResult = 56
        (unsigned char) appleCharWrongResult = 2

The correct answer is the result of the promotion: the intermediate product, 5600, is held in a 32-bit int. If you truncate the intermediate product to 8 bits, you get 0xE0, which is 224, and dividing 224 by 100 gives you 2 after truncation.
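If you want to see that wrong path in isolation, here is a minimal sketch (the variable names are mine, not CERT’s):

        // The wrong path, step by step (hypothetical names):
        unsigned int fullProduct = 80 * 70;                     // 5600, what the promoted math produces
        unsigned char truncatedProduct = (unsigned char)5600;   // 5600 % 256 == 224, i.e. 0xE0
        unsigned char wrongQuotient = truncatedProduct / 100;   // integer division truncates 2.24 to 2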

Mind you, the end result is that you get the expected answer as long as you don’t stick a weird cast in the middle like I did there. But here is an example where you get the wrong answer without any extra work:

        // Example Cantaloupe
        unsigned int cantaloupeInt1 = UINT_MAX / 4;
        unsigned int cantaloupeInt2 = UINT_MAX / 8;
        unsigned int cantaloupeInt3 = UINT_MAX / 16;
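        // The product below overflows unsigned int and wraps modulo 2^32;
        // the conversion to unsigned long long happens only after the damage is done.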
        unsigned long long cantaloupeLongLong1 = cantaloupeInt1 * cantaloupeInt2 / cantaloupeInt3;

The results are:

        (unsigned int) cantaloupeInt1 = 1073741823
        (unsigned int) cantaloupeInt2 = 536870911
        (unsigned int) cantaloupeInt3 = 268435455
        (unsigned long long) cantaloupeLongLong1 = 10

That’s the wrong answer. You get the right answer if you cast each term to (unsigned long long).
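A minimal sketch of that fix, using a name of my own for the result:

        // Widen the operands before the multiplication so the intermediate
        // product fits in 64 bits instead of wrapping modulo 2^32.
        unsigned long long cantaloupeFixed = (unsigned long long)cantaloupeInt1 *
                                             (unsigned long long)cantaloupeInt2 /
                                             (unsigned long long)cantaloupeInt3;

Strictly speaking, casting the first operand is enough, since the usual arithmetic conversions pull the other operands up to match, but casting every term makes the intent obvious.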

There are many other factors to C integer conversion, such as precision and rank, but these examples suffice to show why it’s a tricky subject.
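Just to give a taste of what rank means in practice, here is one more classic gotcha, a sketch of my own rather than a CERT example:

        // int and unsigned int have the same rank, so the usual arithmetic
        // conversions turn the signed operand into an unsigned one.
        int minusOne = -1;
        unsigned int plusOne = 1;
        if (minusOne > plusOne) {
            // This branch is taken: -1 converts to UINT_MAX before the comparison.
        }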

Enter Swift’s design choice. If you try the same basic code in Swift:

let cantaloupeOp1: UInt32 = UInt32.max / 4
let cantaloupeOp2: UInt32 = UInt32.max / 8
let cantaloupeOp3: UInt32 = UInt32.max / 16
var cantaloupeResult: UInt64 = cantaloupeOp1 * cantaloupeOp2 / cantaloupeOp3

It simply refuses to compile. The operations on the right are legal on their own, but assigning their UInt32 result to a UInt64 is not.
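To make it build, you spell out the conversions, which is the same widening the C fix did by hand, just written where you can see it. A sketch, using my own name for the result:

// Convert each operand up to UInt64 explicitly; the widened product
// fits comfortably, so the division produces the correct quotient.
let cantaloupeFixed = UInt64(cantaloupeOp1) * UInt64(cantaloupeOp2) / UInt64(cantaloupeOp3)

The conversion still happens, but it is something you wrote, not something the compiler slipped in behind your back.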