One of the ways JavaScript differs from most programming languages is its lack of integer types. Every numeric value is a *Number*, based on the IEEE 754 double-precision floating-point representation. You can still work with ‘integer’ values, and it is technically possible to approximate integer behaviour using typed arrays such as Int32Array or Uint32Array, but that’s a discussion for another day.

This means you can’t leverage inherent type support to truncate fractional values. For example, let’s compare the results in two languages:

**JavaScript**

```
let value = 5 / 2;
console.log(value); // 2.5
```

**C#**

```
int intValue = 5 / 2;
Console.WriteLine(intValue); // 2
double doubleValue = 5 / 2.0;
Console.WriteLine(doubleValue); // 2.5
```

So how do you ensure integer results with JavaScript? There are a couple of tricks but be careful – you can get burnt.

**1. parseInt**

This parses an integer value out of a string (a non-string argument is first converted to a string), returning NaN for a non-parseable string. Remember to pass the radix to avoid weird bugs.

```
let value = 5 / 2;
console.log(value); // 2.5
console.log(parseInt(value, 10)); // 2
```
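One caveat worth noting: because parseInt coerces its argument to a string first, very small (or very large) numbers that stringify in exponential notation produce surprising results. A quick sketch of the pitfall:

```javascript
// parseInt converts its argument to a string, then stops
// parsing at the first non-digit character.
let tiny = 0.0000005;
console.log(String(tiny)); // "5e-7"
console.log(parseInt(tiny, 10)); // 5, not 0!

// Math.trunc (ES2015) avoids the string round-trip entirely.
console.log(Math.trunc(tiny)); // 0
```

So parseInt is really a string-parsing tool; for truncating numbers you already have, the options below are safer.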

**2. Math.floor vs Math.ceil**

You can also use Math.floor to round down and Math.ceil to round up.

```
let value = 5 / 2;
console.log(Math.floor(value)); // 2
console.log(Math.ceil(value)); // 3
```
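Be careful with negative values, though: Math.floor rounds toward -Infinity and Math.ceil toward +Infinity, so neither simply “drops the fraction”. If that’s what you want, Math.trunc (ES2015) truncates toward zero:

```javascript
let value = -5 / 2; // -2.5
console.log(Math.floor(value)); // -3 (rounds toward -Infinity)
console.log(Math.ceil(value));  // -2 (rounds toward +Infinity)
console.log(Math.trunc(value)); // -2 (drops the fraction)
```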

**3. ~~ Bitwise operator**

Another trick is doing a double bitwise NOT operation on the value (similar to the !! trick for forcing conversion to boolean). The only snag is that this approach only works for values whose integer part fits in a signed 32-bit integer – 31 bits of magnitude, with the 32nd bit used for the sign (+/-) in two’s complement arithmetic.

Some examples:

```
let value = 5 / 2;
console.log(~~value); // 2
```
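For negative values, the double NOT behaves like Math.trunc, not Math.floor – it truncates toward zero:

```javascript
let value = -5 / 2; // -2.5
console.log(~~value);           // -2 (truncates toward zero)
console.log(Math.floor(value)); // -3 (rounds toward -Infinity)
```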

Now let’s see what happens when you use ~~ with very large values.

```
let bigValue = 9999999999;
console.log(~~bigValue); // 1410065407
```

How is 1410065407 related to 9999999999?!

**The Bitwise NOT operator (~)**

The bitwise NOT operator, according to the spec, does this:

The production UnaryExpression : ~ UnaryExpression is evaluated as follows:

- Let expr be the result of evaluating UnaryExpression.
- Let oldValue be ToInt32(GetValue(expr)).
- Return the result of applying bitwise complement to oldValue. The result is a signed 32-bit integer.

Notice that it’ll *always* convert its input to a 32-bit signed integer.

The bitwise NOT of a value *x* inverts every bit in *x*, which has the same outcome as *-(x + 1)*.

~00000000000000000000000000000001

is

11111111111111111111111111111110

This is a signed 32-bit value and is the Two’s complement representation for -2.
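You can verify the *~x = -(x + 1)* identity directly in the console:

```javascript
// Bitwise NOT on a 32-bit signed integer: ~x === -(x + 1)
console.log(~0);  // -1
console.log(~1);  // -2
console.log(~41); // -42
console.log(~-1); // 0
```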

If you do another bitwise NOT on this result, what you are effectively doing is represented below:

```
~~x = ~(-(x+1))
= ~(-x - 1)
= -((-x - 1) + 1)
= x + 1 - 1
= x
```

You get back your *x*, but this time it’s a signed 32-bit integer! This can be very useful if performance matters and you know for sure you only want 32-bit integers. The flip side, though, is the loss of code readability.

**Analyzing the 9999999999 to 1410065407 conversion**

9999999999 converts to 1001010100000010111110001111111111 in binary, a 34-bit value. ToInt32 keeps only the low 32 bits, so the two most significant bits (MSBs) are chopped off, giving us ~~10~~01010100000010111110001111111111.

01010100000010111110001111111111 in decimal is (well, you guessed it!) 1410065407.
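Equivalently, chopping off the high bits is just taking the value modulo 2^32 and re-interpreting the result as a signed integer. A rough sketch of the conversion (not the spec algorithm verbatim – `toInt32` is our own illustrative function):

```javascript
// Roughly what ToInt32 does: truncate, reduce modulo 2^32,
// then re-interpret values >= 2^31 as negative (two's complement).
function toInt32(x) {
  let n = Math.trunc(x) % 2 ** 32;
  if (n < 0) n += 2 ** 32;            // normalize into [0, 2^32)
  return n >= 2 ** 31 ? n - 2 ** 32 : n;
}

console.log(toInt32(9999999999)); // 1410065407
console.log(~~9999999999);        // 1410065407
```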

Mystery solved.
