This means you can’t leverage inherent type support to truncate fractional values. For example, let’s compare the results of integer division in JavaScript and C#:
let value = 5 / 2;
console.log(value); // 2.5
int value = 5 / 2;
Console.WriteLine(value); // 2

double preciseValue = 5 / 2.0;
Console.WriteLine(preciseValue); // 2.5
1. parseInt

parseInt will parse the integer value out of the string — or return NaN if you give it an invalid, non-parseable string. Remember to pass the radix to avoid weird bugs.
let value = 5 / 2;
console.log(value); // 2.5
console.log(parseInt(value, 10)); // 2
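One weird bug worth knowing about (this example is an aside, not from the snippet above): parseInt converts its argument to a string first, and very small or very large numbers stringify in exponential notation, which parseInt then mis-parses.

```javascript
// String(0.0000005) is "5e-7", so parseInt reads the digits before "e".
console.log(parseInt(0.0000005, 10)); // 5 — almost certainly not what you wanted
console.log(Math.floor(0.0000005));   // 0 — stays numeric, no string round-trip
```

This is why parseInt is best reserved for actual strings rather than for truncating numbers.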
2. Math.floor vs Math.ceil
You can also use Math.floor to round down and Math.ceil to round up.
let value = 5 / 2;
console.log(Math.floor(value)); // 2
console.log(Math.ceil(value)); // 3
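Keep in mind that floor and ceil are not symmetric around zero, so for negative numbers “round down” may not be the truncation you expect. A quick sketch (Math.trunc is the standard ES2015 method that simply drops the fraction):

```javascript
let value = -5 / 2; // -2.5
console.log(Math.floor(value)); // -3 (rounds toward -Infinity)
console.log(Math.ceil(value));  // -2 (rounds toward +Infinity)
console.log(Math.trunc(value)); // -2 (drops the fractional part)
```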
3. ~~ Bitwise operator
Another trick is doing a double bitwise NOT operation on the value (similar to the !! trick for forcing conversion to boolean). The only snag is that this approach only works for values whose integer part can be represented within 31 bits (the 32nd bit is used for the sign +/-; two’s complement arithmetic).
let value = 5 / 2;
console.log(~~value); // 2
Now let’s see what happens when you use ~~ with very large values.
let bigValue = 9999999999;
console.log(~~bigValue); // 1410065407
How is 1410065407 related to 9999999999?!
The Bitwise NOT operator (~)
The bitwise NOT operator according to the spec does this:
The production UnaryExpression : ~ UnaryExpression is evaluated as follows:
- Let expr be the result of evaluating UnaryExpression.
- Let oldValue be ToInt32(GetValue(expr)).
- Return the result of applying bitwise complement to oldValue. The result is a signed 32-bit integer.
Notice that it’ll always convert its input to a 32-bit signed integer.
The bitwise NOT of a value x inverts every bit in x, which has the same outcome as -(x + 1). For example, ~1 evaluates to -2: 1 is 00000000000000000000000000000001, and inverting every bit gives 11111111111111111111111111111110. This is a signed 32-bit value and is the two’s complement representation of -2.
If you do another bitwise NOT on this result, what you are effectively doing is represented below:

~~x = ~(-(x + 1))
    = ~(-x - 1)
    = -((-x - 1) + 1)
    = x + 1 - 1
    = x
You get back your x, but this time it’s a signed 32-bit integer! This can be very useful if performance is critical and you know for sure you only want 32-bit integers. The flip side, though, is the loss of code readability.
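Note that because ToInt32 truncates toward zero, ~~ behaves like Math.trunc rather than Math.floor for negative inputs. A quick illustration:

```javascript
console.log(~~2.9);                      // 2
console.log(~~-2.9);                     // -2 (toward zero, unlike Math.floor's -3)
console.log(~~-2.9 === Math.trunc(-2.9)); // true
```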
Analyzing the 9999999999 to 1410065407 conversion
9999999999 converts to 1001010100000010111110001111111111 in binary, a 34-bit value. ToInt32 keeps only the lower 32 bits, so the 2 most significant bits (MSBs) are chopped off, giving us
01010100000010111110001111111111, which in decimal is (well, you guessed it!) 1410065407.