Why you should not use isNaN in JavaScript


I was on a video streaming site recently and moved the play point to the far right. It was amusing to see the hover details show NaN:NaN – aha, some mathematical operation had produced a NaN and the code didn’t cater for that.

If you have read Why JavaScript ‘seems’ to get addition wrong, you would have seen that some operations can result in a NaN. NaN is a value with its roots in the IEEE 754 standard.

What is NaN?

NaN literally means Not a Number. Yes, it means the value is not a number, and it occurs when you try to coerce a non-numeric value (e.g. a string) into a number.

How do you check if a value is NaN?

How do you know if some value is NaN? Turns out this is not so straightforward.

For numbers, we typically compare to the expected value and that usually works; the case of NaN, however, is different.

let two = 2;
two === 2; // true
two == 2; // true

// NaN
let x = NaN;
x === NaN; // false
x == NaN; // false
x === x; // false ???

Unequal ‘equalities’ in Maths and JavaScript

You might be scratching your head and wondering if there are other values with strange equality semantics. Yes, there is one more – Infinity. In Mathematics, infinite values are not necessarily equal even if most operations assume this for simplicity.

Imagine two containers of water – a large jug and a small cup. Both contain infinite amounts of atoms right? Yet, it is obvious that the infinite amount of atoms in the large jug is greater than the infinite amount of atoms present in the small cup. The inability to determine a specific value doesn’t automatically make all infinite values equal.

Thus, even though the results of 1 * ∞ and 10 * ∞ are both ∞ in most languages, we can argue the latter is a ‘larger’ type of ∞. It might not matter so much given that computers have finite storage limits. For a more in-depth discussion of this, read Jeremy Kun’s excellent post.

Let’s see if JavaScript obeys this Maths law.

let infinity = Infinity;
infinity === Infinity; // true

(2 * Infinity) === (10 * Infinity); // true

So JavaScript coalesces all Infinity values and makes them ‘equal’. But NaN is exempt from this as shown earlier.

The good thing is that this special quality of NaN stands out. According to the IEEE754 standard, NaN cannot be equal to anything (even itself). Thus to determine if a value is NaN, you can check if that value is not equal to itself.

let nan = NaN;
nan === nan; // false
nan !== nan; // true

The Issue with JavaScript’s isNaN

JavaScript exposes the isNaN method for checking for NaN values. The snag however is that it behaves unreliably with varying operand types.

isNaN(NaN); // true
isNaN(2); // false
isNaN('a'); // true
isNaN(); // true
isNaN(null); // false
isNaN(true); // false

Surprised? Again, this is the exhibition of one of JavaScript’s quirks. The spec reads thus:

Returns true if the argument coerces to NaN, and otherwise returns false.

And what does the ToNumber coercion table look like?

Value       Numeric value
null        0
undefined   NaN
true        1
false       0
123         123
[]          0
{}          NaN

So now you know why isNaN() and isNaN({a: 1}) are both true while isNaN([]) is false. Even though arrays are objects, their ToNumber coercion is not NaN (as shown in the table above). Similarly, since the boolean primitives coerce to valid numbers, calling isNaN(true) or isNaN(false) returns false.
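These outcomes can be checked directly with the Number constructor, which applies the same ToNumber coercion (a quick sketch):

```javascript
// The ToNumber coercions behind isNaN's results
Number(null);      // 0
Number(undefined); // NaN
Number(true);      // 1
Number(false);     // 0
Number([]);        // 0
Number({});        // NaN

isNaN(undefined);  // true  (undefined coerces to NaN)
isNaN([]);         // false ([] coerces to 0)
```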

Reliably verifying NaN values

There are two fixes to this

1. Prior to ES6, the only way was to check if the value is not equal to itself.

function isReliablyNaN(x) {
    return x !== x;
}

2. ES6 introduces the Number.isNaN method which avoids the inherent toNumber coercion of isNaN. This ensures that only NaN returns true.

Number.isNaN(NaN); // true

// All work as expected now
Number.isNaN(2); // false
Number.isNaN('a'); // false
Number.isNaN(); // false
Number.isNaN(null); // false
Number.isNaN(true); // false

Conclusion

If you are using isNaN in your code, you most likely have a bug waiting to happen.

You should switch to Number.isNaN, which is supported by all major browsers except IE11, and add a polyfill fallback just in case. You should also know that isFinite performs the same ToNumber coercion and consequently suffers the same flaws; use Number.isFinite instead.

I would have preferred a reliable isNaN, but alas this special behaviour has become a ‘feature’ and can’t be fixed for backwards compatibility reasons.
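For older environments, here is a minimal polyfill sketch (my own, built on the self-inequality trick above) alongside a quick look at the isFinite difference:

```javascript
// Minimal Number.isNaN polyfill sketch using NaN's self-inequality
if (!Number.isNaN) {
    Number.isNaN = function (value) {
        return typeof value === 'number' && value !== value;
    };
}

// isFinite coerces its argument; Number.isFinite does not
isFinite('');        // true  ('' coerces to 0, which is finite)
Number.isFinite(''); // false (not a number value)
```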

Related

If you enjoyed this post and wanted to learn more; here are a couple of posts explaining various quirky behaviours in JavaScript.

  1. Why JavaScript ‘seems’ to get addition wrong
  2. Why JavaScript has two zeros: -0 and +0
  3. JavaScript has no Else If
  4. Quirky Quirky JavaScript: Episode One

Faking goto in JavaScript


What if I told you JavaScript had a limited form of the infamous goto statement? Surprised? Read on.

Labeled Statements

It is possible to add label identifiers to JavaScript statements and then use these identifiers with the break and continue statements to manage program flow.

While it might be better to use functions instead of labels to jump around, it is worth seeing how to jump around or interrupt loops using these. Let’s take an example:

// print only even numbers
loop:
for(let i = 0; i < 10; i++){
    if(i % 2) {
        continue loop;
    }
    console.log(i);
}
//0, 2, 4, 6, 8

// print only values up to 5
loop:
for(let i = 0; i < 10; i++){
    if(i > 5) {
        break loop;
    }
    console.log(i);
}
// 0, 1, 2, 3, 4, 5

There is a subtle difference in where labels can be used:

  • break statements can apply to any label identifier
  • continue statements can only apply to labels identifying loops

Because of this, it is possible to have the sample code below (yes it’s valid JavaScript too!)

var i = 0;
block: {
     while(true){
         console.log(i);
         i++;
         if(i == 5) {
             break block;
             console.log('after break'); // unreachable
         }
     } 
}
console.log('outside block');
// 0, 1, 2, 3, 4, outside block

Note

  1. continue wouldn’t work in the above scenario since the block label applies to a block of code and not a loop.
  2. The {} after the block identifier signify a block of code. This is valid JavaScript and you can define any block by wrapping statements inside {}. See an example below
{
let i = 5;
console.log(i);
}

// 5

Should I use this?

This is an arcane corner of JavaScript and I personally have not seen any code using this. However if you have a good reason to use this, please do add comments and references to the documentation. Spare the next developer after you some effort…
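One scenario where labels arguably earn their keep is breaking out of nested loops without a flag variable. A sketch (the pair-search example here is mine, not from any particular codebase):

```javascript
// Find the first pair of entries summing to 6; break out of both loops at once
let values = [1, 2, 3, 4];
let found = null;

outer:
for (let i = 0; i < values.length; i++) {
    for (let j = i + 1; j < values.length; j++) {
        if (values[i] + values[j] === 6) {
            found = [values[i], values[j]];
            break outer; // exits both loops, no boolean flag needed
        }
    }
}
console.log(found); // [2, 4]
```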

Related

  1. What you didn’t know about JSON.Stringify
  2. Why JavaScript has two zeros: -0 and +0
  3. JavaScript has no Else If

What you didn’t know about JSON.Stringify


JSON, the ubiquitous data format, has become second nature to engineers all over the world. This post shows you how to achieve much more with JavaScript’s native JSON.stringify method.

A quick refresher about JSON and JavaScript:

  • Not all valid JSON is valid JavaScript
  • JSON is a text-only format, no blobs please
  • Numbers are only base 10.

1. JSON.stringify

This returns the JSON-safe string representation of its input parameter. Note that non-stringifiable fields will be silently stripped off as shown below:

let foo = { a: 2, b: function() {} };
JSON.stringify(foo);
// "{"a":2}"

What other types are non-stringifiable? 

Circular references

Since such objects point back at themselves, it’s quite easy to get into a never-ending loop. I once ran into a similar issue with memq in the past.

let foo = {};
foo.b = foo; // circular reference
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

// Arrays
foo = [foo];
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

Symbols and undefined

let foo = { b: undefined };
JSON.stringify(foo);
// {}
// Symbols
foo.b = Symbol();
JSON.stringify(foo);
// {}

Exceptions

Arrays containing non-stringifiable entries are handled specially though.

let foo = [Symbol(), undefined, function() {}, 'works']
JSON.stringify(foo);
// "[null,null,null,"works"]"

Non-stringifiable fields get replaced with null in arrays and dropped in objects. The special array handling helps ‘preserve’ the shape of the array. In the example above, if the array entries were dropped as occurs in objects, then the output would have been ["works"]. A single-element array is very much different from a 4-element one.

I would argue for using null in objects too instead of dropping the fields. That way, we get a consistent behaviour and a way to know fields have been dropped.
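If you want that consistent behaviour today, a replacer function (an option covered later in this post) can emit null explicitly instead of dropping the field. A sketch; note it only handles functions and undefined, so Symbol values would need an extra typeof check:

```javascript
// Emit null for non-stringifiable object fields instead of dropping them
let record = { a: 2, b: function () {}, c: undefined, d: 'works' };

let keepAsNull = function (key, value) {
    if (typeof value === 'function' || typeof value === 'undefined') {
        return null;
    }
    return value;
};

JSON.stringify(record, keepAsNull);
// '{"a":2,"b":null,"c":null,"d":"works"}'
```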

Why aren’t all values stringifiable?

Because JSON is a language agnostic format.

For example, let us assume JSON allowed exporting functions as strings. With JavaScript, it would be possible to eval such strings in some scenarios. But what context would such eval-ed functions be evaluated in? What would that mean in a C# program? And how would you even represent language-specific values (e.g. JavaScript Symbols)?

The ECMAScript standard highlights this point succinctly:

It does not attempt to impose ECMAScript’s internal data representations on other programming languages. Instead, it shares a small subset of ECMAScript’s textual representations with all other programming languages.

2. Overriding toJSON on object prototypes

One way to bypass the non-stringifiable fields issue in your objects is to implement the toJSON method. And since nearly every AJAX call involves a JSON.stringify call somewhere, this can lead to a very elegant trick for handling server communication.

This approach is similar to toString overrides that allow you to return representative strings for objects. Implementing toJSON enables you to sanitize your objects of non-stringifiable fields before JSON.stringify converts them.

function Person (first, last) {
    this.firstName = first;
    this.lastName = last;
}

Person.prototype.process = function () {
   return this.firstName + ' ' +
          this.lastName;
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// "{"firstName":"Ade","lastName":"P"}"

As expected, the instance process function is dropped. Let’s assume however that the server only wants the person’s full name. Instead of writing a dedicated converter function to create that format, toJSON offers a more scalable alternative.

Person.prototype.toJSON = function () {
    return { fullName: this.process() };
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// "{"fullName":"Ade P"}"

The strength of this lies in its reusability and stability. You can use the ade instance with virtually any library and anywhere you want. You control exactly the data you want serialized and can be sure it’ll be created just as you want.

// jQuery
$.post('endpoint', ade);

// Angular 2
this.httpService.post('endpoint', ade)

Point: toJSON doesn’t create the JSON string; it only determines the object that JSON.stringify will serialize. The call chain looks like this: toJSON -> JSON.stringify.
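A built-in example of this call chain is Date, which ships with its own toJSON that returns an ISO 8601 string:

```javascript
// Date.prototype.toJSON runs before stringification
let epoch = new Date(0);
epoch.toJSON();        // '1970-01-01T00:00:00.000Z'
JSON.stringify(epoch); // '"1970-01-01T00:00:00.000Z"'
```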

3. Optional arguments

The full signature of stringify is JSON.stringify(value, replacer?, space?) – I am borrowing TypeScript’s ? style for marking optional parameters. Now let’s dive into the replacer and space options.

4. Replacer

The replacer is a function or array that allows selecting fields for stringification. It differs from toJSON by allowing users to select choice fields rather than manipulate the entire structure.

If the replacer is not defined, then all fields of the object will be returned – just as JSON.stringify works in the default case.

Arrays

For arrays, only the keys present in the replacer array would be stringified.

let foo = {
 a : 1,
 b : "string",
 c : false
};
JSON.stringify(foo, ['a', 'b']);
//"{"a":1,"b":"string"}"

Arrays however might not be as flexible as desired; let’s take a sample scenario involving nested objects.

let bar = {
 a : 1,
 b : { c : 2 }
};
JSON.stringify(bar, ['a', 'b']);
//"{"a":1,"b":{}}"

JSON.stringify(bar, ['a', 'b', 'c']);
//"{"a":1,"b":{"c":2}}"

Nested keys are filtered out too unless they appear in the replacer array. If you want more flexibility and control, then defining a function is the way out.

Functions

The replacer function is called for every key value pair and the return values are explained below:

  • Returning undefined drops that field in the JSON representation
  • Returning a string, boolean or number ensures that value is stringified
  • Returning an object triggers another recursive call until primitive values are encountered
  • Returning non-stringifiable values (e.g. functions, Symbols etc.) for a key will result in the field being dropped

let baz = {
 a : 1,
 b : { c : 2 }
};

// return only values greater than 1
let replacer = function (key, value) {
    if(typeof value === 'number') {
        return value > 1 ? value: undefined;
    }
    return value;
};

JSON.stringify(baz, replacer);
// "{"b":{"c":2}}"

There is something to watch out for though: the entire object is passed in as the value in the first call; thereafter recursion begins. See the trace below.

let obj = {
 a : 1,
 b : { c : 2 }
};

let tracer = function (key, value){
  console.log('Key: ', key);
  console.log('Value: ', value);
  return value;
};

JSON.stringify(obj, tracer);
// Key:
// Value: Object {a: 1, b: Object}
// Key: a
// Value: 1
// Key: b
// Value: Object {c: 2}
// Key: c
// Value: 2

5. Space

Have you noticed the default JSON.stringify output? It’s always a single line with no spacing. But what if you wanted to pretty-print some JSON – would you write a function to space it out?

What if I told you it was a one-line fix? Just stringify the object with the tab (‘\t’) space option.

let space = {
 a : 1,
 b : { c : 2 }
};

// pretty format trick
JSON.stringify(space, undefined, '\t');
// "{
//  "a": 1,
//  "b": {
//   "c": 2
//  }
// }"

JSON.stringify(space, undefined, '');
// {"a":1,"b":{"c":2}}

// custom specifiers allowed too!
JSON.stringify(space, undefined, 'a');
// "{
//  a"a": 1,
//  a"b": {
//   aa"c": 2
//  a}
// }"

Puzzler: why does the nested c key have two ‘a’s in its indentation – aa"c"?

Conclusion

This post showed a couple of new tricks and ways to properly leverage the hidden capabilities of JSON.stringify covering:
  • JSON expectations and non-serializable data formats
  • How to use toJSON to define objects properly for JSON serialization
  • The replacer option for filtering out values dynamically
  • The space parameter for formatting JSON output
  • The difference between stringifying arrays and objects containing non-stringifiable fields
Feel free to check out related posts, follow me on twitter or share your thoughts in the comments!

Related

  1. Why JavaScript has two zeros: -0 and +0
  2. JavaScript has no Else If
  3. Deep dive into JavaScript Property Descriptors

Why JavaScript has two zeros: -0 and +0


Do you know there are two valid zero representations in JavaScript?

posZero = +0;
negZero = -0;

In pure mathematics, zero means nothing and its sign doesn’t matter: +0 = -0 = 0. Computers can’t represent real numbers exactly and mostly use the IEEE 754 floating point standard.

Most languages have two zeros!

The IEEE 754 standard for floating point numbers allows for signed zeros, thus it is possible to have both -0 and +0.  Correspondingly, 1 / +0 = +∞ while 1 / -0 = -∞ and these are values at opposite ends of the number line.

  • They can be viewed as vectors with zero magnitude pointing in opposite directions.
  • In the mathematical field of limits, negative and positive zeros show how zero was reached.

These two zeros can lead to issues as shown with the disparate ∞ results.

Why two zeros occur in IEEE 754

There is a bit representing the sign of each numeric value independent of its magnitude. Consequently if the magnitude of a number goes to zero without its sign changing then it becomes a -0.

So why does this matter? Well, JavaScript implements the IEEE 754 standard and this post goes into some of the details.

Keep in mind, the default zero value in JavaScript (and most languages) is actually the positive zero (+0).

The zeros in JavaScript

1. Representation

let a = -0;
a; // -0

let b = +0;
b; // 0

2. Creation

All mathematical operations give a signed zero result (+0 or -0) that depends on the operand values.

The only exceptions to this rule involve addition and subtraction of +0 and -0:

  • Adding two -0 values will always be -0
  • Subtracting a 0 from -0 will also be -0

Any other combination of zero values gives a +0. Another thing to note is that negative zeros cannot be created by the addition or subtraction of non-zero operands. Thus -3 + 3 = 3 - 3 = +0.

The code below shows some more examples.

// Addition and Subtraction
 3 - 3  // 0
-3 + 3  // 0

// Addition of zero values
-0 + -0; // -0
-0 -  0; // -0
 0 -  0; //  0
 0 + -0; //  0

// Multiplication
3 *  0  //  0
3 * -0  // -0

// Division
 3  / Infinity  //  0
-3  / Infinity  // -0

// Modulus
 6 % 2  //  0
-6 % 2  // -0

3. The issue with zero strings

There is a minor niggle with stringifying -0: calling toString on it will always give “0”. On the flip side, parseInt and parseFloat do parse negative zero strings.

Consequently, there is a loss of information in the stringify -> parse round trip; for example, if you convert values to strings (say via JSON.stringify), POST them to some server and then retrieve and parse those strings later, the sign of zero is lost.

let a = -0;
a.toString(); // '0'

parseInt('-0', 10); // -0
parseFloat('-0');   // -0
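A quick round trip shows where the sign disappears – it is lost on the stringify leg, not the parse leg:

```javascript
// The sign is lost when -0 is converted to a string
JSON.stringify(-0);   // '0'

let back = JSON.parse(JSON.stringify(-0));
1 / back;             // Infinity, not -Infinity: the sign is gone

// parsing a '-0' string directly does keep the sign
1 / JSON.parse('-0'); // -Infinity
```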

4. Differentiating between +0 and -0

How would you tell one zero value apart from the other? Let’s try comparison.

-0 === 0;  // true
(-0).toString(); // '0'
0..toString();  // '0'

-0 <  0; // false
 0 < -0; // false

0..toString() is valid JavaScript. Read this to know why.

ES2015’s Object.is method works

Object.is(0, -0); //false

The ES2015 Math.sign method for checking the sign of a number is not much help here since it returns 0 and -0 for +0 and -0 respectively.

Since ES5 has no such helper, we can use the difference in behaviour of +0 and -0 to write one.

function isNegativeZero(value) {
    value = +value; // cast to number
    if(value) {
        return false;
    }
    let infValue = 1 / value;
    return infValue < 0;
}

isNegativeZero(0);    // false
isNegativeZero(-0);   // true
isNegativeZero('-0'); // true

5. Applications

What is the use of knowing all this?

1. One example: if you are doing some machine learning and need to differentiate between positive and negative values for branching, a -0 result coerced into a positive zero could lead to a tricky branching bug.

2. Another usage scenario is for people who write compilers and try to optimize code. Expressions that evaluate to zero, e.g. x * 0, cannot be optimized away since the result depends on the sign of x. Replacing such expressions with a literal 0 would introduce a bug.

3. And know too that lots of languages support IEEE 754. Let’s take C# and Java for example:

// Java
System.out.print(1.0 / 0.0);  // Infinity
System.out.print(1.0 / -0.0); // -Infinity
// C#
Console.WriteLine(1.0 / 0.0);  // Infinity
Console.WriteLine(1.0 / -0.0); // -Infinity;

Try it in your language too!
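The compiler point above is easy to demonstrate in JavaScript itself – x * 0 cannot be folded to a literal 0 because the result’s sign depends on x:

```javascript
// x * 0 is not always +0; the sign of x decides
let x = -3;
1 / (x * 0); // -Infinity, so x * 0 is -0

x = 3;
1 / (x * 0); // Infinity, so x * 0 is +0
```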

6. IEEE specifications

The IEEE specifications lead to the following results

Math.round(-0.4); // -0
Math.round(0.4);  //  0

Math.sqrt(-0);  // -0
Math.sqrt(0);   //  0

1 / -Infinity;  // -0
1 /  Infinity;  //  0

Rounding -0.4 leads to -0 because it is viewed as the limit of a value as it approaches 0 from the negative direction.

The square root rule is one I find strange; the specification says: “Except that squareRoot(–0) shall be –0, every valid squareRoot shall have a positive sign.” If you are wondering, IEEE 754 is also the reason why 0.1 + 0.2 != 0.3 in most languages; but that’s another story.

Thoughts? Do share them in the comments.

Related

Understanding JavaScript Property Descriptors 3


If this is your first time here, you should read part 1 and part 2 of this series first. Then come back here to continue.

Now that we know the basics, this post covers the JavaScript methods for setting and modifying object property descriptors.

1. Object.preventExtensions()

This blocks the addition of new properties to an object. Literally, it prevents extending the object in any way (pun intended) and returns the object.

This is a one-way switch; once an object is made inextensible, there is no way to undo the action – you would have to recreate the object. Another thing to note is that once an object becomes inextensible, its [[Prototype]] can no longer be changed; so be careful especially if ‘inheriting’ or ‘delegating’ to parent types.

There is also the Object.isExtensible method for checking if an object is extensible. This comes in handy because trying to extend an inextensible object in strict mode causes a TypeError.

let obj = { a : 1 };
Object.preventExtensions(obj);
// can't add new properties
obj.b = 3;
obj; // { a : 1 }

// can still change existing properties
obj.a = 3;
obj.a; // 3

Object.isExtensible(obj); // false

Object.getOwnPropertyDescriptor(obj, 'a');
// Object {
//     value: 3,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }
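To see the strict mode TypeError mentioned above, and an Object.isExtensible guard in action (a small sketch):

```javascript
'use strict';
let locked = Object.preventExtensions({ a: 1 });

let threw = false;
try {
    locked.b = 2; // TypeError in strict mode
} catch (e) {
    threw = e instanceof TypeError;
}
threw; // true

// guard before attempting to extend
if (Object.isExtensible(locked)) {
    locked.b = 2; // never runs here
}
locked.b; // undefined
```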

2. Object.seal()

Calling Object.seal on an object achieves the following:

  1. Marks every existing property on the object as non-configurable
  2. Then calls Object.preventExtensions to prevent the addition of new properties

Once an object is sealed, you can’t add new properties, delete existing ones or reconfigure them. All the rules of non-configurability described in earlier posts apply.

Note however that this leaves writable untouched, so it is still possible to change the value of a property (both ways: direct access or Object.defineProperty). However, since configurable is false, you can’t delete it.

The Object.isSealed method also exists for checking sealed objects.

let sealedObj = { a : 1 };
Object.seal(sealedObj);
// non-configurable
delete sealedObj.a; // false
sealedObj.a; // 1 

// can still write
sealedObj.a = 2;
sealedObj.a; // 2

//Check properties
Object.getOwnPropertyDescriptor(sealedObj, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isSealed(sealedObj); // true
Object.isExtensible(sealedObj); // false

As shown above, the configurable property descriptor is now false. All properties of the object would have configurable set as false.

3. Object.freeze()

Similar to seal, calling Object.freeze on an object does the following:

  1. Marks every existing property on the object as non-writable
  2. Invokes Object.seal to prevent the addition of new properties and mark existing properties as non-configurable

Freeze is the highest level of immutability possible using these methods. Properties are now closed to changes due to the false configurable and writable attribute values. And yes, there is the expected Object.isFrozen method too.

let frozenObj = { a : 1 };
Object.freeze(frozenObj);

// non writable
frozenObj.a = 2;
frozenObj.a; // 1

// non configurable
delete frozenObj.a; // false
frozenObj.a; // 1

Object.getOwnPropertyDescriptor(frozenObj, 'a');
// Object {
//     value: 1,
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isFrozen(frozenObj); // true
Object.isSealed(frozenObj); // true
Object.isExtensible(frozenObj); // false

4. Shallow nature

A very important caveat applies when using these methods on properties holding reference values: these methods are all shallow and will not touch the properties inside the referenced values.

So if you freeze an object containing another object, then the contained object properties are not automatically frozen; rather you’d have to write your own recursive implementation to handle that.

let shallow = {
    inner: {
        a : 1
    }
};

Object.freeze(shallow);
shallow.inner = null; // fails
shallow; // { inner : { a : 1 } }

// inner properties not frozen
shallow.inner.a = 2;
shallow.inner.a; // 2

Object.getOwnPropertyDescriptor(shallow, 'inner');
// Object {
//     value: {a : 1},
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

Object.getOwnPropertyDescriptor(shallow.inner, 'a');
// Object {
//     value: 1,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.isFrozen(shallow); // true
Object.isFrozen(shallow.inner); // false

As the property descriptors above show, the inner property on the outer object is frozen, but the inner object’s own properties are not.

Conclusion

Well, that about wraps it up! I hope you enjoyed the series and learnt a lot. Do let me know your thoughts and continue reading!

  1. Deep dive into JavaScript Property Descriptors
  2. Understanding JavaScript Property Descriptors 2

Understanding JavaScript Property Descriptors 2


If this is your first time here, you should read the first post in this series. Then come back to this to continue.

Continuing with the dive into property descriptors, this post goes deeply into the properties, what they mean and how they can be used.

1. Modifying existing properties

The Object.defineProperty method allows users to create and modify properties. When the property already exists, defineProperty modifies it in place.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: true
});
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar',
//     writable: true,
//     enumerable: false,
//     configurable: false
// }

Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: false
});
obj1.foo; // bar
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar', // unchanged
//     writable:false, // updated
//     enumerable: false,
//     configurable: false
// }

Now that we know how to modify properties, let’s dive into the nitty-gritty. Take a deep breath, ready, set, go!

2. Writable

If this flag is true, then the value of the property can be changed. Otherwise, changes would be rejected. And if you are using strict mode (and you should!), you’ll get a TypeError.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
  value: 'bar',
  writable: true
});
obj1.foo; // bar

// change value
obj1.foo = 'baz';
obj1.foo; // baz

This can be used to set up ‘constant’ properties that you don’t want overwritten. You might ask: what happens if someone just flips the writable flag back and brute-forces the overwrite? Let’s see what happens in that scenario.

Re-using the same obj1, let’s now flip writable to false.

Object.defineProperty(obj1, 'foo', {
    writable: false
});
obj1.foo; // baz
obj1.foo = 'bar'; // TypeError in strict mode
obj1.foo; // baz

// Try making property writable again
Object.defineProperty(obj1, 'foo', {
    writable: true
});
// Uncaught TypeError:
// Cannot redefine property: foo(…)

So you see, that’s safe! Once writable is false, it can’t be reset to true ever again. It’s a one way switch!

Wait a bit; there is still a hitch. If the property is still configurable, then there is a bypass to this. Let’s explore the configurable property.

3. Configurable

Setting writable to false only prevents changing the value however it doesn’t mean the property is not modifiable. To bypass the write-block, a user can just delete the property and then recreate it. Let’s see how.

let obj2 = {};
Object.defineProperty(obj2, 'foo', {
  value: 'bar',
  writable: false,
  configurable: true
});

//bypass
delete obj2.foo;
obj2.foo = 'CHANGED!';
obj2.foo; //CHANGED

So if you don’t want someone changing your object properties, how would you go about that? The way to prevent third-party consumers from making changes to your properties is via setting configurable to false. Once set, it prevents the following:

  • Deleting that object property
  • Changing any other descriptor attributes. The only exception to this rule is that writable can be changed from true to false. Otherwise, every call to defineProperty throws a TypeError. (Re-setting the same values doesn’t throw, but that makes no difference anyway.)

And just like the writable flag, this change is a one-way switch. Once configurable is set to false, you can’t reset it to true afterwards.

let obj3 = {};
Object.defineProperty(obj3, 'foo', {
  value: 'bar',
  writable: true,
  configurable: false
});

Object.defineProperty(obj3, 'foo', {
    enumerable: false
});
// TypeError: Cannot redefine property: foo

// bypass fails now
delete obj3.foo; // false (non-configurable)
obj3.foo; // bar

// Can change writable to false
Object.defineProperty(obj3, 'foo', {
    writable: false
});
obj3.foo = 8;
obj3.foo; // bar

So to create immutable properties on Objects, you would consider setting both writable and configurable fields to false.

4. Enumerable

This determines whether the property shows up when enumerating object properties, for example in for..in loops or Object.keys. However, it has no impact on whether you can use the property itself.

But why would you want to make properties non-enumerable?

1. JSON serialization

Usually, we build objects from JSON data retrieved over XHR calls. These objects are then enhanced with a couple of new properties. When POSTing the data back, developers typically create a new object with only the extracted properties.

If those property enhancements are non-enumerable, then calling JSON.stringify on the object will automatically drop them. Since JSON.stringify also drops functions, this can be an easy way to serialize data accurately.

2. Mixins

Another application could be mixins, which add extra behaviour to objects. If a mixin has an enumerable getter accessor property, then that calculated property will automatically show up in Object.keys and for..in loops; the getter behaves just like any other property. Pretty neat – it reminds me of Ember’s computed properties and I wouldn’t be surprised if it’s the same thing under the hood. On the flip side, you could set enumerable to false to turn off this behaviour.

Unlike writable and configurable, enumerable is a two-way switch. You can set it back to true if it was false before.

Some code examples:

let obj4 = {
    name: 'John',
    surname: 'Smith'
};
Object.defineProperty(obj4, 'fullName', {
  get: function() {
      return this.name + ' ' + this.surname;
  },
  enumerable: true,
  configurable: true
});

let keys = Object.keys(obj4);
//['name', 'surname', 'fullName']

keys.forEach(k => console.log(obj4[k]));
// John, Smith, John Smith

JSON.stringify(obj4);
// "{"name":"John",
//   "surname":"Smith",
//   "fullName":"John Smith"}"

// can reset to false
Object.defineProperty(obj4, 'fullName', {
    enumerable: false
});
Object.keys(obj4);
// ["name", "surname"]

JSON.stringify(obj4);
// "{"name":"John","surname":"Smith"}"

5. Value, Get and Set

  1. An object property cannot have both the value and getter/setter descriptors. You’ve got to choose one.
  2. Value can be pretty much anything – primitives or built-in types. It can even be a function.
  3. You can use the getter and setters to mock read-only properties. You can even have the setter throw Exceptions when users try to set it.
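As an illustration of point 3, a sketch of a read-only property whose setter throws (the shape here is my own example):

```javascript
// A read-only accessor property: the getter answers, the setter refuses
let config = {};
Object.defineProperty(config, 'mode', {
    get: function () { return 'production'; },
    set: function () { throw new TypeError('mode is read-only'); },
    enumerable: true
});

config.mode; // 'production'

let threw = false;
try {
    config.mode = 'debug'; // the setter throws
} catch (e) {
    threw = e instanceof TypeError;
}
threw; // true
```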

6. Extras

  1. These properties are all shallow and not deep. You probably have to roll your own recursive helper for deep property setting.
  2. You can examine built-in types and modify some of their properties. For example, you can delete the fromCharCode method of String. Don’t know why you would want that though…
  3. The propertyIsEnumerable method checks if a property is enumerable. No, there are no propertyIsWritable or propertyIsConfigurable methods.

Now, read the third post in this series, Understanding JavaScript Property Descriptors 3, or check out other related articles:

Related

  1. Deep dive into JavaScript Property Descriptors
  2. Learning ES2015 : let, const and var

Deep dive into JavaScript Property Descriptors


Creating Object Properties

There are a couple of ways to assign properties to objects in JavaScript. The most common example is using obj.field = value or obj['field'] = value. This approach is simple; however, it is not flexible because it automatically fills in the property descriptor fields with default values.

let obj1 = {
    foo: 'bar'
};

let obj2 = {
    get foo() {
        return 'bar';
    }
};

let obj3 = Object.create({}, { foo : { value : 'bar' } });

let obj4 = Object.create({}, {
    foo : {
        get : function() { return 'bar'; }
    }
});

obj1.foo; // bar
obj2.foo; // bar
obj3.foo; // bar
obj4.foo; // bar

In all four objects, the foo property returns the same result. But are they the same? Obviously not. This post series examines these differences and shows how you can apply and leverage these capabilities.

Data and Accessor Property Descriptors

Property descriptors hold descriptive information about object properties. There are two types of property descriptors:

  1. Data descriptors – which only hold information about data values
  2. Accessor descriptors – which hold information about accessor (get/set) functions.

A property descriptor is a data structure with a couple of identifying fields, some are shared between both types while the others apply to a single type as shown below.

Field          Data descriptor    Accessor descriptor
value          Yes                No
writable       Yes                No
enumerable     Yes                Yes
configurable   Yes                Yes
get            No                 Yes
set            No                 Yes

Viewing Property Descriptor information

The Object.getOwnPropertyDescriptor method allows you to retrieve the property descriptor for any object property.

let dataDescriptor = Object.getOwnPropertyDescriptor(obj1, 'foo');
dataDescriptor;
// Object {
//     value: "bar",
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

let accessorDescriptor = Object.getOwnPropertyDescriptor(obj2, 'foo');
accessorDescriptor;
// Object {
//     get: function foo() {},
//     set: undefined,
//     enumerable: true,
//     configurable: true
// }

Data Descriptor only fields

1. Value: Holds the value of the property.

2. Writable: Boolean indicating whether the property value can be changed. This can be used to create ‘constant’ field values, especially for primitive values.
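For example – the constants object below is just an illustration:

```javascript
let constants = {};
Object.defineProperty(constants, 'PI', {
  value: 3.14159,
  writable: false,  // writes to PI are now rejected
  enumerable: true,
  configurable: false
});

constants.PI; // 3.14159
// constants.PI = 42; // ignored in sloppy mode, TypeError in strict mode
```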

Accessor Descriptor only fields

1. Get: Function which will be invoked whenever the property is to be retrieved. This is similar to getters in other languages.

2. Set: Function that would be invoked when the property is to be set. It’s the setter function.

Shared fields

1. Enumerable: Boolean indicating whether the property can be enumerated. This determines if the property shows up during enumeration, for example in for..in loops or Object.keys.

2. Configurable: Boolean indicating whether the type of the property can be changed and if the property can be deleted from the object.
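A short sketch of configurable: false in action (the record object is made up):

```javascript
let record = {};
Object.defineProperty(record, 'id', {
  value: 7,
  writable: true,
  configurable: false
});

// delete record.id; // returns false in sloppy mode, TypeError in strict mode
record.id; // 7

// Changing the property's type is also blocked:
// Object.defineProperty(record, 'id', { get: function() { return 8; } });
// TypeError: Cannot redefine property: id
```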

Setting Property Descriptors

The Object.defineProperty method allows you to specify and define these property descriptor fields. It takes the object, property key and a bag of descriptor values.

let obj5 = {};
Object.defineProperty(obj5, 'foo', {
  value: 'bar',
  writable: true,
  enumerable: true,
  configurable: true
});
obj5.foo; // bar

let obj6 = {};
Object.defineProperty(obj6, 'foo', {
  get: function() { return 'bar'; }
});
obj6.foo; // bar

Default values

All boolean descriptor fields default to false while the getter, setter and value properties default to undefined. This is an important detail that is most visible when creating and modifying properties via object assignment or the defineProperty method.

let sample = { a : 2 };
Object.defineProperty(sample, 'b', { value: 4 });
sample; // { a: 2, b:4 }

Object.getOwnPropertyDescriptor(sample, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.getOwnPropertyDescriptor(sample, 'b');
// Object {
//     value: 4,
//     writable: false,
//     enumerable: false,
//     configurable: false
// }

sample.b = 'cannot change'; //writable = false
sample.b //4

delete sample.b //configurable=false
sample.b //4

Object.keys(sample); //enumerable = false
// ['a']

Because the other descriptor fields of property b were not set at creation, they default to false. This effectively makes b non-writable, non-configurable and non-enumerable on sample.

Validating property existence

Three tricky scenarios:

  • Accessing non-existent property fields results in undefined
  • Due to the default rules, accessing existing property fields with no value set also gives undefined
  • Finally, it is possible to define a property with the value undefined

So how do you verify whether a property actually exists with the value undefined, or doesn’t exist at all on an object?

let obj = { a: undefined };
Object.defineProperty(obj, 'b', {}); //use defaults

obj.a; //undefined
obj.b; //undefined
obj.c; //undefined

The way out of this is the hasOwnProperty function.

obj.hasOwnProperty('a'); //true
obj.hasOwnProperty('b'); //true
obj.hasOwnProperty('c'); //false

Conclusion

There is still a lot more to these fields and how to use them, but that would make this post too long – hence the series. The next post focuses on each field and what it can be used for.

Teasers before the next post

  • Try invoking a getter property as a function to see what happens. Can you explain why?
  • Try modifying some of the descriptor properties of native JavaScript objects e.g. RegExp, Array, Object etc. What happens?

Related

Read the second post in this series or check out other related articles:

Why I am moving to Angular 2


I started poking into core Angular 2 concepts a few weeks ago and it has been a pleasant experience so far. I rewrote a bare-bones replica of an Angular 1 app that took me months in about 2 or 3 weeks. Although rewrites are typically faster due to familiarity, it was impressive seeing built-in support for most of the painful areas of Angular.

Yes, there is some cost due to the absence of backwards compatibility but hey, you can’t have it all. If you are thinking of choosing between Angular 1 or Angular 2, I’ll say go for Angular 2; it’s totally worth it. However, if you already have an Angular 1 app, then you should evaluate the ROI and impact of the move on your team and delivery schedules.

1. Much Simpler

Both frameworks have steep learning curves, however I believe Angular 2 tries to simplify most of the confusing concepts of Angular 1.

The various derivatives of the $provider (value, constant, factory, service and provider itself) are all gone – everything is just a service now. The same goes for $scope; that powerful but hard-to-manage feature has been eliminated.

Error messages are much clearer and point you to the root cause faster, unlike Angular 1, which had some error messages that had to be ‘learnt’ over time for root-cause correlation.

The move to components, services and established modules and routes makes it easier to design and create components.

2. Better Tooling

Angular-cli is a great tool that reminds me of ember-cli; it’s great that the Angular team finally provided first-class support for this. Apart from the staples of project scaffolding, testing (unit + E2E) and linting, the cli also supports pushing to Github (it will even create a repo for you!), proxying and build targets. Big wins!!

Augury worked well for me out of the box; I remember dropping batarang after running into lots of problems.

Codelyzer is another great tool that helps you to write consistent code conforming to your style guidelines across teams.

3. Typescript

Typescript is the main language for Angular 2, although there is support for JavaScript and Dart. This should make it more amenable to adoption by larger enterprises.

JavaScript can be difficult to manage at scale; I guess this affects all weakly typed languages. Refactoring – say, renaming a module in a 100,000-line codebase – quickly becomes a pain point and is hard to do well. Static typing does help in that case.

4. Reactive Programming

Angular 2 is built with reactive programming in mind. It bundles Rxjs, part of the reactive extensions library which pushes you to use Observables and all the reactive goodness.

It can be challenging to wrap your head around functional reactive programming. Simply put, you need to understand the five building blocks of functional programming – map, reduce, zip, flatten and filter. With these, you can compose and combine solutions to a wide variety of problems; Hadoop, for instance, is essentially map-reduce ramped up to cluster scale. The framework’s support for reactive concepts (e.g. observables) is deeply ingrained in a wide variety of places: routing, http and templates.
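To make that concrete, here is a plain-JavaScript sketch composing three of those building blocks (the orders data is made up):

```javascript
let orders = [
  { amount: 40, paid: true },
  { amount: 25, paid: false },
  { amount: 10, paid: true }
];

// filter -> map -> reduce: total up the paid orders
let total = orders
  .filter(function(o) { return o.paid; })           // keep paid orders only
  .map(function(o) { return o.amount; })            // project out the amounts
  .reduce(function(sum, n) { return sum + n; }, 0); // fold into one total

total; // 50
```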

There is also support for promises but I think mixing Promises and Streams would lead to confusion. Choose one style and stick to it.

Want to learn more about streams? Check out my stream library and accompanying blog post.

5. Routing

Route guards, resolvers, router-link directives and more are a pure delight. Support for modular component routing is impressive too; this allows modules to have independent routing. So you can just pluck them out if you don’t need them anymore.

Angular 1’s routing was difficult to use because it operated at the global level. Yes, there were other routing implementations (proof of Angular’s extensibility) that helped with things like having multiple outlets in a page.

The good thing about Angular 2 is that all of this is built in, which means you can easily implement a consistent approach to routing across your app.

6. Modularity

Angular 2 comes with better modularity; you can declare modular blocks and use them to compose your application.

Angular 2 allows you to define components that control their routing, layout, sub-component make up and more. Imagine you are creating some web application to monitor social media platforms. I would imagine you’d have top-level navigation tabs for things like Facebook, Twitter and LinkedIn.

It’s possible to define each of these three as top-level modules on their own and then register them in the core app. So the Facebook module ideally should be able to handle its own routing, component and styling and more separately from the Twitter module. An extra benefit is that; you can take this module and re-use it in some other totally different project! That’s simply awesome.

Conclusion

Angular 2 is still new and though it’s been out there for some time; there is still a concern about how it would ‘perform’ at scale. The good thing though is that it handles most of the issues with Angular 1 really well.

Sure, there might be issues in the future but at least they would be new mistakes :)

Book Review:Build your own AngularJS


As part of my continuous learning; I started reading Tero Parviainen‘s ‘Build your own AngularJS‘ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building bricks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike the regular transclude which will include some DOM structure in some new location; element transclusion provides control over the element itself.

So why is this important? Imagine a directive you can add to an element so that it only shows up under certain conditions. Element transclusion ensures that the DOM structure is only created and linked when you need it. Need some DOM content repeated n times? Use element transclusion to clone and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The suite of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!), the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box but as expected, there would be people who want a bit more power and they can get desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }}
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components

4. Simplified tooling

I have used grunt and gulp extensively in the past however the book used npm in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file uses a pattern that allows two functions to pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: The other insightful trick was using function objects to ensure reference value integrity. Create a function to use as the reference.
  2. As dictionaries: functions are objects after all and while it is unusual to use them as objects, there is nothing saying you can’t.

function a() {}

a.extraInfo = 'extra'; // functions are objects; you can attach data to them
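The reference-value trick deserves a sketch too; initWatchVal echoes the sentinel used in the book’s digest loop, but treat the surrounding code as illustrative:

```javascript
// A function object is a unique reference: no user-supplied value can
// ever be === to it, so 'first run' stays unambiguous even for undefined.
function initWatchVal() {}

function hasChanged(newValue, oldValue) {
  return newValue !== oldValue || oldValue === initWatchVal;
}

hasChanged(undefined, initWatchVal); // true (first run detected)
hasChanged(undefined, undefined);    // false (genuinely unchanged)
```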

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1,000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only gripes had to do with dependency resolution; there were a few problems with outdated dependencies but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure felt and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to 2, but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

How to track errors in JavaScript Web applications


Your wonderful one-of-a-kind web application just had a successful launch and your user base is rapidly growing. To keep your customers satisfied, you have to know what issues they face and address those as fast as possible.

One approach is to be reactive and wait for them to call in – however, most customers won’t do this; they might just stop using your app. Alternatively, you can be proactive and log errors as soon as they occur in the browser, helping you roll out fixes fast.

But first, what error kinds exist in the browser?

Errors

There are two kinds of errors in JavaScript: runtime errors, which have the window object as their target, and resource errors, which have the originating element as their target.

Since errors are events, you can catch them by using the addEventListener method on the appropriate target (window or the source element). The WHATWG standard also specifies onerror handlers for both cases that you can use to grab errors.

Detecting Errors

One of JavaScript’s strengths (and also a source of much trouble too) is its flexibility. In this case, it’s possible to write wrappers around the default onerror handlers or even override them to instrument error logging automation.

Thus, these can serve as entry points for logging to external monitors or even sending messages to other application handlers.

//logger is an error logger
var original = window.onerror; //if you still need a handle to this
window.onerror = function(message, source, lineNo, columnNo, errObject) {
    logger.log('error', {
        message: message,
        stack: errObject && errObject.stack //errObject is missing in some browsers
    });
    if (original) {
        original.apply(this, arguments); //invoke the original handler too
    }
};

var elemOriginal = element.onerror;
element.onerror = function(event) {
    logger.log('error', {
        message: event.message,
        stack: event.error && event.error.stack
    });
    if (elemOriginal) {
        elemOriginal.apply(this, arguments);
    }
};

The Error Object

The interface for this contains the error message and the optional fileName and lineNumber values. However, the most important part is the stack property, which provides a trace of the call stack at the point the error was created.

Note: Stack traces vary from browser to browser as there exists no formatting standard.
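A quick way to inspect these fields (the exact message wording and stack format vary per engine):

```javascript
let caught;
try {
  JSON.parse('{oops'); // malformed JSON throws a SyntaxError
} catch (err) {
  caught = err;
}

caught.message;      // engine-specific description of the parse error
typeof caught.stack; // "string" in all major engines, though stack is non-standard
```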

Browser compatibility woes

Nope, you ain’t getting away from this one.

Not all browsers pass in the errorObject (the 5th parameter) to the window.onerror function. Arguably, this is the most important parameter since it provides the most information.

Currently the only big 5 browser that doesn’t pass in this parameter is the Edge browser – cue the only ‘edge’ case. Safari finally added support in June.

The good news though is there is a workaround! Hurray! Let’s go get our stack again.

window.addEventListener('error', function(errorEvent) {
    logger.log('error', {
        message: errorEvent.message,
        stack: errorEvent.error && errorEvent.error.stack
    });
});

And that wraps it up! You now have a way to track errors when they happen in production. Happier customers, happier you.

Note: The addEventListener and window.onerror approaches might need to be used in tandem to ensure enough coverage across browsers. Also, error events caught in the listener will still propagate to the onerror handler, so you might want to filter out duplicated events or cancel the default handlers.
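One possible way to filter those duplicates – a sketch with a made-up key scheme, not a standard approach:

```javascript
// Remember recently reported errors so the addEventListener and
// onerror paths don't log the same event twice.
let reported = new Set();

function reportOnce(message, source, lineNo) {
  let key = message + '|' + source + '|' + lineNo;
  if (reported.has(key)) {
    return false; // duplicate: skip logging
  }
  reported.add(key);
  // logger.log('error', { message: message }); // hypothetical logger call
  return true;
}

reportOnce('boom', 'app.js', 10); // true  (first report goes through)
reportOnce('boom', 'app.js', 10); // false (duplicate suppressed)
```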

Related

Tips for printing from web applications

Liked this article? Please share, subscribe or drop a comment.