What you didn’t know about JSON.stringify

JSON is the ubiquitous data format that has become second nature to engineers all over the world. This post shows you how to achieve much more with JavaScript’s native JSON.stringify method.

A quick refresher about JSON and JavaScript:

  • Not all valid JSON is valid JavaScript
  • JSON is a text-only format, no blobs please
  • Numbers are only base 10.

1. JSON.stringify

This returns the JSON-safe string representation of its input parameter. Note that non-stringifiable fields will be silently stripped off as shown below:

let foo = { a: 2, b: function() {} };
JSON.stringify(foo);
// '{"a":2}'

What other types are non-stringifiable? 

Circular references

Since such objects point back at themselves, it’s quite easy to get into a non-ending loop. I once ran into a similar issue with memq in the past.

let foo = {};
foo.b = foo; // foo points back at itself
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

// Arrays
foo = [foo];
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

Symbols and undefined

let foo = { b: undefined };
JSON.stringify(foo);
// "{}"

// Symbols
foo.b = Symbol();
JSON.stringify(foo);
// "{}"


Arrays containing non-stringifiable entries are handled specially though.

let foo = [Symbol(), undefined, function() {}, 'works'];
JSON.stringify(foo);
// '[null,null,null,"works"]'

Non-stringifiable fields get replaced with null in arrays and dropped in objects. The special array handling helps ‘preserve’ the shape of the array. In the example above, if the array entries were dropped as happens in objects, then the output would have been ["works"]. A single-element array is very different from a four-element one.

I would argue for using null in objects too instead of dropping the fields. That way, we get a consistent behaviour and a way to know fields have been dropped.
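A replacer function (covered later in this post) can emulate that consistent behaviour; here is a minimal sketch that maps non-stringifiable object values to null instead of dropping them:

```javascript
// Sketch: serialize non-stringifiable object values as null instead of dropping them
let replacer = function (key, value) {
    if (typeof value === 'function' || typeof value === 'symbol' || value === undefined) {
        return null;
    }
    return value;
};

let mixed = { a: 2, b: function () {}, c: undefined };
JSON.stringify(mixed, replacer);
// '{"a":2,"b":null,"c":null}'
```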

Why aren’t all values stringifiable?

Because JSON is a language agnostic format.

For example, let us assume JSON allowed exporting functions as strings. With JavaScript, it would be possible to eval such strings in some scenarios. But what context would such eval-ed functions be evaluated in? What would that mean in a C# program? And how would you even represent language-specific values (e.g. JavaScript Symbols)?

The ECMAScript standard highlights this point succinctly:

It does not attempt to impose ECMAScript’s internal data representations on other programming languages. Instead, it shares a small subset of ECMAScript’s textual representations with all other programming languages.

2. Overriding toJSON on object prototypes

One way to bypass the non-stringifiable fields issue in your objects is to implement the toJSON method. And since nearly every AJAX call involves a JSON.stringify call somewhere, this can lead to a very elegant trick for handling server communication.

This approach is similar to toString overrides that allow you to return representative strings for objects. Implementing toJSON enables you to sanitize your objects of non-stringifiable fields before JSON.stringify converts them.

function Person (first, last) {
    this.firstName = first;
    this.last = last;
}

Person.prototype.process = function () {
    return this.firstName + ' ' + this.last;
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// '{"firstName":"Ade","last":"P"}'

As expected, the instance process function is dropped. Let’s assume however that the server only wants the person’s full name. Instead of writing a dedicated converter function to create that format, toJSON offers a more scalable alternative.

Person.prototype.toJSON = function () {
    return { fullName: this.process() };
};

JSON.stringify(ade);
// '{"fullName":"Ade P"}'

The strength of this lies in its reusability and stability. You can use the ade instance with virtually any library and anywhere you want. You control exactly the data you want serialized and can be sure it’ll be created just as you want.

// jQuery
$.post('endpoint', ade);

// Angular 2
this.httpService.post('endpoint', ade)

Point: toJSON doesn’t create the JSON string itself; it only determines the value that will be serialized. The call chain looks like this: toJSON -> JSON.stringify.
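The built-in Date type already uses this hook: Date.prototype.toJSON returns an ISO string, which JSON.stringify then serializes as a quoted JSON string.

```javascript
// Date implements toJSON natively; stringify quotes its return value
let d = new Date(Date.UTC(2017, 0, 1));
d.toJSON();        // "2017-01-01T00:00:00.000Z"
JSON.stringify(d); // '"2017-01-01T00:00:00.000Z"'
```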

3. Optional arguments

The full signature of stringify is JSON.stringify(value, replacer?, space?). I am copying the TypeScript ? style for identifying optional values. Now let’s dive into the replacer and space options.

4. Replacer

The replacer is a function or array that allows selecting fields for stringification. It differs from toJSON by allowing users to select choice fields rather than manipulate the entire structure.

If the replacer is not defined, then all fields of the object will be returned – just as JSON.stringify works in the default case.

If the replacer is an array, only the keys present in that array will be stringified.

let foo = {
    a: 1,
    b: 'string',
    c: false
};
JSON.stringify(foo, ['a', 'b']);
// '{"a":1,"b":"string"}'

Arrays, however, might not be as flexible as desired; let’s take a sample scenario involving nested objects.

let bar = {
    a: 1,
    b: { c: 2 }
};
JSON.stringify(bar, ['a', 'b']);
// '{"a":1,"b":{}}'

JSON.stringify(bar, ['a', 'b', 'c']);
// '{"a":1,"b":{"c":2}}'

Even keys nested inside objects are filtered out; the array filter applies at every level, so c has to be listed explicitly. Assuming you want more flexibility and control, then defining a function is the way out.


The replacer function is called for every key value pair and the return values are explained below:

  • Returning undefined drops that field in the JSON representation
  • Returning a string, boolean or number ensures that value is stringified
  • Returning an object triggers another recursive call until primitive values are encountered
  • Returning non-stringifiable values (e.g. functions, Symbols etc.) for a key will result in the field being dropped
let baz = {
    a: 1,
    b: { c: 2 }
};

// return only values greater than 1
let replacer = function (key, value) {
    if (typeof value === 'number') {
        return value > 1 ? value : undefined;
    }
    return value;
};

JSON.stringify(baz, replacer);
// '{"b":{"c":2}}'

There is something to watch out for though: the entire object is passed in as the value in the first call; thereafter recursion begins. See the trace below.

let obj = {
    a: 1,
    b: { c: 2 }
};

let tracer = function (key, value) {
    console.log('Key: ', key);
    console.log('Value: ', value);
    return value;
};
JSON.stringify(obj, tracer);
// Key:
// Value: Object {a: 1, b: Object}
// Key: a
// Value: 1
// Key: b
// Value: Object {c: 2}
// Key: c
// Value: 2

5. Space

Have you noticed the default JSON.stringify output? It’s always a single line with no spacing. But what if you wanted to pretty-print some JSON, would you write a function to space it out?

What if I told you it was a one-line fix? Just stringify the object with the tab ('\t') space option.

let space = {
    a: 1,
    b: { c: 2 }
};

// pretty format trick
JSON.stringify(space, undefined, '\t');
// "{
//  "a": 1,
//  "b": {
//   "c": 2
//  }
// }"

JSON.stringify(space, undefined, '');
// {"a":1,"b":{"c":2}}

// custom specifiers allowed too!
JSON.stringify(space, undefined, 'a');
// "{
//  a"a": 1,
//  a"b": {
//   aa"c": 2
//  a}
// }"

Puzzler: why does the nested c key have two ‘a’s in its representation – aa"c"?
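Incidentally, the space option also accepts a number between 0 and 10, interpreted as the count of indentation spaces (string specifiers are likewise clamped to their first 10 characters). A quick sketch:

```javascript
// a numeric space argument indents by that many spaces per level
let indented = JSON.stringify({ a: 1, b: { c: 2 } }, undefined, 2);
// "{
//   "a": 1,
//   "b": {
//     "c": 2
//   }
// }"
```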


This post showed a couple of new tricks and ways to properly leverage the hidden capabilities of JSON.stringify covering:
  • JSON expectations and non-serializable data formats
  • How to use toJSON to define objects properly for JSON serialization
  • The replacer option for filtering out values dynamically
  • The space parameter for formatting JSON output
  • The difference between stringifying arrays and objects containing non-stringifiable fields
Feel free to check out related posts, follow me on Twitter or share your thoughts in the comments!


  1. Why JavaScript has two zeros: -0 and +0
  2. JavaScript has no Else If
  3. Deep dive into JavaScript Property Descriptors

Why JavaScript has two zeros: -0 and +0

Do you know there are two valid zero representations in JavaScript?

posZero = +0;
negZero = -0;

In pure mathematics, zero means nothing and its sign doesn’t matter: +0 = -0 = 0. Computers, however, can’t represent values exactly and mostly use the IEEE 754 floating point standard.

Most languages have two zeros!

The IEEE 754 standard for floating point numbers allows for signed zeros, thus it is possible to have both -0 and +0.  Correspondingly, 1 / +0 = +∞ while 1 / -0 = -∞ and these are values at opposite ends of the number line.

  • They can be viewed as vectors with zero magnitude pointing in opposite directions.
  • In the mathematical field of limits, negative and positive zeros show how zero was reached.

These two zeros can lead to issues as shown with the disparate ∞ results.

Why two zeros occur in IEEE 754

There is a bit representing the sign of each numeric value independent of its magnitude. Consequently if the magnitude of a number goes to zero without its sign changing then it becomes a -0.

So why does this matter? Well, JavaScript implements the IEEE 754 standard and this post goes into some of the details.

Keep in mind, the default zero value in JavaScript (and most languages) is actually the signed zero (+0).

The zeros in JavaScript

1. Representation

let a = -0;
a; // -0

let b = +0;
b; // 0

2. Creation

All mathematical operations give a signed zero result (+0 or -0) that depends on the operand values.

The only exceptions to this rule involve addition and subtraction of +0 and -0.

  • Adding two -0 values will always be -0
  • Subtracting a 0 from -0 will also be -0

Any other combination of zero values gives a +0. Another thing to note is that negative zeros cannot be created as a result of addition or subtraction of non-zero operands. Thus -3 + 3 = 3 - 3 = +0.

The code below shows some more examples.

// Addition and Subtraction
 3 - 3  // 0
-3 + 3  // 0

// Addition of zero values
-0 + -0; // -0
-0 -  0; // -0
 0 -  0; //  0
 0 + -0; //  0

// Multiplication
3 *  0  //  0
3 * -0  // -0

// Division
 3  / Infinity  //  0
-3  / Infinity  // -0

// Modulus
 6 % 2  //  0
-6 % 2  // -0

3. The issue with zero strings

There is a minor niggle with stringifying -0: calling toString on it will always give “0”. On the flip side, parseInt and parseFloat do parse negative zero values.

Consequently, there is a loss of information in the stringify -> parse transformation; for example, if you convert values to strings (say via JSON.stringify), POST them to some server and then parse those strings later, any -0 will come back as +0.

let a = -0;
a.toString(); // '0'

parseInt('-0', 10); // -0
parseFloat('-0');   // -0
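The same information loss shows up in a JSON round trip: JSON.stringify serializes -0 as "0", so the sign never survives the trip.

```javascript
// a JSON round trip silently drops the sign of zero
JSON.stringify(-0);                    // "0"
let roundTripped = JSON.parse(JSON.stringify(-0));
Object.is(roundTripped, -0);           // false, the sign is gone
```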

4. Differentiating between +0 and -0

How would you tell one zero value apart from the other? Let’s try comparison.

-0 === 0;  // true
-0..toString(); // '0'
0..toString();  // '0'

-0 <  0; // false
 0 < -0; // false

0..toString() is valid JavaScript. Read this to know why.

ES2015’s Object.is method works

Object.is(0, -0); //false

The ES2015’s Math.sign method for checking the sign of a number is not of too much help since it returns 0 and -0 for +0 and -0 respectively.

Since ES5 has no such helper, we can use the difference in behaviour of +0 and -0 to write one.

function isNegativeZero(value) {
    value = +value; // cast to number
    if (value) {
        // non-zero values can't be -0
        return false;
    }
    let infValue = 1 / value;
    return infValue < 0;
}
isNegativeZero(0);    // false
isNegativeZero(-0);   // true
isNegativeZero('-0'); // true

5. Applications

What is the use of knowing all this?

1. One example: if you are doing some machine learning and need to differentiate between positive and negative values for branching, a -0 result coerced into a positive zero could lead to a tricky branching bug.

2. Another usage scenario is for people who write compilers and try to optimize code. Expressions that would result in zero, e.g. x * 0, cannot be replaced with a constant 0 since the result depends on the sign of x; optimizing such expressions away would introduce a bug.

3. And know too that there are lots of languages that support IEEE 754. Let’s take Java and C# for example:

// Java
System.out.print(1.0 / 0.0);  // Infinity
System.out.print(1.0 / -0.0); // -Infinity
// C#
Console.WriteLine(1.0 / 0.0);  // Infinity
Console.WriteLine(1.0 / -0.0); // -Infinity;

Try it in your language too!
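Returning to the compiler point above, a quick check shows why folding x * 0 down to a constant 0 would be wrong (timesZero is a made-up illustration):

```javascript
function timesZero(x) {
    return x * 0; // cannot be constant-folded to 0
}

Object.is(timesZero(3), 0);   // true
Object.is(timesZero(-3), -0); // true, a folded 0 would lose the sign
timesZero(Infinity);          // NaN, another reason the fold is unsafe
```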

6. IEEE specifications

The IEEE specifications lead to the following results:

Math.round(-0.4); // -0
Math.round(0.4);  //  0

Math.sqrt(-0);  // -0
Math.sqrt(0);   //  0

1 / -Infinity;  // -0
1 /  Infinity;  //  0

Rounding -0.4 leads to -0 because it is viewed as the limit of a value as it approaches 0 from the negative direction.

The square root rule is one I find strange; the specification says: “Except that squareRoot(–0) shall be –0, every valid squareRoot shall have a positive sign.” If you are wondering, IEEE 754 is also the reason why 0.1 + 0.2 != 0.3 in most languages; but that’s another story.

Thoughts? Do share them in the comments.


Understanding JavaScript Property Descriptors 3

If this is your first time here, you should read the part 1 and part 2 of this series. Then come back to this to continue.

Now that we know the basics, this post covers the JavaScript methods for setting and modifying object property descriptors.

1. Object.preventExtensions()

This blocks the addition of new properties to an object. Literally, it prevents extending the object in any way (pun intended) and returns the object.

This is a one-way switch: once an object is made inextensible, there is no way to undo the action – just recreate the object. Another thing to note is that once an object becomes inextensible, its prototype can no longer be changed either; so be careful especially if ‘inheriting’ or ‘delegating’ to parent types.

There is also the Object.isExtensible method for checking whether an object has been made inextensible. This comes in handy because trying to extend such objects in strict mode would throw a TypeError.

let obj = { a : 1 };
Object.preventExtensions(obj);

// can't add new properties
obj.b = 3;
obj; // { a : 1 }

// can still change existing properties
obj.a = 3;
obj.a; // 3

Object.isExtensible(obj); // false

Object.getOwnPropertyDescriptor(obj, 'a');
// Object {
//     value: 3,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

2. Object.seal()

Calling Object.seal on an object achieves the following:

  1. Marks every existing property on the object as non-configurable
  2. Calls Object.preventExtensions to prevent new properties from being added

Once an object is sealed, you can’t add new properties or reconfigure the existing ones. All the rules of non-configurability described in earlier posts apply.

Note however that this still leaves writable untouched, so it is still possible to change the property value (both ways: direct access or Object.defineProperty). However, since configurable is false, you can’t delete the property.

The Object.isSealed method also exists for checking sealed objects.

let sealedObj = { a : 1 };
Object.seal(sealedObj);

// non-configurable
delete sealedObj.a; // false
sealedObj.a; // 1

// can still write
sealedObj.a = 2;
sealedObj.a; // 2

//Check properties
Object.getOwnPropertyDescriptor(sealedObj, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isSealed(sealedObj); // true
Object.isExtensible(sealedObj); // false

As shown above, the configurable property descriptor is now false. All properties of the object would have configurable set as false.

3. Object.freeze()

Similar to seal, calling Object.freeze on an object does the following:

  1. Marks every existing property on the object as non-writable
  2. Invokes Object.seal to prevent new properties from being added and to mark existing properties as non-configurable

Freeze is the highest level of immutability possible using these methods. Properties are now closed to changes due to the false configurable and writable attribute values. And yes, there is the expected Object.isFrozen method too.

let frozenObj = { a : 1 };
Object.freeze(frozenObj);

// non-writable
frozenObj.a = 2;
frozenObj.a; // 1

// non configurable
delete frozenObj.a; // false
frozenObj.a; // 1

Object.getOwnPropertyDescriptor(frozenObj, 'a');
// Object {
//     value: 1,
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isFrozen(frozenObj); // true
Object.isSealed(frozenObj); // true
Object.isExtensible(frozenObj); // false

4. Shallow nature

A very important caveat applies when using these methods on properties holding reference values: these descriptor properties and methods are all shallow and will not touch the properties inside the referenced values.

So if you freeze an object containing another object, then the contained object properties are not automatically frozen; rather you’d have to write your own recursive implementation to handle that.
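A minimal recursive sketch (deepFreeze is my own helper name, not a built-in; it does not handle cyclic structures):

```javascript
// Recursively freeze an object and every object reachable from it
function deepFreeze(obj) {
    Object.getOwnPropertyNames(obj).forEach(function (name) {
        let value = obj[name];
        if (value !== null && typeof value === 'object') {
            deepFreeze(value); // freeze nested objects first
        }
    });
    return Object.freeze(obj);
}

let nested = deepFreeze({ inner: { a: 1 } });
Object.isFrozen(nested);       // true
Object.isFrozen(nested.inner); // true
```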

let shallow = {
    inner: {
        a : 1
    }
};
Object.freeze(shallow);

shallow.inner = null; // fails silently
shallow; // { inner : { a : 1 } }

// inner properties not frozen
shallow.inner.a = 2;
shallow.inner.a; // 2

Object.getOwnPropertyDescriptor(shallow, 'inner');
// Object {
//     value: {a : 1},
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

Object.getOwnPropertyDescriptor(shallow.inner, 'a');
// Object {
//     value: 1,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.isFrozen(shallow); // true
Object.isFrozen(shallow.inner); // false

As the property descriptors above show, the inner property is locked on shallow, but the inner object’s own properties are not.


Well, that about wraps it up! I hope you enjoyed the series and learnt a lot. Do let me know your thoughts and continue reading!

  1. Deep dive into JavaScript Property Descriptors
  2. Understanding JavaScript Property Descriptors 2

Understanding JavaScript Property Descriptors 2

If this is your first time here, you should read the first post in this series. Then come back to this to continue.

Continuing with the dive into property descriptors, this post goes deeply into the properties, what they mean and how they can be used.

1. Modifying existing properties

The defineProperty method allows users to create and modify properties. If the property already exists, defineProperty modifies it.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: true
});
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar',
//     writable: true,
//     enumerable: false,
//     configurable: false
// }

Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: false
});
obj1.foo; // bar
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar', // unchanged
//     writable:false, // updated
//     enumerable: false,
//     configurable: false
// }

Now that we know how to modify properties, let’s dive into the nitty-gritty. Take a deep breath, ready, set, go!

2. Writable

If this flag is true, then the value of the property can be changed. Otherwise, changes would be rejected. And if you are using strict mode (and you should!), you’ll get a TypeError.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
  value: 'bar',
  writable: true
});
obj1.foo; // bar

// change value
obj1.foo = 'baz';
obj1.foo; // baz

This can be used to set up ‘constant’ properties that you don’t want people to overwrite. You might ask: what happens if someone just flips the writable flag back and brute-forces the overwrite? Let’s see what happens in that scenario.

Re-using the same obj1 with writable already set to false.

Object.defineProperty(obj1, 'foo', {
    writable: false
});
obj1.foo; // baz
obj1.foo = 'bar'; // TypeError in strict mode
obj1.foo; // baz

// Try making property writable again
Object.defineProperty(obj1, 'foo', {
    writable: true
});
// Uncaught TypeError:
// Cannot redefine property: foo(…)

So you see, that’s safe! Once writable is false, it can’t be reset to true ever again. It’s a one way switch!

Wait a bit; there is still a hitch. If the property is still configurable, then there is a bypass to this. Let’s explore the configurable property.

3. Configurable

Setting writable to false only prevents changing the value however it doesn’t mean the property is not modifiable. To bypass the write-block, a user can just delete the property and then recreate it. Let’s see how.

let obj2 = {};
Object.defineProperty(obj2, 'foo', {
  value: 'bar',
  writable: false,
  configurable: true
});

delete obj2.foo;
obj2.foo = 'CHANGED!';
obj2.foo; // CHANGED!

So if you don’t want someone changing your object properties, how would you go about that? The way to prevent third-party consumers from making changes is to set configurable to false. Once set, it prevents the following:

  • Deleting that object property
  • Changing any other descriptor attributes. The only exception to this rule is that writable can still be changed from true to false. Otherwise, every call to defineProperty will throw a TypeError. (Re-setting the same values doesn’t throw, but that makes no difference anyway.)

And just like the writable flag, this change is a one-way switch. Once configurable is set to false, you can’t reset it to true afterwards.

let obj3 = {};
Object.defineProperty(obj3, 'foo', {
  value: 'bar',
  writable: true,
  configurable: false
});

Object.defineProperty(obj3, 'foo', {
    enumerable: true
});
// TypeError: Cannot redefine property: foo

// bypass fails now
delete obj3.foo; // false, non-configurable
obj3.foo; // bar

// Can still change writable from true to false
Object.defineProperty(obj3, 'foo', {
    writable: false
});
obj3.foo = 8;
obj3.foo; // bar

So to create immutable properties on objects, set both the writable and configurable fields to false.

4. Enumerable

This determines if the properties show up when enumerating object properties. For example, when using for..in loops or Object.keys. However, it has no impact on whether you can use the property or not.

But why would you want to make properties non-enumerable?

1. JSON serialization

Usually, we build objects based off JSON data retrieved over XHR calls. These objects are then enhanced with a couple of new properties. When POSTing the data back, developers create a new object with extracted properties.

If those property enhancements are non-enumerable, then calling JSON.stringify on the object would automatically drop them. Since JSON.stringify also drops functions, this might be an easy way to serialize data accurately.
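As a sketch, imagine a hypothetical cached flag added locally to server data:

```javascript
let record = { id: 7, name: 'Ade' };      // came from the server
Object.defineProperty(record, 'cached', { // local, non-enumerable enhancement
    value: true,
    enumerable: false
});

record.cached;          // true, usable locally
JSON.stringify(record); // '{"id":7,"name":"Ade"}', enhancement dropped
```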

2. Mixins

Another application could be mixins which add extra behaviour to objects. If a mixin has an enumerable getter accessor property, then that calculated property will automatically show up in Object.keys and for..in loops. The getter will behave just like any property. Pretty neat; it reminds me of Ember’s computed properties and I wouldn’t be surprised if it’s the same thing under the hood. On the flip side, you could set enumerable to false to turn off this behaviour.

Unlike writable and configurable, enumerable is a two-way switch. You can set it back to true if it was false before.

Some code examples:

let obj4 = {
    name: 'John',
    surname: 'Smith'
};
Object.defineProperty(obj4, 'fullName', {
  get: function() {
      return this.name + ' ' + this.surname;
  },
  enumerable: true,
  configurable: true
});

let keys = Object.keys(obj4);
// ['name', 'surname', 'fullName']

keys.forEach(k => console.log(obj4[k]));
// John, Smith, John Smith

JSON.stringify(obj4);
// "{"name":"John",
//   "surname":"Smith",
//   "fullName":"John Smith"}"

// can reset enumerable to false
Object.defineProperty(obj4, 'fullName', {
    enumerable: false
});
Object.keys(obj4);
// ["name", "surname"]

JSON.stringify(obj4);
// "{"name":"John","surname":"Smith"}"

5. Value, Get and Set

  1. An object property cannot have both the value and getter/setter descriptors. You’ve got to choose one.
  2. Value can be pretty much anything – primitives or built-in types. It can even be a function.
  3. You can use the getter and setters to mock read-only properties. You can even have the setter throw Exceptions when users try to set it.
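A small sketch of point 3, with a made-up apiVersion property whose setter throws:

```javascript
// Mock a read-only property with an accessor pair
let config = {};
Object.defineProperty(config, 'apiVersion', {
    get: function () { return 2; },
    set: function () { throw new Error('apiVersion is read-only'); }
});

config.apiVersion; // 2
// config.apiVersion = 3; would throw: apiVersion is read-only
```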

6. Extras

  1. These properties are all shallow and not deep. You probably have to roll your own recursive helper for deep property setting.
  2. You can examine built-in types and modify some of their properties. For example, you can delete the fromCharCode method of String. Don’t know why you would want that though…
  3. The propertyIsEnumerable method checks if a property is enumerable. No, there are no propertyIsWritable or propertyIsConfigurable methods.

Now, read the third post in this series, Understanding JavaScript Property Descriptors 3, or check out other related articles:


  1. Deep dive into JavaScript Property Descriptors
  2. Learning ES2015 : let, const and var

Deep dive into JavaScript Property Descriptors

Creating Object Properties

There are a couple of ways to assign properties to objects in JavaScript. The most common example is using obj.field = value or obj['field'] = value. This approach is simple; however, it is not flexible because it automatically defines the property descriptor fields.

let obj1 = {
    foo: 'bar'
};

let obj2 = {
    get foo() {
        return 'bar';
    }
};

let obj3 = Object.create({}, { foo : { value : 'bar' } });

let obj4 = Object.create({}, {
    foo : {
        get : function() { return 'bar'; }
    }
});

obj1.foo; // bar
obj2.foo; // bar
obj3.foo; // bar
obj4.foo; // bar

In all 4 obj objects, the foo property returns the same result. But are they the same? Obviously not. This post series examines these differences and shows how you can apply and leverage these capabilities.

Data and Accessor Property Descriptors

Property descriptors hold descriptive information about object properties. There are two types of property descriptors:

  1. Data descriptors – which only hold information about data
  2. Accessor descriptors – which hold information about accessor (get/set) properties.

A property descriptor is a data structure with a couple of identifying fields, some are shared between both types while the others apply to a single type as shown below.

Field          Data descriptor   Accessor descriptor
value          Yes               No
writable       Yes               No
enumerable     Yes               Yes
configurable   Yes               Yes
get            No                Yes
set            No                Yes

Viewing Property Descriptor information

The Object.getOwnPropertyDescriptor method allows you to get the property descriptor of any object property.

let dataDescriptor = Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: "bar",
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

let accessorDescriptor = Object.getOwnPropertyDescriptor(obj2, 'foo');
// Object {
//     get: function foo () {}
//     set: undefined,
//     enumerable: true,
//     configurable: true
// }

Data Descriptor only fields

1. Value: Holds the value of the property.

2. Writable: Boolean indicating whether the property value can be changed. This can be used to create ‘constant‘ field values especially for primitive values.

Accessor Descriptor only fields

1. Get: Function which will be invoked whenever the property is to be retrieved. This is similar to getters in other languages.

2. Set: Function that would be invoked when the property is to be set. It’s the setter function.

Shared fields

1. Enumerable: Boolean indicating whether the property can be enumerated. This determines if the property shows up during enumeration, for example with for..in loops or Object.keys.

2. Configurable: Boolean indicating whether the type of the property can be changed and if the property can be deleted from the object.

Setting Property Descriptors

The Object.defineProperty method allows you to specify and define these property descriptor fields. It takes the object, property key and a bag of descriptor values.

let obj5 = {};
Object.defineProperty(obj5, 'foo', {
  value: 'bar',
  writable: true,
  enumerable: true,
  configurable: true
});
obj5.foo; // bar

let obj6 = {};
Object.defineProperty(obj6, 'foo', {
  get: function() { return 'bar'; }
});
obj6.foo; // bar

Default values

All boolean descriptor fields default to false while the getter, setter and value properties default to undefined. This is an important detail that is most visible when creating and modifying properties via object assignment or the defineProperty method.

let sample = { a : 2 };
Object.defineProperty(sample, 'b', { value: 4 });
sample; // { a: 2, b:4 }

Object.getOwnPropertyDescriptor(sample, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.getOwnPropertyDescriptor(sample, 'b');
// Object {
//     value: 4,
//     writable: false,
//     enumerable: false,
//     configurable: false
// }

sample.b = 'cannot change'; //writable = false
sample.b //4

delete sample.b //configurable=false
sample.b //4

Object.keys(sample); //enumerable = false
// ['a']

Because the other descriptor fields of property b were not set on creation, they default to false. This effectively makes b immutable, non-configurable and non-enumerable on sample.

Validating property existence

Three tricky scenarios:

  • Accessing non-existent property fields results in undefined
  • Due to the default rules, accessing existing property fields with no value set also gives undefined
  • Finally, it is possible to define a property with the value undefined

So how do you verify whether a property actually exists with the value undefined, or doesn’t exist at all on an object?

let obj = { a: undefined };
Object.defineProperty(obj, 'b', {}); //use defaults

obj.a; //undefined
obj.b; //undefined
obj.c; //undefined

The way out of this is the hasOwnProperty method.

obj.hasOwnProperty('a'); // true
obj.hasOwnProperty('b'); // true
obj.hasOwnProperty('c'); // false


There is still a lot more about these values and how to use them, but that would make this post too long; so this will be a series. In the next post, the theme will be each field and what it can be used for.

Teasers before the next post

  • Try invoking a getter property as a function to see what happens. Can you explain why?
  • Try modifying some of the descriptor properties of native JavaScript objects e.g. RegExp, Array, Object etc. What happens?


Read the second post in this series or check out other related articles:

How to track errors in JavaScript Web applications

Your wonderful one-of-a-kind web application just had a successful launch and your user base is rapidly growing. To keep your customers satisfied, you have to know what issues they face and address those as fast as possible.

One way to do that would be to be reactive and wait for them to call in – however, most customers won’t do this; they might just stop using your app. On the flip side, you could be proactive and log errors as soon as they occur in the browser to help roll out fixes.

But first, what error kinds exist in the browser?


There are two kinds of errors in JavaScript: runtime errors, which have the window object as their target, and resource errors, which have the source element as their target.

Since errors are events, you can catch them by using the addEventListener method with the appropriate target (window or the source element). The WHATWG standard also provides onerror handlers for both cases that you can use to grab errors.

Detecting Errors

One of JavaScript’s strengths (and also a source of much trouble too) is its flexibility. In this case, it’s possible to write wrappers around the default onerror handlers or even override them to instrument error logging automation.

Thus, these can serve as entry points for logging to external monitors or even sending messages to other application handlers.

// logger is an error logger
var original = window.onerror; // if you still need a handle to this
window.onerror = function(message, source, lineNo, columnNo, errObject) {
    logger.log('error', {
        message: message,
        stack: errObject && errObject.stack
    });
    if (original) {
        original.apply(this, arguments); // if you want to invoke the original handler
    }
};

var elemOriginal = element.onerror;
element.onerror = function(event) {
    logger.log('error', {
        message: event.message,
        stack: event.error && event.error.stack
    });
};

The Error Object

The interface for this contains the error message and optional values: fileName and lineNumber. However, the most important part is the stack property, which traces the frames leading up to the error.

Note: Stack traces vary from browser to browser as there exists no formatting standard.
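A quick way to see these fields in action (the exact message and stack text vary by engine):

```javascript
// Force an error and inspect the fields on the resulting Error object
let captured;
try {
    null.foo; // throws a TypeError
} catch (err) {
    captured = {
        message: err.message, // human-readable description
        stack: err.stack      // engine-specific, multi-line trace
    };
}
console.log(captured.message);
```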

Browser compatibility woes

Nope, you ain’t getting away from this one.

Not all browsers pass in the errorObject (the 5th parameter) to the window.onerror function. Arguably, this is the most important parameter since it provides the most information.

Currently the only big 5 browser that doesn’t pass in this parameter is the Edge browser – cue the only ‘edge’ case. Safari finally added support in June.

The good news though is there is a workaround! Hurray! Let’s go get our stack again.

window.addEventListener('error', function(errorEvent) {
    logger.log('error', {
        message: errorEvent.message,
        stack: errorEvent.error && errorEvent.error.stack
    });
});
And that wraps it up! You now have a way to track errors when they happen in production. Happier customers, happier you.

Note: The eventListener and window.onerror approaches might need to be used in tandem to ensure enough coverage across browsers. Also, error events caught in the listener will still propagate to the onerror handler, so you might want to filter out duplicated events or cancel the default handlers.
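One way to do that filtering is sketched below; `logger` is a stand-in for whatever reporting client you use, and the keying scheme is just an assumption:

```javascript
// De-duplicate error reports: remember what has already been logged so the
// listener and the onerror fallback don't report the same event twice.
var logger = { entries: [], log: function (level, data) { this.entries.push(data); } };
var seen = {};

function reportOnce(message, lineNo) {
    var key = message + '|' + lineNo; // assumed de-duplication key
    if (seen[key]) { return false; }  // already reported, skip
    seen[key] = true;
    logger.log('error', { message: message, lineNo: lineNo });
    return true;
}

// Wire the same filter into both entry points:
// window.addEventListener('error', function (e) { reportOnce(e.message, e.lineno); });
// window.onerror = function (msg, src, line) { reportOnce(msg, line); };
```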


Tips for printing from web applications

Liked this article? Please share, subscribe or drop a comment.

How function spies work in JavaScript

If you write unit tests, then you likely use a testing framework and might have come across spies. If you don’t write unit tests, please take a quick pause and promise yourself to always write tests.

Testing framework suggestions? Try Sinon or Jasmine.

Spies allow you to monitor a function; they expose options to track invocation counts, arguments and return values. This enables you to write tests to verify function behaviour.

They can even help you mock out unneeded functions. For example, dummy spies can be used to swap out AJAX calls with preset promise values.
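For instance, a hand-rolled dummy could stand in for an AJAX helper and resolve to a canned payload (the function and payload names here are invented for illustration):

```javascript
// A dummy spy replacing a network call with a preset promise value
function fakeFetchUsers() {
    fakeFetchUsers.callCount++; // track invocations like a spy would
    return Promise.resolve([{ id: 1, name: 'Ada' }]); // canned payload
}
fakeFetchUsers.callCount = 0;
```

Tests can then await the promise and assert on the canned data without touching the network.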

The code below shows how to spy on the bar method of object foo.

spyOn(foo, 'bar');


Jump into the documentation for more examples.

That was pretty cool, right? So how difficult can it be to write a spy and what happens under the hood? It turns out implementing a spy is very easy in JavaScript. So let’s write ours!

The goal of the spy is to intercept calls to a specified function. A possible approach is to replace the original function with another function that stores necessary information and then invokes the original method. Partial application makes this quite easy…

The Code

function Spy(obj, method) {
    let spy = {
        args: []
    };

    let original = obj[method];
    obj[method] = function() {
        let args = [].slice.apply(arguments);
        spy.args.push(args); // record this invocation's arguments
        return original.apply(obj, args);
    };

    return Object.freeze(spy);
}

let sample = {
    fn: function() {
        console.log('sample fn called');
    }
};

let spy = Spy(sample, 'fn');

sample.fn(1, 2, 3);
console.log(spy.args.length); //1
console.log(spy.args); //[[1,2,3]]

sample.fn('The second call');
console.log(spy.args.length); //2
console.log(spy.args); //[[1,2,3], ['The second call']]

//try modifying the spy
spy.args = []; //silently ignored (throws in strict mode)
console.log(spy.args); //[[1,2,3], ['The second call']]

Taking the code apart

The Spy function takes an object and the name of the method to be spied upon. Next, it creates an object holding an array that tracks invocation arguments; the array’s length doubles as the call count.

It swaps out the original call with a new function that always updates the information object whenever the original method is invoked.

The Object.freeze call ‘freezes’ the spy object and prevents its properties from being reassigned. The freeze is shallow, so the wrapper can still push into the existing args array, but outside callers can’t tamper with the spied values.


The toy sample is brittle (yes, I know). Can you spot the issues? Here are some:

  • What happens if the method doesn’t exist on the object?
  • What happens if the object is null?
  • Can it work for non-object methods? Would pure functions work? Would using window as the parent object work?
  • What happens if method is a primitive and not a function?

These can (and should) be fixed but again, that would make this post very complicated. The goal was to show a simple toy implementation.


How do you ‘unregister’ spies without losing the original method? Hint: keep the original in a closure and swap it back in when an unregister call is invoked.
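A possible sketch of that hint, extending the toy Spy above with a restore call (the names are mine, not from any framework):

```javascript
// Keep the original method in the closure and expose restore() to swap it back
function RestorableSpy(obj, method) {
    let original = obj[method]; // survives in the closure
    let spy = {
        args: [],
        restore: function () {
            obj[method] = original; // unregister: put the original back
        }
    };
    obj[method] = function () {
        spy.args.push([].slice.apply(arguments));
        return original.apply(obj, arguments);
    };
    return spy;
}

let target = { fn: function (x) { return x * 2; } };
let rSpy = RestorableSpy(target, 'fn');
target.fn(3);   // recorded
rSpy.restore();
target.fn(4);   // goes straight to the original, not recorded
```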

How would you implement a spy in Java?


Spying Constructors in JavaScript

Learning ES2015 : let, const and var

Lions at the zoo

Zoos allow for safely viewing dangerous wild animals like lions. Lions are caged in their enclosures and can’t escape their boundaries (if they did, it’d be chaos eh?). Handlers, however, can get into cages and interact with them.

Like cages, you can think of variable scoping rules as establishing the boundaries and walls within which certain entities can live. In the zoo scenario, the zoo is the global or function scope, the cage is the block scope and the lions are the variables. Handlers declare lions in their cages and lions at the zoo typically don’t exist outside their cages. This is how most block-scoped languages behave (e.g. C#, Java etc).

The JavaScript zoo is different in many ways. One striking issue is that lions handled by its var handlers are free to roam around in the entire zoo. Tales of mauled unsuspecting developers are a dime a dozen; walk too close to a ‘cage’ and you can fall into a trap.

Be careful of var lions – they might not be limited to their cages.

Fortunately, the JS zoo authorities (aka TC39) have introduced two new lion handlers: let and const. Now lions stay in their cages and we can all visit the JS zoo without nasty surprises.

Technically this is not totally correct but who doesn’t like a good story? In block-scoped languages, variables are limited to enclosing scopes (mostly {} ); however JavaScript had function-scoped variables which were limited to the function body. let and const fix that for us.

How can function scope be dangerous?

Let’s go back to the zoo again…

function varHandler(){
    var visitor = 'going to the zoo!';
    var hasCages = true;

    if(hasCages){
        //create lion in cage scope with var
        var lion = 'in cage';
    }

    //check if lion exists outside its 'if cage'
    if(lion != null){
        visitor = 'Aaarrgh, the lion attacks!';
    }

    console.log(visitor);
    console.log(lion);
}

varHandler();
//Aaarrgh, the lion attacks!
//in cage
See! Visitors are not protected from encaged lions. Now, let’s see how the new let handler fares.

function letHandler(){
    let visitor = 'going to the zoo!';
    let hasCages = true;

    if(hasCages){
        //create lion in cage scope with let
        let lion = 'in cage';
    }

    //check if lion exists outside its 'if cage'
    if(lion != null){
        visitor = 'Aaarrgh, the lion attacks! :(';
    }

    console.log(visitor);
}

letHandler();
//ReferenceError: lion is not defined
This code fails because we attempt to access the lion outside its ‘if cage’; no lion binding exists out there (even a typeof check would report ‘undefined’). To put it plainly, the lion can’t hurt the visitor.

Let – the solution to lion vars?

Let is mostly like var. The main advantage is the introduction of block (lexical) scoping. You can expect let variables to only exist in their containing blocks (could be a function, statement, expression or block).

The const

Const is a somewhat restricted let. Everything that applies to let applies to consts (e.g. scoping, hoisting etc) but consts have two more rules:

  • Consts must be initialized with a value
  • They cannot have another value assigned to them after initialization
const lion;
//SyntaxError: missing initializer...

const lion = 'hiya';
lion = 'hello';
//TypeError: Assignment to const...

Const objects?

While const objects can’t be reassigned to different values, you can still modify the contents of the object (just like final references in Java or readonly fields in C#).

const zoo = {};
zoo.hasLions = true;

zoo = { a: 3 };
//TypeError: Assignment to constant variable

Why? Think of a house address – homeowners can change the contents of their house (e.g. add more computers) but can’t just go slap their address on a totally different building.

If you want further restriction, try looking at Object.freeze but that is also shallow.
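A quick sketch of that shallowness:

```javascript
const frozenZoo = Object.freeze({ hasLions: true, cages: ['cage 1'] });

try {
    frozenZoo.hasLions = false; // silently ignored; throws in strict mode
} catch (e) {}

frozenZoo.cages.push('cage 2'); // still works: the freeze is shallow
```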

Let + Const vs Var

1. Hoisting

Surprise, surprise. Yes, they are hoisted. According to the ES2015 spec:

The variables are created when their containing Lexical Environment is instantiated but may not be accessed in any way until the variable’s LexicalBinding is evaluated.

The access restriction is explained in the temporal dead zone section. This can have performance implications for JavaScript engines since the engine has to check for variable initialization. Yes, let can be slower than var.

2. Binding in loops

We already know that Let/const are block-scoped while var is not. But how does binding in loops work?

for(var i = 0; i < 5; i++){
    setTimeout(function(){
        console.log(i);
    }, 1000);
}
//5 5 5 5 5

Why is the output 5? Well, the setTimeout callbacks are evaluated after the loop exits (remember JavaScript has one execution thread). Thus they all share the single var binding of i, whose value is then 5.

The fix? Well, some tricks like using IIFEs to bind values exist. But let provides a better way out.

for(let i = 0; i < 5; i++){
    setTimeout(function(){
        console.log(i);
    }, 1000);
}
//0 1 2 3 4

This works because a fresh let binding of i is created on each loop iteration, instead of every callback sharing the final value of i. The same behaviour allows usage of let/const in for..of loops since the evaluation context is new on every pass.
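For example, const works happily as a for..of loop variable because each pass gets its own binding:

```javascript
const visited = [];
for (const animal of ['lion', 'tiger', 'bear']) {
    // `animal` is a fresh const binding on every pass
    visited.push(animal);
}
```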

Should loop counters be consts? In for..of loops, yes: counters are usually sentinel values that shouldn’t change, and changing them can cause difficult bugs, especially in nested multi-loop scenarios. (A classic for loop still needs let, since a const counter would throw on increment.)

3. Overwriting Global values via bindings

Unlike var, top-level let/const declarations do not overwrite properties of the global object. This prevents unintentional clobbering of language constructs.

var alert = 'alert'; //clobbers the global alert function
alert('hello');
//TypeError: alert is not a function

let alert = 'alert'; //shadows alert; window.alert is untouched
window.alert('hello'); //still works
The downside is that some legitimate usages (e.g. exposing module namespaces as globals) still require var.

4. Variable Redeclaration

Redeclaring a variable that already exists with let/const will throw a SyntaxError. However redeclaring a var with another var works.

var x = 1;
let x = 2;
//SyntaxError: Identifier 'x' has already been declared

const y = 0;
let y = 3;
//SyntaxError: Identifier 'y' has already been declared

var z = 3;
var z = 4; //works fine

5. Shadowing

let/const allow for variable shadowing.

function test() {
    let x = 1;
    if(true) {
        let x = 3; //shadows the outer x
        console.log(x); //3
    }
    console.log(x); //1
}
You can think of blocks as anything enclosed between {} in most languages. Do remember that the switch block is one big block.
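Because the switch body is a single block, two bare `let` declarations of the same name in different cases would collide with a SyntaxError; wrapping each case in its own braces sidesteps that:

```javascript
function cageFor(id) {
    switch (id) {
        case 1: {
            let cage = 'lion cage'; // scoped to this case's block
            return cage;
        }
        case 2: {
            let cage = 'tiger cage'; // no collision: separate block
            return cage;
        }
        default:
            return 'no cage';
    }
}
```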

The Temporal Dead Zone

What happens if you try to access a hoisted variable before it is initialized? With var, the value is undefined; with let (or const), this throws an error.

The engine assumes you shouldn’t know about or attempt to use a variable that has been declared but not initialized. This state is called the temporal dead zone.

console.log(tdz);
//ReferenceError: Cannot access 'tdz' before initialization

let tdz = 'Temporal dead zone';

Should I switch to let?

Yay for safe zoos! Well, converting all the var handlers into let/const handlers comes with a hitch. There is a high probability that the zoo breaks down. A better approach involves doing slow gradual conversions over a long period.

The issue is that some var usages help with some tricky areas (e.g. globals for modular namespacing). Converting such to lets will break the application. Others might be workarounds for some scenarios so go at it slowly. For example, when working in a file, you can change all the vars in that function/file to let and run tests to ensure nothing is broken.

Let support across browsers

Safari, iOS and Opera provide no let support so you might have to use a transpiler or polyfill. Here is the matrix on caniuse for let and const.

Is var ever going to be deprecated?

The var behaviour is probably here to stay. Once quirks are out in the wild, they become very difficult to fix since innumerable applications are built on these. Future improvements now have to support these for backward compatibility. The other alternative would be to declare an entirely new language (e.g. Python3 vs Python2) but that approach led to the schism discussed in the last post.


Let and const take away most of the issues associated with JavaScript’s function scoping and make the language more familiar to programmers coming from other languages. You should probably switch too and start using them more often.

Learning ES2015 : Getting Started

ES6 (also ES2015) is the rave of the moment. Finally JavaScript is getting a makeover after nearly 6 years. The enhancements allow for more powerful and expressive JavaScript, ease the building of complex applications and iron out some quirky behaviour.

The var hoisting problem? let resolves that. IIFEs and losing function context? Block scoping and the arrow function syntax eliminate the need for those.

There’s lots more though and hopefully the upcoming series of posts will provide in-depth walkthroughs of de-structuring, iterators and generators, new data structures (Map + WeakMap, Set + WeakSet), Proxies, Promises, rest parameters, new standard library functions and much more.

But first, the history lesson.

How did we get here?

JavaScript was standardized in 1997 (nearly 2 decades ago) and its evolution timeline follows:

ES1 (June 1997) – First release of JavaScript on the ECMAScript standard.

ES2 (June 1998) – Contained only minor standardization changes.

ES3 (December 1999) – Introduced try/catch error handling and improved string handling. It saw strong adoption across all browsers before being overtaken by ES5. Did you know you could overwrite undefined with some other value in this JS version? Don’t do evil stuff like that…

ES4 (dumped July 2008) – An ambitious project to overhaul JavaScript. Proposed changes included classes, iterators, modules (now in ES6) and static typing (dropped). It also included some core changes to the language itself e.g. classical inheritance.

Two parties emerged – one party wanted the ES4 upgrade while the opposing party believed it would ‘break’ the Internet. Since they couldn’t agree, the opposing party designed a minor language upgrade tagged ES3.1 (read SemVer for more on version numbers).

Ultimately the two parties met again in Oslo and agreed to release ES3.1 as ES5, shelve ES4 and collaborate on a ‘harmonizing’ version. And that’s how ES6 got the Harmony name.

ES5 (2009) – This was the first major standardized release in the 10 years after ES3. It’s also the most commonly supported JavaScript version nowadays.

Some of the features released in this version include ‘strict mode’, native JSON support, non-writable NaN, undefined and Infinity values, and standard library improvements, e.g. forEach, map, keys etc.
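A few of those ES5 additions in action:

```javascript
var doubled = [1, 2, 3].map(function (n) { return n * 2; }); // [2, 4, 6]
var keys = Object.keys({ a: 1, b: 2 });                      // ['a', 'b']
var parsed = JSON.parse('{"a": 1}');                         // native JSON support
[1, 2, 3].forEach(function (n) { /* visit each entry */ });
```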

ES5.1 (June 2011) contained a few minor corrections.

ES6 (2015)

ES6 became feature complete in 2015 – hence the ES2015 name since the plan is to have yearly releases going forward. Yes, work has started already on the ES2016/ES7 version and you can contribute if you want.

There is improving support across hosting environments (both browsers and Node); you can check kangax’s table and the Microsoft platform reference. If you can’t wait to try the new awesomeness, why not try out Babel, Traceur or even TypeScript. Full coverage of features by all browsers will likely take time; for example, ES5 was released in 2009 and there are still browsers that don’t offer full feature support.

Why do we need another JavaScript Version?

Complex Applications

JavaScript is a great language and works great; however cracks start appearing when you build complex applications in it. Try refactoring a 100k line project and you’ll see what I mean. The introduction of native namespacing and module support would help improve the language in this area.

No Surprises

JavaScript is quite ‘surprising’ as much of its behaviour has few parallels in other languages. For example, prototypical inheritance can be confusing for developers with a classical inheritance background.

The absence of lexical scoping, funny behaviour of dictionaries and loss of context have puzzled many programmers and led to many bugs. Yes, a couple of workarounds exist for all these but the principle of least surprises is a powerful one.

The class syntax should help hide prototypical inheritance by providing familiar syntax; let provides lexical scoping, arrow functions remove the need for that=this and Maps allow objects as keys.
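A small sketch tying those together (the Zoo class is an invented example):

```javascript
// class syntax over prototypes, an arrow function keeping context,
// and a Map keyed by an object
class Zoo {
    constructor() { this.animals = new Map(); }
    add(animal, cage) { this.animals.set(animal, cage); }
    cageNames() { return [...this.animals.values()].map(c => c.toUpperCase()); }
}

const leo = { name: 'Leo' };   // objects can be Map keys
const zoo = new Zoo();
zoo.add(leo, 'cage 1');
```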

Easier for all

The standard library has been enhanced – there is native promise support, improved string handling methods (templates are finally here!) and other methods too.
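For instance, a tiny sketch of templates and native promises:

```javascript
const visitor = 'Ada';
const greeting = `Welcome to the zoo, ${visitor}!`; // template literal

// native promise support
const ticket = Promise.resolve({ price: 5 });
ticket.then(t => console.log(t.price)); // logs 5
```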


ES6 preserves the good parts of JavaScript while fixing a lot of the bad parts. And that’s the introduction; upcoming posts would cover more in-depth ES6 features that you should know about or start using.

Chrome dev tools deep dive : Network

1. Throttling

A web application might work beautifully on fast networks but stutter and even throw up bugs on slow connections. Fiddler used to be the go-to tool for simulating delays and adding latency during testing; the good news now is that Chrome provides throttling too.

There are preset profiles available (e.g. GPRS, 3G, etc.) but you can define custom latency and throughput profiles if you want. Something for those of us who want to simulate network connections to IoT devices.


2. Replay requests

At times, there is a need to replay an isolated request several times. Refreshing the page works but brings down the entire page payload again.

A better alternative is to find the request in the network tab, right-click on it and then replay it. Note that if you want more control, for example replaying a request 10,000 times, you might have to look at other tools like Fiddler or curl.


3. The network panel headers

The default columns that appear on the header row can be changed; for example, you can display the Method, ETags or even Cookies columns.

Clicking on a column header sorts all the rows by that column. Unfortunately, Chrome doesn’t allow you to reorder the columns; a pity, as I would love to customize their ordering.


4. Network performance and content compression

This is one useful tool that is not so obvious. If you have the ‘small request rows’ option set, you’ll have to switch to ‘large request rows’ (see the gif below).

  1. Size and Content: Size shows the magnitude of the downloaded file over the wire while content shows the real size of the contents. Ideally, size should be smaller than content since browsers can handle gzip-compressed files. If you see requests that have the same size and content value; that’s an optimization hole to plug.
  2. Time and Latency: The time row shows the request’s entire round trip time while the latency row shows the time it took to set up a connection and process the request on the server. If latency consistently takes about 80 – 90% of the total time, the server processing or network connection might need investigation.


5. Debugging with HARs

A customer living in Antarctica reports a bug in your web app. Obviously you can’t go to Antarctica, yet you need a dump of the customer’s state. Don’t fret; HARs provide a way out.

HAR means HTTP Archive – a JSON file containing all the information necessary to recreate the network tab experience.

So, you tell the customer to open up the Chrome network tab, export the HAR file and send it to you. There are lots of HAR readers available, or you can set up your own via npm. Problem solved!


6. Examining HTTP traffic

Clicking on a request in the tab opens up a new panel which allows you to examine the request’s details. You can see all the headers, responses, cookies, timelines etc. The preview option offers a nicely prettified output for response data, and if you prefer the plain view, that exists too.


7. The Filmstrip

The filmstrip generates a frame-by-frame sequence of screenshots showing the page render process during load. It also includes timing information so you can figure out what the bottlenecks are.

For more accurate results; disable the cache and then refresh the page. You can read more about the filmstrip here.


8. Others

  • Disable the cache: Disables the cache to ensure a fresh load
  • Filter: Allows you to type expressions to filter network request rows. There are also out-of-box filters for XHR, JS, etc. I like the websockets (WS) option a lot.
  • Preserve log: Don’t wipe the log on refresh
  • Copy support: Allows you to copy a request; exposes cURL as a copy option too. Yay for cURL users!

Here are other posts in this series: