Why JavaScript has two zeros: -0 and +0


Do you know there are two valid zero representations in JavaScript?

posZero = +0;
negZero = -0;

In pure mathematics, zero means nothing and its sign doesn’t matter: +0 = -0 = 0. Computers, however, cannot represent numbers exactly and mostly use the IEEE 754 floating point standard.

Most languages have two zeros!

The IEEE 754 standard for floating point numbers allows for signed zeros, thus it is possible to have both -0 and +0.  Correspondingly, 1 / +0 = +∞ while 1 / -0 = -∞ and these are values at opposite ends of the number line.

  • They can be viewed as vectors with zero magnitude pointing in opposite directions.
  • In the mathematical field of limits, negative and positive zeros show how zero was reached.

These two zeros can lead to issues as shown with the disparate ∞ results.
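You can see this directly in a JavaScript console:

```javascript
// Division by the two zeros lands at opposite ends of the number line
console.log(1 / +0); // Infinity
console.log(1 / -0); // -Infinity
```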

Why two zeros occur in IEEE 754

Each numeric value carries a sign bit independent of its magnitude. Consequently, if the magnitude of a negative number goes to zero without its sign changing, the result is a -0.

So why does this matter? Well, JavaScript implements the IEEE 754 standard and this post goes into some of the details.

Keep in mind, the default zero value in JavaScript (and most languages) is actually the positive zero (+0).

The zeros in JavaScript

1. Representation

let a = -0;
a; // -0

let b = +0;
b; // 0

2. Creation

All mathematical operations give a signed zero result (+0 or -0) that depends on the operand values.

The only exceptions to this rule are additions and subtractions involving +0 and -0.

  • Adding two -0 values will always be -0
  • Subtracting a 0 from -0 will also be -0

Any other combination of zero values gives a +0. Note also that negative zeros cannot be created by adding or subtracting non-zero operands; thus -3 + 3 = 3 - 3 = +0.

The code below shows some more examples.

// Addition and Subtraction
 3 - 3  // 0
-3 + 3  // 0

// Addition of zero values
-0 + -0; // -0
-0 -  0; // -0
 0 -  0; //  0
 0 + -0; //  0

// Multiplication
3 *  0  //  0
3 * -0  // -0

// Division
 3  / Infinity  //  0
-3  / Infinity  // -0

// Modulus
 6 % 2  //  0
-6 % 2  // -0

3. The issue with zero strings

There is a minor niggle with stringifying -0: calling toString on it always gives the result “0”. On the flip side, parseInt and parseFloat do parse negative zero strings.

Consequently, there is a loss of information in the stringify -> parse round trip. For example, if you convert values to strings (say via JSON.stringify), POST them to some server and retrieve those strings later, any -0 will silently come back as +0.

let a = -0;
a.toString(); // '0'

parseInt('-0', 10);   // -0
parseFloat('-0');      // -0
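The same loss shows up in a JSON round trip; a quick sketch:

```javascript
// JSON.stringify turns -0 into the string "0", so the sign is lost on parse
const payload = JSON.stringify(-0); // "0"
const restored = JSON.parse(payload);
1 / restored; // Infinity, not -Infinity — the -0 came back as +0
```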

4. Differentiating between +0 and -0

How would you tell one zero value apart from the other? Let’s try comparison.

-0 === 0;  // true
(-0).toString(); // '0'
0..toString();  // '0'

-0 <  0; // false
 0 < -0; // false

0..toString() is valid JavaScript: the first dot is parsed as a decimal point, so the second dot is a property access.

ES2015’s Object.is method works:

Object.is(0, -0); //false

ES2015’s Math.sign method for checking the sign of a number is not much help either, since it returns 0 and -0 for +0 and -0 respectively.
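To see why Math.sign doesn’t help here:

```javascript
Math.sign(3);  //  1
Math.sign(-3); // -1
Math.sign(0);  //  0
Math.sign(-0); // -0 — still two zeros to tell apart
```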

Since ES5 has no such helper, we can use the difference in behaviour of +0 and -0 to write one.

function isNegativeZero(value) {
    value = +value; // cast to number
    if(value) {
        return false;
    }
    let infValue = 1 / value; // -0 gives -Infinity, +0 gives +Infinity
    return infValue < 0;
}

isNegativeZero(0);    // false
isNegativeZero(-0);   // true
isNegativeZero('-0'); // true

5. Applications

What is the use of knowing all this?

1. One example: suppose you are doing some machine learning and need to differentiate between positive and negative values for branching. If a -0 result gets coerced into a positive zero, this could lead to a tricky branching bug.

2. Another scenario is for people who write compilers and optimize code. Expressions that evaluate to zero, e.g. x * 0, cannot be constant-folded since the result depends on the sign of x. Replacing such an expression with 0 would introduce a bug.
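A quick illustration of why x * 0 cannot be folded to a constant 0:

```javascript
let x = -3;
Object.is(x * 0, 0);  // false
Object.is(x * 0, -0); // true — the sign of x survives into the 'zero' result
```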

3. Note too that lots of other languages implement IEEE 754. Take Java and C# for example:

// Java
System.out.print(1.0 / 0.0);  // Infinity
System.out.print(1.0 / -0.0); // -Infinity
// C#
Console.WriteLine(1.0 / 0.0);  // Infinity
Console.WriteLine(1.0 / -0.0); // -Infinity;

Try it in your language too!

6. IEEE specifications

The IEEE specifications lead to the following results:

Math.round(-0.4); // -0
Math.round(0.4);  //  0

Math.sqrt(-0);  // -0
Math.sqrt(0);   //  0

1 / -Infinity;  // -0
1 /  Infinity;  //  0

Rounding -0.4 leads to -0 because it is viewed as the limit of a value as it approaches 0 from the negative direction.

The square root rule is one I find strange; the specification says: “Except that squareRoot(–0) shall be –0, every valid squareRoot shall have a positive sign.” If you are wondering, IEEE 754 is also the reason why 0.1 + 0.2 != 0.3 in most languages; but that’s another story.

Thoughts? Do share them in the comments.


Things to check before releasing your web application


This post originally started out as a list of tips on how to break web applications but quickly morphed into a pre-release checklist.

So here are a couple of things to validate before you press the ‘go-live’ button on that wonderful web application of yours.

General

  1. Does the application handle extremely large input? Try copying a Wikipedia page into an input field. Strings can be too long and overflow database models.
  2. Does it handle boundary values properly? Try extremely large or small values; Infinity is a good one.
  3. Do you have validation? Try submitting forms with no entry.
  4. Do you validate mismatched value types? Try submitting strings where numbers are expected.
  5. Has all web copy been proofread and spell-checked? Typos are bad for reputation.

Localization (L10n) and Internationalization (I18n)

  1. Do you support Unicode? The Turkish i and German ß are two quick tests.
  2. Do you support right-to-left languages? CssJanus is a great tool for flipping pages.
  3. Time zones and daylight saving time changes.
  4. Time formats: 12 and 24 hour clocks
  5. Date formats: mm/dd/yyyy vs dd/mm/yyyy
  6. Currencies in different locales.

Connections

  1. Does your web app work well on slow connections? You can use Chrome or Fiddler to simulate this.
  2. What happens when abrupt network disconnections occur while using your web application?
  3. Do you cut off expensive operations when the user navigates away or page is idle?

Usability + UX

  1. Does the application work well across the major browsers you support (including mobile)?
  2. Does the application look good at various resolution levels? Try resizing the window and see what happens.
  3. Is your application learnable? Are actions and flows consistent through the application? For example, modal dialogs should have the same layout regardless of the action triggering them.
  4. Do you have your own custom 404 page?
  5. Do you support print?
  6. Do error messages provide enough guidance to users?
  7. Does your application degrade gracefully when JavaScript is disabled?
  8. Are all links valid?

Security

  1. Do you validate all input?
  2. Are all assets secured and locked down?
  3. Do you grant least permissions for actions?
  4. Ensure error messages do not reveal sensitive server information.
  5. Have you stripped response headers of infrastructure-revealing information? E.g. server type, version etc.
  6. Do you have the latest patches installed on your servers and have a plan for regular updates?
  7. Do you have a Business Continuity / Disaster Response (BCDR) plan in place?
  8. Are you protected against the OWASP Top Ten?
  9. Do you have throttling and rate limiting mechanisms?
  10. Do you have a way to quickly rotate secrets?
  11. Have you scanned your code to ensure no valuable information is being released?

Code

  1. Did you lint your CSS and JS (see JSLint, JSHint, TSLint)?
  2. Have all assets (JavaScript, CSS etc) been minified, obfuscated and bundled?
  3. Do you have unit, integration and functional tests?

Performance

  1. Have you run Google’s Page Speed and Yahoo’s YSlow to identify issues?
  2. Are images optimized? Are you using sprites?
  3. Do you use a CDN for your static assets?
  4. Do you have a favicon? It helps prevent unwanted 404s since browsers request it automatically.
  5. Are you gzipping content?
  6. Do you have stylesheets at the top and JavaScript at the bottom?
  7. Have you considered moving to HTTP2?

Release Pipeline

  1. Do you have test and staging environments?
  2. Do you have automated release pipelines?
  3. Can you roll back changes?

Others

  1. Do you have a way to track errors and monitor this with logging?
  2. Do you have a plan to handle customer reported issues?
  3. Have you met all legal and compliance requirements for your domain?
  4. Have you handled SEO requirements?

Conclusion

These are just a few off the top of my head – feel free to suggest things I missed. I should probably consider transferring these to a GitHub repo or something for easier usage.

Understanding JavaScript Property Descriptors 3


If this is your first time here, you should read parts 1 and 2 of this series. Then come back to this to continue.

Now that we know the basics, this post covers the JavaScript methods for setting and modifying object property descriptors.

1. Object.preventExtensions()

This blocks the addition of new properties to an object. Literally, it prevents extending the object in any way (pun intended) and returns the object.

This is a one-way switch: once an object is made inextensible, there is no way to undo the action short of recreating the object. Note too that once an object becomes inextensible, its prototype can no longer be replaced (e.g. via Object.setPrototypeOf); so be careful, especially if ‘inheriting’ or ‘delegating’ to parent types.

There is also the Object.isExtensible method for checking whether an object is still extensible. This comes in handy because trying to extend an inextensible object in strict mode causes a TypeError.

let obj = { a : 1 };
Object.preventExtensions(obj);
// can't add new properties
obj.b = 3;
obj; // { a : 1 }

// can still change existing properties
obj.a = 3;
obj.a; // 3

Object.isExtensible(obj); // false

Object.getOwnPropertyDescriptor(obj, 'a');
// Object {
//     value: 3,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }
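Relatedly, once an object is non-extensible its prototype can no longer be swapped out; a small sketch:

```javascript
const locked = {};
Object.preventExtensions(locked);
try {
    // Replacing the prototype of a non-extensible object throws
    Object.setPrototypeOf(locked, { a: 1 });
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```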

2. Object.seal()

Calling Object.seal on an object achieves the following:

  1. Marks every existing property on the object as non-configurable
  2. Then calls Object.preventExtensions to prevent adding new properties

Once an object is sealed, you can’t add new properties, delete existing ones, or reconfigure them. All the rules of non-configurability described in earlier posts apply.

Note however that writable is left intact, so it is still possible to change the value of a property (both ways: direct assignment or Object.defineProperty). However, since configurable is false, you can’t delete it.

The Object.isSealed method also exists for checking sealed objects.

let sealedObj = { a : 1 };
Object.seal(sealedObj);
// non-configurable
delete sealedObj.a; // false
sealedObj.a; // 1 

// can still write
sealedObj.a = 2;
sealedObj.a; // 2

//Check properties
Object.getOwnPropertyDescriptor(sealedObj, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isSealed(sealedObj); // true
Object.isExtensible(sealedObj); // false

As shown above, the configurable attribute is now false; every property of a sealed object has configurable set to false.

3. Object.freeze()

Similar to seal, calling Object.freeze on an object does the following:

  1. Marks every existing property on the object as non-writable
  2. Invokes Object.seal, which prevents adding new properties and marks existing properties as non-configurable

Freeze is the highest level of immutability possible using these methods. Properties are now closed to changes due to the false configurable and writable attribute values. And yes, there is the expected Object.isFrozen method too.

let frozenObj = { a : 1 };
Object.freeze(frozenObj);

// non writable
frozenObj.a = 2;
frozenObj.a; // 1

// non configurable
delete frozenObj.a; // false
frozenObj.a; // 1

Object.getOwnPropertyDescriptor(frozenObj, 'a');
// Object {
//     value: 1,
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isFrozen(frozenObj); // true
Object.isSealed(frozenObj); // true
Object.isExtensible(frozenObj); // false

4. Shallow nature

A very important caveat applies when using these methods on properties that hold reference values. These methods are all shallow: they do not touch the properties inside the referenced values.

So if you freeze an object containing another object, then the contained object properties are not automatically frozen; rather you’d have to write your own recursive implementation to handle that.

let shallow = {
    inner: {
        a : 1
    }
};

Object.freeze(shallow);
shallow.inner = null; // fails
shallow; // { inner : { a : 1 } }

// inner properties not frozen
shallow.inner.a = 2;
shallow.inner.a; // 2

Object.getOwnPropertyDescriptor(shallow, 'inner');
// Object {
//     value: {a : 1},
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

Object.getOwnPropertyDescriptor(shallow.inner, 'a');
// Object {
//     value: 1,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.isFrozen(shallow); // true
Object.isFrozen(shallow.inner); // false

As the property descriptors above show, the inner property of shallow is frozen (it cannot be reassigned), however the inner object’s own properties are not.

Conclusion

Well, that about wraps it up! I hope you enjoyed the series and learnt a lot. Do let me know your thoughts and continue reading!

  1. Deep dive into JavaScript Property Descriptors
  2. Understanding JavaScript Property Descriptors 2

Understanding JavaScript Property Descriptors 2


If this is your first time here, you should read the first post in this series. Then come back to this to continue.

Continuing with the dive into property descriptors, this post goes deeply into the properties, what they mean and how they can be used.

1. Modifying existing properties

The Object.defineProperty method allows you to create and modify properties. When the property already exists, defineProperty modifies that property in place.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: true
});
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar',
//     writable: true,
//     enumerable: false,
//     configurable: false
// }

Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: false
});
obj1.foo; // bar
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar', // unchanged
//     writable:false, // updated
//     enumerable: false,
//     configurable: false
// }

Now that we know how to modify properties, let’s dive into the nitty-gritty. Take a deep breath, ready, set, go!

2. Writable

If this flag is true, then the value of the property can be changed. Otherwise, changes would be rejected. And if you are using strict mode (and you should!), you’ll get a TypeError.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
  value: 'bar',
  writable: true
});
obj1.foo; // bar

// change value
obj1.foo = 'baz';
obj1.foo; // baz

This can be used to set up ‘constant’ properties that you don’t want people to overwrite. But what happens if someone tries to brute-force the overwrite by flipping the writable flag back? Let’s see what happens in that scenario.

Let’s re-use the same obj1 and now set writable to false (configurable defaulted to false when the property was defined).

Object.defineProperty(obj1, 'foo', {
    writable: false
});
obj1.foo; // baz
obj1.foo = 'bar'; // TypeError in strict mode
obj1.foo; // baz

// Try making property writable again
Object.defineProperty(obj1, 'foo', {
    writable: true
});
// Uncaught TypeError:
// Cannot redefine property: foo(…)

So you see, that’s safe! Once writable is false, it can’t be reset to true ever again. It’s a one way switch!

Wait a bit; there is still a hitch. If the property is still configurable, then there is a bypass to this. Let’s explore the configurable property.

3. Configurable

Setting writable to false only prevents changing the value; it doesn’t make the property unmodifiable. To bypass the write-block, a user can just delete the property and then recreate it. Let’s see how.

let obj2 = {};
Object.defineProperty(obj2, 'foo', {
  value: 'bar',
  writable: false,
  configurable: true
});

//bypass
delete obj2.foo;
obj2.foo = 'CHANGED!';
obj2.foo; // CHANGED!

So if you don’t want someone changing your object properties, how would you go about that? The way to prevent third-party consumers from making changes is to set configurable to false. Once set, it prevents the following:

  • Deleting that object property
  • Changing any other descriptor attribute. The only exception is that writable can still be flipped from true to false. Otherwise, every call to defineProperty will throw a TypeError (redefining an attribute to its current value is allowed, but that makes no difference anyway).

And just like the writable flag, this change is a one-way switch. Once configurable is set to false, you can’t reset it to true afterwards.

let obj3 = {};
Object.defineProperty(obj3, 'foo', {
  value: 'bar',
  writable: true,
  configurable: false
});

Object.defineProperty(obj3, 'foo', {
    enumerable: false
});
// TypeError: Cannot redefine property: foo

// bypass fails now
delete obj3.foo; // false (non-configurable)
obj3.foo; // bar

// Can change writable to false
Object.defineProperty(obj3, 'foo', {
    writable: false
});
obj3.foo = 8;
obj3.foo; // bar

So to create immutable properties on objects, set both the writable and configurable fields to false.

4. Enumerable

This determines if the property shows up when enumerating object properties, for example with for..in loops or Object.keys. However, it has no impact on whether you can read or write the property.

But why would you want to make properties non-enumerable?

1. JSON serialization

Usually, we build objects from JSON data retrieved over XHR calls and then enhance them with a couple of new properties. When POSTing the data back, developers typically create a new object with only the extracted properties.

If those property enhancements are non-enumerable, then calling JSON.stringify on the object automatically drops them. Since JSON.stringify also drops functions, this can be an easy way to serialize data accurately.
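A sketch of that pattern (the dirty flag is just an illustrative name):

```javascript
const user = JSON.parse('{"name":"John","surname":"Smith"}');

// Enhance with a non-enumerable bookkeeping property
Object.defineProperty(user, 'dirty', {
    value: true,
    writable: true,
    enumerable: false
});

user.dirty;           // true — usable like any other property
JSON.stringify(user); // '{"name":"John","surname":"Smith"}' — 'dirty' is dropped
```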

2. Mixins

Another application could be mixins which add extra behaviour to objects. If a mixin has an enumerable getter accessor property, then that calculated property will automatically show up in Object.keys and for..in loops; the getter behaves just like any ordinary property. Pretty neat; it reminds me of Ember’s computed properties and I wouldn’t be surprised if it’s the same thing under the hood. On the flip side, you could set enumerable to false to turn this behaviour off.

Unlike writable and configurable, enumerable is a two-way switch. You can set it back to true if it was false before.

Some code examples:

let obj4 = {
    name: 'John',
    surname: 'Smith'
};
Object.defineProperty(obj4, 'fullName', {
  get: function() {
      return this.name + ' ' + this.surname;
  },
  enumerable: true,
  configurable: true
});

let keys = Object.keys(obj4);
//['name', 'surname', 'fullName']

keys.forEach(k => console.log(obj4[k]));
// John, Smith, John Smith

JSON.stringify(obj4);
// "{"name":"John",
//   "surname":"Smith",
//   "fullName":"John Smith"}"

// can reset to false
Object.defineProperty(obj4, 'fullName', {
    enumerable: false
});
Object.keys(obj4);
// ["name", "surname"]

JSON.stringify(obj4);
// "{"name":"John","surname":"Smith"}"

5. Value, Get and Set

  1. An object property cannot have both the value and getter/setter descriptors. You’ve got to choose one.
  2. Value can be pretty much anything – primitives or built-in types. It can even be a function.
  3. You can use the getter and setters to mock read-only properties. You can even have the setter throw Exceptions when users try to set it.
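Point 3 can be sketched with a getter plus a throwing setter (the property name here is illustrative):

```javascript
const config = {};
Object.defineProperty(config, 'apiVersion', {
    get() { return 2; },
    set() { throw new Error('apiVersion is read-only'); }
});

config.apiVersion; // 2
// config.apiVersion = 3; // throws Error: apiVersion is read-only
```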

6. Extras

  1. These properties are all shallow and not deep. You probably have to roll your own recursive helper for deep property setting.
  2. You can examine built-in types and modify some of their properties. For example, you can delete the fromCharCode method of String. Don’t know why you would want that though…
  3. The propertyIsEnumerable method checks if a property is enumerable. No, there are no propertyIsWritable or propertyIsConfigurable methods.
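A quick check of point 3:

```javascript
const o = { visible: 1 };
Object.defineProperty(o, 'hidden', { value: 2, enumerable: false });

o.propertyIsEnumerable('visible'); // true
o.propertyIsEnumerable('hidden');  // false
o.propertyIsEnumerable('missing'); // false — also false for absent properties
```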

Now, read the third post in this series or check out other related articles:

Related

  1. Deep dive into JavaScript Property Descriptors
  2. Learning ES2015 : let, const and var

Deep dive into JavaScript Property Descriptors


Creating Object Properties

There are a couple of ways to assign properties to objects in JavaScript. The most common is obj.field = value or obj['field'] = value. This approach is simple; however, it is not flexible because it automatically fills in the property descriptor fields with permissive defaults.

let obj1 = {
    foo: 'bar'
};

let obj2 = {
    get foo() {
        return 'bar';
    }
};

let obj3 = Object.create({}, { foo : { value : 'bar' } });

let obj4 = Object.create({}, {
    foo : {
        get : function() { return 'bar'; }
    }
});

obj1.foo; // bar
obj2.foo; // bar
obj3.foo; // bar
obj4.foo; // bar

In all 4 obj objects, the foo property returns the same result. But are they the same? Obviously not. This post series examines these differences and shows how you can apply and leverage these capabilities.

Data and Accessor Property Descriptors

Property descriptors hold descriptive information about object properties. There are two types of property descriptors:

  1. Data descriptors – which hold information about data properties
  2. Accessor descriptors – which hold information about accessor (get/set) properties

A property descriptor is a data structure with a couple of identifying fields, some are shared between both types while the others apply to a single type as shown below.

Field          Data descriptor   Accessor descriptor
value          Yes               No
writable       Yes               No
enumerable     Yes               Yes
configurable   Yes               Yes
get            No                Yes
set            No                Yes

Viewing Property Descriptor information

The Object.getOwnPropertyDescriptor method gets the property descriptor for any object property.

let dataDescriptor = Object.getOwnPropertyDescriptor(obj1, 'foo');
dataDescriptor;
// Object {
//     value: "bar",
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

let accessorDescriptor = Object.getOwnPropertyDescriptor(obj2, 'foo');
accessorDescriptor;
// Object {
//     get: function foo() {},
//     set: undefined,
//     enumerable: true,
//     configurable: true
// }

Data Descriptor only fields

1. Value: Gets the value of the property.

2. Writable: Boolean indicating whether the property value can be changed. This can be used to create ‘constant‘ field values especially for primitive values.

Accessor Descriptor only fields

1. Get: Function which will be invoked whenever the property is to be retrieved. This is similar to getters in other languages.

2. Set: Function that would be invoked when the property is to be set. It’s the setter function.

Shared fields

1. Enumerable: Boolean indicating whether the property can be enumerated. This determines if the property shows up during enumeration, for example with for..in loops or Object.keys.

2. Configurable: Boolean indicating whether the type of the property can be changed and if the property can be deleted from the object.

Setting Property Descriptors

The Object.defineProperty method allows you to specify and define these property descriptor fields. It takes the object, property key and a bag of descriptor values.

let obj5 = {};
Object.defineProperty(obj5, 'foo', {
  value: 'bar',
  writable: true,
  enumerable: true,
  configurable: true
});
obj5.foo; // bar

let obj6 = {};
Object.defineProperty(obj6, 'foo', {
  get: function() { return 'bar'; }
});
obj6.foo; // bar

Default values

All boolean descriptor fields default to false while the getter, setter and value properties default to undefined. This is an important detail that is most visible when creating and modifying properties via object assignment or the defineProperty method.

let sample = { a : 2 };
Object.defineProperty(sample, 'b', { value: 4 });
sample; // { a: 2, b:4 }

Object.getOwnPropertyDescriptor(sample, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.getOwnPropertyDescriptor(sample, 'b');
// Object {
//     value: 4,
//     writable: false,
//     enumerable: false,
//     configurable: false
// }

sample.b = 'cannot change'; //writable = false
sample.b //4

delete sample.b //configurable=false
sample.b //4

Object.keys(sample); //enumerable = false
// ['a']

Because the other properties of property b were not set on creation, they default to false. This effectively makes b immutable, not configurable and not enumerable on sample.

Validating property existence

Three tricky scenarios:

  • Accessing non-existent property fields results in undefined
  • Due to the default rules, accessing existing property fields with no value set also gives undefined
  • Finally, it is possible to define a property with the value undefined

So how do you verify whether a property actually exists with the value undefined, or doesn’t exist at all on an object?

let obj = { a: undefined };
Object.defineProperty(obj, 'b', {}); //use defaults

obj.a; //undefined
obj.b; //undefined
obj.c; //undefined

The way out of this is the hasOwnProperty function.

obj.hasOwnProperty('a'); //true
obj.hasOwnProperty('b'); //true
obj.hasOwnProperty('c'); //false

Conclusion

There is still a lot more about these fields and how to use them, but that would make this post too long, so this will be a series. The next post covers each field and what it can be used for.

Teasers before the next post

  • Try invoking a getter property as a function to see what happens. Can you explain why?
  • Try modifying some of the descriptor properties of native JavaScript objects e.g. RegExp, Array, Object etc. What happens?

Related

Read the second post in this series or check out other related articles:

Why I am moving to Angular 2


I started poking into core Angular 2 concepts a few weeks ago and it has been a pleasant experience so far. I rewrote a bare-bones replica of an Angular 1 app that took me months in about 2 or 3 weeks. Although rewrites are typically faster due to familiarity, it was impressive seeing built-in support for most of the painful areas of Angular.

Yes, there is some cost due to the absence of backwards compatibility but hey, you can’t have it all. If you are thinking of choosing between Angular 1 or Angular 2, I’ll say go for Angular 2; it’s totally worth it. However, if you already have an Angular 1 app, then you should evaluate the ROI and impact of the move on your team and delivery schedules.

1. Much Simpler

Both frameworks have steep learning curves, however I believe Angular 2 tries to simplify most of the confusing concepts of Angular 1.

The various derivatives of the $provider (value, constant, factory, service and provider itself) are all gone – everything is just a service now. The same goes for the scope: that powerful but hard-to-manage feature has been eliminated.

Error messages are much clearer and vector you faster into the root cause unlike Angular 1 which had some error messages that had to be ‘learnt’ over time for root-cause correlation.

The move to components, services and established modules and routes makes it easier to design and create components.

2. Better Tooling

Angular-cli is a great tool that reminds me of ember-cli; it’s great that the Angular team finally provided first-class support for this. Apart from the staples of project scaffolding, testing (unit + E2E) and linting, there is also support for pushing to GitHub (it will even create a repo for you!), proxying and build targets. Big wins!

Augury worked well for me out of the box; I remember dropping batarang after running into lots of problems.

Codelyzer is another great tool that helps you to write consistent code conforming to your style guidelines across teams.

3. Typescript

TypeScript is the main language for Angular 2, although there is support for JavaScript and Dart. This should hopefully make it more amenable to adoption by larger enterprises.

JavaScript can be difficult to manage at scale; I guess this is something that affects all weakly typed languages. Refactoring can be a big pain if you have to rename some module in a 100,000-line codebase; it quickly becomes hard to do well. Static typing helps in that case.

4. Reactive Programming

Angular 2 is built with reactive programming in mind. It bundles Rxjs, part of the reactive extensions library which pushes you to use Observables and all the reactive goodness.

It can be challenging wrapping your head around functional reactive programming. Simply put, you need to understand the five building blocks of functional programming – map, reduce, zip, flatten and filter – with which you can compose and combine various programming solutions. (Hadoop is just a ramped-up version of MapReduce.) The framework’s support for reactive concepts (e.g. observables) is deeply ingrained in a wide variety of places: routing, http and templates.
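Those building blocks compose naturally even in plain JavaScript; a tiny sketch:

```javascript
const orders = [
    { amount: 10, paid: true },
    { amount: 25, paid: false },
    { amount: 5,  paid: true }
];

const paidTotal = orders
    .filter(o => o.paid)             // keep paid orders
    .map(o => o.amount)              // project to amounts
    .reduce((sum, a) => sum + a, 0); // fold into a total

paidTotal; // 15
```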

There is also support for promises, but I think mixing Promises and Streams would lead to confusion. Choose one style and stick to it.

Want to learn more about streams? Check out my stream library and accompanying blog post.

5. Routing

Route guards, resolvers, router-link directives and more are a pure delight. Support for modular component routing is impressive too; this allows modules to have independent routing. So you can just pluck them out if you don’t need them anymore.

Angular 1’s routing was difficult to use because it operated at the global level. Yes, there were other routing implementations (a testament to Angular’s extensibility) that helped with things like having multiple outlets in a page.

The good thing about Angular 2 is that all of this is built in, which means you can easily implement a consistent approach to routing across your app.

6. Modularity

Angular 2 comes with better modularity; you can declare modular blocks and use them to compose your application.

Angular 2 allows you to define components that control their routing, layout, sub-component make up and more. Imagine you are creating some web application to monitor social media platforms. I would imagine you’d have top-level navigation tabs for things like Facebook, Twitter and LinkedIn.

It’s possible to define each of these three as top-level modules on their own and then register them in the core app. So the Facebook module ideally should be able to handle its own routing, component and styling and more separately from the Twitter module. An extra benefit is that; you can take this module and re-use it in some other totally different project! That’s simply awesome.

Conclusion

Angular 2 is still new and, though it’s been out there for some time, there is still a concern about how it would ‘perform’ at scale. The good thing though is that it handles most of the issues with Angular 1 really well.

Sure, there might be issues in the future but at least they would be new mistakes :)

Book Review:Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen’s ‘Build your own AngularJS’ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building blocks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike regular transclusion, which includes some DOM structure at a new location, element transclusion provides control over the element itself.

So why is this important? Imagine you want some element to only show up under certain conditions. You can use element transclusion to ensure that the DOM structure is only created and linked when you need it. Need some DOM content to be repeated n times? Just use element transclusion, then clone and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.
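
A rough, framework-free sketch of the idea (the names here are mine, not Angular’s API; strings stand in for DOM nodes): a transclusion function produces a fresh clone of the saved content each time it is called, so an ng-if-like helper only invokes it when the condition holds, and an ng-repeat-like helper invokes it once per item.

```javascript
// makeTransclude saves a template; calling the result clones it with a scope
const makeTransclude = (template) => (scope) =>
  template.replace('{{item}}', scope.item);

// ng-if simplified: nothing is created when the condition is false
const myIf = (condition, transclude, scope) =>
  condition ? [transclude(scope)] : [];

// ng-repeat simplified: one clone per item, each with its own scope
const myRepeat = (items, transclude) =>
  items.map((item) => transclude({ item }));

const transclude = makeTransclude('<li>{{item}}</li>');
myIf(false, transclude, { item: 'x' }); // []
myRepeat(['a', 'b'], transclude);       // ['<li>a</li>', '<li>b</li>']
```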

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The array of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!); the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box but, as expected, there will be people who want a bit more power and they can get their desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }}
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components
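
For instance, the first three of these can be reached from a module’s config block in Angular 1 – a sketch, assuming the AngularJS library provides the angular global:

```javascript
angular.module('app', []).config(function (
  $rootScopeProvider, $interpolateProvider, $httpProvider) {
  // Raise the digest cycle's repeat limit from the default of 10
  $rootScopeProvider.digestTtl(20);

  // Swap the interpolation symbols from {{ }} to [[ ]]
  $interpolateProvider.startSymbol('[[').endSymbol(']]');

  // Register an http interceptor
  $httpProvider.interceptors.push(function () {
    return {
      request: function (config) {
        // e.g. attach auth headers here before every request
        return config;
      }
    };
  });
});
```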

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file has two functions that pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: The other insightful trick was using function objects to ensure reference value integrity. Create a function to use as the reference.
  2. As dictionaries: functions are objects after all and while it is unusual to use them as objects, there is nothing saying you can’t.

function a() {}

a.extraInfo = "extra"; // functions are objects too, so they can carry properties

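
And for the reference-value trick, a minimal sketch (the names here are illustrative): because every function literal is a brand-new object, no user-supplied value can ever compare equal to it, which makes it a safe ‘not yet set’ sentinel.

```javascript
// A function used purely as a unique reference (sentinel) value
function initWatchVal() {}

let lastValue = initWatchVal; // marks 'no value seen yet'

function setValue(next) {
  // true even when next is undefined - undefined still differs from the sentinel
  const changed = next !== lastValue;
  lastValue = next;
  return changed;
}

setValue(undefined); // true - first assignment always counts as a change
setValue(undefined); // false - unchanged afterwards
```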
Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only complaints had to do with issue resolution; there were a few problems with outdated dependencies but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to version 2, but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

Fighting the impostor syndrome


I bet everyone has had thoughts similar to the following go through their minds at one point or the other in their careers:

side A: No, you don’t know it, in fact you don’t know anything…

side B: hmm, I think you are just beating yourself too hard, it’s a new area and you are ramping up fast actually.

side A: Why wasn’t I included in the meeting? Must be because you know nothing! See I was right!!

side B: Well, maybe your input was not needed because you are busy with task xyz

side A: I don’t know… I don’t know… I think I look like a complete newbie. Did I just say something stupid?

side B: Even the smartest people make mistakes and remember they all started somewhere…

The Impostor vs Dunning-Kruger chart

Some chart I saw drawn on one of the whiteboards a long time ago.


The Dunning-Kruger effect argues that amateurs tend to over-estimate their skills while professionals under-estimate their capabilities. On the other hand, the impostor syndrome makes people think that their accomplishments were just by chance and had nothing to do with their efforts or preparation.

In the graph above, the ‘sweet’ spot would be at the top right – where the skills and confidence are at optimum levels.

Confidence, the missing link?

There are several articles about the impostor syndrome and I must say I finally got the chance to ‘really’ experience it.

My proposed expansion to new frontiers has pushed me out of my comfort zone and exposed me to a few humbling experiences. The confidence and familiarity from countless hours shipping code in the front-end domain was missing, as was that familiar reassurance of knowing that you could always dive into the details and find a solution to whatever was thrown at you.

The good news, however, is that good patterns and practices are the same regardless of the domain – another good reason to learn the basics really well. Applications can vary due to environment, framework and language implementations but the core concepts will remain similar: for example, dependency injection, MVC, design patterns, algorithms etc.

Why should I leave my comfort zone?

It sure feels comfortable sticking to what you know best – in fact, this might be recommended in some scenarios. But broadening your scope opens you up to new experiences and improves you all around as a software engineer.

I remember listening to an old career talk about always being the weakest on your team. The ‘director’ talked about finding the strongest team you can find and then joining them and growing through the ranks. Over time, you’ll acquire a lot of skills and eventually become a very strong developer.

In reality, starting again as a ‘newbie’ on a team of experts might be difficult so you need to be really confident; it is easy to become disillusioned and give up. Get some support from a loved one and keep the long-term goal in mind. You’ll eventually grow and learn; moreover, you’ll bring in new perspectives, provide insight into other domains and also improve existing practices.

Everyone has these fears and even the experts don’t know it all. The biggest prize, as one of my mentors said, is gaining the confidence that you can dive into a new field, pull through and deliver something of importance inshaaha Allaah.

New beginnings : New frontiers


I have been pretty much a JavaScript person for the past four (or is it 5?) years – well, ever since I did my internship in 2012. No doubt I really like the language, the ecosystem and the potential. It’s easy to get engrossed in the ecosystem – there is never a dearth of things to learn or tools to try out. Quite intellectually stimulating and mind-broadening (provided you can spend the time to learn it well).

JavaScript still looks exciting, especially with the upcoming changes (async, await, fetch, ES6). As they say, however, the more things change, the more they remain the same, eh? Personally, I think it is time to check out what happens on the other side – the backend. Advocates say server-side development is ‘easier’ and more stable (yeah, they don’t have 1000 frameworks, build tools, task runners and patterns!).

So why the change? Simple answer: growth. I want to try something new, expose myself to stimulating challenges and stretch myself. What’s the point of finding cozy places? The goal is to grow, expand and become better. And did I just get these thoughts? No, this has been on my mind for nearly a year now.

So no more JavaScript? Nope – I enjoy that too much and I still have to finish myangular implementation and descrambler. Nevertheless, I am planning to do more full stack work inshaaha Allaah – expect new topics covering micro-services, scaling huge services, rapid deployment in addition to the staples of programming languages, computer science theory and software engineering.

Let’s go!

The difficult parts of software development


Time for a classic rant again; yeah it’s always good to express thoughts and hear about the feelings of others – a good way to learn.

Lots of people think the most difficult aspects of software development revolve around engineering themes like:

  • Writing elegant pieces of code that are a joy to extend and scale beautifully
  • Crafting brilliant algorithms that can land rockets on small floating platforms (yup, SpaceX, I see you…)
  • Inventing new cryptographic systems (please don’t attempt this at home…)
  • Building and maintaining massively parallel computation systems

Agreed, these are extremely challenging and sometimes it is difficult to find a perfect solution. However, since most of these deal with code and systems, the required skills can be learned and there are usually workarounds available.

There is more to software development than engineering and these other facets can spawn tougher (or even impossible-to-solve) challenges.

Here are three of those difficult areas:

1. Exponential Chaos

The combinatorial complexity of code grows exponentially. It’s well-nigh impossible, and also futile, to try to exercise all possible execution paths of a program. Equivalence class partitioning helps a lot to cut down the test space but invariably, we still miss out on a few.

A single if statement with one condition has two paths – the condition is either true or false. Let’s assign this simple one-condition if check a theoretical complexity value of 1. If that if statement is nested in another if statement, the number of paths explodes to 4; ditto for two conditions in the if condition check. Re-using our complexity model, this comes to a value of 2 or so.
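
To put a rough number on it – under the simplifying assumption that each condition is independent, the path count doubles with every condition:

```javascript
// Each independent boolean condition doubles the number of execution paths
const pathCount = (conditions) => 2 ** conditions;

pathCount(1);  // 2 - a single if: true or false
pathCount(2);  // 4 - a nested if, or two conditions in one check
pathCount(30); // 1073741824 - already over a billion paths
```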

Most codebases have loads of conditional branching, loops, multi-threading, various components and what have you. So we can safely put the complexity values for such codebases in the high millions or thereabouts. Scary? Not yet.

Now imagine what happens when there are hundreds of developers working in that same codebase and making a few hundred check-ins daily? Maybe the complexity value should sky-rocket to the high billions? Trillions?

Given the rapid rate of change and inherent complexity, how do you ensure that quality is maintained? How do you enforce consistency across large groups? A very difficult problem to solve – there are approaches to mitigate the risk but I do not know of any foolproof method that works all the time. If you know of a way, do let me know.

2. I’ll know what I want when I see it

We all think we know what we want – alas, we typically don’t until we see the finished product. Let’s take the following series of interactions between Ade who wants a new dress and his tailor.

Ade: I want some beautiful dress that fits me, is wearable all year round and casual. I want it in 2 weeks.

Tailor: Aha, so you want something casual that fits you, is wearable all year round and need it in 2 weeks.

Ade: Yup right on point

2 weeks later

Tailor: Here it is! (Beaming proudly)

Ade: (Not impressed); I like the fabric and design. But I don’t like the colour, the sleeve length and it doesn’t fit me quite right. Can you change it to black?

Tailor: here, it is in black

Ade: On second thoughts, black would be too hot, could you change it to brown?

Tailor: here it is in brown

Ade: Great! Could the sleeves be shortened by 2cm?

Tailor: Done

Ade: Hhmm, could you revert the sleeves to their original length? I think I now like the earlier design.

Tailor: Done!! (getting annoyed probably)

Ade: Great! This is becoming awesome, could you please reduce the width of the dress? It’s too wide…

Tailor: @#$@#$@$#!!!

Most people usually don’t have physical products tailor-made to their desires. We go to the store (to meet a car dealer, a tailor or an architect) and choose one of the several options there. We can take a car out for a ride, walk through a building or try on a new dress. This helps a lot as we know if we want that product or not.

In software development, it’s a different story – we want something tailored but we cannot express that need accurately until we see the product. Invariably, our descriptions do not match what we desire. To restate: it’s highly probable that you wouldn’t like a dress you described in its entirety to a tailor when you finally try it on.

Figuring out what users actually want is a difficult problem – probably why agile methodologies are popular. A less difficult way? Build the minimum possible thing and allow users to play with it. For example, the tailor could have given Ade a paper dress to evaluate the various styles.

Let’s play a simple game: when you start your next project, make sure you document all user requests, and also record every update as you go along. I am pretty sure the new requests will differ significantly from the original ones. The end product might even be totally different from the initial ask.

3. No laws – it’s the wild wild west out there

If I release my grip on an apple, I expect it to fall down – why? Gravity of course. Most interactions in the physical world are bound by these models. Imagine that a car manufacturer has to design a new super car for some super-rich guy. Mr-rich-guy asks for the following:

  • Must be drive-able by adults, teenagers and infants
  • Must work on Earth, Venus and Mars
  • Can run perfectly on gas, water or coal

The manufacturer can tell him it’s impossible, since the current physical models make it extremely difficult to achieve these three orthogonal requirements; maybe in a movie though…

Let’s go to the world of software and consider the typical AAA game: to capture the largest possible market share, it has to be usable on:

  • Multiple consoles (XBox, PlayStation, Nintendo etc)
  • Other architectures (e.g. 32-bit and 64-bit PCs)
  • Operating systems – Windows, Linux
  • Various frame rates

There are also limitations in software (hardware limits, processors, memory etc) but we often have to build ‘cars’ that can be driven well by people in various age groups living on multiple planets!

The support matrix explodes all the time and generating optimal experiences is an extremely difficult task. In fact, most times, the workaround is to have a limited set of supported platforms.

Alas, it’s the realm of 0s and 1s, so software engineers have to conjure all sort of tricks and contortions to make things work. Maybe some day, cars would run everywhere too…

Conclusion

So apart from the technical challenges, there are a few other just-as-challenging (or even more challenging) areas in software development. These include:

  • Ensuring your requirements match hidden customer desires
  • Working to meet various regulations and ensuring proper support across platforms
  • Managing technical debt and reducing risk in heavily changed code bases

Your thoughts?