Understanding JavaScript Property Descriptors 3


If this is your first time here, you should read part 1 and part 2 of this series first. Then come back here to continue.

Now that we know the basics, this post covers the JavaScript methods for setting and modifying object property descriptors.

1. Object.preventExtensions()

This blocks the addition of new properties to an object. Literally, it prevents extending the object in any way (pun intended) and returns the object.

This is a one-way switch: once an object is made inextensible, there is no way to undo the action – you'd have to recreate the object. Another thing to note is that once an object becomes inextensible, its [[Prototype]] can no longer be reassigned (e.g. via Object.setPrototypeOf); so be careful, especially if ‘inheriting’ from or ‘delegating’ to parent types.

There is also the Object.isExtensible method for checking whether an object is still extensible. This comes in handy because trying to extend an inextensible object in strict mode causes a TypeError.

let obj = { a : 1 };
Object.preventExtensions(obj);
// can't add new properties
obj.b = 3;
obj; // { a : 1 }

// can still change existing properties
obj.a = 3;
obj.a; // 3

Object.isExtensible(obj); // false

Object.getOwnPropertyDescriptor(obj, 'a');
// Object {
//     value: 3,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

2. Object.seal()

Calling Object.seal on an object achieves the following:

  1. Marks every existing property on the object as non-configurable
  2. Then calls Object.preventExtensions to prevent the addition of new properties

Once an object is sealed, you can’t add new properties, delete existing ones or change their descriptors. All the rules of non-configurability described in earlier posts apply.

Note however that this leaves writable untouched, so it is still possible to change the value of the property (both ways: direct assignment or Object.defineProperty). Since configurable is false, however, you can’t delete it.

The Object.isSealed method also exists for checking whether an object is sealed.

let sealedObj = { a : 1 };
Object.seal(sealedObj);
// non-configurable
delete sealedObj.a; // false
sealedObj.a; // 1 

// can still write
sealedObj.a = 2;
sealedObj.a; // 2

//Check properties
Object.getOwnPropertyDescriptor(sealedObj, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isSealed(sealedObj); // true
Object.isExtensible(sealedObj); // false

As shown above, the configurable attribute is now false; every property of a sealed object has configurable set to false.

3. Object.freeze()

Similar to seal, calling Object.freeze on an object does the following:

  1. Marks every existing property on the object as non-writable
  2. Invokes Object.seal, which prevents adding new properties and marks existing properties as non-configurable

Freeze is the highest level of immutability possible using these methods. Properties are now closed to changes due to the false configurable and writable attribute values. And yes, there is the expected Object.isFrozen method too.

let frozenObj = { a : 1 };
Object.freeze(frozenObj);

// non writable
frozenObj.a = 2;
frozenObj.a; // 1

// non configurable
delete frozenObj.a; // false
frozenObj.a; // 1

Object.getOwnPropertyDescriptor(frozenObj, 'a');
// Object {
//     value: 1,
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

// Check
Object.isFrozen(frozenObj); // true
Object.isSealed(frozenObj); // true
Object.isExtensible(frozenObj); // false

4. Shallow nature

A very important caveat applies when using these methods on properties holding reference values. These methods are all shallow: they do not touch the properties inside the referenced values.

So if you freeze an object containing another object, the contained object’s properties are not automatically frozen; you’d have to write your own recursive implementation to handle that.

let shallow = {
    inner: {
        a : 1
    }
};

Object.freeze(shallow);
shallow.inner = null; // fails
shallow; // { inner : { a : 1 } }

// inner properties not frozen
shallow.inner.a = 2;
shallow.inner.a; // 2

Object.getOwnPropertyDescriptor(shallow, 'inner');
// Object {
//     value: {a : 1},
//     writable: false,
//     enumerable: true,
//     configurable: false
// }

Object.getOwnPropertyDescriptor(shallow.inner, 'a');
// Object {
//     value: 1,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.isFrozen(shallow); // true
Object.isFrozen(shallow.inner); // false

As the property descriptors above show, the inner reference itself is non-writable and non-configurable, but the inner object’s own properties are untouched.

Conclusion

Well, that about wraps it up! I hope you enjoyed the series and learnt a lot. Do let me know your thoughts and continue reading!

  1. Deep dive into JavaScript Property Descriptors
  2. Understanding JavaScript Property Descriptors 2


Understanding JavaScript Property Descriptors 2


If this is your first time here, you should read the first post in this series. Then come back here to continue.

Continuing the dive into property descriptors, this post goes deeper into each descriptor field: what it means and how it can be used.

1. Modifying existing properties

The Object.defineProperty method allows you to create and modify properties. If the property already exists, defineProperty updates its descriptor.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: true
});
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar',
//     writable: true,
//     enumerable: false,
//     configurable: false
// }

Object.defineProperty(obj1, 'foo', {
    value: 'bar',
    writable: false
});
obj1.foo; // bar
Object.getOwnPropertyDescriptor(obj1, 'foo');
// Object {
//     value: 'bar', // unchanged
//     writable:false, // updated
//     enumerable: false,
//     configurable: false
// }

Now that we know how to modify properties, let’s dive into the nitty-gritty. Take a deep breath, ready, set, go!

2. Writable

If this flag is true, then the value of the property can be changed. Otherwise, changes are silently rejected – and if you are using strict mode (and you should!), you’ll get a TypeError.

let obj1 = {};
Object.defineProperty(obj1, 'foo', {
  value: 'bar',
  writable: true
});
obj1.foo; // bar

// change value
obj1.foo = 'baz';
obj1.foo; // baz

This can be used to set up ‘constant’ properties that you don’t want consumers to overwrite. You might ask: what happens if someone just flips the writable flag back to brute-force the overwrite? Let’s see what happens in that scenario.

Let’s reuse the same obj1 and set writable to false:

Object.defineProperty(obj1, 'foo', {
    writable: false
});
obj1.foo; // baz
obj1.foo = 'bar'; // TypeError in strict mode
obj1.foo; // baz

// Try making property writable again
Object.defineProperty(obj1, 'foo', {
    writable: true
});
// Uncaught TypeError:
// Cannot redefine property: foo(…)

So you see, that’s safe! Once writable is false, it can’t be reset to true ever again. It’s a one-way switch!

Wait a bit; there is still a hitch. If the property is still configurable, then there is a bypass to this. Let’s explore the configurable property.

3. Configurable

Setting writable to false only prevents changing the value; it doesn’t make the property unmodifiable. To bypass the write-block, a user can just delete the property and then recreate it. Let’s see how.

let obj2 = {};
Object.defineProperty(obj2, 'foo', {
  value: 'bar',
  writable: false,
  configurable: true
});

//bypass
delete obj2.foo;
obj2.foo = 'CHANGED!';
obj2.foo; //CHANGED

So if you don’t want someone changing your object properties, how would you go about that? The way to prevent third-party consumers from making changes to your properties is via setting configurable to false. Once set, it prevents the following:

  • Deleting that object property
  • Changing any other descriptor attributes. The only exception to this rule is that writable can be set to false if it was hitherto true. Otherwise, every call to defineProperty will throw a TypeError. Setting the same value doesn’t throw an error, but that makes no difference anyway.

And just like the writable flag, this change is a one-way switch. Once configurable is set to false, you can’t reset it to true afterwards.

let obj3 = {};
Object.defineProperty(obj3, 'foo', {
  value: 'bar',
  writable: true,
  configurable: false
});

Object.defineProperty(obj3, 'foo', {
    enumerable: false
});
// TypeError: Cannot redefine property: foo

// bypass fails now
delete obj3.foo; // false – non-configurable
obj3.foo; // bar

// Can change writable to false
Object.defineProperty(obj3, 'foo', {
    writable: false
});
obj3.foo = 8;
obj3.foo; // bar

So to create immutable properties on objects, set both the writable and configurable fields to false.

4. Enumerable

This determines if the properties show up when enumerating object properties. For example, when using for..in loops or Object.keys. However, it has no impact on whether you can use the property or not.

But why would you want to make properties non-enumerable?

1. JSON serialization

Usually, we build objects based off JSON data retrieved over XHR calls. These objects are then enhanced with a couple of new properties. When POSTing the data back, developers create a new object with extracted properties.

If those property enhancements are non-enumerable, then calling JSON.stringify on the object automatically drops them. Since JSON.stringify also drops functions, this might be an easy way to serialize data accurately.
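For instance (the user object and the selected property here are made up for illustration):

```javascript
// Imagine `user` was parsed from an XHR response
let user = { name: 'Ada', id: 7 };

// Enhance it with a client-only, non-enumerable property
Object.defineProperty(user, 'selected', {
  value: true,
  writable: true
  // enumerable defaults to false
});

user.selected; // true – usable like any other property
JSON.stringify(user); // '{"name":"Ada","id":7}' – dropped on serialization
```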

2. Mixins

Another application could be mixins, which add extra behaviour to objects. If a mixin has an enumerable getter accessor property, then that calculated property automatically shows up in Object.keys and for..in loops, behaving just like any other property. Pretty neat – it reminds me of Ember’s computed properties, and I wouldn’t be surprised if it’s the same thing under the hood. On the flip side, you could set enumerable to false to turn off this behaviour.

Unlike writable and configurable, enumerable is a two-way switch. You can set it back to true if it was false before.

Some code examples:

let obj4 = {
    name: 'John',
    surname: 'Smith'
};
Object.defineProperty(obj4, 'fullName', {
  get: function() {
      return this.name + ' ' + this.surname;
  },
  enumerable: true,
  configurable: true
});

let keys = Object.keys(obj4);
//['name', 'surname', 'fullName']

keys.forEach(k => console.log(obj4[k]));
// John, Smith, John Smith

JSON.stringify(obj4);
// "{"name":"John",
//   "surname":"Smith",
//   "fullName":"John Smith"}"

// can reset to false
Object.defineProperty(obj4, 'fullName', {
    enumerable: false
});
Object.keys(obj4);
// ["name", "surname"]

JSON.stringify(obj4);
// "{"name":"John","surname":"Smith"}"

5. Value, Get and Set

  1. An object property cannot have both the value and getter/setter descriptors. You’ve got to choose one.
  2. Value can be pretty much anything – primitives or built-in types. It can even be a function.
  3. You can use the getter and setters to mock read-only properties. You can even have the setter throw Exceptions when users try to set it.
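A quick sketch of points 1 and 3 (the holder object and property names are illustrative):

```javascript
let holder = {};

// 1. Mixing `value` and `get` in one descriptor is rejected
try {
  Object.defineProperty(holder, 'broken', {
    value: 1,
    get: function() { return 2; }
  });
} catch (e) {
  e instanceof TypeError; // true
}

// 3. A getter plus a throwing setter mocks a read-only property
Object.defineProperty(holder, 'readOnly', {
  get: function() { return 42; },
  set: function() { throw new Error('readOnly cannot be changed'); }
});

holder.readOnly; // 42
```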

6. Extras

  1. These properties are all shallow and not deep. You probably have to roll your own recursive helper for deep property setting.
  2. You can examine built-in types and modify some of their properties. For example, you can delete the fromCharCode method of String. Don’t know why you would want that though…
  3. The propertyIsEnumerable method checks if a property is enumerable. No, there are no propertyIsWritable or propertyIsConfigurable methods.
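A minimal sketch of point 3 (the variable names are mine):

```javascript
let record = { visible: 1 };
Object.defineProperty(record, 'hidden', { value: 2 }); // enumerable defaults to false

record.propertyIsEnumerable('visible'); // true
record.propertyIsEnumerable('hidden');  // false

// For writable/configurable you go through the descriptor instead
Object.getOwnPropertyDescriptor(record, 'hidden').writable; // false
```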

Now, read the third post in this series or check out other related articles:

Related

  1. Deep dive into JavaScript Property Descriptors
  2. Learning ES2015 : let, const and var

Deep dive into JavaScript Property Descriptors


Creating Object Properties

There are a couple of ways to assign properties to objects in JavaScript. The most common example is using obj.field = value or obj['field'] = value. This approach is simple; however, it is not flexible because it automatically sets the property descriptor fields.

let obj1 = {
    foo: 'bar'
};

let obj2 = {
    get foo() {
        return 'bar';
    }
};

let obj3 = Object.create({}, { foo : { value : 'bar' } });

let obj4 = Object.create({}, {
    foo : {
        get : function() { return 'bar'; }
    }
});

obj1.foo; // bar
obj2.foo; // bar
obj3.foo; // bar
obj4.foo; // bar

In all four objects, the foo property returns the same result. But are they the same? Obviously not. This post series examines these differences and shows how you can apply and leverage these capabilities.

Data and Accessor Property Descriptors

Property descriptors hold descriptive information about object properties. There are two types of property descriptors:

  1. Data descriptors – which only hold information about data
  2. Accessor descriptors – which hold information about accessor (get/set) properties.

A property descriptor is a data structure with a couple of identifying fields, some are shared between both types while the others apply to a single type as shown below.

Field          Data descriptor   Accessor descriptor
value          Yes               No
writable       Yes               No
enumerable     Yes               Yes
configurable   Yes               Yes
get            No                Yes
set            No                Yes

Viewing Property Descriptor information

The Object.getOwnPropertyDescriptor method allows you to get the property descriptor of any object property.

let dataDescriptor = Object.getOwnPropertyDescriptor(obj1, 'foo');
dataDescriptor;
// Object {
//     value: "bar",
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

let accessorDescriptor = Object.getOwnPropertyDescriptor(obj2, 'foo');
accessorDescriptor;
// Object {
//     get: function foo () {},
//     set: undefined,
//     enumerable: true,
//     configurable: true
// }

Data Descriptor only fields

1. Value: Gets the value of the property.

2. Writable: Boolean indicating whether the property value can be changed. This can be used to create ‘constant’ field values, especially for primitive values.

Accessor Descriptor only fields

1. Get: Function which will be invoked whenever the property is to be retrieved. This is similar to getters in other languages.

2. Set: Function that would be invoked when the property is to be set. It’s the setter function.

Shared fields

1. Enumerable: Boolean indicating whether the property can be enumerated. This determines if the property shows up during enumeration, for example in for..in loops or Object.keys.

2. Configurable: Boolean indicating whether the property’s descriptor can be changed (including switching between data and accessor forms) and whether the property can be deleted from the object.

Setting Property Descriptors

The Object.defineProperty method allows you to specify and define these property descriptor fields. It takes the object, property key and a bag of descriptor values.

let obj5 = {};
Object.defineProperty(obj5, 'foo', {
  value: 'bar',
  writable: true,
  enumerable: true,
  configurable: true
});
obj5.foo; // bar

let obj6 = {};
Object.defineProperty(obj6, 'foo', {
  get: function() { return 'bar'; }
});
obj6.foo; // bar

Default values

All boolean descriptor fields default to false while the getter, setter and value properties default to undefined. This is an important detail that is most visible when creating and modifying properties via object assignment versus the defineProperty method.

let sample = { a : 2 };
Object.defineProperty(sample, 'b', { value: 4 });
sample; // { a: 2, b:4 }

Object.getOwnPropertyDescriptor(sample, 'a');
// Object {
//     value: 2,
//     writable: true,
//     enumerable: true,
//     configurable: true
// }

Object.getOwnPropertyDescriptor(sample, 'b');
// Object {
//     value: 4,
//     writable: false,
//     enumerable: false,
//     configurable: false
// }

sample.b = 'cannot change'; //writable = false
sample.b //4

delete sample.b //configurable=false
sample.b //4

Object.keys(sample); //enumerable = false
// ['a']

Because the other descriptor fields of property b were not set on creation, they default to false. This effectively makes b non-writable, non-configurable and non-enumerable on sample.

Validating property existence

Three tricky scenarios:

  • Accessing non-existent property fields results in undefined
  • Due to the default rules, accessing existing property fields with no value set also gives undefined
  • Finally, it is possible to define a property with the value undefined

So how do you tell whether a property actually exists with the value undefined, or doesn’t exist at all on an object?

let obj = { a: undefined };
Object.defineProperty(obj, 'b', {}); //use defaults

obj.a; //undefined
obj.b; //undefined
obj.c; //undefined

The way out of this is the hasOwnProperty method, called on the object itself.

obj.hasOwnProperty('a'); //true
obj.hasOwnProperty('b'); //true
obj.hasOwnProperty('c'); //false
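The in operator is another option; note, though, that unlike hasOwnProperty it also reports inherited properties:

```javascript
let obj = { a: undefined };
Object.defineProperty(obj, 'b', {}); // all descriptor defaults

'a' in obj; // true
'b' in obj; // true
'c' in obj; // false

// `in` walks the prototype chain; hasOwnProperty does not
'toString' in obj;              // true – inherited from Object.prototype
obj.hasOwnProperty('toString'); // false
```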

Conclusion

There is still a lot more about these values and how to use them, but that would make this post too long, so this will be a series. The next post focuses on each field and what it can be used for.

Teasers before the next post

  • Try invoking a getter property as a function to see what happens. Can you explain why?
  • Try modifying some of the descriptor properties of native JavaScript objects e.g. RegExp, Array, Object etc. What happens?

Related

Read the second post in this series or check out other related articles:

Why I am moving to Angular 2


I started poking into core Angular 2 concepts a few weeks ago and it has been a pleasant experience so far. I rewrote a bare-bones replica of an Angular 1 app that took me months in about 2 or 3 weeks. Although rewrites are typically faster due to familiarity, it was impressive seeing built-in support for most of the painful areas of Angular.

Yes, there is some cost due to the absence of backwards compatibility but hey, you can’t have it all. If you are thinking of choosing between Angular 1 or Angular 2, I’ll say go for Angular 2; it’s totally worth it. However, if you already have an Angular 1 app, then you should evaluate the ROI and impact of the move on your team and delivery schedules.

1. Much Simpler

Both frameworks have steep learning curves, however I believe Angular 2 tries to simplify most of the confusing concepts of Angular 1.

The various derivatives of the $provider (value, constant, factory, service and provider itself) are all gone – everything is just a service now. The same goes for scope: the powerful but hard-to-manage feature has been eliminated.

Error messages are much clearer and point you to the root cause faster, unlike Angular 1, whose error messages often had to be ‘learnt’ over time before you could correlate them to root causes.

The move to components, services and established modules and routes makes it easier to design and create components.

2. Better Tooling

Angular-cli is a great tool that reminds me of ember-cli; it’s great that the Angular team finally provided first-class support for this. The cli is amazing: apart from the staples of project scaffolding, testing (unit + E2E) and linting, there is also support for pushing to GitHub (it will even create a repo for you!), proxying and build targets. Big wins!!

Augury worked well for me out of the box; I remember dropping Batarang after running into lots of problems.

Codelyzer is another great tool that helps you to write consistent code conforming to your style guidelines across teams.

3. Typescript

TypeScript is the main language for Angular 2, although there is support for JavaScript and Dart. This should hopefully make it more attractive for adoption by larger enterprises.

JavaScript can be difficult to manage at scale; I guess this affects all weakly typed languages. Refactoring can be a big pain if you have to rename some module in a 100,000-line codebase – it quickly becomes a pain point and is hard to do well. Static typing does help in that case.

4. Reactive Programming

Angular 2 is built with reactive programming in mind. It bundles RxJS, the JavaScript implementation of the Reactive Extensions library, which pushes you towards Observables and all the reactive goodness.

It can be challenging to wrap your head around functional reactive programming. Simply put, you need to understand the five building blocks of functional programming – map, reduce, zip, flatten and filter. With these, you can compose and combine solutions to a wide range of problems; Hadoop’s MapReduce, for instance, is just a scaled-up combination of map and reduce. The framework’s support for reactive concepts (e.g. observables) is deeply ingrained in a wide variety of places: routing, http and templates.
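As a toy illustration in plain JavaScript (arrays rather than observables, and the word-count example is mine), a tiny pipeline composed from map, flatten, filter and reduce:

```javascript
const lines = ['a b', 'b c', 'a a'];

// map + flatten (flatten done via reduce/concat):
// split each line into words, then flatten into one list
const words = lines
  .map(line => line.split(' '))
  .reduce((flat, arr) => flat.concat(arr), []);

// filter + reduce: count the occurrences of 'a'
const countA = words
  .filter(word => word === 'a')
  .reduce(count => count + 1, 0);

countA; // 3
```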

There is also support for promises, but I think mixing Promises and Streams would lead to confusion. Choose one style and stick to it.

Want to learn more about streams? Check out my stream library and accompanying blog post.

5. Routing

Route guards, resolvers, router-link directives and more are a pure delight. Support for modular component routing is impressive too; this allows modules to have independent routing. So you can just pluck them out if you don’t need them anymore.

Angular 1’s routing was difficult to use because it operated at the global level. Yes, there were other routing implementations (a testament to Angular’s extensibility) that helped with things like having multiple outlets on a page.

The good thing about Angular 2 is that all of this is built in, which means you can easily implement a consistent approach to routing across your app.

6. Modularity

Angular 2 comes with better modularity; you can declare modular blocks and use them to compose your application.

Angular 2 allows you to define components that control their routing, layout, sub-component make up and more. Imagine you are creating some web application to monitor social media platforms. I would imagine you’d have top-level navigation tabs for things like Facebook, Twitter and LinkedIn.

It’s possible to define each of these three as top-level modules on their own and then register them in the core app. So the Facebook module ideally should be able to handle its own routing, components, styling and more, separately from the Twitter module. An extra benefit is that you can take this module and reuse it in some other, totally different project! That’s simply awesome.

Conclusion

Angular 2 is still new, and though it’s been out there for some time, there is still a concern about how it will ‘perform’ at scale. The good thing though is that it handles most of the issues with Angular 1 really well.

Sure, there might be issues in the future but at least they would be new mistakes :)

Book Review:Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen’s ‘Build Your Own AngularJS’ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building blocks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike the regular transclude which will include some DOM structure in some new location; element transclusion provides control over the element itself.

So why is this important? Imagine you want some element to show up only under certain conditions: element transclusion ensures that the DOM structure is only created and linked when you need it. Need some DOM content repeated n times? Use element transclusion to clone and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The suite of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!), the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box but as expected, there would be people who want a bit more power and they can get desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }}
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file has two functions that pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: The other insightful trick was using function objects to ensure reference value integrity. Create a function to use as the reference.
  2. As dictionaries: functions are objects after all and while it is unusual to use them as objects, there is nothing saying you can’t.

function a() {}

// functions are objects too, so they can carry arbitrary properties
a.extraInfo = 'extra';
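The first trick deserves a sketch as well: a freshly created function can never compare equal to any user-supplied value, which makes it a safe sentinel (the names below are mine, not the book's):

```javascript
// A brand-new function is a unique object reference,
// so no user data can ever be === to it
function uninitialized() {}

let lastValue = uninitialized;

function hasChanged(newValue) {
  return newValue !== lastValue;
}

hasChanged(undefined); // true – even undefined differs from the sentinel
lastValue = undefined;
hasChanged(undefined); // false
```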

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1,000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only gripes had to do with issue resolution; there were a few problems with outdated dependencies, but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how it was easy to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9/5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to version 2, but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

Fighting the impostor syndrome


I bet everyone has had thoughts similar to the following go through their minds at one point or the other in their careers:

side A: No, you don’t know it, in fact you don’t know anything…

side B: hmm, I think you are just beating yourself too hard, it’s a new area and you are ramping up fast actually.

side A: Why wasn’t I included in the meeting? Must be because you know nothing! See I was right!!

side B: Well, maybe your input was not needed because you are busy with task xyz

side A: I don’t know… I don’t know… I think I look like a complete newbie. Did I just say something stupid?

side B: Even the smartest people make mistakes and remember they all started somewhere…

The Impostor vs Dunning-Kruger chart

Some chart I saw drawn on one of the whiteboards a long time ago.


The Dunning-Kruger effect argues that amateurs tend to over-estimate their skills while professionals under-estimate their capabilities. On the other hand, the impostor syndrome makes people think that their accomplishments were just by chance and had nothing to do with their efforts or preparation.

In the graph above, the ‘sweet’ spot would be at the top right – where the skills and confidence are at optimum levels.

Confidence, the missing link?

There are several articles about the impostor syndrome and I must say I finally got the chance to ‘really’ experience it.

My proposed expansion to new frontiers has pushed me out of my comfort zone and exposed me to a few humbling experiences. The confidence and familiarity built from countless hours shipping front-end code was gone; that familiar reassurance of knowing you could always dive into the details and find a solution to whatever was thrown at you was missing.

The good news however is that good patterns and practices are the same regardless of the domain – another good reason to learn the basics really well. Applications vary by environment, framework and language, but the core concepts remain similar: dependency injection, MVC, design patterns, algorithms etc.

Why should I leave my comfort zone?

It sure feels comfortable sticking to what you know best – in fact, this might be recommended in some scenarios. But broadening your scope opens you up to new experiences and improves you all around as a software engineer.

I remember listening to an old career talk about always being the weakest on your team. The ‘director’ advised finding the strongest team you can, joining them and growing through the ranks. Over time, you’ll acquire a lot of skills and eventually become a very strong developer.

In reality, starting again as a ‘newbie’ on a team of experts might be difficult, so you need to be really confident; it is easy to become disillusioned and give up. Get some support from a loved one and keep the long-term goal in mind. You’ll eventually grow and learn; moreover, you’ll bring in new perspectives, provide insight into other domains and also improve existing practices.

Everyone has these fears and even the experts don’t know it all. The biggest prize, as one of my mentors said, is gaining the confidence that you can dive into a new field, pull through and deliver something of importance inshaaha Allaah.

New beginnings : New frontiers


I have been pretty much a JavaScript person for the past four (or is it five?) years – well, ever since I did my internship in 2012. No doubt I really like the language, the ecosystem and its potential. It’s easy to get engrossed in the ecosystem – there is never a dearth of things to learn or tools to try out. Quite intellectually stimulating and mind-broadening (provided you can spend the time to learn it well).

JavaScript still looks exciting especially with the upcoming changes (async, await, fetch, ES6). As they say however, the more things change the more they remain the same eh? Personally, I think it is time to check out what happens on the other side – the backend. Advocates say server-side development is ‘easier’ and more stable (yeah, they don’t have 1000 frameworks, build tools, task runners and patterns!).

So why the change? Simple answer: growth. I want to try something new, expose myself to stimulating challenges and stretch myself. What’s the point of finding cozy places? The goal is to grow, expand and become better. And did these thoughts just occur to me? No, they have been on my mind for nearly a year now.

So no more JavaScript? Nope – I enjoy it too much, and I still have to finish my Angular implementation and descrambler. Nevertheless, I am planning to do more full-stack work inshaaha Allaah – expect new topics covering micro-services, scaling huge services and rapid deployment, in addition to the staples of programming languages, computer science theory and software engineering.

Let’s go!

The difficult parts of software development


Time for a classic rant again; yeah it’s always good to express thoughts and hear about the feelings of others – a good way to learn.

Lots of people think the most difficult aspects of software development revolve around engineering themes like:

  • Writing elegant pieces of code that are a joy to extend and scale beautifully
  • Crafting brilliant algorithms that can land rockets on small floating platforms (yup, SpaceX, I see you…)
  • Inventing new cryptographic systems (please don’t attempt this at home…)
  • Building and maintaining massively parallel computation systems

Agreed, these are extremely challenging and sometimes it is difficult to find a perfect solution. However, since most of these deal with code and systems, the required skills can be learned and there are usually workarounds available.

There is more to software development than engineering and these other facets can spawn tougher (or even impossible-to-solve) challenges.

Here are three of those difficult areas:

1. Exponential Chaos

The combinatorial complexity of code grows exponentially. It’s well-nigh impossible, and also futile, to try to exercise all possible execution paths of a program. Equivalence class partitioning helps a lot to cut down the test space but, invariably, we still miss a few.

A single if statement with one condition has two paths – the condition is either true or false. Let’s assign this simple one-condition if check a theoretical complexity value of 1. If that if statement is nested inside another if statement, the number of paths explodes to 4; ditto for two conditions in a single if check. Re-using our complexity model, this comes to a value of 2 or so.
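This path explosion is easy to demonstrate with a tiny, purely illustrative function (the `classify` name and its branches are my own invention):

```javascript
// Two independent boolean conditions give 2^2 = 4 distinct execution paths.
function classify(a, b) {
  if (a) {
    if (b) return 'both';
    return 'a only';
  }
  if (b) return 'b only';
  return 'neither';
}

// Exhaustively exercising the function covers all four paths.
const paths = new Set();
for (const a of [true, false]) {
  for (const b of [true, false]) {
    paths.add(classify(a, b));
  }
}
console.log(paths.size); // 4 – and each extra independent condition doubles this
```

With just 20 independent conditions, exhaustive coverage already needs over a million cases.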

Most codebases have loads of conditional branching, loops, multi-threading, various components and what have you; so we can safely put the complexity values for such codebases in the high millions or thereabouts. Scary? Not yet.

Now imagine what happens when there are hundreds of developers working in that same codebase and making a few hundred check-ins daily? Maybe the complexity value should sky-rocket to the high billions? Trillions?

Given the rapid rate of change and inherent complexity, how do you ensure that quality is maintained? How do you enforce consistency across large groups? A very difficult problem to solve – there are approaches to mitigate the risk but I do not know of any foolproof method that works all the time. If you know of a way, do let me know.

2. I’ll know what I want when I see it

We all think we know what we want – alas, we typically don’t until we see the finished product. Let’s take the following series of interactions between Ade, who wants a new dress, and his tailor.

Ade: I want some beautiful dress that fits me, is wearable all year round and casual. I want it in 2 weeks.

Tailor: Aha, so you want something casual that fits you, is wearable all year round and need it in 2 weeks.

Ade: Yup right on point

2 weeks later

Tailor: Here it is! (Beaming proudly)

Ade: (Not impressed); I like the fabric and design. But I don’t like the colour, the sleeve length and it doesn’t fit me quite right. Can you change it to black?

Tailor: here, it is in black

Ade: On second thoughts, black would be too hot, could you change it to brown?

Tailor: here it is in brown

Ade: Great! Could the sleeves be shortened by 2cm?

Tailor: Done

Ade: Hhmm, could you revert the sleeves to their original length? I think I now like the earlier design.

Tailor: Done!! (getting annoyed probably)

Ade: Great! This is becoming awesome, could you please reduce the width of the dress? It’s too wide…

Tailor: @#$@#$@$#!!!

Most people usually don’t have physical products tailor-made to their desires. We go to the store (to meet a car dealer, a tailor or an architect) and choose one of the several options there. We can take a car out for a ride, walk through a building or try on a new dress. This helps a lot as we know if we want that product or not.

In software development, it’s a different story – we want something tailored but we cannot express that need accurately until we see the product. Invariably, our descriptions do not match what we desire. To  restate: it’s highly probable that you wouldn’t like a dress you described in its entirety to a tailor when you try it on.

Figuring out what users actually want is a difficult problem – probably why agile methodologies are popular. Less difficult way? Do the minimum possible thing and allow users to play with it. For example, the tailor could have given Ade a paper dress to evaluate all the styles and all that.

Let’s play a simple game: when you start your next project, document all user requests and record every update as you go along. I am pretty sure the newer requests will differ significantly from the original ones; the end product might even be totally different from the initial ask.

3. No laws – it’s the wild wild west out there

If I release my grip on an apple, I expect it to fall down – why? Gravity of course. Most interactions in the physical world are bound by these models. Imagine that a car manufacturer has to design a new super car for some super-rich guy. Mr-rich-guy asks for the following:

  • Must be drive-able by adults, teenagers and infants
  • Must work on Earth, Venus and Mars
  • Can run perfectly on gas, water or coal

The manufacturer can tell him it’s impossible, since current physical models make these three orthogonal requirements unachievable together; maybe if he wants a movie prop though…

Let’s go to the world of software. Consider the typical AAA game: to capture the largest possible market share, products have to be usable on:

  • Multiple consoles (XBox, PlayStation, Nintendo etc)
  • Other architectures (e.g. 32-bit and 64-bit PCs)
  • Operating systems – Windows, Linux
  • Various frame rates

There are also limits in software (hardware, processors, memory etc.), but we often have to build ‘cars’ that can be driven well by people of various age groups living on multiple planets!

The support matrix explodes all the time and generating optimal experiences is an extremely difficult task. In fact, most times, the workaround is to have a limited set of supported platforms.

Alas, it’s the realm of 0s and 1s, so software engineers have to conjure all sort of tricks and contortions to make things work. Maybe some day, cars would run everywhere too…

Conclusion

So apart from the technical challenges, there are a few other just-as-challenging (or even more challenging) areas in software development. These include:

  • Ensuring your requirements match hidden customer desires
  • Working to meet various regulations and ensuring proper support across platforms
  • Managing technical debt and reducing risk in heavily changed code bases

Your thoughts?

A simple explanation of 1s complement arithmetic


I remember taking the digital systems course in my second year of university and being exposed to concepts like k-maps, 1s and 2s complement arithmetic. These building blocks served as the foundations for bigger concepts like adders, half-adders, comparators, gates and what not.

It was pretty much fun doing the calculations then even though I didn’t fully realize why I needed to add a 1 while doing 1s complement arithmetic. Until I stumbled upon the excellent explanation in Charles Petzold’s Code; a great book that uses very lucid metaphors for explaining computing concepts. As is usually the case – the best explanations are typically so simple and straightforward that anyone can grasp them.

Even if you already know about 1s and 2s complement arithmetic; still go ahead and read this, you might find something interesting.

Subtraction – the basics

Assuming you need to find the difference between two numbers, e.g. 174 and 41, this is pretty straightforward:

minuend 174
subtrahend 041
difference 133

Aside: Minuend and subtrahend are valid names; the parameter names for the basic mathematical operations are given below.

Expression   First number (i.e. 5)   Second number (i.e. 3)
5 + 3        Augend                  Addend
5 – 3        Minuend                 Subtrahend
5 * 3        Multiplier              Multiplicand
5 / 3        Dividend                Divisor

This is the simple case; how about the scenario where you need to ‘borrow’ from preceding digits?

minuend 135
subtrahend 049
difference 086

Aha, the pesky borrowing! What if there were a way to avoid borrowing altogether? The first thing to think of is the ceiling of all 3-digit numbers, i.e. the smallest possible minuend that would require no borrows for any 3-digit subtrahend. We use 3 digits since our example is a 3-digit one; were the subtrahend a 5-digit number, we would need the corresponding 5-digit value instead.

That smallest ‘no-borrows-required’ number is 999. Unsurprisingly, it is the maximum possible value in base ten if you have only 3 digits to use in the hundreds, tens and units positions. Note: in other bases, it is likewise the largest 3-digit value – one less than the base raised to the number of digits; e.g. for base 8, it’ll be 777.

Now, let’s use this maximum value as the minuend

minuend 999
subtrahend 049
difference 950

Since we are using 999 as the reference value, 49 and 950 are complements of each other, i.e. they add up to 999. So we can say 49 is the 9s complement of 950, just as 950 is the 9s complement of 49.
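Computing the 9s complement is completely mechanical; here is a tiny sketch (the `ninesComplement` helper name is my own):

```javascript
// The 9s complement of n, relative to the all-nines number with `digits` digits.
function ninesComplement(n, digits) {
  const allNines = Math.pow(10, digits) - 1; // e.g. 999 for 3 digits
  return allNines - n;                       // never needs a borrow
}

console.log(ninesComplement(49, 3));  // 950
console.log(ninesComplement(950, 3)); // 49
```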

Awesome! We can now avoid the annoying carries!! But knowing this is useless in itself unless we can find some way to leverage this new-found knowledge. Are there math tricks to use?  Turns out this is very possible and straightforward.

Math-fu?

Lets do some more maths tricks (remember all those crafty calculus dy/dx tricks)…

135 – 49 = 135 – 49 + 1000 – 1000

= 135 + 1000 – 49 – 1000

= 135 + 1 + 999 – 49 – 1000

= 135 + 1 + (999 – 49) – 1000

= 136 + 950 – 1000

= 1086 – 1000

= 86

QED!
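The whole derivation above can be wrapped up in a few lines of JavaScript – a sketch, with `subtractViaComplement` being my own naming:

```javascript
// Subtract without borrowing: add the 9s complement of the subtrahend plus 1,
// then discard the extra power of 10.
function subtractViaComplement(minuend, subtrahend, digits) {
  const allNines = Math.pow(10, digits) - 1;  // e.g. 999 for 3 digits
  const complement = allNines - subtrahend;   // borrow-free subtraction
  return minuend + complement + 1 - Math.pow(10, digits);
}

console.log(subtractViaComplement(135, 49, 3)); // 86
```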

What’s the use of such a long process?

We just found a very long way to avoid carries while doing subtraction. However, there is no motivation to use this since it is quite inefficient in base 10. So what’s the reason for all this?

It turns out that in computer-land, counting is done in 0s and 1s. The folks there can’t even imagine there are numbers other than 1 and 0. As you’ll see, there are some great advantages to using the 1s complement approach in this area.

Lets take a simple example e.g. 11 – 7

minuend 1011
subtrahend 0111
difference ????

Applying the same trick again (this time the reference value will be 1111 instead of 999).

minuend 1111
subtrahend 0111
difference 1000

Do you notice a pattern between the subtrahend (0111) and the difference (1000)? The complements seem to be ‘inverses’ of each other.

The 1s complement of any binary value is just the bitwise inverse of its bits. Calculating it is only a matter of flipping each bit’s value – a linear O(n) operation that can be very fast. That’s a BIG WIN.
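In JavaScript, this bit flipping is a single bitwise NOT, masked down to the width we care about (the `onesComplement` name is mine):

```javascript
// 1s complement of an n-bit value: flip every bit, keep only the low n bits.
function onesComplement(value, bits) {
  const mask = (1 << bits) - 1; // e.g. 0b1111 for 4 bits
  return ~value & mask;
}

console.log(onesComplement(0b0111, 4).toString(2)); // "1000"
```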

Continuing the process again with the addition step this time:

Augend (difference from step above) 01000
Addend (of 1) 00001
Addend (of original 11 value) 01011
Sum 10100

Finally, subtract the next larger power of 2, which is 10000 (since we used 1111 as the reference value).

minuend 10100
subtrahend 10000
difference 00100

And there is the answer: 00100, i.e. 4!

How about negative numbers? Simple: just run the same process and invert the answer.
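Putting all the binary steps together, including the negative case, gives a small sketch (the `subtractBinary` name and structure are my own):

```javascript
// Subtract via the 1s complement: add 1 and the flipped subtrahend, then
// subtract the next power of 2. If no overflow occurs, the true result is
// negative: run the same process and invert the answer.
function subtractBinary(minuend, subtrahend, bits) {
  const limit = 1 << bits;               // e.g. 10000 (binary) for 4 bits
  const mask = limit - 1;                // e.g. 1111, the reference value
  const flipped = ~subtrahend & mask;    // 1s complement of the subtrahend
  const sum = minuend + 1 + flipped;     // the addition step
  if (sum >= limit) return sum - limit;  // positive: drop the extra power of 2
  return -(~(sum - 1) & mask);           // negative: invert the answer
}

console.log(subtractBinary(11, 7, 4)); // 4
console.log(subtractBinary(7, 11, 4)); // -4
```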

Hope you found this fascinating and enjoyed it. Let me know your thoughts in the comments.

How to detect page visibility in web applications


You are building a web application and need the application to pause whenever the user stops interacting with the page; for example, the user opens up another browser tab or minimizes the browser itself. Example scenarios include games where you want to automatically pause the action or video/chat applications where you’d like to raise a notification.

The main advantage of such an API is preventing resource wastage (battery life on mobile, internet bandwidth or unnecessary computing tasks). Definitely something to have in mind, especially for developers targeting mobile devices. So how would you do this?

Can I use event listeners?

Technically, you could use a global event listener on the window object to listen for focus/blur events; however, this cannot detect browser minimization. Also, the blur/focus events fire whenever the page loses focus, yet a webpage can still be visible despite losing focus – think of users with multiple monitors.

The good news is that this is possible with the Page Visibility API, which is built into browsers, and this post shows how to use it.

Deep dive into details

The Document interface has been extended with two more attributes – visibilityState and hidden.

Hidden

This is true whenever the page is not visible. What counts as being not visible includes lock screens, minimization, being in a background tab etc.

VisibilityState

This can be one of four possible string values describing the visibility state of the page.

  • hidden: the page is hidden; hidden is true
  • visible: the page is visible; hidden is false
  • prerender: the page is being pre-rendered and is not visible. Support for this is optional across browsers and not enforced
  • unloaded: the page is being unloaded; hidden would be false here. Support for this is also optional across browsers

Show me some code!

document.addEventListener('visibilitychange', function () {
    if (document.hidden) {
        console.log('hidden');
    } else {
        console.log('visible');
    }
}, false);

Browser support

You can use it in nearly all modern browsers except Opera Mini. Also, you might need vendor prefixes for some of the other browsers. See this.

Conclusion

There it is; you now know a way to effectively manage resource consumption – be it battery, internet data or computing power.

You can use this to determine how long users spend on your page, automatically pause streaming video/audio (with some nice fadeout effects for audio especially) or even raise notifications.
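For the time-on-page idea, the timing logic can be kept separate from the DOM so it is easy to test; a hypothetical sketch (`createVisibilityTimer` and its wiring are my own invention, not part of the API):

```javascript
// Tracks cumulative visible time. The clock `now` is injected
// (e.g. () => Date.now()) so the logic also runs outside a browser.
function createVisibilityTimer(now) {
  let visibleSince = now(); // assume the page starts out visible
  let total = 0;
  return {
    // Call with document.hidden on every visibilitychange event.
    onChange(hidden) {
      if (hidden && visibleSince !== null) {
        total += now() - visibleSince; // close the current visible span
        visibleSince = null;
      } else if (!hidden && visibleSince === null) {
        visibleSince = now();          // start a new visible span
      }
    },
    totalVisibleMs() {
      return total + (visibleSince === null ? 0 : now() - visibleSince);
    },
  };
}

// In a real page, the wiring would look roughly like:
// const timer = createVisibilityTimer(() => Date.now());
// document.addEventListener('visibilitychange', () =>
//     timer.onChange(document.hidden));
```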

Did you enjoy this post? Here are a few more related posts: