Why I am moving to Angular 2


I started poking into core Angular 2 concepts a few weeks ago and it has been a pleasant experience so far. I rewrote a bare-bones replica of an Angular 1 app that took me months in about 2 or 3 weeks. Although rewrites are typically faster due to familiarity, it was impressive seeing built-in support for most of the painful areas of Angular.

Yes, there is some cost due to the absence of backwards compatibility but hey, you can’t have it all. If you are thinking of choosing between Angular 1 and Angular 2, I’ll say go for Angular 2; it’s totally worth it. However, if you already have an Angular 1 app, then you should evaluate the ROI and the impact of the move on your team and delivery schedules.

1. Much Simpler

Both frameworks have steep learning curves; however, I believe Angular 2 tries to simplify most of the confusing concepts of Angular 1.

The various derivatives of $provide (value, constant, factory, service and provider itself) are all gone – everything is just a service now. The same applies to the scope: that powerful but hard-to-manage feature has been eliminated.
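For instance, here is a minimal sketch of what a service looks like now (UserService is a hypothetical name; @Injectable comes from @angular/core):

import { Injectable } from '@angular/core';

@Injectable()
export class UserService {
    private users: string[] = [];

    addUser(name: string): void {
        this.users.push(name);
    }

    getUsers(): string[] {
        return this.users;
    }
}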

Error messages are much clearer and point you to the root cause faster, unlike Angular 1, which had some error messages that had to be ‘learnt’ over time before you could correlate them to root causes.

The move to components, services and established conventions for modules and routes makes it easier to design and structure applications.

2. Better Tooling

Angular-cli is a great tool that reminds me of ember-cli; it’s great that the Angular team finally provided first-class support for this. Apart from the staples of project scaffolding, testing (unit + E2E) and linting, there is also support for pushing to GitHub (it will even create a repo for you!), proxying and build targets. Big wins!!

Augury worked well for me out of the box; I remember dropping Batarang after running into lots of problems.

Codelyzer is another great tool that helps you to write consistent code conforming to your style guidelines across teams.

3. Typescript

TypeScript is the main language for Angular 2, although there is also support for JavaScript and Dart. This should hopefully make it more amenable to adoption by larger enterprises.

JavaScript can be difficult to manage at scale; I guess this is something that affects all weakly typed languages. Refactoring can be a big pain: renaming a module in a 100,000-line codebase quickly becomes a pain point and is hard to do well. Static typing does help in that case.
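As a small illustration (Order and applyDiscount are hypothetical names), static types let the compiler flag every broken call site when something is renamed:

interface Order {
    id: number;
    total: number;
}

function applyDiscount(order: Order, rate: number): number {
    return order.total * (1 - rate);
}

// Rename Order.total (or pass the wrong shape) and the build breaks:
// applyDiscount({ id: 1 }, 0.1); // compile error: property 'total' is missing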

4. Reactive Programming

Angular 2 is built with reactive programming in mind. It bundles RxJS, part of the Reactive Extensions family of libraries, which pushes you towards Observables and all the reactive goodness.

It can be challenging wrapping your head around functional reactive programming. Simply said, you need to understand the 5 building blocks of functional programming – map, reduce, zip, flatten and filter. With these, you can compose and combine various programming solutions; Hadoop is just a ramped-up version of MapReduce. The framework’s support for reactive concepts (e.g. observables) is deeply ingrained in a wide variety of places: routing, http and templates.
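As a small taste, here is a sketch of chaining those building blocks with RxJS (assuming the patch-style imports of RxJS 5, the version Angular 2 bundles):

import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/filter';

Observable.interval(1000)
    .map(tick => tick * 2)
    .filter(value => value % 3 === 0)
    .subscribe(value => console.log(value)); // 0, 6, 12, ...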

There is also support for promises, but I think mixing Promises and Streams would lead to confusion. Choose one style and stick to it.

Want to learn more about streams? Check out my stream library and accompanying blog post.

5. Routing

Route guards, resolvers, router-link directives and more are a pure delight. Support for modular component routing is impressive too; it allows modules to have independent routing, so you can just pluck them out if you don’t need them anymore.
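For instance, here is a minimal sketch of a route guard built on the router’s CanActivate hook (AuthService and AuthGuard are hypothetical names):

import { Injectable } from '@angular/core';
import { CanActivate } from '@angular/router';

@Injectable()
export class AuthService {
    isLoggedIn(): boolean {
        return true; // stand-in for a real check
    }
}

@Injectable()
export class AuthGuard implements CanActivate {
    constructor(private auth: AuthService) {}

    canActivate(): boolean {
        // navigation proceeds only if this returns true
        return this.auth.isLoggedIn();
    }
}

// Route configuration:
// { path: 'admin', component: AdminComponent, canActivate: [AuthGuard] }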

Angular 1’s routing was difficult to use because it operated at the global level. Yes, there were other routing implementations (a testament to Angular’s extensibility) that helped with things like having multiple outlets on a page.

The good thing about Angular 2 is that all of this is built in, which means you can easily implement a consistent approach to routing across your app.

6. Modularity

Angular 2 comes with better modularity; you can declare modular blocks and use them to compose your application.

Angular 2 allows you to define components that control their routing, layout, sub-component make-up and more. Imagine you are creating a web application to monitor social media platforms; you’d probably have top-level navigation tabs for things like Facebook, Twitter and LinkedIn.

It’s possible to define each of these three as top-level modules of their own and then register them in the core app. So the Facebook module should ideally handle its own routing, components, styling and more, separately from the Twitter module. An extra benefit is that you can take this module and re-use it in some other totally different project! That’s simply awesome.
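A minimal sketch of what such a self-contained module could look like (FacebookModule and FeedComponent are hypothetical names):

import { NgModule, Component } from '@angular/core';
import { RouterModule } from '@angular/router';

@Component({ selector: 'fb-feed', template: '<h2>Facebook feed</h2>' })
export class FeedComponent {}

@NgModule({
    declarations: [FeedComponent],
    imports: [
        // the module owns its own routes
        RouterModule.forChild([
            { path: 'facebook', component: FeedComponent }
        ])
    ]
})
export class FacebookModule {}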

Conclusion

Angular 2 is still new and though it’s been out there for some time, there is still a concern about how it would ‘perform’ at scale. The good thing though is that it handles most of the issues with Angular 1 really well.

Sure, there might be issues in the future but at least they would be new mistakes 🙂

Book Review: Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen‘s ‘Build your own AngularJS‘ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building bricks makes software easy to build and extend.

Angular has support for transclusion, which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful, since it allows you to clone and manage the scope of transcluded content.

There is also support for element transclusion. Unlike regular transclusion, which includes some DOM structure in a new location, element transclusion provides control over the element itself.

So why is this important? Imagine you want some element to show up only under certain conditions. You can use element transclusion to ensure that the DOM structure is only created and linked when you need it. Need some DOM content to be repeated n times? Just use element transclusion, then clone and append the content n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.
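To make this concrete, here is a simplified, hypothetical take on ng-if built on element transclusion (assuming Angular 1’s directive API and the angular global):

angular.module('app').directive('myIf', function () {
    return {
        restrict: 'A',
        transclude: 'element',
        link: function (scope, element, attrs, ctrl, transclude) {
            var childScope, childElement;
            scope.$watch(attrs.myIf, function (shouldShow) {
                if (shouldShow && !childElement) {
                    // create and link the DOM only when the condition is true
                    childScope = scope.$new();
                    transclude(childScope, function (clone) {
                        childElement = clone;
                        element.after(clone);
                    });
                } else if (!shouldShow && childElement) {
                    childElement.remove();
                    childScope.$destroy();
                    childElement = childScope = null;
                }
            });
        }
    };
});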

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The suite of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!), the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box but, as expected, there will be people who want a bit more power and they can get their desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }} (see the sketch after this list)
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components
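
For example, swapping the interpolation symbols is a one-line configuration (this is the actual Angular 1 $interpolateProvider API), handy when the defaults clash with a server-side templating engine:

angular.module('app', []).config(['$interpolateProvider', function ($interpolateProvider) {
    $interpolateProvider.startSymbol('[[');
    $interpolateProvider.endSymbol(']]');
}]);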

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm scripts in conjunction with Browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file is structured so that two functions can pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: The other insightful trick was using function objects to ensure reference value integrity. Create a function and use it as the reference (see the sketch after the snippet below).
  2. As dictionaries: functions are objects after all, and while it is unusual to use them as such, there is nothing saying you can’t:

function a() {}
// functions are objects too, so they can carry arbitrary extra properties
a.extraInfo = "extra";
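
And here is a minimal sketch of the reference-value trick as the book applies it to watches: a function (initWatchVal) serves as a unique sentinel, because no user-supplied value can ever equal it by reference.

function initWatchVal() {}

var last = initWatchVal;
// later, inside the digest:
if (last === initWatchVal) {
    // first run: treat the watch as changed, even if its value is undefined
}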

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and other hooks. That might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1,000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only gripes had to do with issue resolution; there were a few problems with outdated dependencies, but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to 2 but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

Fighting the impostor syndrome


I bet everyone has had thoughts similar to the following go through their minds at one point or the other in their careers:

side A: No, you don’t know it, in fact you don’t know anything…

side B: hmm, I think you are just beating yourself up too hard; it’s a new area and you are actually ramping up fast.

side A: Why wasn’t I included in the meeting? Must be because you know nothing! See I was right!!

side B: Well, maybe your input was not needed because you are busy with task xyz

side A: I don’t know… I don’t know… I think I look like a complete newbie. Did I just say something stupid?

side B: Even the smartest people make mistakes and remember they all started somewhere…

The Impostor vs Dunning-Kruger chart

Some chart I saw drawn on one of the whiteboards a long time ago.


The Dunning-Kruger effect argues that amateurs tend to over-estimate their skills while professionals under-estimate their capabilities. On the other hand, the impostor syndrome makes people think that their accomplishments were just by chance and had nothing to do with their efforts or preparation.

In the graph above, the ‘sweet’ spot would be at the top right – where the skills and confidence are at optimum levels.

Confidence, the missing link?

There are several articles about the impostor syndrome and I must say I finally got the chance to ‘really’ experience it.

My proposed expansion to new frontiers has pushed me out of my comfort zone and exposed me to a few humbling experiences. The confidence and familiarity from countless hours shipping code in the front-end domain was gone; so was that familiar reassurance of knowing I could always dive into the details and find a solution to whatever was thrown at me.

The good news however is that good patterns and practices are the same regardless of the domain – another good reason to learn the basics really well. Applications vary by environment, framework and language implementation but the core concepts remain similar: dependency injection, MVC, design patterns, algorithms etc.

Why should I leave my comfort zone?

It sure feels comfortable sticking to what you know best – in fact, this might be recommended in some scenarios. But broadening your scope opens you up to new experiences and improves you all around as a software engineer.

I remember listening to an old career talk about always being the weakest on your team. The ‘director’ talked about finding the strongest team you can find and then joining them and growing through the ranks. Over time, you’ll acquire a lot of skills and eventually become a very strong developer.

In reality, starting again as a ‘newbie’ on a team of experts might be difficult, so you need to be really confident; it is easy to become disillusioned and give up. Get some support from a loved one and keep the long-term goal in mind. You’ll eventually grow and learn; moreover, you’ll bring in new perspectives, provide insight into other domains and also improve existing practices.

Everyone has these fears and even the experts don’t know it all. The biggest prize, as one of my mentors said, is gaining the confidence that you can dive into a new field, pull through and deliver something of importance inshaaha Allaah.

New beginnings: New frontiers


I have been pretty much a JavaScript person for the past four (or is it 5?) years – well, ever since I did my internship in 2012. No doubt I really like the language, the ecosystem and the potential. It’s easy to get engrossed in the ecosystem – there is never a dearth of things to learn or tools to try out. Quite intellectually stimulating and mind-broadening (provided you can spend the time to learn it well).

JavaScript still looks exciting especially with the upcoming changes (async, await, fetch, ES6). As they say however, the more things change the more they remain the same eh? Personally, I think it is time to check out what happens on the other side – the backend. Advocates say server-side development is ‘easier’ and more stable (yeah, they don’t have 1000 frameworks, build tools, task runners and patterns!).

So why the change? Simple answer: growth. I want to try something new, expose myself to stimulating challenges and stretch myself. What’s the point of finding cozy places? The goal is to grow, expand and become better. And did these thoughts just occur to me? No, they have been on my mind for nearly a year now.

So no more JavaScript? Nope – I enjoy it too much and I still have to finish my Angular implementation and descrambler. Nevertheless, I am planning to do more full-stack work inshaaha Allaah – expect new topics covering micro-services, scaling huge services and rapid deployment, in addition to the staples of programming languages, computer science theory and software engineering.

Let’s go!

The difficult parts of software development


Time for a classic rant again; yeah it’s always good to express thoughts and hear about the feelings of others – a good way to learn.

Lots of people think the most difficult aspects of software development revolve around engineering themes like:

  • Writing elegant pieces of code that are a joy to extend and scale beautifully
  • Crafting brilliant algorithms that can land rockets on small floating platforms (yup, SpaceX, I see you…)
  • Inventing new cryptographic systems (please don’t attempt this at home…)
  • Building and maintaining massively parallel computation systems

Agreed, these are extremely challenging and sometimes it is difficult to find a perfect solution. However, since most of these deal with code and systems, the required skills can be learned and there are usually workarounds available.

There is more to software development than engineering and these other facets can spawn tougher (or even impossible-to-solve) challenges.

Here are three of those difficult areas:

1. Exponential Chaos

The combinatorial complexity of code grows exponentially. It’s well-nigh impossible (and also futile) to try to exercise all possible execution paths of a program. Equivalence class partitioning helps a lot to cut down the test space but invariably, we still miss out on a few.

A single if statement with one condition has two paths – the condition is either true or false. Let’s assign that simple one-condition if check a theoretical complexity value of 1. If that if statement is nested inside another if statement, the number of paths explodes to 4; ditto for two conditions in a single if check. Re-using our complexity model, that comes to a value of 2 or so.
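Here is that blowup in miniature; two independent conditions already yield four paths:

function handle(a: boolean, b: boolean): void {
    if (a) {
        if (b) { /* path 1 */ } else { /* path 2 */ }
    } else {
        if (b) { /* path 3 */ } else { /* path 4 */ }
    }
}
// n independent conditions give 2^n paths, hence the exponential blowup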

Most codebases have loads of conditional branching, loops, multi-threading, various components and what have you. So we can safely put the complexity values for such codebases in the high millions or thereabouts. Scary? Not yet.

Now imagine what happens when there are hundreds of developers working in that same codebase and making a few hundred check-ins daily? Maybe the complexity value should sky-rocket to the high billions? Trillions?

Given the rapid rate of change and inherent complexity, how do you ensure that quality is maintained? How do you enforce consistency across large groups? This is a very difficult problem to solve – there are approaches to mitigate the risk but I do not know of any foolproof method that works all the time. If you know of a way, do let me know.

2. I’ll know what I want when I see it

We all think we know what we want – alas, we typically don’t until we see the finished product. Take the following series of interactions between Ade, who wants a new dress, and his tailor.

Ade: I want some beautiful dress that fits me, is wearable all year round and casual. I want it in 2 weeks.

Tailor: Aha, so you want something casual that fits you, is wearable all year round and need it in 2 weeks.

Ade: Yup right on point

2 weeks later

Tailor: Here it is! (Beaming proudly)

Ade: (Not impressed) I like the fabric and design. But I don’t like the colour, the sleeve length and it doesn’t fit me quite right. Can you change it to black?

Tailor: Here it is in black

Ade: On second thoughts, black would be too hot; could you change it to brown?

Tailor: Here it is in brown

Ade: Great! Could the sleeves be shortened by 2cm?

Tailor: Done

Ade: Hmm, could you revert the sleeves to their original length? I think I now like the earlier design.

Tailor: Done!! (getting annoyed probably)

Ade: Great! This is becoming awesome, could you please reduce the width of the dress? It’s too wide…

Tailor: @#$@#$@$#!!!

Most people usually don’t have physical products tailor-made to their desires. We go to the store (to meet a car dealer, a tailor or an architect) and choose from the several options there. We can take a car out for a ride, walk through a building or try on a new dress. This helps a lot, as we know whether we want that product or not.

In software development, it’s a different story – we want something tailored but we cannot express that need accurately until we see the product. Invariably, our descriptions do not match what we desire. To restate: it’s highly probable that you wouldn’t like a dress you described in its entirety to a tailor once you try it on.

Figuring out what users actually want is a difficult problem – probably why agile methodologies are so popular. A less difficult way? Build the minimum possible thing and allow users to play with it. For example, the tailor could have given Ade a paper mock-up of the dress to evaluate the styles first.

Let’s play a simple game: when you start your next project, document all the user requests and record every update as you go along. I am pretty sure the later requests will differ significantly from the original ones; the end product might even be totally different from the initial ask.

3. No laws – it’s the wild wild west out there

If I release my grip on an apple, I expect it to fall down – why? Gravity of course. Most interactions in the physical world are bound by these models. Imagine that a car manufacturer has to design a new super car for some super-rich guy. Mr-rich-guy asks for the following:

  • Must be drive-able by adults, teenagers and infants
  • Must work on Earth, Venus and Mars
  • Can run perfectly on gas, water or coal

The manufacturer can tell him it’s impossible, since current physical models make it extremely difficult to achieve these three orthogonal requirements; maybe in a movie though…

Let’s go to the world of software. Consider the typical AAA game: to capture the largest possible market share, it has to be usable on:

  • Multiple consoles (XBox, PlayStation, Nintendo etc)
  • Other architectures (e.g. 32-bit and 64-bit PCs)
  • Operating systems – Windows, Linux
  • Various frame rates

There are also limitations in software (hardware limits, processors, memory etc) but we often have to build ‘cars’ that can be driven well by people in various age groups living on multiple planets!

The support matrix explodes all the time and generating optimal experiences is an extremely difficult task. In fact, most times, the workaround is to have a limited set of supported platforms.

Alas, it’s the realm of 0s and 1s, so software engineers have to conjure all sorts of tricks and contortions to make things work. Maybe some day, cars will run everywhere too…

Conclusion

So apart from the technical challenges, there are a few other just-as-challenging (or even more challenging) areas in software development. These include:

  • Ensuring your requirements match hidden customer desires
  • Working to meet various regulations and ensuring proper support across platforms
  • Managing technical debt and reducing risk in heavily changed code bases

Your thoughts?

A simple explanation of 1s complement arithmetic


I remember taking the digital systems course in my second year of university and being exposed to concepts like k-maps, 1s and 2s complement arithmetic. These building blocks served as the foundations for bigger concepts like adders, half-adders, comparators, gates and what not.

It was pretty much fun doing the calculations then, even though I didn’t fully realize why I needed to add a 1 while doing 1s complement arithmetic – until I stumbled upon the excellent explanation in Charles Petzold’s Code, a great book that uses very lucid metaphors to explain computing concepts. As is usually the case, the best explanations are typically so simple and straightforward that anyone can grasp them.

Even if you already know about 1s and 2s complement arithmetic, still go ahead and read this; you might find something interesting.

Subtraction – the basics

Assuming you need to find the difference between two numbers, e.g. 174 and 41, this is pretty straightforward:

minuend 174
subtrahend 041
difference 133

Aside: minuend and subtrahend are valid names; the names of the parameters for common mathematical operations are given below.

Expression First number (i.e. 5) Second number (i.e. 3)
5 + 3 Augend Addend
5 – 3 Minuend Subtrahend
5 * 3 Multiplier Multiplicand
5 / 3 Dividend Divisor

This is the simple case; how about the scenario when you need to ‘borrow’ from preceding digits?

minuend 135
subtrahend 049
difference 086

Aha, the pesky borrowing! What if there was a way to avoid borrowing altogether? The first thing to think of is a ceiling for all 3-digit numbers, i.e. the smallest minuend that would require no borrows for any 3-digit subtrahend. We use 3 digits since we are taking a 3-digit example; were the subtrahend a 5-digit number, we would need the equivalent 5-digit value.

That smallest ‘no-borrows-required’ number is 999. Unsurprisingly, it is also the maximum possible 3-digit value in base ten. Note: in other bases, the equivalent is the largest digit repeated in every position, e.g. in base 8 it would be 777.

Now, let’s use this maximum value as the minuend

minuend 999
subtrahend 049
difference 950

Since we are using 999 as the reference value, 49 and 950 are complements of each other, i.e. they add up to 999. So we can say 49 is the 9s complement of 950, just as 950 is the 9s complement of 49.

Awesome! We can now avoid the annoying borrows!! But knowing this is useless in itself unless we can find some way to leverage this new-found knowledge. Are there math tricks to use? Turns out this is very possible and straightforward.

Math-fu?

Let’s do some more maths tricks (remember all those crafty calculus dy/dx tricks)…

135 – 49 = 135 – 49 + 1000 – 1000

= 135 + 1000 – 49 – 1000

= 135 + 1 + 999 – 49 – 1000

= 135 + 1 + (999 – 49) – 1000

= 136 + 950 – 1000

= 1086 – 1000

= 86

QED!

What’s the use of such a long process?

We just found a very long way to avoid borrows while doing subtraction. However, there is no motivation to use this in base 10 since it is quite inefficient there. So what’s the reason for all this?

It turns out that in computer-land, counting is done in 0s and 1s. The folks there can’t even imagine there are numbers other than 1 and 0. As you’ll see, there are some great advantages to using the 1s complement approach in this area.

Let’s take a simple example, e.g. 11 – 7:

minuend 1011
subtrahend 0111
difference ????

Applying the same trick again (this time the reference minuend will be 1111 instead of 999):

minuend 1111
subtrahend 0111
difference 1000

Do you notice a pattern between the subtrahend (0111) and the difference (1000)? They are bitwise ‘inverses’ of each other.

The 1s complement of any binary value is just the bitwise inverse of its bits. Calculating it is just a matter of flipping each bit’s value – a linear O(n) operation that can be very fast. That’s a BIG WIN.

Continuing the process again with the addition step this time:

Augend (difference from step above) 01000
Addend (of 1) 00001
Addend (of original 11 value) 01011
Sum 10100

Finally, the subtraction of the next higher power of two, which will be 10000 (since we used 1111 as the reference):

minuend 10100
subtrahend 10000
difference 00100

And there is the answer: 4. Done!
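For the curious, the whole dance translates directly into code; here is a minimal sketch for 4-bit values:

const MASK = 0b1111;                            // the reference value 1111
const onesComplement = (x: number) => x ^ MASK; // flip every bit

const minuend = 0b1011;    // 11
const subtrahend = 0b0111; // 7

// minuend + 1 + (1111 - subtrahend), then drop the extra 10000
const sum = minuend + 1 + onesComplement(subtrahend);
const difference = sum & MASK;
console.log(difference);   // 4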

How about negative numbers? Simple: just do the same process and invert the answer.

Hope you found this fascinating and enjoyed it. Let me know your thoughts in the comments.

How to detect page visibility in web applications


You are building a web application and need the application to pause whenever the user stops interacting with the page; for example, the user opens up another browser tab or minimizes the browser itself. Example scenarios include games where you want to automatically pause the action or video/chat applications where you’d like to raise a notification.

The main advantage of such an API is to prevent resource wastage (battery life on mobile, internet bandwidth or unnecessary computing tasks). Definitely something to have in mind, especially for developers targeting mobile devices. So how would you do this?

Can I use event listeners?

Technically, you could use a global event listener on the window object to listen for focus/blur events; however, this cannot detect browser minimization. Also, the blur/focus events fire whenever the page loses or gains focus, yet it is possible for a webpage to remain visible despite losing focus – think about users with multiple monitors.

The good news is that this is possible with the Page Visibility API, which is built into browsers; this post shows how to use it.

Deep dive into details

The Document interface has been extended with two more attributes – visibilityState and hidden.

Hidden

This is true whenever the page is not visible. What counts as being not visible includes lock screens, minimization, being in a background tab etc.

VisibilityState

This can be one of four possible values describing the visibility state of the page.

  • hidden: the page is not visible; hidden is true
  • visible: the page is visible; hidden is false
  • prerender: the page is being pre-rendered and is not visible. Support for this is optional across browsers and not enforced
  • unloaded: the page is being unloaded; hidden would be false here too. Support for this is also optional across browsers

Show me some code!

document.addEventListener('visibilitychange', function () {
    if (document.hidden) {
        console.log('hidden');
    } else {
        console.log('visible');
    }
}, false);

Browser support

You can have it in nearly all modern browsers except Opera Mini. Also, you might need to specify vendor prefixes for some of the other browsers. See this.

Conclusion

There it is; you now know a way to effectively manage resource consumption – be it battery, internet data or computing power.

You can use this to determine how long users spend on your page, automatically pause streaming video/audio (with some nice fadeout effects for audio especially) or even raise notifications.
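For example, here is a minimal sketch that pauses and resumes playback (a hypothetical <video id="player"> element is assumed):

const video = document.getElementById('player') as HTMLVideoElement;

document.addEventListener('visibilitychange', () => {
    if (document.hidden) {
        video.pause(); // the user can no longer see the page
    } else {
        video.play();  // the page is visible again
    }
});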


How to track errors in JavaScript Web applications


Your wonderful one-of-a-kind web application just had a successful launch and your user base is rapidly growing. To keep your customers satisfied, you have to know what issues they face and address those as fast as possible.

One way could be to be reactive and wait for them to call in; however, most customers won’t do this – they might just stop using your app. The better way is to be proactive: log errors as soon as they occur in the browser, and use those to roll out fixes.

But first, what kinds of errors exist in the browser?

Errors

There are two kinds of errors in JavaScript: runtime errors, which have the window object as their target, and resource errors, which have the source element as their target.

Since errors are events, you can catch them by using the addEventListener method with the appropriate target (window or source element). The WHATWG standard also provides onerror handlers for both cases that you can use to grab errors.

Detecting Errors

One of JavaScript’s strengths (and also a source of much trouble) is its flexibility. In this case, it’s possible to write wrappers around the default onerror handlers or even override them to automate error logging.

Thus, these can serve as entry points for logging to external monitors or even sending messages to other application handlers.

// logger is an error logger
var original = window.onerror; // if you still need a handle to the old handler
window.onerror = function (message, source, lineNo, columnNo, errObject) {
    logger.log('error', {
        message: message,
        stack: errObject && errObject.stack
    });
    if (original) {
        original.apply(window, arguments); // invoke the original handler too
    }
    return;
};

var elemOriginal = element.onerror;
element.onerror = function (event) {
    logger.log('error', {
        message: event.message,
        stack: event.error && event.error.stack
    });
    if (elemOriginal) {
        elemOriginal.apply(element, arguments);
    }
    return;
};

The Error Object

The interface for this contains the error message and optional values: fileName and lineNumber. However, the most important part is the stack property, which provides a trace of the calls that led to the error.

Note: Stack traces vary from browser to browser as there exists no formatting standard.

Browser compatibility woes

Nope, you ain’t getting away from this one.

Not all browsers pass in the errorObject (the 5th parameter) to the window.onerror function. Arguably, this is the most important parameter since it provides the most information.

Currently the only big 5 browser that doesn’t pass in this parameter is the Edge browser – cue the only ‘edge’ case. Safari finally added support in June.

The good news though is that there is a workaround! Hurray! Let’s go get our stack again.

window.addEventListener('error', function (errorEvent) {
    logger.log('error', {
        message: errorEvent.message,
        stack: errorEvent.error && errorEvent.error.stack
    });
});

And that wraps it up! You now have a way to track errors when they happen in production. Happier customers, happier you.

Note: the addEventListener and window.onerror approaches might need to be used in tandem to ensure enough coverage across browsers. Also, error events caught in the listener will still propagate to the onerror handler, so you might want to filter out duplicated events or cancel the default handlers.


Liked this article? Please share, subscribe or drop a comment.

Maturing as a software engineer


Looking back on my time as a developer, there are a lot of things I would have avoided doing if I had as much knowledge and maturity as I do now.

While I am grateful for the experiences and don’t regret them, I felt it would be a good idea to share these; they might motivate others or at least speed up their careers.

Here goes!

1. Patterns, patterns, patterns

When I take part in code reviews, I tend to look for recurring style patterns. Why? This helps to reduce the cognitive load on readers of the code (after all, code is written to be read).

I am not advocating bad software patterns; rather, having a plethora of ways of doing the same thing in a codebase creates confusion and productivity losses. So how do you determine the ‘right’ pattern?

For example, in JavaScript there are several ways of creating an array:


var a = [];           // array literal – the preferred style
var a = new Array();  // an empty array via the constructor
var a = new Array(3); // an array of length 3 (empty slots)

Having a haphazard mixture only takes away brain processing cycles. Rather, have your team decide on a style and stick to it.

By the way, the first style is the ‘expected’ and preferred approach although there might be use cases for the latter two.

Ever wonder why the Google codebase is said to be easy to work with? Well, think about consistency and established patterns.

2. Break the big picture down and make incremental progress

Building and distributing even the smallest software piece you can imagine requires more effort than you would think. It is much more efficient to break the big picture down into small chunks of work that can each be completed in an hour or less. Such breakdowns make you more effective and help in understanding progress and forecasting completion times (which is a notoriously tricky problem).

I used to break down only the code pieces (which itself was an improvement over my earlier dive-into-code-and-figure-it-out-as-you-go approach). Nowadays, I try to take some time and reflect on the end product itself: its behaviour, look and feel, and how users would interact with it.

For a typical software project, such a road map covers:

  • Testing – unit tests, continuous integration
  • Documentation – extensibility guides, tooling
  • Implementation
  • Discoverability and Distribution – release targets, getting started articles
  • Maintenance – handling bugs, user feedback etc

Sounds like too much work? Well, just focus on one small bit at a time and keep making progress.

3. Be lazy – start first on tasks with the largest impact/effort ratios

Two things matter: results and impact. There is no point slaving for 20 hours to choose between blue and light blue if the choice has no impact on users. Ditto for spending endless hours ‘arguing’ over which language should be used. Just choose the best usable option and deliver results.

My heuristic for tasks is thus:

  • Does completing the task move me closer towards the big picture?
  • Is this the easiest-to-achieve task with the biggest impact?

If so, I pick up that task and just do it – the goal is to maximize the impact/effort ratio.

Before, I’d just stick to a task and spend endless hours on it, even if it was something as trivial (and probably low-impact) as beautifying test scaffolding output or elaborately designing test functions. Now? Come on, my time is more valuable than that – I get the test functions right and aim for the coverage I want, but won’t spend much more time once that is achieved and the result is readable for others.

Excessive polishing time can instead be spent on more impactful pursuits, like having fun with family or delivering high-impact features.

4. Technical skills plateau

Sooner or later, you’ll hit the technical plateau. By then, you’ll have many successes under your belt and can detect potential pitfalls easily. Then, what next?

There are tons of ways to extend your impact, and that is how people become even better engineers. For example, I doubt Anders Hejlsberg still writes a lot of code, yet his ideas continue to empower and influence millions around the world.

Think about that: how do you scale your influence and make it possible to touch the lives of thousands of people? Are there engineering problems crippling your organization? Process pitfalls to fix with huge impact? Education ramp-ups? There are always challenges to solve and problems to fix.

5. Choose career investments carefully

How would you set up an investment portfolio? Would you just go about investing in everything? Nope: you would evaluate the risks and benefits, consult experts and then invest in a select few areas while ignoring others.

You could spread out your risk by investing across a wide area, but doing this excessively dilutes your returns. Conversely, investing in only one company could be very risky too. Thus, it’s generally advised to spread out your investment portfolio sensibly.

Careers are investment portfolios. A typical career spans a long period (upwards of 30 to 40 years) and shares some similarities with investments:

  • technologies, frameworks etc -> investment options
  • time -> funds

Just as you wouldn’t jump on every new fund, why would you do the same with your career? There is no harm in taking measured risks in your career, but you should be strategic and know what your end goal is.

Every now and then, a new framework pops up in the news. Before, I’d hop on the bandwagon and try to figure it out. Nowadays? Well, if it really piques my interest, I might spend some time learning about its core design principles and problem-solving approach.

If it neither solves any of my problems nor brings anything new to the table, then no thank you; I’d rather continue nurturing my current investment portfolio and hedging my bets.

Think about your bets and stick to them.

Conclusion

I am still learning and pray I continue to. One thing that has struck me as really critical is the will to try. We don’t know if something will work out or not; however, we can always try and then learn from the outcome (success or failure).

Don’t give up – continue learning and growing.

Understanding Bit masks


Bit masks enable the simultaneous storage and retrieval of multiple values using one variable. This is done by using flags with special properties (numbers that are powers of 2). It becomes trivial to signal membership by checking whether the bit at a position is 1 or 0.

How it works

Masking employs the bitwise OR and AND operations to encode and decode values respectively.

New composite values are created by a bitwise OR of the original composite and the predefined flag. Similarly, the bitwise AND of a composite and a particular flag validates the presence / absence of the flag.

Assuming we start off with the decimal values 1, 2, 4 and 8, the table below shows the corresponding binary values.

Decimal Binary
0 0000
1 0001
2 0010
4 0100
8 1000

The nice thing about this table is that the four bits allow you to represent all numbers in the range 0 – 15 via combinations. For example, 5, which is binary 101, can be derived in two ways:

5     -> 1 + 4

or

101 -> 0001 | 0100

       -> 0101

7, which is 111, can also be derived in the same way.

Since any number in a range can be specified using these few ‘base’ numbers, we can use such a set to model things in the real world. For example, let’s say we want to model user permissions for an application.

Assuming the base permissions are read, write and execute, we can map these values to the base numbers to derive the table below:

Permissions Decimal Binary
None 0 000
Read 1 001
Write 2 010
Execute 4 100

Users of the application will have various permissions assigned to them (think ACL). Thus a potential model for visitor, reader, writer and admin roles with our base permissions is:

Role Permissions Decimal Binary
Visitors None 0 000
Readers Read 1 001
Writers Read + Write 3 011
Admins Read + Write + Execute 7 111

Noticed a pattern yet?

All the binary values can be obtained by ‘OR-ing’ the base binary values. For example, admins who have read, write and execute permissions have the value obtained when you do a bitwise OR of 1, 2 and 4.

The UNIX model uses the same numbering system. E.g. 777 translates into 111 111 111 which grants owners, groups and others read, write and execute permissions.
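In code, defining and composing the flags is straightforward; a minimal sketch (the names are illustrative):

const None = 0;         // 000
const Read = 1 << 0;    // 001
const Write = 1 << 1;   // 010
const Execute = 1 << 2; // 100

// compose roles with bitwise OR
const writer = Read | Write;          // 011 -> 3
const admin = Read | Write | Execute; // 111 -> 7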

Checking Access

Now, the next question is how do you check if a composite value contains a particular flag? Going back to the binary data basics, this means checking if a bit at some position is a 1 or 0.

The bitwise AND operator comes in handy here – it yields a 1 only when both bits at the same index are 1, and 0 in all other cases. Thus, ‘AND-ing’ a composite value with the desired flag reveals the outcome: a result greater than zero means the user has the permission, while zero means there is no permission.

The admin role has a bitmask value of 111. To check that an admin really ‘has’ the execute permission, we do a bitwise AND of 111 and the execute flag 100. The result is 100, which proves the permission is present.

More tables! The two tables below show the checks for two users: one with read + write + execute (111) permissions and another with read and execute (101) permissions.

Read + Write + Execute (111)

Permissions 111 bitmask Binary Has permission?
Read 111 001 Yes: 111 & 001 → 1
Write 111 010 Yes: 111 & 010 → 1
Execute 111 100 Yes: 111 & 100 → 1

Read + Execute (101)

Permissions 101 bitmask Binary Has permission?
Read 101 001 Yes: 101 & 001 → 1
Write 101 010 No: 101 & 010 → 0
Execute 101 100 Yes: 101 & 100 → 1

See? Such a simple way to check without having to make unnecessary calls to the server.
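The check itself is a one-liner; a minimal sketch:

function hasPermission(mask: number, flag: number): boolean {
    return (mask & flag) !== 0;
}

hasPermission(0b111, 0b100); // true:  111 & 100 -> 100 (admin has execute)
hasPermission(0b101, 0b010); // false: 101 & 010 -> 000 (no write permission)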

Usage

This post uses the permission model as an example; however, bit masks can be applied to a wide variety of scenarios. Possible examples include checking membership, verifying characteristics and representing hierarchies.

A possible use in games could be verifying the various power-ups an actor has and adding new ones rapidly – no need to iterate, just use the bitmask.

Hierarchical models, where higher members encompass lower ones (e.g. the set of numbers), can also be modeled adequately using bitmasks.

Language support

Explicit language support is not needed to use bitmasks; the rules to know are:

  • Use increasing powers of 2 – ensures there is only one flag at the right spot
  • Create basic building blocks that are easy to extend and combine
  • Remember to watch out for overflows and use types that are large enough to hold all possible bit values. For example, C’s uint8_t / unsigned char can only hold 8 different flags; if you need more, you’d have to use a bigger type.

Some languages provide extra support for bit mask operations. For example, C# provides the FlagsAttribute, which indicates that an enum will be used for bit masking operations.

Teaser

Q: Why not base 10? After all, we could use 10, 100, 1000 etc.

A: The decimal system falls short because each position can hold any of ten different digits. This makes it difficult to represent the binary ON/OFF state (which maps well to 0/1).