A framework for shipping high quality software


Software engineers, technical leads and managers all share one goal – shipping high-quality software on time. Ambiguous requirements, strict deadlines and technical debt exert conflicting tugs on a software team’s priorities. Software quality has to stay high; otherwise, bugs inundate the team and slow delivery even further.

This post proposes a model for consistently shipping high-quality software. It also provides a common vocabulary for communication across teams and people.

Origins

This framework is the culmination of lessons learnt delivering the most challenging project I have ever worked on. The task was to make a web application globally available to meet scaling and compliance requirements.

The one-line goal quickly ballooned into a multi-month effort requiring:

  • Moving from a single compute resource based approach to multiple compute resources.
  • Fundamental changes to platform-level components across all micro services.
  • Constantly collaborating with diverse teams to get more insight.

The icing on the cake? All critical deployments had to be seamless and not cause a service outage.

What’s Donald Rumsfeld gotta do with software?

He’s not a software engineer but his quote below provides the basis for this framework.

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

– Donald Rumsfeld

His quote is a simplified version of the Johari window from Psychology. Applying this to software, the window would look thus:

                                 What the developer knows    What the developer doesn’t know
What other developers know       Known                       Unknown Known
What other developers don’t know Known Unknown               Unknown Unknown

1. The known

Feature requirements, bugs, customer requests etc. These are the concepts that are well-known and expected. However, writing code to implement a feature does not guarantee full known status. For example, untested code remains a known unknown until you verify how it works.

It is one thing to believe code works as intended; it is another to prove it. Unit tests, functional tests and even manually stepping through every line help to increase the known.
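For example, here is a minimal sketch of proving behaviour with Node’s built-in assert module (the add function is purely illustrative):

const assert = require('assert');

// the function under test – illustrative only
function add(a, b) {
    return a + b;
}

// each passing assertion moves behaviour from 'assumed' to 'proven'
assert.strictEqual(add(2, 3), 5);
assert.strictEqual(add(-1, 1), 0);
console.log('all assertions passed');

Every assertion that passes shrinks the unknown; every one that fails reveals a known unknown early, while it is still cheap to fix.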

2. The known unknown and the unknown known

I am collapsing both halves into one group because they are related.

  1. Known Unknown

    These are aspects that the developer knows about but other engineers in partner teams don’t know. A good example would be creating a replacement API and making an existing one obsolete. Another example would be changing the behaviour of some shared component.

  2. Unknown Known

    These are the aspects that the developer doesn’t know but engineers in other teams know about. For example, a seemingly minor update of a core component by a developer can trigger expensive rewrite cascades in partner teams. Another example could be quirks that are known to only a few engineers.

Clear communication is the one good fix for challenges in this category. Over-communicate! Send out emails, hold design reviews and continuously engage stakeholders.

This is extra important for changes with far-reaching impact. As the developer/lead/manager, you need to spend time with the key folks and understand their scenarios deeply. This would lead to better models as well as help forecast issues that might arise.

Finally, this applies to customers too – you may know what the customer doesn’t know about and vice versa.

3. The unknown unknowns

This is the most challenging category. There is no way to model or prepare for something unpredictable – an event that has never happened before. Unknown Unknowns (UUs) include hacks, data loss / corruption, theft, sabotage, release bugs and so on.

Don’t fret yet; the impact of UUs can be mitigated. Let’s define two metrics:

  1. Mean time to repair (MTTR)

    The average amount of time it takes to repair an issue with the software.

  2. Mean time to detect (MTTD)

    The average amount of time it takes to detect a flaw.

The most reliable way of limiting the impact of UUs is to keep the MTTR and MTTD low. Compare the damage that a data-corrupting deployment can cause in 5 minutes versus 1 hour.

MTTD

A rich monitoring and telemetry system is essential for lowering MTTD. Log system health metrics (RAM, CPU, disk reads etc.), HTTP request statuses (500s, 400s etc.) and more.

Ideally, a bad release will trigger alarms and notify administrators as soon as it goes out. This enables the service owner to react and recover quickly.
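As an illustration, here is a minimal sketch of threshold-based alerting in JavaScript; the window size, threshold and notifyOnCall hook are all assumptions, not a prescription:

// count HTTP 500s in a sliding window and alert when a threshold is crossed
const WINDOW_MS = 60 * 1000; // 1-minute window (assumption)
const THRESHOLD = 10;        // tolerated 500s per window (assumption)

let recentErrors = [];

function recordResponse(statusCode) {
    if (statusCode < 500) return;
    const now = Date.now();
    recentErrors.push(now);
    // keep only the errors that fall inside the window
    recentErrors = recentErrors.filter(t => now - t < WINDOW_MS);
    if (recentErrors.length >= THRESHOLD) {
        notifyOnCall('5xx rate crossed threshold'); // hypothetical alerting hook
    }
}

A real system would live in your telemetry pipeline rather than application code, but the principle is the same: the sooner the alarm fires, the lower the MTTD.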

MTTR

Having a feature toggle or flighting system can help with MTTR metrics. Again using the bad release example, a flight/feature toggle will enable you to ‘turn off’ that feature before it causes irreparable damage.

Also critical is having a quick release pipeline: if it takes two days to get a fix out, then your MTTR is two days plus however long the fix itself takes. That’s a red flag – invest in a CI pipeline.

tldr?

A software engineer is rolling out a critical core update; a few questions to ask:

  • Does he have enough logging to be able to debug and track issues if they arise?
  • Is the risky feature behind a flight or feature toggle? How soon can it be turned off if something goes wrong?
  • Are there metrics that can be used to find out if something goes wrong after the feature is deployed in production?

A sound release strategy is to roll out the feature in a turned-off state, turn it on for a few users and check that things are stable. If it fails, you turn off the feature switch and fix the issue. Otherwise, you progressively roll out to more users.
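Here is a minimal sketch of such a flag check, assuming flags live in a simple in-memory store (a real system would read them from config or a flag service):

// flag store – an in-memory stand-in for a real flag service
const flags = {
    newCheckout: { enabled: true, rolloutPercent: 5 }
};

function isEnabled(flagName, userId) {
    const flag = flags[flagName];
    if (!flag || !flag.enabled) {
        return false; // kill switch: set enabled to false to turn the feature off
    }
    // deterministic bucketing so a user keeps the same experience across visits
    const bucket = Math.abs(hashCode(userId)) % 100;
    return bucket < flag.rolloutPercent;
}

function hashCode(str) {
    let hash = 0;
    for (let i = 0; i < str.length; i++) {
        hash = (hash * 31 + str.charCodeAt(i)) | 0;
    }
    return hash;
}

if (isEnabled('newCheckout', 'user-42')) {
    // new, risky code path
} else {
    // stable fallback path
}

Raising rolloutPercent gradually widens exposure; setting enabled to false is the instant ‘off’ that keeps MTTR low.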

What steps do you take to ensure software quality?

Related

  1. Creating Great User Experiences
  2. Efficiently shipping Big Hairy Audacious Software projects
  3. Things to check before releasing your web application

Faking goto in JavaScript


What if I told you JavaScript had a limited form of the infamous goto statement? Surprised? Read on.

Labeled Statements

It is possible to add label identifiers to JavaScript statements and then use these identifiers with the break and continue statements to manage program flow.

While it might be better to use functions instead of labels to jump around, it is worth seeing how to jump around or interrupt loops using these. Let’s take an example:

// print only even numbers
loop:
for(let i = 0; i < 10; i++){
    if(i % 2) {
        continue loop;
    }
    console.log(i);
}
//0, 2, 4, 6, 8

// print only values up to 5
loop:
for(let i = 0; i < 10; i++){
    if(i > 5) {
        break loop;
    }
    console.log(i);
}
// 0, 1, 2, 3, 4, 5

There is a subtle difference in where these labels can be used:

  • break statements can apply to any label identifier
  • continue statements can only apply to labels identifying loops

Because of this, it is possible to have the sample code below (yes it’s valid JavaScript too!)

var i = 0;
block: {
     while(true){
         console.log(i);
         i++;
         if(i == 5) {
             break block;
             console.log('after break'); // never reached
         }
     } 
}
console.log('outside block');
// 0, 1, 2, 3, 4, outside block

Note

  1. continue wouldn’t work in the above scenario since the block label applies to a block of code and not a loop.
  2. The {} after the block identifier signify a block of code. This is valid JavaScript and you can define any block by wrapping statements inside {}. See an example below
{
let i = 5;
console.log(i);
}

// 5

Should I use this?

This is an arcane corner of JavaScript and I personally have not seen any code using this. However if you have a good reason to use this, please do add comments and references to the documentation. Spare the next developer after you some effort…

Related

  1. What you didn’t know about JSON.Stringify
  2. Why JavaScript has two zeros: -0 and +0
  3. JavaScript has no Else If

What you didn’t know about JSON.Stringify


JSON is the ubiquitous data format that has become second nature to engineers all over the world. This post shows you how to achieve much more with JavaScript’s native JSON.stringify method.

A quick refresher about JSON and JavaScript:

  • Not all valid JSON is valid JavaScript
  • JSON is a text-only format, no blobs please
  • Numbers are only base 10.

1. JSON.stringify

This returns the JSON-safe string representation of its input parameter. Note that non-stringifiable fields will be silently stripped off as shown below:

let foo = { a: 2, b: function() {} };
JSON.stringify(foo);
// '{"a":2}'

What other types are non-stringifiable? 

Circular references

Since such objects point back at themselves, it’s quite easy to get into a non-ending loop. I once ran into a similar issue with memq in the past.

let foo = {};
foo.b = foo; // foo now points back at itself
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

// Arrays
foo = [foo];
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

Symbols and undefined

let foo = { b: undefined };
JSON.stringify(foo);
// {}
// Symbols
foo.b = Symbol();
JSON.stringify(foo);
// {}

Exceptions

Arrays containing non-stringifiable entries are handled specially though.

let foo = [Symbol(), undefined, function() {}, 'works'];
JSON.stringify(foo);
// '[null,null,null,"works"]'

Non-stringifiable fields get replaced with null in arrays and dropped in objects. The special array handling helps ‘preserve’ the shape of the array. In the example above, if the array entries were dropped as happens in objects, then the output would have been ["works"]. A single-element array is very different from a four-element one.

I would argue for using null in objects too instead of dropping the fields. That way, we get a consistent behaviour and a way to know fields have been dropped.
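If you want that consistency, a hedged sketch using the replacer parameter (introduced later in this post) can normalize dropped fields to null:

// replace non-stringifiable values with null instead of dropping them
function nullify(key, value) {
    if (value === undefined ||
        typeof value === 'function' ||
        typeof value === 'symbol') {
        return null;
    }
    return value;
}

JSON.stringify({ a: 2, b: function() {}, c: undefined }, nullify);
// '{"a":2,"b":null,"c":null}'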

Why aren’t all values stringifiable?

Because JSON is a language agnostic format.

For example, let us assume JSON allowed exporting functions as strings. With JavaScript, it would be possible to eval such strings in some scenarios. But what context would such eval-ed functions be evaluated in? What would that mean in a C# program? And how would you even represent some language-specific values (e.g. JavaScript Symbols)?

The ECMAScript standard highlights this point succinctly:

It does not attempt to impose ECMAScript’s internal data representations on other programming languages. Instead, it shares a small subset of ECMAScript’s textual representations with all other programming languages.

2. Overriding toJSON on object prototypes

One way to bypass the non-stringifiable fields issue in your objects is to implement the toJSON method. And since nearly every AJAX call involves a JSON.stringify call somewhere, this can lead to a very elegant trick for handling server communication.

This approach is similar to toString overrides that allow you to return representative strings for objects. Implementing toJSON enables you to sanitize your objects of non-stringifiable fields before JSON.stringify converts them.

function Person (first, last) {
    this.firstName = first;
    this.lastName = last;
}

Person.prototype.process = function () {
   return this.firstName + ' ' +
          this.lastName;
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// '{"firstName":"Ade","lastName":"P"}'

As expected, the process function (defined on the prototype) does not show up in the output. Let’s assume however that the server only wants the person’s full name. Instead of writing a dedicated converter function to create that format, toJSON offers a more scalable alternative.

Person.prototype.toJSON = function () {
    return { fullName: this.process() };
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// "{"fullName":"Ade P"}"

The strength of this lies in its reusability and stability. You can use the ade instance with virtually any library and anywhere you want. You control exactly the data you want serialized and can be sure it’ll be created just as you want.

// jQuery
$.post('endpoint', ade);

// Angular 2
this.httpService.post('endpoint', ade)

Point: toJSON doesn’t create the JSON string itself; it only determines the value that will be stringified. The call chain looks like this: JSON.stringify -> toJSON -> string output.

3. Optional arguments

The full signature of stringify is JSON.stringify(value, replacer?, space?); I am borrowing the TypeScript ? style for identifying optional parameters. Now let’s dive into the replacer and space options.

4. Replacer

The replacer is a function or array that allows selecting fields for stringification. It differs from toJSON by allowing users to select choice fields rather than manipulate the entire structure.

If the replacer is not defined, then all fields of the object will be returned – just as JSON.stringify works in the default case.

Arrays

When the replacer is an array, only the keys present in it will be stringified.

let foo = {
 a : 1,
 b : "string",
 c : false
};
JSON.stringify(foo, ['a', 'b']);
//"{"a":1,"b":"string"}"

Array replacers, however, might not be as flexible as desired; let’s take a sample scenario involving nested objects.

let bar = {
 a : 1,
 b : { c : 2 }
};
JSON.stringify(bar, ['a', 'b']);
//"{"a":1,"b":{}}"

JSON.stringify(bar, ['a', 'b', 'c']);
//"{"a":1,"b":{"c":2}}"

Even the keys of nested objects are filtered. If you want more flexibility and control, then defining a function is the way to go.

Functions

The replacer function is called for every key value pair and the return values are explained below:

  • Returning undefined drops that field in the JSON representation
  • Returning a string, boolean or number ensures that value is stringified
  • Returning an object triggers another recursive call until primitive values are encountered
  • Returning non-stringifiable values (e.g. functions, Symbols etc.) for a key will result in the field being dropped.
let baz = {
 a : 1,
 b : { c : 2 }
};

// return only values greater than 1
let replacer = function (key, value) {
    if(typeof value === 'number') {
        return value > 1 ? value: undefined;
    }
    return value;
};

JSON.stringify(baz, replacer);
// "{"b":{"c":2}}"

There is something to watch out for though: on the first call, the key is an empty string and the value is the entire object; recursion begins thereafter. See the trace below.

let obj = {
 a : 1,
 b : { c : 2 }
};

let tracer = function (key, value){
  console.log('Key: ', key);
  console.log('Value: ', value);
  return value;
};

JSON.stringify(obj, tracer);
// Key:
// Value: Object {a: 1, b: Object}
// Key: a
// Value: 1
// Key: b
// Value: Object {c: 2}
// Key: c
// Value: 2

5. Space

Have you noticed the default JSON.stringify output? It’s always a single line with no spacing. But what if you wanted to pretty-print some JSON? Would you write a function to space it out?

What if I told you it was a one line fix? Just stringify the object with the tab(‘\t’) space option.

let space = {
 a : 1,
 b : { c : 2 }
};

// pretty format trick
JSON.stringify(space, undefined, '\t');
// "{
//  "a": 1,
//  "b": {
//   "c": 2
//  }
// }"

JSON.stringify(space, undefined, '');
// {"a":1,"b":{"c":2}}

// custom specifiers allowed too!
JSON.stringify(space, undefined, 'a');
// "{
//  a"a": 1,
//  a"b": {
//   aa"c": 2
//  a}
// }"

Puzzler: why does the nested c property have two ‘a’s in its indentation – aa"c"?

Conclusion

This post showed a couple of new tricks and ways to properly leverage the hidden capabilities of JSON.stringify covering:
  • JSON expectations and non-serializable data formats
  • How to use toJSON to define objects properly for JSON serialization
  • The replacer option for filtering out values dynamically
  • The space parameter for formatting JSON output
  • The difference between stringifying arrays and objects containing non-stringifiable fields
Feel free to check out related posts, follow me on twitter or share your thoughts in the comments!

Related

  1. Why JavaScript has two zeros: -0 and +0
  2. JavaScript has no Else If
  3. Deep dive into JavaScript Property Descriptors

Creating Great User Experiences


Have you ever wondered why some applications always look and feel similar? Why for example does Apple have a unified experience across devices? Why are Google products starting to adopt the material experience?

This post explains some of the underlying themes influencing design choices. Developers can use these concepts to craft better user interfaces and experiences.

1. Consistency

A well designed user experience is consistent throughout its constituent parts. Imagine using an app with varying colour shades, item sizes and page styles on every screen. Wouldn’t that be confusing and difficult to understand?

A couple of examples of design consistency include:

  1. Desktop applications usually have the same set of three buttons on the title bar (close, minimize and maximize).
  2. Right-click nearly always opens up a context menu
  3. The menu bar for applications starts with drop downs titled File, Edit and so on

Such consistency lowers the learning barrier because these conventions are implicit assumptions in the minds of users. When was the last time you had to figure out the ‘close’ button? You don’t need to think – you subconsciously understand and use them.

UI frameworks like semantic-ui, bootstrap or Fabric provide a good foundation for quickly ramping up web applications. A colour palette helps ensure visual colour consistency. Do be careful of contrast though; having too much contrast is not visually appealing while having too little makes it difficult to see components.

And this leads us to the next point : ‘learnability’.

2. Learnability

One of the enduring messages I picked up from Steve Krug’s great book ‘Don’t make me think’ is to avoid making users think. Why? When people have to ‘think’ deeply to use your application, that interrupts their concentration and flow; such interruptions might be fatal – some users might just leave your site.

A low learning curve helps users quickly become familiar with your product and deeply understand how to use it. Think of any software you are proficient in; now think back to the first time you used it. Was it difficult? How did you grow to learn it?

A couple of tips here:

  1. Component Behaviour – One common issue I run into with components is variable behaviour for the same action in different scenarios. For example, a set of actions might trigger a dialog to pop up while in other scenarios the dialog is not triggered. It’s much better to have the dialog pop up consistently; otherwise your users will have to build multiple mental models of expected interaction responses.
  2. Grouping – Related collections can be grouped and arranged in hierarchies that can be drilled into. I prefer using cards for such collections and allowing users to drill up/down. Another thing to note is to ensure that the drill ladders are consistent; no point having users drill to a level they can’t drill out of.

3. Clickability

My principle of ‘clickability’

The number of clicks required to carry out a critical operation is inversely proportional to the usability scores for the web application.

The larger the number of clicks needed to achieve a major action, the more annoying the app gets over time. Minimize clicks as much as possible!

But why is this important? Let’s take a hypothetical example of two applications – app 1 and app 2 – which require 2 and 4 clicks respectively to register a new user.

Assuming you need to add 5 new users, then app 1 requires 10 clicks while app 2 requires 20 clicks – a difference of 10 clicks! It’s safe to assume that users who have to use the application for hours on end will prefer app 1 to app 2.

The heuristic I use is to minimize the number of clicks in core user paths and then provide shortcuts where further reduction is not possible.

Yes, ‘clickability’ does matter.

4. Navigability

Building upon the principle of clickability, this implies designing your flow so that the major content sections are well interlinked. It also includes thinking of the various links between independent modules so that navigation is seamless.

For example, say you have a navigation hierarchy that is 3 layers deep and want users to perform some higher-level action while at the bottom of the navigation tree. You can add links to the leaf nodes and even intermediate tree levels to ensure users can quickly jump out rather than exposing a top-level option only. Why? See the principle of clickability above.

A good exercise is to build out the interaction tree which exposes the links between sections of your app. If you find a dead-end leaf node somewhere, then something is really wrong. Once done, find ways to link leaf nodes if such interactions are vital and enhance the user experience.

5. Accessible + Responsive + nit-proof

One of my favorite tests is to see how a web application works on smaller devices. That quickly exposes a lot of responsive issues.

Another one is to add way too many text characters to see if text overflow is properly handled. The fix for that is very simple: just add these CSS rules

.noOverflow {
   white-space: nowrap; /* keeps text on one line so the ellipsis can apply */
   text-overflow: ellipsis;
   overflow: hidden;
}

Another important concept is adding titles to elements and following accessibility guidelines.

All these are important and show that some thought was put into the process.

Conclusion

Do a quick test of your application on some random user. The way users interact with your application and navigate around might surprise you!

While running such tests, ask if users feel they have an intuitive understanding of how the app works. If yes, bravo! Pat your self on the back. Otherwise, your assumptions might be way off and you need to return to the drawing board.

The best experiences always look deceptively simple but making the complex simple is a very difficult challenge. What are your experiences creating great user experiences?

Related

  1. Things to check before releasing your web application
  2. Tips for printing from web applications
  3. How to detect page visibility in web applications
  4. How to track errors in JavaScript Web applications

Things to check before releasing your web application


This post originally started out as a list of tips on how to break web applications but quickly morphed into a pre-release checklist.

So here are a couple of things to validate before you press the ‘go-live’ button on that wonderful web application of yours.

General

  1. Does the application handle extremely large input? Try copying a Wikipedia page into an input field. Strings can be too long and overflow database field limits.
  2. Does it handle boundary values properly? Try extremely large or small values; Infinity is a good one.
  3. Do you have validation? Try submitting forms with no entry.
  4. Do you validate mismatched value types? Try submitting strings where numbers are expected.
  5. Has all web copy been proofread and spell-checked? Typos are bad for reputation.

Localization (L10n) and Internationalization (I18n)

  1. Do you support Unicode? The Turkish i and German ß are two quick tests.
  2. Do you support right-to-left languages? CssJanus is a great tool for flipping pages.
  3. Time zones and daylight saving time changes.
  4. Time formats: 12 and 24 hour clocks
  5. Date formats: mm/dd/yyyy vs dd/mm/yyyy
  6. Currencies in different locales.

Connections

  1. Does your web app work well on slow connections? You can use Chrome or Fiddler to simulate this.
  2. What happens when abrupt network disconnections occur while using your web application? (See the sketch after this list.)
  3. Do you cut off expensive operations when the user navigates away or page is idle?
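For the disconnection scenario, here is a small sketch using the browser’s online/offline events (the handler bodies are placeholders):

// warn users when the connection drops and recover when it returns
window.addEventListener('offline', function () {
    console.log('Connection lost – queue writes and show a warning banner');
});

window.addEventListener('online', function () {
    console.log('Back online – flush queued writes');
});

// navigator.onLine gives the current state at any point
console.log(navigator.onLine ? 'online' : 'offline');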

Usability + UX

  1. Does the application work well across the major browsers you support (including mobile)?
  2. Does the application look good at various resolution levels? Try resizing the window and see what happens.
  3. Is your application learnable? Are actions and flows consistent through the application? For example, modal dialogs should have the same layout regardless of the action triggering them.
  4. Do you have your own custom 404 page?
  5. Do you support print?
  6. Do error messages provide enough guidance to users?
  7. Does your application degrade gracefully when JavaScript is disabled?
  8. Are all links valid?

Security

  1. Do you validate all input?
  2. Are all assets secured and locked down?
  3. Do you grant least permissions for actions?
  4. Ensure error messages do not reveal sensitive server information.
  5. Have you stripped response headers of infrastructure-revealing information? E.g. server type, version etc.
  6. Do you have the latest patches installed on your servers and have a plan for regular updates?
  7. Do you have a Business Continuity / Disaster Response (BCDR) plan in place?
  8. Are you protected against the OWASP Top Ten?
  9. Do you have throttling and rate limiting mechanisms?
  10. Do you have a way to quickly rotate secrets?
  11. Have you scanned your code to ensure no valuable information is being released?

Code

  1. Did you lint your CSS and JS (see JSLint, JSHint, TSLint)?
  2. Have all assets (JavaScript, CSS etc) been minified, obfuscated and bundled?
  3. Do you have unit, integration and functional tests?

Performance

  1. Have you run Google’s Page Speed and Yahoo’s YSlow to identify issues?
  2. Are images optimized? Are you using sprites?
  3. Do you use a CDN for your static assets?
  4. Do you have a favicon? Helps to prevent unwanted 404s since browsers auto-request it.
  5. Are you gzipping content?
  6. Do you have stylesheets at the top and JavaScript at the bottom?
  7. Have you considered moving to HTTP2?

Release Pipeline

  1. Do you have test and staging environments?
  2. Do you have automated release pipelines?
  3. Can you roll back changes?

Others

  1. Do you have a way to track errors and monitor this with logging?
  2. Do you have a plan to handle customer reported issues?
  3. Have you met all legal and compliance requirements for your domain?
  4. Have you handled SEO requirements?

Conclusion

These are just a few off the top of my head – feel free to suggest things I missed. I should probably consider transferring these to a GitHub repo or something for easier usage.

Book Review: Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen‘s ‘Build your own AngularJS‘ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building bricks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike the regular transclude which will include some DOM structure in some new location; element transclusion provides control over the element itself.

So why is this important? Imagine you want some element to only show up under certain conditions. You can use element transclusion to ensure that the DOM structure is only created and linked when you need it. Need some DOM content to be repeated n times? Just use element transclusion, clone and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The array of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!); the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box; as expected, there are also people who want a bit more power and they can get their desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }}
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file has two functions that pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: The other insightful trick was using function objects to ensure reference value integrity. Create a function to use as the reference.
  2. As dictionaries: functions are objects after all and while it is unusual to use them as objects, there is nothing saying you can’t.

function a() {}

// functions are objects too and can carry arbitrary properties
a.extraInfo = "extra";

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1,000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only gripes had to do with dependency resolution; there were a few issues with outdated dependencies but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to version 2 but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

How to detect page visibility in web applications


You are building a web application and need the application to pause whenever the user stops interacting with the page; for example, the user opens up another browser tab or minimizes the browser itself. Example scenarios include games where you want to automatically pause the action or video/chat applications where you’d like to raise a notification.

The main advantage of such an API is to prevent resource wastage (battery life on mobile, internet bandwidth or unnecessary computing tasks). Definitely something to have in mind, especially for developers targeting mobile devices. So how would you do this?

Can I use event listeners?

Technically, you could use a global event listener on the window object to listen for focus/blur events; however, this cannot detect browser minimization. Also, the blur/focus events fire whenever the page loses focus, yet a webpage can still be visible despite losing focus – think about users with multiple monitors.

The good news is that this is possible with the Page Visibility API, which is built into browsers; this post shows how to use it.

Deep dive into details

The Document interface has been extended with two more attributes – visibilityState and hidden.

Hidden

This is true whenever the page is not visible. What counts as being not visible includes lock screens, minimization, being in a background tab etc.

VisibilityState

This can be one of 4 possible values explaining the visibility state of the page.

  • hidden: page is hidden, hidden is true
  • visible: page is visible, hidden is false
  • prerender: page is being pre-rendered and not visible. Support for this is optional across browsers and not enforced
  • unloaded: page is being unloaded; hidden would be false here too. Support for this is also optional across browsers

Show me some code!

document.addEventListener('visibilitychange',function(){
    if(document.hidden) {
        console.log('hidden');
    } else {
        console.log('visible');
    }
}, false);

Browser support

You can have it in nearly all modern browsers except Opera mini. Also, you might need to specify vendor prefixes for some of the other browsers. See this.

Conclusion

There it is; you now know a way to effectively manage resource consumption – be it battery, internet data or computing power.

You can use this to determine how long users spend on your page, automatically pause streaming video/audio (with some nice fadeout effects for audio especially) or even raise notifications.
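For instance, a minimal sketch of auto-pausing a video (assuming a single video element on the page):

document.addEventListener('visibilitychange', function () {
    const video = document.querySelector('video');
    if (!video) return;
    if (document.hidden) {
        video.pause(); // stop consuming bandwidth and battery
    } else {
        video.play();  // resume once the page is visible again
    }
});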


How to track errors in JavaScript Web applications


Your wonderful one-of-a-kind web application just had a successful launch and your user base is rapidly growing. To keep your customers satisfied, you have to know what issues they face and address those as fast as possible.

One way to do that would be reactive: wait for customers to call in. However, most customers won’t do this; they might just stop using your app. On the flip side, you could be proactive and log errors as soon as they occur in the browser to help roll out fixes.

But first, what error kinds exist in the browser?

Errors

There are two kinds of error events in JavaScript: runtime errors, which have the window object as their target, and resource errors, which have the source element as the target.

Since errors are events, you can catch them by calling addEventListener on the appropriate target (window or source element). The WHATWG standard also provides onerror handlers for both cases that you can use to grab errors.

Detecting Errors

One of JavaScript’s strengths (and also a source of much trouble too) is its flexibility. In this case, it’s possible to write wrappers around the default onerror handlers or even override them to instrument error logging automation.

Thus, these can serve as entry points for logging to external monitors or even sending messages to other application handlers.

//logger is an error logger
var original = window.onerror; //keep a handle to any existing handler
window.onerror = function(message, source, lineNo, columnNo, errObject){
    logger.log('error', {
        message: message,
        stack: errObject && errObject.stack
    });
    if (original) {
        return original.apply(this, arguments); //invoke the original handler too
    }
};

var elemOriginal = element.onerror;
element.onerror = function(event) {
    logger.log('error', {
        message: event.message,
        stack: event.error && event.error.stack
    });
    if (elemOriginal) {
        elemOriginal.apply(this, arguments);
    }
};

The Error Object

The interface for this contains the error message and optional values: fileName and lineNumber. However, the most important part is the stack property which provides the stack trace.

Note: Stack traces vary from browser to browser as there exists no formatting standard.

Browser compatibility woes

Nope, you ain’t getting away from this one.

Not all browsers pass in the errorObject (the 5th parameter) to the window.onerror function. Arguably, this is the most important parameter since it provides the most information.

Currently the only big 5 browser that doesn’t pass in this parameter is the Edge browser – cue the only ‘edge’ case. Safari finally added support in June.

The good news though is there is a workaround! Hurray! Let’s go get our stack again.

window.addEventListener('error', function(errorEvent) {
    logger.log('error', {
        message: errorEvent.message,
        stack: errorEvent.error && errorEvent.error.stack
    });
});

And that wraps it up! You now have a way to track errors when they happen in production. Happier customers, happier you.

Note: The event listener and window.onerror approaches might need to be used in tandem to ensure enough coverage across browsers. Also, the error events caught in the listener will still propagate to the onerror handler, so you might want to filter out duplicated events or cancel the default handlers.
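A hedged sketch of such de-duplication, reusing the logger from the earlier snippets (the signature format is an assumption – use whatever uniquely identifies an error for you):

var seen = new Set();

function logOnce(message, stack) {
    var signature = message + '|' + (stack || ''); // assumed signature format
    if (seen.has(signature)) {
        return; // already logged via the other hook
    }
    seen.add(signature);
    logger.log('error', { message: message, stack: stack });
}

window.onerror = function (message, source, lineNo, columnNo, errObject) {
    logOnce(message, errObject && errObject.stack);
};

window.addEventListener('error', function (errorEvent) {
    logOnce(errorEvent.message, errorEvent.error && errorEvent.error.stack);
});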

Related

Tips for printing from web applications

Liked this article? Please share, subscribe or drop a comment.

Maturing as a software engineer


Looking back on my time as a developer, there are a lot of things I would have avoided doing if I had as much knowledge and maturity as I do now.

While I am grateful for the experiences and don’t regret them, I felt it would be a good idea to share these lessons. They might motivate others or at least speed up their careers.

Here goes!

1. Patterns, patterns, patterns

When I take part in code reviews, I tend to look for recurring style patterns. Why? This helps to reduce the cognitive load on readers of the code (after all, code is written to be read).

I am not advocating bad software patterns; rather, my point is that having a plethora of ways of doing the same thing in a codebase creates confusion and productivity losses. How do you determine the ‘right’ pattern?

For example in JavaScript, there are several ways for creating an array.


var a = [];           // array literal – the preferred style

var a = new Array();  // empty array via the constructor

var a = new Array(3); // array with length 3 (3 empty slots)

Having a haphazard mixture only takes away brain processing cycles. Rather, have your team decide on a style and stick to it.

By the way, the first style is the ‘expected’ and preferred approach although there might be use cases for the latter two.

Ever wonder why the Google codebase is reputed to be easy to work with? Well, think about consistency and established patterns.

2. Break the big picture down and make incremental progress

Building and distributing the smallest software piece you can imagine requires more effort than you would think. It is much more efficient to break down the big picture into small chunks of work that can be completed in an hour or less. Such breakdowns make you more effective and help in understanding progress and forecasting completion times (which is a tricky problem to solve).

I used to break down only the code pieces before (which itself was an improvement over my earlier dive-into-code-and-figure-it-out-as-you-go approach). Nowadays, I try to take some time and reflect on the end product itself: its behaviour, look and feel and how users would interact with it.

For a typical software project, such a road map covers:

  • Testing – unit tests, continuous integration,
  • Documentation – extensibility guides, tooling
  • Implementation
  • Discoverability and Distribution – release targets, getting started articles
  • Maintenance – handling bugs, user feedback etc

Sounds like too much work? Well, just focus on one small bit at a time and keep making progress.

3. Be lazy – start first on tasks with the largest impact/effort ratios

Two things matter: results and impact. There is no point in slaving for 20 hours to choose between blue and light blue if it has no impact on the users. Ditto for spending endless hours ‘arguing’ over what language should be used. Just choose the best usable one and deliver results.

My heuristic for tasks is thus:

  • Does completing the task move me closer towards the big picture?
  • Is this the easiest-to-achieve task with the biggest impact?

If so, I pick up that task and just do it – the goal is to maximize the impact/effort ratio.

Before, I’d just stick to a task and spend endless hours on it even if it was something as trivial (and probably low-impact) as beautifying test scaffolding output and elaborately designing test functions. Now? Come on, my time is more valuable than that – I get the test functions right and aim for the coverage I want, but won’t spend more time once that is achieved and the code is readable for others.

Excessive polishing time can be spent on other more impactful pursuits like having fun with family or delivering high impact features.

4. Technical skills plateau

Sooner or later, you’ll get to the technical plateau. By that time, you’ll have so many successes under your belt and can detect potential pitfalls easily. Then, what next?

There are tons of ways to extend your impact and that is how people become even better engineers. For example, I doubt Anders Hejlsberg still writes a lot of code, yet his ideas continue to empower and influence millions around the world.

Think about that, how do you scale your influence and make it possible to touch the lives of thousands of people? Are there engineering problems crippling your organization? Process pitfalls to improve with huge impact? Education ramp ups? There are always challenges to solve and problems to fix.

5. Choose career investments carefully

How would you set up an investment portfolio? Would you just go about investing in everything? Nope, you would evaluate the risks and benefits, consult experts and then invest in a select few areas while ignoring other areas.

You could spread out your risk by investing in a wide area but doing this excessively dilutes your returns. Conversely, investing in only one company could be very risky too. Thus, it’s generally advised to spread out your investment portfolio.

Careers are investment portfolios. A typical career spans a long period (upwards of 30 to 40 years) and shares some similarities with investments:

  • technologies, frameworks etc -> investment options
  • time -> funds

Just as you wouldn’t jump on every new fund, why would you do the same with your career? There is no harm in taking measured risks in careers but you should be strategic and know what your end goal is.

Every now and then a new framework pops up in the news. Before, I’d hop on the bandwagon and try to figure it out. Nowadays? Well, if it really piques my interest, then I might spend some time learning about its core design principles and problem-solving approach.

If it neither solves any of my problems nor brings anything new to the table, then no thank you; I’d rather continue nurturing my current investment portfolio and hedging my bets.

Think about your bets and stick to them.

Conclusion

I am still learning and pray I continue. One thing that has struck me as being really critical is the will to try. We don’t know whether something will work out or not; however, we can always try and then learn from the outcome (success or failure).

Don’t give up – continue learning and growing.

Understanding Bit masks


Bit masks enable the simultaneous storage and retrieval of multiple values using one variable. This is done by using flags with special properties (numbers that are powers of 2). It becomes trivial to test membership by checking if the bit at a position is 1 or 0.

How it works

Masking employs the bitwise OR and AND operations to encode and decode values respectively.

New composite values are created by a bitwise OR of the original composite and the predefined flag. Similarly, the bitwise AND of a composite and a particular flag validates the presence / absence of the flag.

Assume we start off with the decimal values 1, 2, 4 and 8. The table below shows the corresponding binary values.

Decimal   Binary
0         0000
1         0001
2         0010
4         0100
8         1000

The nice thing about this table is that the four bits allow you to represent all numbers in the range 0 – 15 via combinations. For example, 5, which is binary 0101, can be derived in two ways.

5   -> 1 + 4

or

101 -> 0001 | 0100
    -> 0101

7, which is 111, can also be derived in the same way.

Since any number in a range can be specified using these few ‘base’ numbers, we can use such a set to model things in the real world. For example, let’s say we want to model user permissions for an application.

Assuming the base permissions are read, write and execute, we can map these values to the base numbers to derive the table below:

Permission   Decimal   Binary
None         0         000
Read         1         001
Write        2         010
Execute      4         100

Users of the application will have various permissions assigned to them (think ACL). Thus a potential model for visitor, reader, writer and admin roles with our base permissions is:

Role       Permissions              Decimal   Binary
Visitors   None                     0         000
Readers    Read                     1         001
Writers    Read + Write             3         011
Admins     Read + Write + Execute   7         111

Noticed a pattern yet?

All the binary values can be obtained by ‘OR-ing’ the base binary values. For example, admins who have read, write and execute permissions have the value obtained when you do a bitwise OR of 1, 2 and 4.

The UNIX model uses the same numbering system. E.g. 777 translates into 111 111 111 which grants owners, groups and others read, write and execute permissions.

Checking Access

Now, the next question is how do you check if a composite value contains a particular flag? Going back to the binary data basics, this means checking if a bit at some position is a 1 or 0.

The bitwise AND operator comes in handy here – it yields 1 only when both bits at the same position are 1, and 0 in all other cases. Thus, ‘AND-ing’ a composite value and the desired flag reveals the outcome: a result greater than zero means the user has the permission, while zero means there is no permission.

The admin role has a bitmask value of 111. To check if an admin really ‘has’ the execute permission, we do a bitwise AND of 111 and the execute flag 100. The result is 100, which proves the permission.

More tables! The two tables below show the checks for two users: one with read + write + execute (111) permissions and another with read and execute (101) permissions.

Read + Write + Execute (111)

Permission   User bits   Flag   Has permission?
Read         111         001    Yes: 111 & 001 → 1
Write        111         010    Yes: 111 & 010 → 1
Execute      111         100    Yes: 111 & 100 → 1

Read + Execute (101)

Permission   User bits   Flag   Has permission?
Read         101         001    Yes: 101 & 001 → 1
Write        101         010    No:  101 & 010 → 0
Execute      101         100    Yes: 101 & 100 → 1

See? Such a simple way to check without having to make unnecessary calls to the server.
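Translating the permission model into JavaScript, a minimal sketch could look like this (the flag and role names are illustrative):

// flags are increasing powers of 2
const READ = 1;    // 001
const WRITE = 2;   // 010
const EXECUTE = 4; // 100

// compose permissions with bitwise OR
const admin = READ | WRITE | EXECUTE; // 7 -> 111
const reader = READ;                  // 1 -> 001

// check membership with bitwise AND
function hasPermission(user, flag) {
    return (user & flag) !== 0;
}

console.log(hasPermission(admin, EXECUTE)); // true
console.log(hasPermission(reader, WRITE));  // false

// revoke a flag with AND NOT
const demoted = admin & ~EXECUTE; // 3 -> 011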

Usage

This post used the permission model as an example; however, bit masks can be applied to a wide variety of scenarios. Possible examples include checking membership, verifying characteristics and representing hierarchies.

A possible use in games could be to verify the various power-ups an actor has and to add new ones rapidly. There is no need to iterate over a collection; just check the bitmask.

Hierarchical models, where higher members encompass lower ones (e.g. the number sets), can also be adequately modeled using bitmasks.

Language support

Explicit language support is not needed to use bitmasks; the rules to know are:

  • Use increasing powers of 2 – this ensures each flag occupies exactly one bit position
  • Create basic building blocks that are easy to extend and combine
  • Watch out for overflows and use types that are large enough to hold all possible bit values. For example, C’s uint8_t / unsigned char can only hold 8 different flags; if you need more, you’d have to use a bigger type.

Some languages provide extra support for bit mask operations. For example, C# provides the FlagsAttribute for enums, which signals that the enum will be used for bit masking operations.

Teaser

Q: Why not base 10? After all, we could use 10, 100, 1000 etc.

A: The decimal system falls short because a single position can hold ten different digits. This makes it difficult to represent the binary ON/OFF state (which maps well to 0/1).