Why you should not use isNaN in JavaScript


I was on a video streaming site recently and moved the play point to the far right. It was amusing to see the hover details show NaN:NaN – ahah, some mathematical operation had NaN-ed and the code didn’t cater for that.

If you have read Why JavaScript ‘seems’ to get addition wrong, you would have seen that some operations result in NaN. NaN is a value with its roots in the IEEE 754 standard.

What is NaN?

NaN literally means Not a Number. Yes, it means that value is not a number, and it occurs when you try to coerce a non-numeric value (e.g. a string) into a number.

How do you check if a value is NaN?

How do you know if some value is NaN? Turns out this is not so straightforward.

For numbers, we typically compare to the expected value and that works as expected; the case for NaN, however, is different.

let two = 2;
two === 2; // true
two == 2; // true

// NaN
let x = NaN;
x === NaN; // false
x == NaN; // false
x === x; // false ???

Unequal ‘equalities’ in Maths and JavaScript

You might be scratching your head and wondering if any other values behave this strangely. In Mathematics, there is one candidate – infinity: infinite values are not necessarily equal, even if most operations assume they are for simplicity.

Imagine two containers of water – a large jug and a small cup. Both contain infinite amounts of atoms right? Yet, it is obvious that the infinite amount of atoms in the large jug is greater than the infinite amount of atoms present in the small cup. The inability to determine a specific value doesn’t automatically make all infinite values equal.

Thus, even though the results of 1 * ∞ and 10 * ∞ are both ∞ in most languages, we can argue the latter is a ‘larger’ type of ∞. It might not matter so much given that computers have finite storage limits. For a more in-depth discussion of this, read Jeremy Kun’s excellent post.

Let’s see if JavaScript obeys this Maths law.

let infinity = Infinity;
infinity === Infinity; // true

(2 * Infinity) === (10 * Infinity); // true

So JavaScript coalesces all Infinity values and makes them ‘equal’. But NaN is exempt from this as shown earlier.

The good thing is that this special quality of NaN stands out. According to the IEEE754 standard, NaN cannot be equal to anything (even itself). Thus to determine if a value is NaN, you can check if that value is not equal to itself.

let nan = NaN;
nan === nan; // false
nan !== nan; // true

The Issue with JavaScript’s isNaN

JavaScript exposes the isNaN method for checking for NaN values. The snag however is that it behaves unreliably with varying operand types.

isNaN(NaN); // true
isNaN(2); // false
isNaN('a'); // true
isNaN(); // true
isNaN(null); // false
isNaN(true); // false

Surprised? Again, this is the exhibition of one of JavaScript’s quirks. The spec reads thus:

Returns true if the argument coerces to NaN, and otherwise returns false.

And what does the ToNumber coercion table look like?

Value       Numeric value
null        0
undefined   NaN
true        1
false       0
123         123
[]          0
{}          NaN
So you now know why isNaN() and isNaN({a: 1}) are both true even though isNaN([]) is false. Even though arrays are objects, their ToNumber coercion is not NaN (as shown in the table above). Similarly, since the boolean primitives coerce to numbers, calling isNaN(true) or isNaN(false) gives a false outcome.
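You can observe the coercion table directly with the Number constructor:

```javascript
// ToNumber coercion mirrors the table above
Number(null);      // 0
Number(undefined); // NaN
Number(true);      // 1
Number(false);     // 0
Number([]);        // 0  (empty array -> '' -> 0)
Number({});        // NaN

// hence the surprising isNaN results
isNaN(null); // false, because Number(null) is 0
isNaN({});   // true, because Number({}) is NaN
```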

Reliably verifying NaN values

There are two fixes for this:

1. Prior to ES6, the only way was to check if the value is not equal to itself.

function isReliablyNaN(x) {
    return x !== x;
}

2. ES6 introduces the Number.isNaN method which avoids the inherent toNumber coercion of isNaN. This ensures that only NaN returns true.

Number.isNaN(NaN); // true

// All work as expected now
Number.isNaN(2); // false
Number.isNaN('a'); // false
Number.isNaN(); // false
Number.isNaN(null); // false
Number.isNaN(true); // false

Conclusion

If you are using isNaN in your code, you most likely have a bug waiting to happen.

You should switch to Number.isNaN, which is already supported by all major browsers except IE11, and add a polyfill fallback just in case. You should also know that the global isFinite applies the same ToNumber coercion and consequently suffers the same flaws. Use Number.isFinite instead.
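Such a polyfill can be a minimal sketch built on the same self-inequality trick shown earlier:

```javascript
// minimal Number.isNaN polyfill using the self-inequality trick
if (typeof Number.isNaN !== 'function') {
  Number.isNaN = function (value) {
    // only NaN is a number that is not equal to itself
    return typeof value === 'number' && value !== value;
  };
}
```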

I would have preferred a reliable isNaN implementation, but alas this special behaviour has become a ‘feature’ and cannot be changed for backwards-compatibility reasons.

Related

If you enjoyed this post and wanted to learn more; here are a couple of posts explaining various quirky behaviours in JavaScript.

  1. Why JavaScript ‘seems’ to get addition wrong
  2. Why JavaScript has two zeros: -0 and +0
  3. JavaScript has no Else If
  4. Quirky Quirky JavaScript: Episode One

What it means to be a Senior Software Engineer


What makes a great senior software engineer? That’s the question I always ask managers and senior engineers when I meet with them.

This post discusses the traits of the excellent engineers I have had the opportunity to work with over the years.

1. Empathy

Simple: do not be a brilliant jerk. No one enjoys working with one.

Your teammates are human and have feelings – do not hurt them. No matter how brilliant you are, if you are a jerk, people will avoid interacting with you. Eventually, you will become the ‘brilliant jerk’, the ‘respected’ person on the team that no one wants to work with.

This means being open to opposing ideas – seek to understand what people say and see their viewpoints. Even if you disagree, you can state your thoughts in a respectful way; there is no need to belittle or point fingers.

You do not need to ‘win’ every argument; conserve your energy and emotional strength for more important matters. The trick is to find the core set of high-impact issues to concentrate on – these are the ones you put your foot down and fight strongly for.

2. See the end to end picture

Farmer: I see a lot of trees

Pilot: I see the Amazon

Astronaut: I see South America

Seeing the big picture is a criterion that distinguishes junior engineers from senior ones. As a junior, it is easy to become immersed in implementing some small piece without seeing the big picture. A senior engineer sees how that piece fits into the system, the business need and extension areas.

See the continent, the forest and the trees!

As you go up, do not blindly start coding gung-ho; such approaches will cost you money, time and effort (fixing bugs, reverting code, redesigns). Seek first to understand the full story and see the big picture.

Discuss to get more information about usage, corner cases and expected behaviour. The outcome is better-designed software that is resilient to bugs and easy to extend.

Spend some time thinking about the trade-off between risks and benefits. For tightly-coupled code, a small refactor can trigger an avalanche of downstream changes or break some remote component. This also applies to jumping on the latest fad without fully understanding its design roots; you might not even need that hadoop system.

Senior engineers try to isolate risk and keep the system always running. It is not just about shipping features – it is about shipping features optimally.

3. Crisp Clear Communication

Have you heard of any brilliant scientist that could not communicate?

Concise clear communication is an important skill that most engineers unfortunately lack. Software engineers spend a lot of time discussing designs, explaining architectural decisions and convincing people. They have to go to meetings, attend service reviews and send emails. A lack of good communication skills makes it difficult to get ideas across.

Senior engineers tailor their communication to the target audience. They use the vocabulary and terms their audience understands and can express their thoughts to disparate groups. Thus, designers, program managers and their peers understand them.

The good news is that you can learn how to communicate. A heuristic follows:

  1. Know the desired communication outcome
  2. Gather your thoughts
  3. Create a logical sequence for articulating your thoughts
  4. Express your thoughts using the sequence in 3
  5. Know when to stop – no point trying to convince someone who has already closed off their mind
  6. Practice, practice, practice

Learn to get your ideas across convincingly and persuasively – it is key!

4. Sangfroid

Sangfroid: self-possession or imperturbability especially under strain

I met with a senior director when I was about to join PowerBI in 2015. He told me that great engineers have the confidence to take some risk.

Everyone likes a confident person! A confident engineer can motivate teams and colleagues toward achieving a goal. Even if they end up in unfamiliar terrain, past experiences and challenges will have equipped them with the confidence to pull through.

This year, I was working on some critical core of our SaaS offering and no matter how hard I tried, some mistakes always slipped through. I was a bit demoralized after one incident and asked A, a senior engineer with nearly 20 years of experience, for advice. His advice was this:

Humans make mistakes and that’s normal, that’s how you learn. What’s bad is making the same mistake twice and not learning from it.

That made my day, and I came to see the link between learning and building confidence. Do not worry too much about mistakes; rather, be ready and fortified to handle them when they arise. That’s sangfroid…

5. Disciplined to know and choose what matters

Time is limited but the amount of work that needs to be done is not. How do you prioritize and deliver the things with optimal impact? What matters to the business is delivering high-quality high-impact outcomes consistently over time.

Choose the tasks with disproportionate impact and do them first. If your web service is down, what matters most is mitigating the issue and getting it back up; deep fixes and reviews can come immediately afterwards. The same logic applies to features with little business impact – if something doesn’t matter, is it worth investing a lot in it?

A senior engineer approaches boring tasks (e.g., unit tests and documentation) with the same level of seriousness as writing code. These tasks, which are multipliers of value, are just as important (if not even more important) in the long run.

I set up a continuous delivery system via git hooks in 2013 for my team. Pushing to the ‘prod’ branch would update the Amazon Web Service VM we had running in about 2 minutes or so. The naïve me did not write a single line of documentation; I felt it was not needed for three reasons:

  1. I had it all in memory
  2. I could always Google for it
  3. I was the only one handling administration and automation duties

As expected, I got confused after some time trying to understand how the system was set up. That was something that documentation would have helped fix.

Some activities are value multipliers – they save you time and effort. They also make it easier to bring on new developers.

Whatever you do, choose wisely and then do it well!

Conclusion

When can you say you are senior? I will give a few heuristics:

  • When you can lead a huge development project end to end and work across multiple teams to achieve your goal.
  • When you can work independently
  • When you are known as an expert and your views reveal deep wisdom, tact and insight.
  • When you can communicate clearly and express your views

Oh, there are a few other aspects I didn’t cover such as estimation and having the right computer science basics.

Thoughts?

Related

The related posts below show the evolution of my views over the years and make for interesting reading.

  1. Maturing as a software engineer
  2. Advice for aspiring programmers
  3. 10 years of programming: Lessons Learnt
  4. The Effective Programmer – 3 tips to maximize impact
  5. Becoming a Professional Programmer
  6. So you want to become a better programmer
  7. Levels of Developer Expertise
  8. Become a better programmer

A framework for shipping high quality software


Software engineers, technical leads and managers all share one goal – shipping high-quality software on time. Ambiguous requirements, strict deadlines and technical debt exert conflicting tugs on a software team’s priorities. Software quality has to be great otherwise bugs inundate the team; further slowing down delivery speed.

This post proposes a model for consistently shipping high-quality software. It also provides a common vocabulary for communication across teams and people.

Origins

This framework is the culmination of lessons learnt delivering the most challenging project I have ever worked on. The task was to make a web application globally available to meet scaling and compliance requirements.

The one-line goal quickly ballooned into a multi-month effort requiring:

  • Moving from a single compute resource based approach to multiple compute resources.
  • Fundamental changes to platform-level components across all microservices.
  • Constantly collaborating with diverse teams to get more insight.

The icing on the cake? All critical deployments had to be seamless and not cause a service outage.

What’s Donald Rumsfeld gotta do with software?

He’s not a software engineer but his quote below provides the basis.

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

– Donald Rumsfeld

His quote is a simplified version of the Johari window from Psychology. Applying this to software, the window would look thus:

                                   What the developer knows    What the developer doesn’t know
What other developers know         Known                       Unknown known
What other developers don’t know   Known unknown               Unknown unknown

1. The known

Feature requirements, bugs, customer requests etc. These are the concepts that are well-known and expected. However, writing code to implement a feature may not guarantee a full known status. For example, untested code can still be a known unknown until you verify how it works.

It is one thing to think code works as you expect; it is another to prove it. Unit tests, functional tests and even manually stepping through every line help to increase the known.
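As a trivial illustration (the `add` function here is hypothetical), even a one-line assertion moves behaviour from ‘I think it works’ to ‘I proved it works for this case’:

```javascript
// hypothetical function under test
function add(a, b) {
  return a + b;
}

// a minimal unit test: this behaviour is now a verified known
if (add(2, 3) !== 5) {
  throw new Error('add(2, 3) should be 5');
}
```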

2. The known unknown and the unknown known

I am collapsing both halves into one group because they are related.

  1. Known Unknown

    These are aspects that the developer knows about but other engineers in partner teams don’t know. A good example would be creating a replacement API and making an existing one obsolete. Another example would be changing the behaviour of some shared component.

  2. Unknown Known

    These are the aspects that the developer doesn’t know but engineers in other teams know about. For example, a seemingly minor update of a core component by a developer can trigger expensive rewrite cascades in partner teams. Another example could be quirks that are known to only a few engineers.

Clear communication is the one good fix for challenges in this category. Over-communicate! Send out emails, hold design reviews and continuously engage stakeholders.

This is extra important for changes with far-reaching impact. As the developer/lead/manager, you need to spend time with the key folks and understand their scenarios deeply. This would lead to better models as well as help forecast issues that might arise.

Finally, this applies to customers too – you may know what the customer doesn’t know, and vice versa.

3. The unknown unknowns

This is the most challenging category. There is no way to model or prepare for something unpredictable – an event that has never happened before. Unknown Unknowns (UUs) include hacks, data loss / corruption, theft, sabotage, release bugs and so on.

Don’t fret just yet; the impact of UUs can be mitigated. Let’s define two metrics:

  1. Mean time to repair (MTTR)

    The average amount of time it takes to repair an issue with the software.

  2. Mean time to detect (MTTD)

    The average amount of time it takes to detect a flaw.

The most reliable way of limiting the impact of UUs is to keep the MTTR and MTTD low. Compare the damage that a data-corrupting deployment can cause in 5 minutes versus 1 hour.

MTTD

A rich monitoring and telemetry system is essential for lowering MTTD metrics. Log system health metrics (RAM, CPU, disk reads, etc.), HTTP response statuses (500s, 400s, etc.) and more.

Ideally, a bad release will trigger alarms and notify administrators as soon as it goes out. This enables the service owner to react and recover quickly.

MTTR

Having a feature toggle or flighting system can help with MTTR metrics. Again using the bad release example, a flight/feature toggle will enable you to ‘turn off’ that feature before it causes irreparable damage.

Also critical is a quick release pipeline: if it takes two days to get a fix out, then your MTTR is at least two days plus the time the fix itself takes. That’s a red flag – invest in a CI pipeline.

tldr?

A software engineer is rolling out a critical core update; a few questions to ask:

  • Is there enough logging to debug and track issues if they arise?
  • Is the risky feature behind a flight or feature toggle? How soon can it be turned off if something goes wrong?
  • Are there metrics that can be used to find out if something goes wrong after the feature is deployed in production?

A release strategy is to roll out the feature in a turned off state and then turn it on for a few people and see if things are stable. If it fails, then you turn off the feature switch and fix the issue. Otherwise, you progressively roll out to more users.
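A sketch of such a percentage-based rollout check follows; all the names here (`ROLLOUT_PERCENT`, `bucketOf`, `isEnabledFor`) are hypothetical, and in practice the flag value would live in a config service:

```javascript
// hypothetical rollout percentage; start small, raise as confidence grows
const ROLLOUT_PERCENT = 5;

// stable hash of a user id into a bucket from 0 to 99
function bucketOf(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) % 100;
  }
  return hash;
}

// the same user always lands in the same bucket, so the rollout is consistent
function isEnabledFor(userId) {
  return bucketOf(userId) < ROLLOUT_PERCENT;
}
```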

What steps do you take to ensure software quality?

Related

  1. Creating Great User Experiences
  2. Efficiently shipping Big Hairy Audacious Software projects
  3. Things to check before releasing your web application

Why computer science matters for software developers


I used to think computer science never mattered because I rarely used algorithms and never saw the value of algorithm-based interviews (I still don’t ;) ). The few folks I asked also concurred so I felt I was right.

September 2016

My team got reorged and our goal was to build and deliver a brand new SaaS offering. As part of that move, I made a conscious decision to switch to full stack engineering + dev ops. My experiences have made me re-evaluate my initial stance and realize I was wrong.

Computer Science does matter, a lot! Having that foundation empowers you to

  • Make better tradeoff decisions
  • Innovate new ways of solving problems
  • Spot design pitfalls early; e.g. a design that violates the CAP theorem is a disaster waiting to happen.
  • Avoid solving impossible problems e.g. trying to parse HTML with regex.

The following paragraphs show how computer science principles underpin common concepts.

1. File systems and the HTML DOM

What do hierarchical file systems have in common with the HTML DOM? Simple, both are based on trees. Some common operations on trees include reparenting, search and walks. The table below shows how file system and DOM operations can be tied back to basic tree operations.

Tree operation     File system         DOM
Search             File search         Search
Traversal          Directory listing   Layout rendering
Node reparenting   File move           Hiding and showing sections of the DOM
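A toy tree makes the shared operations concrete; the node names below are illustrative and could model directories or DOM elements equally well:

```javascript
// a tiny tree: each node has a name and children
const root = {
  name: 'root',
  children: [
    { name: 'docs', children: [{ name: 'readme', children: [] }] },
    { name: 'src', children: [] }
  ]
};

// search: find a node by name (file search / DOM query)
function find(node, name) {
  if (node.name === name) return node;
  for (const child of node.children) {
    const hit = find(child, name);
    if (hit) return hit;
  }
  return null;
}

// traversal: visit every node (directory listing / layout pass)
function walk(node, visit) {
  visit(node);
  node.children.forEach(child => walk(child, visit));
}

// reparenting: move a subtree (file move / DOM node move)
function reparent(node, oldParent, newParent) {
  oldParent.children = oldParent.children.filter(c => c !== node);
  newParent.children.push(node);
}
```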

Having this foundation allows you to ask deeper questions. For example, let’s analyze the reason behind the discouragement of excessive DOM manipulations.

It’s not the DOM – tree operations are fast. The challenge lies in the browser having to recompute layout and repaint when nodes change. If you keep moving nodes around, the browser has to play catch up and the cost adds up over time.

The ‘best practice’ is to detach the node, manipulate it and then re-attach it. This way the browser repaints only twice – on detach and on re-attach. The detached node can change any number of times without triggering DOM reflow since it’s no longer part of the rendered tree.

2. Solving scheduling problems

I wrote a timetable generator in PHP as an intern 8 years ago. Yes, it was a brute force solver and took about an hour to generate a timetable for an average-sized school.

A quick inefficient solution can get you to the market fast and might even work well at small scale. However, any attempt to extend or improve such solutions would be prohibitive. My brute-force solution of 2009 would have broken for larger problem sets; a trivial ask, such as introducing a new constraint (e.g. multiple teachers for multiple classes), would have necessitated a rewrite.

The timetable problem is a constraint satisfaction problem (CSP). Other popular CSP problems include appointment scheduling, map colouring and the 8 queens puzzle. Backtracking search is a standard way to solve CSPs. Solvers can also leverage greedy search, minimum-conflicts heuristics and simulated annealing.

This approach separates the problem from the solver; thus it becomes easy to change the constraints and extend the solver to new scenarios.
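A minimal backtracking sketch for the related map-colouring problem illustrates the separation; the four-region map and its adjacency list here are hypothetical:

```javascript
// a hypothetical four-region map: which regions touch which
const neighbours = { A: ['B', 'C'], B: ['A', 'C'], C: ['A', 'B', 'D'], D: ['C'] };
const colours = ['red', 'green', 'blue'];

// assign colours region by region; abandon a branch on conflict
function solve(assignment, regions) {
  if (regions.length === 0) return assignment; // every region coloured
  const [region, ...rest] = regions;
  for (const colour of colours) {
    const clash = neighbours[region].some(n => assignment[n] === colour);
    if (!clash) {
      const result = solve({ ...assignment, [region]: colour }, rest);
      if (result) return result; // this branch worked
    }
  }
  return null; // dead end – backtrack
}

const solution = solve({}, Object.keys(neighbours));
```

Note how the constraints (`neighbours`) live apart from the solver, so adding a region or a rule means editing data, not rewriting the algorithm.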

3. Avoiding stack overflows

How would you make sure your recursive function never runs out of stack space? This might not be a problem if the language optimizes tail calls but alas most languages don’t.

  1. You could catch the stack overflow exception but that doesn’t mean your computer can’t calculate the value. It just means the recursive implementation exceeded the computer’s stack memory limit.
  2. You could convert the recursive function to a loop. This would require passing in values around and might not be so elegant.

A trampoline solves the problem beautifully and can be reused for all recursive functions and across languages. Read this post to learn how a trampolined factorial function allows you to compute the factorial of 30000.
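A minimal trampoline sketch: the recursive call returns a thunk (a zero-argument function) instead of recursing, and a loop bounces until a real value appears. BigInt keeps the large factorials exact.

```javascript
// bounce thunks on a loop instead of growing the call stack
function trampoline(fn) {
  let result = fn();
  while (typeof result === 'function') {
    result = result();
  }
  return result;
}

// tail-recursive factorial that returns a thunk instead of recursing
function factorial(n, acc = 1n) {
  if (n <= 1n) return acc;
  return () => factorial(n - 1n, n * acc);
}

trampoline(() => factorial(5n)); // 120n
// trampoline(() => factorial(30000n)) also completes without a stack overflow
```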

4. Consistently parsing HTML with Regex?

A common joke is that experienced programmers send newbies on a wild goose chase by asking for a regex parser for HTML. It’s been a while since I took an automata course, but the short answer goes thus:

  • Regular expressions are a regular grammar (Type 3)
  • HTML is a context-free grammar (Type 2)

Type-2 grammars encompass Type-3 and are thus more complex. See the Chomsky hierarchy.

It might be safe to use regex on a small, well-known subset of HTML, or to extract data out of specific pages. But claiming it will work on arbitrary HTML is not true.
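A quick demonstration of where the regex approach breaks down – a lazy pattern that works on flat markup fails as soon as tags nest:

```javascript
// works for flat, non-nested markup
const flat = '<b>bold</b>';
/<b>(.*?)<\/b>/.exec(flat)[1]; // 'bold'

// nesting defeats it: the first closing tag ends the match too early
const nested = '<b>outer <b>inner</b> text</b>';
/<b>(.*?)<\/b>/.exec(nested)[1]; // 'outer <b>inner'
```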

Tip: Know the class of impossible computing problems and save yourself from wasting time on futile challenges.

Conclusion

Concepts like design patterns, computer networks, architecture, etc. all matter in the software engineering profession. However, having a solid CS background is key.

There are loads of opportunities to apply computer science principles to daily tasks – you just have to make the push.

Thoughts?

Related

What every programmer should know about types I


What is a type?

Type: (noun) a category of people or things having common characteristics.

A type represents a range of permissible values. Let’s take an example from the mathematical concept of sets in number theory. The set of integers can be seen as a type – only values such as 1, 2, 3 are integers; decimals (e.g. 1.1) or irrational numbers (e.g. π) aren’t members of the integer set. Extrapolating to common programming types, we can create more examples:

  • Integers can only be 1,2,3…
  • Boolean types can only be true or false
  • Floating point / real numbers can represent decimals (with some loss of accuracy for irrational numbers)
  • Strings are strings.
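These set-membership checks map directly onto JavaScript:

```javascript
// membership in the integer set
Number.isInteger(2);   // true  – 2 is in the set
Number.isInteger(1.1); // false – 1.1 is not

// the boolean and string sets
typeof true;    // 'boolean'
typeof 'hello'; // 'string'
```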

Buckets

Think about a bucket – it can hold various things (water, sand or even fruits). What you can do with a bucket depends on two things:

  • What the bucket contains
  • The rules determining what the bucket can hold.

Let’s look at two bucket-usage philosophies.

Strict bucket land

The standing rule in strict bucket-land is to have specialized buckets for distinct content types. If you try to use the same bucket for apples and oranges, you’ll get a good talking to.

One advantage is that you immediately know what you are getting once you read a bucket’s label; however, if you need to fetch a single apple and a single orange, you’ll need two buckets; I know this sounds ridiculous but rules are rules.

Loose bucket land

Everything is relaxed in loose bucket-land. Everything – apples, oranges, some sand, ice cream, water etc. – goes into the same bucket. Unlike the strict folks, no one is going to fret as long as you don’t disrupt their daily lives.

If you need an orange, you dip into the bucket, pull ‘something’ out and then check if you really got an orange. The value is tied to what you actually pull out of the bucket and not the bucket’s label.

Some loose bucketers try to imitate the strict bucketers by using explicitly-labeled buckets. But because there is no hard rule preventing mixes, a trickster can drop an apple into your orange bucket!

Static vs Dynamic Types

The metaphors in the scenario above map as follows:

  • The bucket represents variables (they are containers after all)
  • The bucket contents (e.g. apples, oranges, sand etc.) are the values
  • The rules are the programming language rules

The big schism between dynamically typed and statically typed languages revolves around how they view variables and values.

In static languages, a variable has a type and this restricts the values you can put into it and the operations you can do with it. To draw on the bucket analogy – you typically won’t (and can’t) pour sand into the fruits bucket.

Dynamic language variables have no type – they are like the loose buckets described earlier. Because the containers are ‘loose’, valid actions for a variable depend on its content. To give an analogy – you wouldn’t want to eat out of a sand bucket.

  1. Static systems – variables have a type. So a container may only hold ints, floats or doubles etc.
  2. Dynamic type systems – variables can hold anything. However the values (contents of the variables) have a type.

That is why you can’t assign a string to an int in C#/Java because the variable container is of type int and only allows ints. However, in JavaScript, you can put anything in a variable. The typeof operator checks the type of the value in the variable and not the variable itself.
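You can see this in action – the same variable reports different types as its contents change:

```javascript
let bucket = 42;   // the 'loose bucket' holds a number...
typeof bucket;     // 'number'

bucket = 'apples'; // ...and now a string, with no complaint
typeof bucket;     // 'string' – typeof inspects the value, not the variable
```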

Why does this matter?

The adherents of the static typing methodology always argue that if it ‘compiles’ then it should work. Dynamically typed language adherents would quickly poke holes in this and tout good testing instead, since tests are what guarantee code works as expected.

In a very strictly-typed language, whole classes of runtime bugs become theoretically impossible to write! Why? Bugs are typically invalid states, and the type checks make it impossible to represent such states programmatically. Surprised? Check out Haskell, F#, Idris (yes, it is a programming language) or Agda.

The strictness vs testing spectrum

How much testing is required given a language’s type system?

I would view this as a spectrum – the left end holds the extremely loose languages (JavaScript) whereas the right end contains the strictest languages (Idris). Languages like Java and C# fall around the middle of this spectrum – they are not strict enough, as evidenced by loads of runtime bugs.

As you move from left to right, the amount of testing needed to validate the correctness of your program should reduce. Why? The type system provides more of the checks on your behalf.

Conclusion

I hope this post clarifies some thoughts about type systems and testing for you. Here are some other posts that are related:

  1. Programming Language Type Systems I
  2. Programming Language Type Systems II

Faking goto in JavaScript


What if I told you JavaScript had a limited form of the infamous goto statement? Surprised? Read on.

Labeled Statements

It is possible to add label identifiers to JavaScript statements and then use these identifiers with the break and continue statements to manage program flow.

While it might be better to use functions instead of labels to jump around, it is worth seeing how to jump around or interrupt loops using these. Let’s take an example:

// print only even numbers
loop:
for(let i = 0; i < 10; i++){
    if(i % 2) {
        continue loop;
    }
    console.log(i);
}
//0, 2, 4, 6, 8

// print only values less than 5
loop:
for(let i = 0; i < 10; i++){
    if(i > 5) {
        break loop;
    }
    console.log(i);
}
// 0, 1, 2, 3, 4, 5

There is a subtle difference in when labels can be used:

  • break statements can apply to any label identifier
  • continue statements can only apply to labels identifying loops

Because of this, it is possible to have the sample code below (yes it’s valid JavaScript too!)

var i = 0;
block: {
     while(true){
         console.log(i);
         i++;
         if(i == 5) {
             break block;
             console.log('after break'); // unreachable – never runs
         }
     } 
}
console.log('outside block');
// 0, 1, 2, 3, 4, outside block

Note

  1. continue wouldn’t work in the above scenario since the block label applies to a block of code and not a loop.
  2. The {} after the block identifier signify a block of code. This is valid JavaScript and you can define any block by wrapping statements inside {}. See an example below
{
let i = 5;
console.log(i);
}

// 5

Should I use this?

This is an arcane corner of JavaScript and I personally have not seen any code using this. However if you have a good reason to use this, please do add comments and references to the documentation. Spare the next developer after you some effort…

Related

  1. What you didn’t know about JSON.Stringify
  2. Why JavaScript has two zeros: -0 and +0
  3. JavaScript has no Else If

What you didn’t know about JSON.Stringify


JSON is the ubiquitous data format that has become second nature to engineers all over the world. This post shows you how to achieve much more with JavaScript’s native JSON.stringify method.

A quick refresher about JSON and JavaScript:

  • Not all valid JSON is valid JavaScript
  • JSON is a text-only format, no blobs please
  • Numbers are only base 10.

1. JSON.stringify

This returns the JSON-safe string representation of its input parameter. Note that non-stringifiable fields will be silently stripped off as shown below:

let foo = { a: 2, b: function() {} };
JSON.stringify(foo);
// "{ "a": 2 }"

What other types are non-stringifiable? 

Circular references

Since such objects point back at themselves, it’s quite easy to get into a non-ending loop. I once ran into a similar issue with memq in the past.

let foo = {};
foo.b = foo; // foo now points back at itself
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

// Arrays
foo = [foo];
JSON.stringify(foo);
// Uncaught TypeError: Converting circular structure to JSON

Symbols and undefined

let foo = { b: undefined };
JSON.stringify(foo);
// {}
// Symbols
foo.b = Symbol();
JSON.stringify(foo);
// {}

Exceptions

Arrays containing non-stringifiable entries are handled specially though.

let foo = [Symbol(), undefined, function() {}, 'works']
JSON.stringify(foo);
// "[null,null,null,'works']"

Non-stringifiable fields get replaced with null in arrays and dropped in objects. The special array handling helps ‘preserve’ the shape of the array. In the example above, if the array entries were dropped as occurs in objects, then the output would have been [‘works’]. A single element array is very much different from a 4 element one.

I would argue for using null in objects too instead of dropping the fields. That way, we get a consistent behaviour and a way to know fields have been dropped.
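If you want that consistent behaviour today, JSON.stringify’s optional replacer argument (a function of key and value) lets you keep dropped object fields as null:

```javascript
// replace non-stringifiable values with null instead of dropping them
function keepAsNull(key, value) {
  const type = typeof value;
  if (value === undefined || type === 'function' || type === 'symbol') {
    return null;
  }
  return value;
}

JSON.stringify({ a: 2, b: undefined, c: function () {} }, keepAsNull);
// '{"a":2,"b":null,"c":null}'
```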

Why aren’t all values stringifiable?

Because JSON is a language agnostic format.

For example, let us assume JSON allowed exporting functions as strings. With JavaScript, it would be possible to eval such strings in some scenarios. But what context would such eval-ed functions be evaluated in? What would that mean in a C# program? And how would you even represent some language-specific values (e.g. JavaScript Symbols)?

The ECMAScript standard highlights this point succinctly:

It does not attempt to impose ECMAScript’s internal data representations on other programming languages. Instead, it shares a small subset of ECMAScript’s textual representations with all other programming languages.

2. Overriding toJSON on object prototypes

One way to bypass the non-stringifiable fields issue in your objects is to implement the toJSON method. And since nearly every AJAX call involves a JSON.stringify call somewhere, this can lead to a very elegant trick for handling server communication.

This approach is similar to toString overrides that allow you to return representative strings for objects. Implementing toJSON enables you to sanitize your objects of non-stringifiable fields before JSON.stringify converts them.

function Person (first, last) {
    this.firstName = first;
    this.lastName = last;
}

Person.prototype.process = function () {
   return this.firstName + ' ' +
          this.lastName;
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// '{"firstName":"Ade","lastName":"P"}'

As expected, the instance process function is dropped. Let’s assume however that the server only wants the person’s full name. Instead of writing a dedicated converter function to create that format, toJSON offers a more scalable alternative.

Person.prototype.toJSON = function () {
    return { fullName: this.process() };
};

let ade = new Person('Ade', 'P');
JSON.stringify(ade);
// "{"fullName":"Ade P"}"

The strength of this lies in its reusability and stability. You can use the ade instance with virtually any library and anywhere you want. You control exactly the data you want serialized and can be sure it’ll be created just as you want.

// jQuery
$.post('endpoint', ade);

// Angular 2
this.httpService.post('endpoint', ade)

Point: toJSON doesn’t create the JSON string; it only returns the value that will actually be serialized. The call chain looks like this: toJSON -> JSON.stringify.
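
A built-in illustration of this chain: Date implements toJSON, which is why Date instances serialize to ISO strings instead of empty objects:

```javascript
// Date.prototype.toJSON returns an ISO string; JSON.stringify then serializes it
JSON.stringify({ when: new Date(Date.UTC(2017, 0, 1)) });
// '{"when":"2017-01-01T00:00:00.000Z"}'
```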

3. Optional arguments

The full signature of stringify is JSON.stringify(value, replacer?, space?). I am borrowing the TypeScript ? style for marking optional parameters. Now let’s dive into the replacer and space options.

4. Replacer

The replacer is a function or array that allows selecting fields for stringification. It differs from toJSON by allowing users to select choice fields rather than manipulate the entire structure.

If the replacer is not defined, then all fields of the object will be returned – just as JSON.stringify works in the default case.

Arrays

For arrays, only the keys present in the replacer array would be stringified.

let foo = {
 a : 1,
 b : "string",
 c : false
};
JSON.stringify(foo, ['a', 'b']);
//"{"a":1,"b":"string"}"

Arrays however might not be as flexible as desired; let’s take a sample scenario involving nested objects.

let bar = {
 a : 1,
 b : { c : 2 }
};
JSON.stringify(bar, ['a', 'b']);
//"{"a":1,"b":{}}"

JSON.stringify(bar, ['a', 'b', 'c']);
//"{"a":1,"b":{"c":2}}"

Nested keys are filtered out too – the replacer array applies at every level of the structure. If you want more flexibility and control, then defining a function is the way to go.

Functions

The replacer function is called for every key/value pair and its return values are handled as follows:

  • Returning undefined drops that field from the JSON representation
  • Returning a string, boolean or number ensures the value is stringified
  • Returning an object triggers another recursive call until primitive values are encountered
  • Returning non-stringifiable values (e.g. functions, Symbols etc.) for a key results in that field being dropped

let baz = {
 a : 1,
 b : { c : 2 }
};

// return only values greater than 1
let replacer = function (key, value) {
    if(typeof value === 'number') {
        return value > 1 ? value: undefined;
    }
    return value;
};

JSON.stringify(baz, replacer);
// "{"b":{"c":2}}"

There is something to watch out for though: on the first call, the key is an empty string and the value is the entire object; recursion begins thereafter. See the trace below.

let obj = {
 a : 1,
 b : { c : 2 }
};

let tracer = function (key, value){
  console.log('Key: ', key);
  console.log('Value: ', value);
  return value;
};

JSON.stringify(obj, tracer);
// Key:
// Value: Object {a: 1, b: Object}
// Key: a
// Value: 1
// Key: b
// Value: Object {c: 2}
// Key: c
// Value: 2
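
One more ordering detail worth knowing: when an object defines toJSON and a replacer is also supplied, toJSON runs first and the replacer only ever sees its return value. A small trace sketch:

```javascript
const obj = {
  toJSON() { return { secret: 42 }; }
};

const trace = [];
JSON.stringify(obj, (key, value) => {
  trace.push(key); // record every key the replacer is called with
  return value;
});
// trace is ['', 'secret'] – the replacer saw toJSON's output, not obj itself
```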

5. Space

Have you noticed the default JSON.stringify output? It’s always a single line with no spacing. But what if you wanted to pretty format some JSON, would you write a function to space it out?

What if I told you it is a one-line fix? Just stringify the object with the tab (‘\t’) space option.

let space = {
 a : 1,
 b : { c : 2 }
};

// pretty format trick
JSON.stringify(space, undefined, '\t');
// "{
//  "a": 1,
//  "b": {
//   "c": 2
//  }
// }"

JSON.stringify(space, undefined, '');
// {"a":1,"b":{"c":2}}

// custom specifiers allowed too!
JSON.stringify(space, undefined, 'a');
// "{
//  a"a": 1,
//  a"b": {
//   aa"c": 2
//  a}
// }"

Puzzler: why does the nested c option have two ‘a’s in its representation – aa”c”?
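
Note that the space option also accepts a number, interpreted as the number of spaces to indent per level (values are capped at 10):

```javascript
// numeric space option: two spaces of indentation per nesting level
JSON.stringify({ a: 1, b: { c: 2 } }, undefined, 2);
// "{
//   "a": 1,
//   "b": {
//     "c": 2
//   }
// }"
```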

Conclusion

This post showed a couple of new tricks and ways to properly leverage the hidden capabilities of JSON.stringify covering:
  • JSON expectations and non-serializable data formats
  • How to use toJSON to define objects properly for JSON serialization
  • The replacer option for filtering out values dynamically
  • The space parameter for formatting JSON output
  • The difference between stringifying arrays and objects containing non-stringifiable fields
Feel free to check out related posts, follow me on Twitter or share your thoughts in the comments!

Related

  1. Why JavaScript has two zeros: -0 and +0
  2. JavaScript has no Else If
  3. Deep dive into JavaScript Property Descriptors

Creating Great User Experiences


Have you ever wondered why some applications always look and feel similar? Why for example does Apple have a unified experience across devices? Why are Google products starting to adopt the material experience?

This post explains some of the underlying themes influencing design choices. Developers can use these concepts to craft better user interfaces and experiences.

1. Consistency

A well designed user experience is consistent throughout its constituent parts. Imagine trying to use an app with varying colour shades, item sizes and page styles; wouldn’t that be confusing and difficult to understand?

A couple of examples of design consistency include:

  1. Desktop applications usually have the same set of three buttons on the title bar (close, minimize and maximize).
  2. Right-click nearly always opens up a context menu
  3. The menu bar for applications starts with drop downs titled File, Edit and so on

Such consistency lowers the learning barrier as they are implicit assumptions in the minds of users. When was the last time you tried to figure out the ‘close’ button? You don’t need to think, you subconsciously understand and use them.

UI frameworks like semantic-ui, bootstrap or Fabric provide a good foundation for quickly ramping up web applications. A colour palette helps ensure visual colour consistency. Do be careful of contrast though; having too much contrast is not visually appealing while having too little makes it difficult to see components.

And this leads us to the next point : ‘learnability’.

2. Learnability

One of the enduring messages I picked up from Steve Krug’s great book: ‘Don’t make me think’ is to avoid making users think. Why? When people have to ‘think’ deeply to use your application, that interrupts their concentration and flow; such interrupts might be fatal as some users might just leave your site.

A low learning curve helps users quickly become familiar and deeply understand how to use your product. Think of any software you are proficient in, now think about the first time you used that product, was it difficult? Did you grow to learn?

A couple of tips here:

  1. Component Behaviour – One common issue I run into with components is variable behaviour for the same action in different scenarios. For example, a set of actions might trigger a dialog to pop up while in other scenarios the dialog is not triggered. It’s much better to have the dialog pop up consistently otherwise you’ll have your users build many mental models of expected interaction responses.
  2. Grouping – Related collections can be grouped and arranged in hierarchies that can be drilled into. I prefer using cards for such collections and allowing users to drill up/down. Also ensure that the drill ladders are consistent; there is no point having users drill to a level they can’t drill out of.

3. Clickability

My principle of ‘clickability’

The number of clicks required to carry out a critical operation is inversely proportional to the usability scores for the web application.

The larger the number of clicks needed to achieve a major action, the more annoying the app gets over time. Minimize clicks as much as possible!

But why is this important? Let’s take a hypothetical example of two applications – app 1 and app 2 – which require 2 and 4 clicks to register a new user.

Assuming you need to add 5 new users, then app 1 requires 10 clicks while app 2 requires 20 clicks, a difference of 10 clicks! It’s safe to assume then that users who have to use the application for hours on end will prefer app 1 to app 2.

The heuristic I use is to minimize the number of clicks on core user paths and then provide shortcuts for the less common ones.

Yes, ‘clickability’ does matter.

4. Navigability

Building upon the principle of clickability, this implies designing your flow so that the major content sections are well interlinked. It also includes thinking of the various links between independent modules so that navigation is seamless.

For example, you have a navigation hierarchy that is 3 layers deep and want users to do some higher-level action while at the bottom of the navigation tree. You can add links to the leaf nodes and even subsequent tree levels to ensure users can quick jump out rather than exposing a top-level option only. Why? See the principle of clickability above.

A good exercise is to build out the interaction tree which exposes the links between sections of your app. If you have a single leaf node somewhere, then something is really wrong. Once done, find ways to link leaf nodes if such interactions are vital and enhance the user experience.

5. Accessible + Responsive + nit-proof

One of my favorite tests is to see how a web application works on smaller devices. That quickly exposes a lot of responsive issues.

Another is to add far too many text characters to see if text overflow is properly handled. The fix for that is very simple: just add a few CSS rules

.noOverflow {
   white-space: nowrap; /* required for single-line truncation */
   text-overflow: ellipsis;
   overflow: hidden;
}

Another important concept is adding titles to elements and following accessibility guidelines.

All these are important and show that some thought was put into the process.

Conclusion

Do a quick test of your application on some random user. The way users interact with your application and navigate around might surprise you!

While running such tests, ask if users feel they have an intuitive understanding of how the app works. If yes, bravo! Pat yourself on the back. Otherwise, your assumptions might be way off and you need to return to the drawing board.

The best experiences always look deceptively simple but making the complex simple is a very difficult challenge. What are your experiences creating great user experiences?

Related

  1. Things to check before releasing your web application
  2. Tips for printing from web applications
  3. How to detect page visibility in web applications
  4. How to track errors in JavaScript Web applications

JavaScript has no Else If


Surprised? Read on.

The following code might be very familiar to you, in fact I write this too a lot.

if (a > b) {
    return 1;
} else if (b > a) {
    return -1;
} else {
    return 0;
}

However what JavaScript actually parses is shown below

if (a > b) {
    return 1;
} else {
    if (b > a) {
        return -1;
    } else {
        return 0;
    }
}

Same end result but closer to the underlying language semantics.

Why?

JavaScript allows if/else conditionals to omit the wrapping braces – some actually argue this is a neater style due to its terseness.

A contrived example is shown below.

if (a > b)
    return 1;
else
    return -1;

Consequently it follows that the else in the else if line is actually omitting its {} wrappers.

So whenever you write else if, you are technically not following the rule of always using braces. Not saying it is a bad thing though…
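
A contrived sketch drives this home: because else accepts any single statement, constructs like else while parse for exactly the same reason else if does:

```javascript
function count() {
  let i = 0;
  if (false) i = -1;
  else while (i < 3) i++; // 'else' takes the while statement directly, no braces
  return i;
}

count(); // 3
```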

References

  1. MDN article
  2. ES5 spec

Related

  1. Why JavaScript ‘seems’ to get addition wrong
  2. Three Important JavaScript Concepts

Efficiently shipping Big Hairy Audacious Software projects


I recently transitioned into a full-stack role – I wanted to stretch myself and step out of my comfort zone. The biggest challenge was quelling the nagging voice in my mind screaming ‘impostor!’.

So I got a couple of Big Hairy Audacious Software (BHAS) pieces dropped on my plate and that experience motivated this post; after all, who doesn’t want to get better at delivering BHAS projects? My style of working is to consciously take notes as I run into issues, ship software pieces or meet milestones. The introspection typically involves 3 questions:

  • What to avoid:

While shipping some feature, I implemented some shared parts in the common library. Unfortunately, that caused a lot of dependencies and significantly slowed down my development speed. Learning? Implement the shared pieces in the same library and do the refactoring as a follow-up task.

  • What to repeat

Typically hard-earned lessons, for example, having to debug issues in production quickly showed me what good tracing information should look like.

  • What to improve

The middle ground, things that went well but could be improved. This is mostly driven by notes I keep while developing.

So, here are my thoughts so far.

1. Define the end goal

A BHAS project contains a lot of ambiguity in addition to the technical challenges involved! A sample task might be to implement a usage tracking system for ships.

The first question should not be ‘how do I start’ or ‘where do I start from’; rather I’d re-frame that to be ‘What is the desired end product’; what does a successful launch mean? Having crisp targets and communicating those to all the stakeholders is very essential as it helps to set the right expectations.

Furthermore, once the success metric is known, it becomes possible to break the BHAS down into smaller, more manageable chunks with less ambiguity. Once done, order this list and put the tasks with the highest impact at the very top. Why? The Pareto principle (also known as the 80/20 rule) implies that a few of these tasks will provide most of the value; so even if the lower-priority tasks don’t get done, it’s still a win.

2. Take small measured steps

A favorite quote of mine:

How do you eat an elephant? One bite at a time!

Assuming you picked up the most important task to work on and are still stumped. Start with the simplest easiest thing you think will help you wrap up that task – it could even be creating a branch or implementing some simple helper function.

Why this is important is that it sets you on a roll and helps you to build momentum. The more micro-tasks you complete, the more confident you are and the faster you accelerate towards ‘cruise speed’ (aka flow).

3. One thing at a time

I have tried in the past to multi-task; on such days I would feel like I did a lot of work, yet a deeper examination would reveal low productivity. Two theories come to mind:

  1. My brain convinces me that rapidly switching is more efficient and productive.
  2. Same brain convinces me that it is possible to simultaneously do two mentally intensive tasks.

No, humans can’t multi-task; rather our brains do rapid context switching. So if you think you can carry out two very demanding tasks then you either have two brains or are losing ‘brain cycles’ to rapid switches.

Another illusion to watch out for is the effect that confuses ‘busy work’ with being ‘productive’; no you can be busy (answering emails or chatting etc) without being productive. Now, I try to do one thing at a time with full mindfulness and the results have been more meaningful outcomes.

So if you are working on a task, concentrate all efforts into finishing that task alone. When you are done or if you get blocked, then you can consider jumping on a parallel or orthogonal task.

Staying mindful is also quite difficult as there are lots of distractions – consciously evaluate if you need to be in that meeting, that discussion or reply to that group email.

4. Be persistent

Once you’ve picked a task you should stick to it. Ideally any task you are working on at any time should be the one with the highest projected value returns (think weighted graphs).

Sure there might be difficulties but instead of skipping around you should dig in and solve the challenges. Ever wondered why you only get the urge to check email only when a tricky question comes up? Aha, it’s the brain trying to ‘run’ away. Don’t check that mail – your brain is just seeking a way out. Give it all your best and push until you get a breakthrough.

5. Don’t seek perfection and ship the first rough version

No writing is perfect in the first draft; rather you get your thoughts out and then do a number of passes: cleaning up typos, grammatical structure and style in follow up sweeps. The same concept applies to writing code too.

I remember trying to beautify my code and write loads of redundant unit tests while building a particularly tricky interface. I wanted the first version to be perfect. Know what? Most of those unit tests got deleted when the feature was completed.

If you are trying to build a prototype to test how something works, it is ok to be loose with rules since it’s going to be a proof-of-concept. Once you have that done, you can then design the system, implement it and add tests.

Conclusion

I hope these help you to deliver better software more efficiently and faster. Again, this can be summarized in the following points:

  • Define what the finished product is
  • Break it down into chunks
  • Ruthlessly focus on delivering the high-impact tasks – don’t multitask!
  • Deliver fast, eliminate all distractions.

Do let me know your thoughts.