Chrome dev tools deep dive: Elements


1. Search

CTRL + F allows you to search for strings in the DOM but you can also search using CSS selectors and XPath expressions.
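For example, with the Elements panel focused, press CTRL + F; given a page with a div.header element (a hypothetical example), all three of these queries would find it:

header                      (plain string)
div.header                  (CSS selector)
//div[@class='header']      (XPath)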


2. Color picker

Ever wanted to figure out what colours exist on a web page? Or prefer some other colour format?

Here comes the color picker:

  • Click the color box to pop up the color picker
  • Shift + click the color box to cycle through colour formats (e.g. hex, rgb, hsl)


3. Force Element State

You want to inspect the hover style of some DOM element – you can hover over it but you lose the pseudo-state as soon as you exit the element boundaries. A better way is to force the element into the desired state.

There are two ways:

  • Right-click the element
  • Using the styles panel


4. Drag / Drop elements to new positions

Want to quickly move some element to a new location? Just drag and drop it. This comes in handy for issues relating to z-indices and element ordering.


5. Debug on DOM modifications

This allows you to break whenever a DOM element or its children are modified. Very useful for debugging dynamically-injected DOM elements. There are three possible options:

  • Subtree modifications – DOM hierarchy changes under the chosen node (e.g. child insertions and deletions)
  • Attributes – inline style changes, attribute changes, animations
  • Node removal – when the node is deleted
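For instance, with breakpoints set on a node with id target (a hypothetical element), each statement below would trip the corresponding option:

var el = document.getElementById('target');

el.appendChild(document.createElement('span')); //subtree modification
el.setAttribute('data-state', 'dirty'); //attribute change
el.parentNode.removeChild(el); //node removal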

You can use the DOM breakpoints tab to view all such breakpoints.

When an event triggers the breakpoint, the browser pauses at that point. The top of the call stack is nearly always jQuery; walk up the stack to find the code that triggered the DOM modification.


6. The Styles tab

Allows you to edit existing CSS styles, add new ones and also view the source. CTRL + Click on a particular rule opens its definition in the Sources panel; clicking the stylesheet name beside a style block opens that file too.


7. Computed Styles

This shows the active styles applied to an element. It also comes in handy for figuring out what rules come from the browser’s default stylesheet.

To see all the rules applied to an element, check the ‘show all’ checkbox and the pane expands. It’s also a very good way to learn more CSS.


Next? The console tab – one of my favorites!

Quick estimation tips for engineers


The Millau Viaduct. The Guggenheim Museum Bilbao. The Dubai Palm Islands.

Architectural masterpieces – beautiful art with solid engineering foundations.

Bash. Hadoop. Git.

Great software – beautiful code with solid architectural themes.

Is software development art? Pretty much. Is it engineering? Pretty much as well.

Dams, tunnel-boring machines and turbines are similar to operating systems, high-performance load-balancers and big data systems. Regardless of the end product’s tangibility – good engineering is necessary for delivering great results.

Engineers need to estimate system performance and simulate real-life scenarios. Most engineering fields have rich banks of proven theories and mathematical relations to rely upon. Unfortunately, software engineering – the new kid on the block – has few rigorous rules; most times we rely on heuristics and handed-down wisdom.

The good news is that most of these rules can be successfully tailored to software engineering.

This post examines a few of these rules and shows how to use them.

1. Rule of 72

This is great for estimating exponential growth rate scenarios. It is well known in the financial sector but can be easily applied to algorithmic growth. The rule:

An exponential process with a growth rate of r% per period will roughly double its value in t periods if r × t = 72.

The rule has its roots in logarithms (ln 2 ≈ 0.693, i.e. 69.3). 69 is more accurate; however, it doesn’t have as many factors. 68 and 70, which are just as close, have the same flaw. The closest easy-to-factor number is 72 (factors: 1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36 and 72).

Example

An exponential algorithm with a growth rate of 4% (i.e. it takes 1.04 times as long to run a problem of size n + 1 compared to n) will have its run time doubled when the problem size increases by 18. Why? Because 4 * 18 = 72.

What if the problem size increases by 90?

4 * 90 = 360

360 / 72 = 5

Thus, we can expect the run time to double 5 times, a 32-fold (2 ^ 5) increase in run time.

Test: What increase in problem size would cause a 1000-fold increase in run time? Hint: 1000 ≈ 1024 = 2^10.
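The rule is easy to turn into code; here is a minimal sketch (the names are mine):

//number of doublings for a growth rate (% per unit) and a problem size increase
function doublings(ratePercent, sizeIncrease) {
    return (ratePercent * sizeIncrease) / 72;
}

Math.pow(2, doublings(4, 90)); //32 – the 32-fold increase from the example above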

2. π seconds make a nanocentury

Well, not exactly but close enough for back-of-the-envelope calculations and easy to remember.

π, the ubiquitous constant, is approximately 3.1416, thus π seconds is ~3.1416 seconds. A nanocentury is 10^-9 * 100 years, which is 10^-7 years. There are approximately 3.154 x 10^7 seconds in a year and thus about 3.154 seconds in a nanocentury.

The ~0.4% difference between 3.154 and 3.1416 is safe enough for quick estimates. Thus, π can be used for such quick calculations (with some minor accuracy losses).

Example

Let’s build on rules 1 and 2 with a possible real-world scenario.

An exponential program with a growth rate of 0.72% (per unit increase in problem size) takes 300 seconds to run on a problem of size n; would it be wise to run the program on a problem of size n + 1000?

Using the rule of 72, an increase of 1000 in problem size leads to 10 doublings of the run time.

0.72 * 1000 = 720.

720 / 72 = 10

Doubling the run time 10 times gives a factor of 1024. The larger problem should take about 1024 * 300 seconds ≈ 300,000 seconds.

Let’s invoke the π seconds rule next: 300,000 seconds ≈ 10^5 π seconds, i.e. 10^5 nanocenturies or 10^-2 years – about 3.6 days. If an increase of 1000 in problem size causes a ~4-day wait, you can well imagine what an increase of a million would lead to. Spending 3 days on a better algorithm might be worth it…
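Converting seconds to years with the π rule is a one-liner; a minimal sketch:

//π seconds ≈ 1 nanocentury, i.e. 10^-7 years
function roughYears(seconds) {
    return (seconds / Math.PI) * 1e-7;
}

roughYears(300000); //~0.0095 years, about 3.5 days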

3. Little’s law

Little’s law states that the average number of things in a system L, is equal to the average rate at which things leave (or enter) the system λ, multiplied by the average amount of time things spend in the system, W. i.e. L = λ * W.

Imagine a processing system with a peak capacity of 2000 requests per second. Let’s further assume there is a 5-second processing time for each request. Using Little’s law, the system needs to robustly accommodate 2000 * 5 = 10,000 in-flight requests (you had better start thinking about memory too).

One fix might be to drop all messages that have spent more than 5 seconds in the system, thereby allowing new requests to come in. Otherwise, the system would need extra processing capability or storage under heavy loads.

A better approach would be to over-engineer and design a system with a higher capacity, e.g. one with a 2500 requests per second limit. Thus, 2000 requests per second spikes would push the system to a comfortable 80% of max capacity.

Is over-engineering bad? Doesn’t it go against YAGNI? Well, think of this:  over-engineering is one of the reasons why the 132-year old Brooklyn Bridge is still standing today.

Little’s law is very useful when building web servers, real-time systems and batch processors. Stable complex systems can be built by linking several small systems that obey Little’s law. It has also been applied to Kanban work-in-progress (WIP) limits.

The beauty of Little’s law is in its simplicity and versatility – you can even use it to estimate queue wait times!
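In code, the law and its rearrangements are one-liners; a minimal sketch of the scenario above:

//Little's law: L = λ * W
function avgItemsInSystem(arrivalRatePerSec, avgSecsInSystem) {
    return arrivalRatePerSec * avgSecsInSystem;
}

avgItemsInSystem(2000, 5); //10000 requests in flight

//rearranged to estimate queue wait time: W = L / λ
var avgWaitSecs = 10000 / 2000; //5 seconds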

Note: The Guava RateLimiter mentions the rule, not sure if it implements it though.

4. Casting out 9s

A quick rule of thumb for verifying results of mathematical operations. Let’s take summation as an example.

Given x + y = z; the sum of all the digits of x and y modulo 9 must be equal to the sum of all the digits of z modulo 9. Difficult? Nope, just cast out 9s as you go along.

Let’s take an addition example:

  1242
+ 3489
+ 4731
------
  9462

The sum of digits in the sum is (9 + 4 + 6 + 2) = 21; 21 modulo 9 is 3. The sum of digits in the addends is (1 + 2 + 4 + 2) + (3 + 4 + 8 + 9) + (4 + 7 + 3 + 1) = 9 + 24 + 15 = 48, and 48 modulo 9 is 3. Since both remainders are 3, we can assume the addition is correct.

For faster results, cast out 9s as soon as they are generated. For example, for 9462, cast out the 9 to leave 4 + 6 + 2 = 12, which gives 1 + 2 = 3.

Subtractions can be verified by casting out 9s in the reverse operation; for example, a - b = c turns into a = b + c. The rule works for multiplication and division too.
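A quick sketch in code (for non-negative integers, n % 9 equals the repeated digit sum modulo 9):

function castOutNines(n) {
    return n % 9;
}

function checkSum(addends, sum) {
    var lhs = addends.reduce(function (acc, n) {
        return acc + castOutNines(n);
    }, 0);
    return castOutNines(lhs) === castOutNines(sum);
}

checkSum([1242, 3489, 4731], 9462); //true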

P.S: Teach this to your kids…

More and more scenarios

How many books can a 1.44MB floppy disk hold?

Assume a book has 250 pages, with each page containing 250 words and an average word length of 5 characters. This gives an average of 250 * 250 * 5 = 312,500 characters per book.

A character can be stored in a byte; thus a book could be stored in about 0.31 MB (~0.30 MiB). 4 such books will easily fit on a 1.44MB floppy disk. What about a 2GB memory stick? That can hold about 6,400 books (several lifetimes of reads).

Can I physically move data faster than 100MB/s Ethernet?

How long does it take to transfer 1 terabyte of data via a 100MB/s Ethernet link?

1 terabyte = 1,000,000 megabytes; thus the Ethernet link would take 1,000,000 / 100 = 10,000 seconds for a successful corruption-free transfer, which is about 2.78 hours. If physically moving a 1 terabyte drive to the new location takes 30 minutes, then it makes more sense to do that.

Need to move 10 Terabytes? Maybe a courier would be faster…
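Both scenarios make nice one-liners too – a quick sketch:

var bytesPerBook = 250 * 250 * 5; //312500
Math.floor(1.44e6 / bytesPerBook); //4 books per floppy
Math.floor(2e9 / bytesPerBook); //6400 books per 2GB stick

var transferSecs = 1e6 / 100; //1TB over 100MB/s -> 10000 seconds
transferSecs / 3600; //~2.78 hours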

Conclusion

This post shows the power of back-of-the-envelope calculations and how they enable us to make quick, reasonably accurate estimates.

Related

Casting out nines

Tips for printing from web applications


Web developers usually have to format user content for printing; for example, accountants might want physical copies of online ledgers while teachers might need lecture note printouts.

The challenge lies in getting consistent print output across a range of browsers and their never-ending stream of subtle nuances.

This post approaches printing from three viewpoints: tooling, JavaScript and CSS. It also describes the pitfalls and quirks to watch out for when printing across the big 5 browser platforms (i.e. Chrome, Edge, Firefox, Internet Explorer and Safari).

1. Tooling

Quick question – what does this page look like when printed?

Did you do a CTRL + P to figure that out? Print previews work excellently if you want to ‘see’ what a page looks like when printed. But what if you wanted to interact with the page and change it? Say you wanted to toy with the CSS or even debug JavaScript. Clearly, the preview option falls short; even if it doesn’t, going to preview mode every time you make changes is not fun.

Good news, Chrome provides print emulation tooling which allows you to style a page for printing. To activate the print emulation mode, do the following:

  • Press F12 to bring up the Chrome dev tools
  • Press Esc to bring up the extra tabs
  • Select the Emulation tab
  • Select the media option
  • Check the CSS media check box
  • Select print (or any other media target you desire)


The Chrome emulation mode should reveal the major print issues; however, you should sanity check the other browsers in print preview mode. This should catch the few remaining quirks, things like forgetting -webkit prefixes for Safari.

JavaScript

Media Listeners

The big 5 browsers all have media listener support; this allows detecting media properties such as orientation, resolution and viewport dimensions. For example, the snippet below will be triggered whenever the viewport width crosses the 960px threshold.

var widthHandler = function(mql) {
    if(mql.matches) {
        console.log('Viewport width is <= 960px');
    } else {
        console.log('Viewport width is > 960px');
    }
}

var mql = window.matchMedia('(max-width: 960px)');
mql.addListener(widthHandler);

Chrome and Safari

The interesting aspect is the print media query option; Chrome and Safari can detect print events via these listeners. Unfortunately, IE, FF and Edge, even though they support other media queries, do not offer print media support.

Safari tends to fire the media listener event ‘after’ the print dialog appears. This effectively renders any desired print pre-processing useless – what’s the use if the print dialog is already visible?

Chrome offers the best support, modification of print options (e.g. paper type or layout) will still trigger appropriate print events; this is very useful if you want to restyle your page dynamically based on print options. Alas, only Chrome gives this option.

Here is how you attach handlers for the Chrome/Safari scenario.

var printHandler = function(mql) {
    if(mql.matches) {
        console.log('Print');
    } else {
        console.log('Not print');
    }
};

var mql = window.matchMedia('print');
mql.addListener(printHandler);

When you are all done, remember to clean things up by calling mql.removeListener(printHandler).

IE, FF and Edge

These 3 browsers expose the onbeforeprint and onafterprint events.

window.onbeforeprint = function () {
    console.log('Print started');
};

window.onafterprint = function () {
    console.log('PRINT DONE');
};

You would expect the onafterprint handler to be called immediately after the print dialog is disposed right? Nope, it is nearly always invoked immediately after onbeforeprint. Invocation order: onbeforeprint -> onafterprint -> print dialog opens.

Print events have a mind of their own…

Cross-browser printing

Merging both approaches gives a combination that should work well in most scenarios. See below:

function beforePrint () {
    console.log('before print');
}
function afterPrint() {
    console.log('after print');
}
window.onbeforeprint = beforePrint;
window.onafterprint = afterPrint;

var printHandler = function(mql) {
    if(mql.matches) {
        beforePrint();
    } else {
       afterPrint();
    }
};

var mql = window.matchMedia('print');
mql.addListener(printHandler);

CSS media

CSS media queries provide very powerful print styling capabilities. You should handle most of the styling issues with CSS and wrap up the thornier edge cases with JavaScript.

The print media block allows you to control and override existing DOM styles during printing. This makes it possible to hide certain elements on print, move elements to new positions or even change the entire page’s layout.

@media print {
    body {
        width: 960px !important;
    }
}

Now in print mode, your page will have a width of 960px. Go ahead and play with this.

Note that you might need to add !important to make sure print styling overrides existing styling. For inline styles, !important might not work; however, increasing the specificity of the CSS selectors would eventually work.
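For example, a common print-stylesheet chore is hiding navigation chrome (the selectors below are hypothetical):

@media print {
    nav,
    .sidebar,
    .no-print {
        display: none !important;
    }
}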

Happy printing!

Related

Detecting Print Requests with JavaScript

MDN documentation on CSS Media

How to set up a print style sheet

Understanding and using Streams in JavaScript


Introduction

What do you think of the following code snippet?

NaturalNumbers()
.filter(function (n) { return n % 2 === 0; })
.pick(100)
.sum();

Isn’t it beautifully succinct and neat? It reads just like English! That’s the power of streams.

Streams are just like lists but offer more capabilities because they simultaneously abstract data and computation.

Streams vs Lists/Arrays?

Let’s take a scenario from Mathematics, how would you model the infinite set of natural numbers? A list? An Array? Or a Stream?

Even with infinite storage and time, lists and arrays do not work well enough for this scenario. Why? Assuming the largest possible integer an array can hold is x, then you’ve obviously missed out on x + 1. Lists, although not constrained by initialization, need to have every value defined before insertion.

Don’t get me wrong, lists and arrays are valid for a whole slew of scenarios. However, in this situation, their abstraction model comes up short. And when abstractions do not perfectly match problem models, flaws emerge.

Once again, the constraints of this problem:

  • The size of the problem set might be infinite and is not defined at initialization time (eliminates arrays).
  • Elements of the set might not be defined at insertion time (eliminates lists).

Streams, which combine data and computation, provide a better abstraction for such infinite problem sets. Their ability to model infinite lists stems from lazy evaluation – values are only evaluated when they are needed. This can lead to significant memory and performance boosts.

The set of natural numbers starts from 1 and every subsequent number adds 1 to its predecessor (sounds recursive eh? ). So a stream that stores the current value and keeps adding one to it can model this set.

Note: As might have become obvious: extra data structures might be needed to store previously generated stream values. Streams typically only hold a current value and a generator for calculating the next value.

What is a Stream?

I published stream-js, a very small (4.1kb minified) library that provides stream processing capabilities. Grab it or read the source as the post builds on it.

Oh, do contribute to the repo too!

How do I create a stream?

The Stream constructor expects an initial value and a generator function, these two values form the stream head and tail respectively.

An empty stream has null head and tail values. In infinite streams, the tail generator will endlessly generate successive values.

var emptyStream = new Stream(null, null);

var streamOf1 = new Stream(1, null);

var streamOf2 = new Stream(1, function () {
    return new Stream(2, null);
});

var streamOf3 = Stream.create(1,2,3);

var streamFromArray = Stream.fromArray([1,2,3]);

Note: The fromArray method uses Function.prototype.apply to spread the input array over the Stream.create function above.

Show me the code!

Now that you know how to create Streams, how about a very basic example showing operations on Streams vs Arrays in JS?

With Arrays

var arr = [1,2,3];
var sum = arr.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

With Streams

var s = Stream.create(1,2,3);
var sum = s.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

The power of streams

The power of streams lies in their ability to model infinite sequences with well-defined repetition patterns.

The tail generator will always return a new stream with a head value set to the next value in the sequence and a tail generator that calculates the next value in the progression.

Finite Streams

Stream.create offers an easy way to create streams, but what if this was to be done manually? It’d look like this:

var streamOf3 = new Stream(1, function() {
    return new Stream(2, function() {
        return new Stream(3, function () {
            return new Stream(null, null);
        });
    });
});

Infinite Streams

Infinite Ones

Let’s take a dummy scenario again – generating an infinite series of ones (can be 2s too or even 2352s). How can Streams help? First the head should definitely be 1, so we have:

var ones = new Stream(1, ...);

Next, what should tail do? Since it’s a never-ending sequence of ones, we know that tail should generate functions that look like the one below:

var ones = new Stream(1, function() {
    return new Stream(1, function() {
        ...
    });
});

Have you noticed that the inner Stream definition looks like the Ones function itself? How about having Ones use itself as the tail generator? After all, the head would always be one and the tail would continue the scheme.

var Ones = function () {
    return new Stream(1, /* HEAD */
        Ones /* REST GENERATOR */);
};
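A quick sanity check (a sketch, assuming the head() and tail() accessors stream-js exposes):

var ones = Ones();
ones.head(); //1
ones.tail().head(); //1 – and so on, forever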

Natural Numbers

Let’s take this one step further. If we can generate infinite ones, can’t we generate the set of Natural numbers too? The recurring pattern for natural numbers is that elements are larger than their preceding siblings by just 1.

Let’s define the problem constraints and add checkboxes whenever a stream can be used.

  • Set is infinite ☑
  • Set has a well-defined recurring pattern ☑
  • Definition needs an infinite set of ones ☑


So can streams be used to represent natural numbers? Yes, stream capabilities match the problem requirements. How do we go about it?

The set of natural numbers can be described as the union of the set {1} and the set of all numbers obtained by adding ones to elements of the set of natural numbers. Yeah, that sounds absurd but let’s walk through it.

Starting from {1}, 1 + 1 = 2 and {1} ∪ {2} = {1, 2}. Now, repeating the recursion gives rise to {1, 2} ∪ {2, 3} = {1, 2, 3}. Can you see that this repeats indefinitely? Converting to code:

function NaturalNumbers() {
    return new Stream(1, function () {
        return Stream.add(
            NaturalNumbers(),
            Ones()
        );
    });
}

Execution walkthrough

The first call to NaturalNumbers().head() returns 1. The tail function is given below:

function () {
    return Stream.add(
        NaturalNumbers(),
        Ones()
    );
}
  • NaturalNumbers() is a stream that has a head of 1 and a tail generator that refers to NaturalNumbers itself. Think of the sets {1} and the natural numbers.
  • Ones() is a stream with a head of one and a tail generator of ones.

Once invoked, this will give a new stream with a head of 1 + 1 and a new tail function that will generate the next number.
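Putting it together (a sketch; pick and toArray are described later in this post):

NaturalNumbers().pick(3).toArray(); //[1, 2, 3]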

Building upon natural numbers

Generating the sets of even and odd numbers is a cinch – just filter the set of natural numbers!

var evenNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 0;
});

var oddNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 1;
});

Pretty simple right?

Who needs infinite sets?

Computers are constrained by storage and time limits so it’s not possible to ‘have’ infinite lists in memory. Typically only sections are needed at any time.

stream-js allows you to do just that:

  • Stream.pick: allows you to pick elements of a stream.
  • toArray: converts a stream to an array

A typical workflow with stream-js would involve converting an input array to a stream, processing and then converting back to an array.

For example, here is the array of the first 100 odd numbers; need 1000? Just pick them (pun intended).

var first100odds = oddNaturals.pick(100).toArray();

Note: Stream operations can be chained since most stream operations return new streams (i.e. are closed operations). Here is odo, v0.5.0 of stream-js.  Odo means river in Yoruba, the language of my tribe.

And that’s about it! I hope you enjoyed this, now read how to write a promise/A+ compatible library next.

What is Semantic Versioning (SemVer)?


Software Versioning

Software versioning has always been a problem for software developers, release managers and consumers since time immemorial. For developers, the challenge lies in releasing new breaking changes while simultaneously minimizing consumer upgrade pains. On the flip side; consumers, when they finally decide to upgrade to new-shiny-release-10000, want to be sure they are not buying a one-way-ticket to dejection-land. Add release managers who have to manage both parties to the mix and you’ll get a sense of the chaos.

Versioning is even more of an issue in current times with the rapid viral proliferation of tools, frameworks and libraries. This post explains what Semantic Versioning is, why it is needed and how to use it.

Semantic Versioning (SemVer)

Semantic Versioning was proposed by Tom Preston-Werner (he co-founded GitHub too). The goal was to bring some sanity to the management of rapidly moving software release targets.

SemVer provides a standardized format for conveying versioning information as well as guidelines for usage. Thus, software release information becomes meaningful (cue semantic) – a glance at the number describes what to expect when consuming a new update or new piece of software. No more groping in the dark or taxing deductions…

But why SemVer? Can’t software developers just use any versioning style they like? Sure, you can update your local releases and call them whatever you like e.g. blue orange, green banana or orange coconut. These styles work fine if you live under the sea or don’t care about others using your code. In the real world, software developers collaborate and consumers need to easily know if the blue orange->green banana upgrade is not going to break their entire stack. A standardized format that everyone agrees on and understands is a likely fix and this is what SemVer does.

So SemVer…, a.b.c what?

At least three numbers are required to identify a software version; these are explained below:

major . minor . patch

For example, version 3.10.1 of lodash has a major version of 3, a minor version of 10 and a patch version of 1.

How do I use these numbers?

Each of these three numbers signifies the changes in the released version of software; i.e. before a new set of changes go out, you want to bump up one of these numbers to correctly match and convey information that describes it. The heuristic is thus:

  • New changes that break backwards compatibility require a bump of the major version number. The upcoming Angular release, which is not backwards compatible, is thus tagged 2.0 and not 1.5 or 1.6.
  • New features that do not break backward compatibility but are significant enough (e.g. adding new API methods that didn’t exist before) would bump up the Minor version number.
  • Small (maybe insignificant changes e.g. bug fixes) would bump up the patch number. These have to be backward compatible too.
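The bump logic is mechanical; here is a minimal sketch (the helper name is mine; it ignores pre-release and build metadata):

function bump(version, level) {
    var parts = version.split('.').map(Number); //[major, minor, patch]
    if (level === 'major') {
        return (parts[0] + 1) + '.0.0';
    }
    if (level === 'minor') {
        return parts[0] + '.' + (parts[1] + 1) + '.0';
    }
    return parts[0] + '.' + parts[1] + '.' + (parts[2] + 1);
}

bump('3.10.1', 'minor'); //'3.11.0'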

How do I use the numbers?

Consuming new software

So you run into an awesome software on Github that uses semantic versioning and is at version 2.1.6. The software meets your needs and you want to integrate it right away, the following guide provides a heuristic for asking questions.

 2 – There have been two major releases.

  • What does the community think about the project’s continuity?
  • Is the first version deprecated or still being maintained?
  • How was the backward breaking major release handled? Did the authors get community support and feedback?

 1 – There has only been 1 minor update since the v2 major release.

  • Not too many features have been added, is the project active?
  • Are there plans to add more features or is the software mature?

16 – 16 patches!

  • Are the 16 patches bug fixes or just minor upgrades?
  • Does the project have a quality concern?
  • Is the software stable or does it break?

Upgrading existing software

There has been a new release of library foobaz – the core engine powering your software stack. Assuming the current consumption version is v1.2.3, the below explains likely actions based on new release version numbers:

v2.0.0 – This is a major bump and pre-existing contracts might be broken. A major update nullifies any pre-existing contractual obligations (that’s lawyerese for ye nerdy folks), so don’t jump on the bandwagon until you know the full impact of the changes.

v1.∞.∞ – This should be pretty safe, minor and patch bumps typically do not break backwards compatibility. So go in, update and start using the new features.

NPM and SemVer – the upgrade link

Open up a package.json or bower.json file; you’ll typically see a devDependencies or dependencies section with key-value pairs that look like this:

"dependencies": {
"lodash": "~3.8.0",
"angular": "^1.3.0",
"bootstrap": "^3.2.0"
}

What do the ~ and ^ stand for? They determine what versions you upgrade to when you run npm update. Thus for the sample above, an update will allow newer versions based on the table below:

Character   Example   Allowed upgrades   Notes
^           ^1.3.0    1.*.*              All backward compatible minor/patch updates
~           ~3.8.0    3.8.*              All backward compatible patch updates
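And a rough sketch of what the two operators allow (ignoring pre-release tags and npm’s special handling of ^0.x.y versions; real resolvers like npm’s semver module handle many more cases):

function parse(version) {
    return version.split('.').map(Number); //[major, minor, patch]
}

//is a >= b, comparing [major, minor, patch] lexicographically?
function gte(a, b) {
    for (var i = 0; i < 3; i++) {
        if (a[i] !== b[i]) {
            return a[i] > b[i];
        }
    }
    return true;
}

function satisfies(version, range) {
    var op = range[0];
    var base = parse(range.slice(1));
    var v = parse(version);
    if (op === '^') { //same major, anything newer
        return v[0] === base[0] && gte(v, base);
    }
    if (op === '~') { //same major and minor, anything newer
        return v[0] === base[0] && v[1] === base[1] && gte(v, base);
    }
    return version === range; //exact match otherwise
}

satisfies('1.4.2', '^1.3.0'); //true
satisfies('3.9.0', '~3.8.0'); //false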

Note: This post only explains ~ and ^; there are more characters (e.g. >=, *, etc) and these should be explained in an upcoming post inshaaha Allaah.

3 Extra things to know about SemVer

Beware of 0.*.* 

The SemVer spec doesn’t cover any release below 1.0.0. So if you plan to use some software tagged 0.9.9, just be ready to redo your work at any time (read my EmberJS story). These are libraries in development and terrifying things can happen; for example, the author(s) might never publish a production-ready version or might simply abandon the project.

In short, SemVer does not cover libraries tagged 0.*.*. The first stable version is v1.0.0.

Start at 0.1.0

And this makes sense, doesn’t it? Projects typically start with a feature and not a bug fix. And since you would probably tweak a couple of things before becoming stable, it doesn’t make sense to start out with a major version bump. Unless of course, you are only publishing your closed-source code.

Identifiers for PreReleases And Builds

Valid identifiers are in the set [A-Za-z0-9] and cannot be empty.

Pre-release metadata is identified by appending a hyphen to the end of the SemVer sequence. Thus a pre-release version of some-awesome-software could be tagged v1.0.0-alpha. Note that hyphens are allowed in names for pre-release identifiers (in addition to the set specified above) but names cannot contain leading zeros.

Build metadata is identified by appending a + sign at the end of the patch version number or at the end of the prerelease identifier; leading zeros are allowed. Examples include v1.0.0+0017, v1.0.0-alpha+0018.

Criticism

Jeremy Ashkenas (of underscore and backbone fame) has written a critique of SemVer, arguing that it is no silver bullet and that most big software projects do not use it (e.g. node, jQuery etc). There is also ferver…

Nevertheless, it is worth knowing that you shouldn’t blindly rely on version numbers as implying SemVer compliance (developers are human after all…); read up the project docs and ask the maintainers if you have questions.

Still curious? Read Why JavaScript ‘seems’ to get addition wrong.

Why JavaScript ‘seems’ to get addition wrong


Introduction

JavaScript is a dynamic weakly-typed language so it’s possible to have expressions like this:

var foo = "string" + 22 * 3 - 4;

This post explains how JavaScript evaluates such complex ‘mix-n-matches’ and at the end of this, you should know why foo is NaN.

First, a screenshot showing more funny behaviour:


A brief Maths Refresher

Associativity

The result of a mathematical operation is always the same regardless of the ‘consumption’ order of the operands during the operation. Associativity deals with the operators and is important in resolving situations that involve an operand between two operators. In the examples below, there is always a number between the two mathematical operators; associativity rules remove the ambiguity that might arise in these situations.

Addition and multiplication are associative operations.

(1 + 2) + 3  = 1 + (2 + 3);
(1 * 2) * 3  = 1 * (2 * 3);

Side Note: Mathematical operations on floating point values (IEEE 754) suffer from rounding errors and can give funny results.

Non-associativity

Order matters; this is the opposite of associativity. Operations can be left-associative or right-associative.

5 - 3 - 2 = (5 - 3) - 2; //left associativity
var a = b = 7; // a = (b = 7); //right associativity

Commutativity

The result of the mathematical operation is always the same regardless of the position of the operands. Commutativity, as opposed to associativity, focuses more on the operands – if swapping the positions of the operands does not affect the result, then the operation is commutative. Again, addition and multiplication are commutative (and associative as well) while division and subtraction are not.

1 + 2 = 2 + 1; //commutative

3 * 5 = 5 * 3; //commutative

1 - 2 != 2 - 1; //not commutative

Mathematics and Programming: The Interesting Divide

Operators can be overloaded in Mathematics and programming, and in both cases the input values (i.e. operands) determine the right operation. For example, the multiplication symbol × can signify pure arithmetic multiplication if both values are numbers, a vector cross-product if both inputs are vectors, or even scalar-vector multiplication. Similarly in programming, the + operator is usually overloaded to mean both addition and string concatenation, depending on context and usage.

Overloading has constraints; for example, the expression 1 + “boy” is invalid (and quite absurd) in the mathematics realm; operands have to be members of well-defined sets in order to get meaningful results.

Operators in strongly-typed programming languages, like their Mathematical counterparts, only allow operations on compatible types. Programmers have to explicitly coerce types to expected values if they want to mix and mash.

Weakly-typed languages offer no such restrictions, rather they attempt to automatically deduce the programmer’s intent and coerce values based on some heuristics. As expected, surprises occur when the language’s interpretation differs from the programmer’s intentions.

For example, consider the expression 1 + “2” in a weakly-typed programming language; this is ambiguous since there are two possible interpretations based on the operand types, (int, string) and (int, int):

  • User intends to concatenate two strings, result: “12”
  • User intends to add two numbers, result: 3

The only way out of the conundrum is a consistent set of coercion and evaluation rules – these determine the result.

How JavaScript adds numbers

Steps in the addition algorithm

  • Coerce operands to primitive values

The JavaScript primitives are string, number, null, undefined and boolean (Symbol is coming soon in ES6). Any other value is an object (e.g. arrays, functions and objects). The coercion process for converting objects into primitive values is described thus:

* If a primitive value is returned when object.valueOf() is invoked, then return this value, otherwise continue

* If a primitive value is returned when object.toString() is invoked, then return this value, otherwise continue

* Throw a TypeError

Note: For date values, the order is to invoke toString before valueOf.

  • If any operand value is a string, then do a string concatenation
  • Otherwise, convert both operands to their numeric value and then add these values
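To make the algorithm concrete, here is a rough JavaScript sketch of the steps above (the function names are mine; it ignores the date special case and ES6 symbols):

function isPrimitive(value) {
    return value === null ||
        (typeof value !== 'object' && typeof value !== 'function');
}

function toPrimitive(value) {
    if (isPrimitive(value)) {
        return value;
    }
    var prim = value.valueOf(); //try valueOf first
    if (isPrimitive(prim)) {
        return prim;
    }
    prim = value.toString(); //then toString
    if (isPrimitive(prim)) {
        return prim;
    }
    throw new TypeError('Cannot convert object to primitive value');
}

function looseAdd(a, b) {
    var pa = toPrimitive(a);
    var pb = toPrimitive(b);
    if (typeof pa === 'string' || typeof pb === 'string') {
        return String(pa) + String(pb); //string concatenation
    }
    return Number(pa) + Number(pb); //numeric addition
}

looseAdd(2, {}); //'2[object Object]'
looseAdd(2, []); //'2'
looseAdd(2, true); //3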

The case for the unary + operator

The unary + operator is quite different – it forcibly casts its single operand to a number.


//Cast to number

+"3";

//Convert to string

"" + 3;

The first case uses the unary operator which will convert the string to a number while the second case casts to a string by passing a string as one of the operands to the addition operator.

But what about the – operator?

Subtraction is great because it is not overloaded to signify other operations; when used, the intent is always to subtract the RHS from the LHS. Thus, both operands are converted to numbers and then subtraction is carried out on the numeric values. This is why you can use the - operator to cast values too.

Trying to subtract a string with no numeric value from another is undefined and you’ll always get a NaN; numeric strings, however, convert cleanly.


"3" - "";

//3

//Relying on implicit conversion in the - operator

Examples

The table of coercions

First, a table showing the generated values from coercion operations. This makes it very easy to deduce the result of mix-n-mash expressions.

Primitive value   String value        Numeric value
null              “null”              0
undefined         “undefined”         NaN
true              “true”              1
false             “false”             0
123               “123”               123
[]                “”                  0
{}                “[object Object]”   NaN

Examples – The fun starts

Some examples, try to see if you can explain the results. Believe me, this is a fun fun ride. Enjoy!

1 + 2;

Output: 3
Why?: Addition of two numbers

'1' + 2;

Output: ’12’
Why?: Addition of a number and a string – the number is converted to a string and the two strings are concatenated.

2 - 1;

Output: 1
Why?: Subtraction of two numbers

'2' - 1;

Output: 1
Why?: Subtraction of a number from a string – both operands are converted into numeric values

2 - '1a';

Output: NaN
Why?: Subtraction of a string from a number – conversion of ‘1a’ into a number value gives NaN and any Maths op involving a NaN gives a NaN.

2 + null;

Output: 2
Why?: Addition of a number and the null primitive, numeric value of null primitive is 0 (see table of coercions). 2 + 0 is 2.

2 + undefined;

Output: NaN
Why?: Addition of a number and the undefined primitive – numeric value of undefined primitive is NaN (see table of coercions) and operations involving a NaN give a NaN.

2 + true;

Output: 3
Why?: Addition of a number and the true primitive – numeric value of true primitive is 1 (see table of coercions). 2 + 1 = 3.

2 + false;

Output: 2
Why?: Addition of a number and the false primitive – numeric value of the false primitive is 0 (see table of coercions). 2 + 0 = 2.

Fun with objects

The preceding part covered mostly primitives (with the exception of strings), now on to the big objects; pun intended.

First objects

2 + {};

Output: 2[object Object]
Why?: {}.valueOf() returns {} (which is not a primitive) so {}.toString() is invoked and this returns the string ‘[object Object]’. String concatenation occurs.

{} + 2;

Output: 2
Why?: This one is quite tricky I admit. JavaScript sees the {} as an empty execution block, so technically the above sample is equivalent to + 2 which is 2.

var a = {};
a + 2;

Output: [object Object]2
Why?: The assignment removes the ambiguity – JavaScript knows for sure it is an object literal. The rules of conversion follow as earlier described.

Arrays next!

2 + [];

Output: “2”
Why?: [].valueOf() returns the array (which is not a primitive) hence [].toString() is invoked and this returns the empty string. The operation is now 2 + “” and this results in string concatenation.

[] + 2;

Output: “2”
Why?: Same as above

Associativity and Evaluation

The JavaScript + operator is left-associative; this means operands are evaluated from left to right when the operator occurs more than once in a series. Thus 1 + 2 + 3 in JavaScript (being left-associative) will be evaluated as (1 + 2) + 3 and so on. You can read more here.

Now to the samples again!

1 + 2 + "3";

Output: “33”
Why?: left-associativity ensures this is (1 + 2) + “3”, which goes to 3 + “3”, giving 33

1 + "2" + 3;

Output: “123”
Why?: This will be evaluated as (1 + “2”) + 3, and then “12” + 3

"1" + 2 + 3;

Output: “Left as an exercise ;)”.
Why?: Share your answer in the comments.

Conclusion

This post was actually motivated by Gary Bernhardt’s very popular WAT talk, at this stage I hope you have gained the following:

  • Ability to fearlessly refactor JavaScript code that is lacking parentheses or has no clear operator/operand ordering.
  • A deeper understanding of how JavaScript evaluates expressions and operations on primitives and object types

Do let me know your thoughts in the comments!

Related Posts

How to implement the Y-combinator in JavaScript


This post provides a very simple step-by-step implementation of the Y-combinator in JavaScript. You should be able to implement the Y-combinator in your language of choice after reading this post; as you’ll see – it’s that easy.

What is a combinator?

According to wikipedia,

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

The emphasized text highlights the most interesting part of the definition – combinators allow functions to be defined without variables. Imperative programming relies heavily on variables and trying to eschew variables can be a mind-stretching exercise.

Show me the code!

The following code snippet is a Y-combinator example of the factorial function in JavaScript.

var yCombFact = function (number) {
    return (function (fn) {
        return fn(fn, number);
    })(function(f, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * f(f, (n - 1));
        }
    });
};

yCombFact(5);
//120

Looks abstruse, right? No worries – let’s break it down.

Two major things

There are two major concepts that help drive home the understanding of the Y-combinator:

  1. No variables allowed – this implies we have to find a way to write the factorial function without using variables.
  2. Invoking the no-variable-function defined in 1 without using variables again.

Part 1: Rewriting the factorial function without variables

Here is the factorial function definition. Can you spot the variable usage in it?

var factorial = function(n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

factorial(5);
//120

The expression n * factorial(n - 1) only succeeds because it can find the factorial variable in scope; without it, the factorial function would not be recursive. But remember, we are trying to do away with all variables.

The workaround is to pass in the variable reference as a function parameter. In the factorial example, recursion can then be achieved by using the placeholder parameter as the reference. The no-variable-factorial function looks like the following:

var noVarFactorial = function(fn, n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * fn(fn, (n - 1));
    }
}

noVarFactorial(noVarFactorial, 5);
//120

The new definition works exactly like the old one but without the internal dependency on the factorial variable. Rather, recursion succeeds by relying on the ‘injected’  parameter and the computer is none the wiser.

Part 2: Invoking the no-variable function without variables

We have rewritten the factorial function to avoid variables; however, we still need to store the factorial function in a variable before invoking it:


var factorial = ...;

factorial(factorial, 5);

The trick? Functions to the rescue again! Let’s create a factorialCalculator function that uses the noVarFactorial definition above.

function factorialCalculator(n) {
    //as defined earlier
    var noVarFactorial = ...;
    return noVarFactorial(noVarFactorial, n);
}

factorialCalculator(5);
//120

The noVarFactorial name has to go since we want to avoid variables. And how do we achieve this? Yes, functions once more. So let’s create a wrapper function inside factorialCalculator that invokes noVarFactorial.

function factorialCalculator(n) {
    var wrapper = function (noVarFact) {
        return noVarFact(noVarFact, n);
    }
    return wrapper(noVarFactorial);
}

factorialCalculator(5);
//120

Unfortunately, the wrapper function has created another wrapper variable and this has to be eliminated too. For a complete implementation, the two variables (wrapper and noVarFact) have to go.

It’s now time to leverage language-specific idioms to achieve this. JavaScript has the IIFE idiom which allows you to immediately invoke a function (read about it here). Using it, we can eliminate the need for the wrapper variable thus:


function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(noVarFactorial);
}

factorialCalculator(5);
//120

Combining all the pieces

The last thing is to insert the noVarFact definition so that it is no longer a global variable in the scope. Just as we do in Mathematics, we can just ‘substitute’ the value in. The final piece is then:

function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}

factorialCalculator(5);
//120

And that, my friends, is the Y-combinator in all its glory. I have decided to leave the variable names as they are to make it all clear, but here is the standard format so you know it when you see it:


function factorialCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}
factorialCalculator(5);

Conclusion

The Y-combinator is quite easy to understand – it requires understanding function invocation patterns, variable substitution by parameters and higher-order functions. As an exercise, can you try implementing fibonacci using the Y-combinator approach? Better still, why not create a Y-combinator function that accepts any function matching the fn(fn, n) signature?

Related Posts

SICP Section 3.3 – 3.5: Found a bug in memq


1. Is memq broken?

memq is an in-built list search function; it finds the first occurrence of a key in a list and returns a new list starting from that key.


(memq 3 (list 1 2 3 4))
; '(3 4)

(memq 5 (list 1 2 3 4))
; #f

Now that you know what memq does, let’s look at some weird behaviour:

(define x (list 1 2))
(define a (list 3 4))

; append x to the list
(set! a (append a x))
(memq x a)
; #f -> x is not in a

Building on that foundation leads to the following conundrum

(define x '(1 2 3))

; create a cycle: x's cdr now points back to x
(set-cdr! x x)

; is x in x?
(memq x x)

; never-ending loop

memq tests whether the key exists in the list and if it does, then it returns the list starting from the first occurrence of the key. But what is the first occurrence in a cyclic list? Could this be the bug?

2. Algorithms, algorithms

  • Horner’s algorithm

This is useful for calculating polynomial values at discrete points; for example, given a polynomial function, f(x) = 7x³ + 4x² + 4; what is f(3)? A potential application (and possible interview question too) is to convert string values into numbers – 1234 is the value of x³ + 2x² + 3x + 4 when x is 10.


//assuming the polynomial is represented from lowest power to highest

//i.e. 1234 -> [4, 3, 2, 1]

function horner(poly, base) {
    //Horner's rule: fold from the highest power down,
    //multiplying the accumulator by the base at each step
    var val = 0;
    for(var i = poly.length - 1; i >= 0; i--) {
        val = (val * base) + poly[i];
    }
    return val;
}

horner([4,3,2,1], 10);
//1234

  • Fast exponentiation

Because going twice as fast is more fun than going fast.


function exponent (base, power) {
    var val = 1;
    while(power > 0) {
        val = val * base;
        power = power - 1;
    }
    return val;
}

Now, lets look at fast exponentiation.

function fastExponent(base, power) {
    if(power === 1) {
        return base;
    }

    //handle odd powers
    if((power % 2) === 1) {
       return base * fastExponent(base, (power - 1));
    }

    var part = fastExponent(base, (power / 2));
    return part * part; //i.e. square the half-power result
}

fastExponent(10,3)
//1000

Fast exponentiation grows logarithmically Ο(log N) while the normal one is Ο(N). This same concept can be reapplied to similar scenarios.

3. Streams

Functional programming offers many advantages but one potential downside is performance and needless calculation. For example, while imperative programming offers quick exit constructs (e.g. break, continue), functional programming constructs like filter, map and reduce have no such equivalent – the entire list has to be processed even if only the first few items are needed.

Streams offer an elegant solution to this issue by performing only just-in-time computations. Data is lazily evaluated and this makes it possible to easily (and beautifully) represent infinite lists. Inshaaha Allaah I should explain this concept in an upcoming post. It’s very beautiful and elegant and powerful.

Related Posts on my SICP adventures

  1. SICP Review: Sections 3.1 & 3.2
  2. SICP Section 2.5

The Effective Programmer – 3 tips to maximize impact


Effectiveness, (noun) : the degree to which something is successful in producing a desired result; success.

Over the years, I have tried experiments, read books and watched several talks in a bid to improve my effectiveness. After a series of burnout and recovery cycles, I finally have a 3-pronged approach that seems to serve me well.

1. Learn to estimate well

2. Use the big picture to seek opportunities

3. Continuous Improvement

Let’s discuss these three.

1. Estimation – the bane of software development

Reliable coding estimates accurately forecast when feature work will be done. But when is a feature done? Is it when it is code complete? Test complete? Or deployed? Most developers wrongly equate code complete with test completion or deployment readiness. This explains arbitrary estimates like: “Oh… I’ll be done in 2 hours”; such estimates typically miss the mark by wide margins due to error compounding. Let’s take a simple bug fix scenario at a fictitious software engineering company.

  • Bug is assigned to developer John SuperSmartz
  • John SuperSmartz reads the bug description, sets up his environment and reproduces it
  • He identifies the cause but does some light investigation to find any related bugs (he’s a good engineer)
  • John designs, implements and verifies the fix
  • Gets code review feedback and then checks in

Any of the intermediate steps can take longer than estimated (e.g. code reviews might expose design flaws, check-ins might be blocked by a bad merge, newer bugs might be discovered in step 3, etc.). Without such an explicit breakdown, it becomes difficult to give proper estimates. Don’t you now think the 2-hour estimate is too optimistic?

Personally, I use kanbanFlow (I love their Kanban + pomodoro integration) to decompose work into small achievable 25-minute chunks. For example, I might break down some feature work into 8 pomodoros as follows:

  • Requirements clarification – 1 pomodoro
  • Software design and test scenario planning – 2 pomodoros
  • Coding (+ unit tests) – 3 pomodoros
  • Testing and code reviews – 1 pomodoro
  • Check-in + estimation review – 1 pomodoro

Some of the things I have learnt from using this approach:

  • I grossly underestimate feature work – the good side though is that this planning enables me to improve over time
  • I know when to start looking for help – as soon as a task exceeds its planned estimate, I start considering alternative approaches or seeking the help of a senior technical lead
  • Finally, it enables me to make more accurate forecasts – e.g. I can fix x bugs per week…

2. See the big picture

A man running around in circles covers a lot of distance but has little displacement. In optimal scenarios, distance covered equals displacement while in the worst scenario, it is possible to cover an infinite distance and have a displacement of zero.

Imagine working for several days on a feature and then discovering major design flaws that necessitates a system rewrite; a lot of distance has been covered but there has been little displacement. Working on non-essential low-impact tasks that no one cares about is neither efficient nor effective. Sure they might scratch an itch but always remember that the opportunity cost is quite high; the lost time could have been invested in higher priority tasks with a larger ROI.

Whales periodically surface for air and then get back into the water to do their business; so should engineers periodically verify that priorities align with company goals. It’s possible to get carried away by the deluge of never-ending feature requests and bug fixes; an occasional step back is needed to grasp the whole picture. Here are sample questions to ask:

  • What are the team’s goals?
  • Does your current work align with company goals?
  • Are your skills still current or becoming obsolete?
  • Are there opportunities for improvement?

Personally I try to create 3 to 4 high-impact deliverables at the beginning of each week and then focus on achieving these. Of course, such forecasts rely heavily on productivity estimates.

3. Continuous Improvement

Athletes consistently hold practice sessions even if they don’t want to because it’s essential to staying on top of their game. The same applies to pretty much any human endeavor – a dip in momentum typically leads to some loss in competitive edge. The software engineering field, with its rapidly evolving landscape, is even more demanding – developers have to continuously and relentlessly learn to stay relevant.

Staying relevant requires monitoring industry trends vis-à-vis blogs, conferences and newsletters. There are a ton of resources out there and it’s impossible to follow every single resource, rather it is essential to separate the wheat from the chaff and follow a select high-quality few.

Learning and experimentation with new technologies naturally follows from keeping abreast of developments. A developer might decide to learn more about the software stack his company uses, logic programming or even computer science theory. Even if his interests are totally unrelated to his day-to-day job, independent learning would expose him to new (possibly better) ways of solving problems, broaden his capabilities and might even open up new opportunities. I prefer learning established CS concepts to diving into every new db-data-to-user-moving framework.

Opportunities abound such as learning IDE shortcut keys, terminal commands, automating mundane tasks and so on. Ideally you want to start simple by selecting the area with the highest impact-to-effort ratio and then dedicating a few minutes to it daily. Over time, the benefits start to pay off.

And that’s about it! Do you have other suggestions?

Like this post? Follow me on Twitter here or read some of my top posts.

1. Code is poetry: 5 steps to bulletproof code

2. So you want to become a better programmer

World Class Nigerian Software Engineering: Are we there yet?


Jason of Iroko recently announced mouth-watering offers for developers and this triggered a long discussion about software engineer pay in Nigeria. The discussions on techCabal’s radar got me thinking about software development in Nigeria, do we have enough world-class talent or could we be better?

The software engineering field has spawned a wide variety of roles: product managers specialize in product design and customer interaction, software engineers write code and ensure quality while devOps folks handle deployments and infrastructural issues. Interactions between these fields lead to the delivery of great software. More importantly though, these distinct fields are essential for specialization, which is critical for growth.

At small software firms and startups all over the world; engineers are expected to be responsible for the whole gamut of software development – UI design, development, testing, deployment and customer support. Not to say this is bad; sometimes there is a genuine need for generalists and developers can acquire great skills by working in such places. However, as the popular saying goes, jack of all trades, master of none. I doubt if it is possible to be well-versed in all the numerous fields of software engineering; moreover standard practices like pair programming, build-measure-learn and one-click deploy weren’t created by generalists.

Building software involves a lot more than cranking out code and shipping software artifacts; there is a need for thought leaders advocating best practices and innovating new approaches. And this is why specialization is essential. Now, looking at the Nigerian software scene, can we have world-class experts without specializing and going deep? In recent years, the number of start-ups, incubators and software shops in Nigeria has ballooned and demand for engineers has gone up. However despite the large number of excellent Nigerian software engineers, we still have a disproportionately small number of thought leaders.

Some might argue that there is a dearth of thought leaders and specialization because the Nigerian software industry is young. I disagree, rather I think we have lots of engineers with significant experience shipping high quality software and meeting deadlines. Writing software for the Nigerian market is fraught with challenges and surely there must be some best practices to share, unfortunately, we do not hear about their stories. For example, why isn’t Konga publishing on their tech blog? How about the Terragon tech gurus writing white papers about Adrenaline? Nairaland? Such efforts drive technology adoption, improve the entire field and might even bring in new talent!

Let’s talk about change…

Sadly most computer science graduates do not know how to write code; in contrast, fresh graduates in other places complete non-trivial projects before graduation. This gap puts a significant drain on firms who have to invest heavily in training and mentoring fresh employees until they are proficient.

The educational sector has to be fixed; a catch-them-young scheme aimed at motivating undergraduates and secondary school students should help ensure a good supply of trained developers. The great work being done by Andela and CTI is worthy of mention: they are creating realistic environments and this is a step in the right direction.

Top companies, incubators and the existing thought leaders can accelerate growth by creating conferences, meet-ups and talks. These will provide opportunities to share ideas, drive networking between potential employers/employees and increase collaboration. Furthermore, these create opportunities for upcoming engineers.

Developers need to up their game too – it’s just not enough being the best programmer out there, we need to contribute to community too. Do something – create a tech blog, give a talk at a meet-up or mentor upcoming developers. It also involves being open with ideas, learning on every project and driving good practices at and outside work. I’d expect programmers to be more ambitious and aim to change the world (enough of apps that move data between databases and devices). Seek challenges, learn a lot (e.g. computer science, entrepreneurship, product design, methodologies, testing etc) and then inspire people.

Our ‘unique’ environment involves piracy, intellectual property disregard and tough economic conditions; notwithstanding, I believe we have the potential and can do it. Rome wasn’t built in a day and creating a world-class industry will take time. However, if we try, we might get there faster. Hopefully someday soon we’ll have respected Nigerian thought leaders actively pushing the boundaries of software development. Our very own Brendan Eichs, Joel Spolskys and Bob Martins…

Let’s all help create a thriving Nigerian software industry – one with international acclaim. And then we can aim to get the 6-digit dollar salaries too…

This post first appeared on TechCabal.

Related

1. Opennigeria… the time is now!