JavaScript has no Else If


Surprised? Read on.

The following code might be very familiar to you; in fact, I write it a lot too.

if (a > b) {
    return 1;
} else if (b > a) {
    return -1;
} else {
    return 0;
}

However, what JavaScript actually parses is shown below:

if (a > b) {
    return 1;
} else {
    if (b > a) {
        return -1;
    } else {
        return 0;
    }
}

Same end result, but the second form is closer to the underlying language semantics.

Why?

JavaScript allows if/else conditionals to omit the wrapping braces – some actually argue this is a neater style due to its terseness.

A contrived example is shown below.

if (a > b)
    return 1;
else
    return -1;

Consequently, the else in the else if line is actually an else that omits its {} wrappers.
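
Making the omitted braces explicit, the first snippet can also be written in a fully braceless style – this is exactly what the parser sees:

if (a > b)
    return 1;
else
    if (b > a)
        return -1;
    else
        return 0;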

So whenever you write else if, you are technically not following the rule of always using braces. Not saying it is a bad thing though…


Efficiently shipping Big Hairy Audacious Software projects


I recently transitioned into a full-stack role – I wanted to stretch myself and step out of my comfort zone. The biggest challenge was quelling the nagging voice in my mind screaming ‘impostor!’.

So I got a couple of Big Hairy Audacious Software (BHAS) pieces dropped on my plate and that experience motivated this post; after all, who doesn’t want to get better at delivering BHAS projects? My style of working is to consciously take notes as I run into issues, ship software pieces or meet milestones. The introspection typically involves 3 questions:

  • What to avoid

While shipping a feature, I implemented some shared parts in the common library. Unfortunately, that caused a lot of dependencies and significantly slowed down my development speed. The lesson? Implement the shared pieces in the same library as the feature and do the refactoring as a follow-up task.

  • What to repeat

Typically hard-earned lessons, for example, having to debug issues in production quickly showed me what good tracing information should look like.

  • What to improve

The middle ground, things that went well but could be improved. This is mostly driven by notes I keep while developing.

So, here are my thoughts so far.

1. Define the end goal

A BHAS project contains a lot of ambiguity in addition to the technical challenges involved! A sample task might be to implement a usage tracking system for ships.

The first question should not be ‘how do I start’ or ‘where do I start from’; rather, I’d re-frame that as ‘what is the desired end product?’ What does a successful launch mean? Having crisp targets and communicating them to all the stakeholders is essential as it helps to set the right expectations.

Furthermore, once the success metric is known, it becomes possible to break down the BHAS into smaller, more manageable chunks with less ambiguity. Once done, order this list and put the tasks with the highest impact at the very top; why? The Pareto principle (the 80/20 rule) implies that a few of these tasks will provide most of the value. So even if you don’t get the lower tasks done, it’s still a win.

2. Take small measured steps

A favorite quote of mine:

How do you eat an elephant? One bite at a time!

Assuming you picked the most important task to work on and are still stumped: start with the simplest, easiest thing you think will help you wrap up that task – it could even be creating a branch or implementing some simple helper function.

This is important because it sets you on a roll and helps you build momentum. The more micro-tasks you complete, the more confident you become and the faster you accelerate towards ‘cruise speed’ (aka flow).

3. One thing at a time

I have tried in the past to multi-task; on such days I would feel like I did a lot of work, yet deeper examination would reveal low productivity. Two theories come to mind:

  1. My brain convinces me that rapidly switching is more efficient and productive.
  2. Same brain convinces me that it is possible to simultaneously do two mentally intensive tasks.

No, humans can’t multi-task; rather our brains do rapid context switching. So if you think you can carry out two very demanding tasks then you either have two brains or are losing ‘brain cycles’ to rapid switches.

Another illusion to watch out for is the one that confuses ‘busy work’ with being ‘productive’; no, you can be busy (answering emails, chatting etc.) without being productive. Now, I try to do one thing at a time with full mindfulness and the results have been more meaningful outcomes.

So if you are working on a task, concentrate all efforts into finishing that task alone. When you are done or if you get blocked, then you can consider jumping on a parallel or orthogonal task.

Staying mindful is also quite difficult as there are lots of distractions – consciously evaluate if you need to be in that meeting, that discussion or reply to that group email.

4. Be persistent

Once you’ve picked a task you should stick to it. Ideally any task you are working on at any time should be the one with the highest projected value returns (think weighted graphs).

Sure there might be difficulties, but instead of skipping around you should dig in and solve the challenges. Ever wondered why you get the urge to check email only when a tricky question comes up? Aha, it’s the brain trying to ‘run’ away. Don’t check that mail – your brain is just seeking a way out. Give it your best and push until you get a breakthrough.

5. Don’t seek perfection – ship the first rough version

No writing is perfect in the first draft; rather you get your thoughts out and then do a number of passes: cleaning up typos, grammatical structure and style in follow up sweeps. The same concept applies to writing code too.

I remember trying to beautify my code and write loads of redundant unit tests while building a particularly tricky interface. I wanted the first version to be perfect. Know what? Most of those unit tests got deleted when the feature was completed.

If you are trying to build a prototype to test how something works, it is ok to be loose with rules since it’s going to be a proof-of-concept. Once you have that done, you can then design the system, implement it and add tests.

Conclusion

I hope these help you to deliver better software faster and more efficiently. Again, this can be summarized in the following points:

  • Define what the finished product is
  • Break it down into chunks
  • Ruthlessly focus on delivering the high-impact tasks – don’t multitask!
  • Deliver fast, eliminate all distractions.

Do let me know your thoughts.

Fighting the impostor syndrome


I bet everyone has had thoughts similar to the following go through their minds at one point or another in their careers:

side A: No, you don’t know it, in fact you don’t know anything…

side B: hmm, I think you are just beating yourself too hard, it’s a new area and you are ramping up fast actually.

side A: Why wasn’t I included in the meeting? Must be because you know nothing! See I was right!!

side B: Well, maybe your input was not needed because you are busy with task xyz

side A: I don’t know… I don’t know… I think I look like a complete newbie. Did I just say something stupid?

side B: Even the smartest people make mistakes and remember they all started somewhere…

The Impostor vs Dunning-Kruger chart

Some chart I saw drawn on one of the whiteboards a long time ago.


The Dunning-Kruger effect argues that amateurs tend to over-estimate their skills while professionals under-estimate their capabilities. On the other hand, the impostor syndrome makes people think that their accomplishments were just by chance and had nothing to do with their efforts or preparation.

In the graph above, the ‘sweet’ spot would be at the top right – where the skills and confidence are at optimum levels.

Confidence, the missing link?

There are several articles about the impostor syndrome and I must say I finally got the chance to ‘really’ experience it.

My proposed expansion to new frontiers has pushed me out of my comfort zone and exposed me to a few humbling experiences. The confidence and familiarity from countless hours shipping code in the front-end domain was gone; that familiar reassurance of knowing you could always dive into the details and find a solution to whatever was thrown at you was missing.

The good news, however, is that good patterns and practices are the same regardless of the domain – another good reason to learn the basics really well. Applications can vary due to environment, framework and language implementations but the core concepts will remain similar: for example, dependency injection, MVC, design patterns, algorithms etc.

Why should I leave my comfort zone?

It sure feels comfortable sticking to what you know best – in fact, this might be recommended in some scenarios. But broadening your scope opens you up to new experiences and improves you all around as a software engineer.

I remember listening to an old career talk about always being the weakest on your team. The ‘director’ talked about finding the strongest team you can find and then joining them and growing through the ranks. Over time, you’ll acquire a lot of skills and eventually become a very strong developer.

In reality, starting again as a ‘newbie’ on a team of experts might be difficult so you need to be really confident; it is easy to become disillusioned and give up. Get some support from a loved one and keep the long-term goal in mind. You’ll eventually grow and learn; moreover you’ll bring in new perspectives, provide insight into other domains and also improve existing practices.

Everyone has these fears and even the experts don’t know it all. The biggest prize, as one of my mentors said, is gaining the confidence that you can dive into a new field, pull through and deliver something of importance inshaaha Allaah.

A simple explanation of 1s complement arithmetic


I remember taking the digital systems course in my second year of university and being exposed to concepts like k-maps, 1s and 2s complement arithmetic. These building blocks served as the foundations for bigger concepts like adders, half-adders, comparators, gates and what not.

It was pretty much fun doing the calculations then, even though I didn’t fully realize why I needed to add a 1 while doing 1s complement arithmetic – until I stumbled upon the excellent explanation in Charles Petzold’s Code, a great book that uses very lucid metaphors for explaining computing concepts. As is usually the case, the best explanations are typically so simple and straightforward that anyone can grasp them.

Even if you already know about 1s and 2s complement arithmetic, still go ahead and read this – you might find something interesting.

Subtraction – the basics

Assuming you need to find the difference between two numbers, e.g. 174 and 41; this is pretty straightforward:

minuend 174
subtrahend 041
difference 133

Aside: minuend and subtrahend are valid names; the operands of the basic mathematical operations are named as follows.

Expression    First number (i.e. 5)    Second number (i.e. 3)
5 + 3         Augend                   Addend
5 – 3         Minuend                  Subtrahend
5 * 3         Multiplier               Multiplicand
5 / 3         Dividend                 Divisor

That was the simple case; how about the scenario where you need to ‘borrow’ from preceding digits?

minuend 135
subtrahend 049
difference 086

Aha, the pesky borrowing! What if there was a way to avoid borrowing? The first thing to think of is a reference value for all 3-digit numbers, i.e. the smallest possible number from which any 3-digit number can be subtracted without borrows. We use 3 digits since we are working with a 3-digit example; were the subtrahend a 5-digit number, we would need the corresponding 5-digit value.

That smallest ‘no-borrows-required’ number is 999. Unsurprisingly, it is the maximum possible value in base ten if you have only 3 digits to use in the hundreds, tens and units positions. Note: in other bases, the value would likewise be the largest 3-digit value, e.g. for base 8, it’ll be 777.

Now, let’s use this maximum value as the minuend

minuend 999
subtrahend 049
difference 950

Since we are using 999 as the reference value, then 49 and 950 are complements of each other; i.e. both add up to give 999. So we can say 49 is the 9s complement of 950 just as 950 is the 9s complement of 49.

Awesome! We can now avoid the annoying borrows!! But knowing this is useless in itself unless we can find some way to leverage this new-found knowledge. Are there math tricks to use? Turns out this is very possible and straightforward.

Math-fu?

Let’s do some more maths tricks (remember all those crafty calculus dy/dx tricks)…

135 – 49 = 135 – 49 + 1000 – 1000

= 135 + 1000 – 49 – 1000

= 135 + 1 + 999 – 49 – 1000

= 135 + 1 + (999 – 49) – 1000

= 136 + 950 – 1000

= 1086 – 1000

= 86

QED!

What’s the use of such a long process?

We just found a very long way to avoid carries while doing subtraction. However, there is no motivation to use this since it is quite inefficient in base 10. So what’s the reason for all this?

It turns out that in computer-land, counting is done in 0s and 1s. The folks there can’t even imagine there are numbers other than 1 and 0. As you’ll see, there are some great advantages to using the 1s complement approach in this area.

Let’s take a simple example, e.g. 11 – 7:

minuend 1011
subtrahend 0111
difference ????

Applying the same trick again (this time the reference minuend will be 1111 instead of 999).

minuend 1111
subtrahend 0111
difference 1000

Do you notice a pattern between the subtrahend (0111) and the difference (1000)? The complements seem to be ‘inverses’ of each other.

The 1s complement of any binary value is just the bitwise inverse of the bits in the original value. Calculating it is simply a matter of flipping each bit’s value – a linear O(n) operation that can be very fast. That’s a BIG WIN.

Continuing the process again with the addition step this time:

Augend (difference from step above) 01000
Addend (of 1) 00001
Addend (of original 11 value) 01011
Sum 10100

Finally, we subtract the next higher power of two, which is 10000 (since our reference value was 1111).

minuend 10100
subtrahend 10000
difference 00100

And there it is – the answer is 4 (00100 in binary)!
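
The whole procedure translates directly into code. Here is a minimal JavaScript sketch (the function name is mine); it assumes a – b is non-negative and that both inputs fit in the given number of bits:

function onesComplementSubtract(a, b, bits) {
    var mask = (1 << bits) - 1;    //e.g. 1111 for 4 bits
    var complement = (~b) & mask;  //flip every bit of the subtrahend
    var sum = a + complement + 1;  //add the 1s complement, then add the 1
    return sum & mask;             //drop the high bit, i.e. subtract 10000
}

onesComplementSubtract(11, 7, 4); //4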

How about negative results? Simple – just do the same process and invert (take the 1s complement of) the answer.

Hope you found this fascinating and enjoyed it. Let me know your thoughts in the comments.

Quick estimation tips for engineers


The Millau Viaduct. The Guggenheim Museum Bilbao. The Dubai Palm Islands.

Architectural masterpieces – beautiful art with solid engineering foundations.

Bash. Hadoop. Git.

Great software – beautiful code with solid architectural themes.

Is software development art? Pretty much. Is it engineering? Pretty much as well.

Dams, tunnel-boring machines and turbines are similar to operating systems, high-performance load-balancers and big data systems. Regardless of the end product’s tangibility – good engineering is necessary for delivering great results.

Engineers need to estimate system performance and simulate real-life scenarios. Most engineering fields have rich banks of proven theories and mathematical relations to rely upon. Unfortunately, software engineering – the new kid on the block – has few rigorous rules; most times we rely on heuristics and handed-down wisdom.

The good news is that most of these rules can be successfully tailored to software engineering.

This post examines a few of these rules and shows how to use them.

1. Rule of 72

This is great for estimating exponential growth rate scenarios. It is well known in the financial sector but can be easily applied to algorithmic growth. The rule:

An exponential process with a growth rate of r% will roughly double its value in time t if r * t = 72.

The rule has its roots in logarithms (ln 2 ≈ 0.693). 69.3 would be more accurate; however, it doesn’t have as many factors. 68 and 70, which are just as close, have the same flaw too. The closest easy-to-factor number is 72 (its factors are 1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36 and 72).

Example

An exponential algorithm with a growth rate of 4% (i.e. it takes 1.04 times as long to run a problem of size n + 1 compared to one of size n) will have its run time doubled when the problem size increases by 18. Why? Because 4 * 18 = 72.

What if the problem size increases by 90?

4 * 90 = 360

360 / 72 = 5

Thus, we can expect the run time to double 5 times, a 32-fold (2 ^ 5) increase in run time.
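
This calculation is easy to sketch in JavaScript (the helper name is mine):

function runTimeFactor(ratePercent, sizeIncrease) {
    var doublings = (ratePercent * sizeIncrease) / 72;
    return Math.pow(2, doublings);
}

runTimeFactor(4, 18); //2 – the run time doubles
runTimeFactor(4, 90); //32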

Test: what problem size increase would cause a 1000-fold increase in run time? Hint: 1000 ~ 1024 (2^10).

2. π seconds make a nanocentury

Well, not exactly but close enough for back-of-the-envelope calculations and easy to remember.

π, the ubiquitous constant, is approximately 3.1416; thus π seconds is ~3.1416 seconds. A nanocentury is 10^-9 * 100 years, i.e. 10^-7 years. There are approximately 3.154 x 10^7 seconds in a year and thus about 3.154 seconds in a nanocentury.

The ~0.4% difference between 3.154 and 3.1416 is safe enough for quick estimates; thus π can be used for such quick calculations (with some minor accuracy losses).

Example

Let’s build on rules 1 and 2 with a possible real-world scenario.

An exponential program with a growth rate of 0.72% takes 300 seconds to run on a problem of size n; would it be wise to run the program on a problem of size n + 1000?

Using the rule of 72, an increase of 1000 in problem size means the run time doubles 10 times:

0.72 * 1000 = 720.

720 / 72 = 10

Doubling the run time 10 times gives a factor of 1024, so the bigger problem should take about 1024 * 300 seconds ~ 300 000 seconds.

Let’s invoke the π seconds rule next: 300 000 seconds ~ 100 000 π seconds, i.e. 100 000 nanocenturies or 10^-2 years – roughly 3.6 days. If an increase of 1000 in problem size causes a ~4-day wait, you can well imagine what an increase of 1 000 000 would lead to. Spending 3 days on a better algorithm might be worth it…

3. Little’s law

Little’s law states that the average number of things in a system L, is equal to the average rate at which things leave (or enter) the system λ, multiplied by the average amount of time things spend in the system, W. i.e. L = λ * W.

Imagine a processing system with a peak capacity of 2000 requests per second. Let’s further assume there is a 5-second processing time for each request. Using Little’s law, the system needs to robustly accommodate 10,000 in-flight requests (you had better start thinking about memory too).
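
Here is the arithmetic as a small JavaScript sketch (the variable names are mine):

var arrivalRate = 2000;  //λ: requests per second
var timeInSystem = 5;    //W: seconds spent per request

//L = λ * W
var requestsInSystem = arrivalRate * timeInSystem;

requestsInSystem; //10000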

One fix might be to drop all messages that have spent more than 5 seconds in the system, thereby allowing new requests to come in. Otherwise, the system would need extra processing capability or storage under heavy loads.

A better approach would be to over-engineer and design a system with a higher capacity, e.g. one with a 2500 requests per second limit. Thus 2000 requests-per-second spikes would push the system to a comfortable 80% of max capacity.

Is over-engineering bad? Doesn’t it go against YAGNI? Well, think of this: over-engineering is one of the reasons why the 132-year-old Brooklyn Bridge is still standing today.

Little’s law is very useful when building web servers, real-time systems and batch processors. Stable complex systems can be built by linking several small systems that obey Little’s law. It has also been applied to Kanban work-in-progress (WIP) limits.

The beauty of Little’s law is in its simplicity and versatility – you can even use it to estimate queue wait times!

Note: The Guava RateLimiter mentions the rule; I am not sure if it implements it though.

4. Casting out 9s

A quick rule of thumb for verifying results of mathematical operations. Let’s take summation as an example.

Given x + y = z, the sum of all the digits in x and y modulo 9 must be equal to the sum of all the digits of z modulo 9. Difficult? Nope, just cast out 9s as you go along.

Let’s take an addition example:

  1242
+ 3489
+ 4731
 _____
  9462

The sum of digits in the sum is (9 + 4 + 6 + 2) = 21, and 21 modulo 9 is 3. The sum of digits in the addends is (1 + 2 + 4 + 2) + (3 + 4 + 8 + 9) + (4 + 7 + 3 + 1) = 9 + 24 + 15 = 48, and 48 modulo 9 is 3. Since both remainders are 3, we can assume the addition is correct.

For faster results, cast out 9s as soon as they are generated. For example, for 9462: cast out the 9, then 4 + 6 + 2 = 12, and 1 + 2 = 3.

Subtractions can be verified by casting out 9s in the reverse operation; for example, a – b = c turns into a = b + c. The rule works for multiplication and division too.
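
Here is a small JavaScript sketch of the addition check (the helper names are mine):

function digitSum(n) {
    var sum = 0;
    while (n > 0) {
        sum += n % 10;
        n = Math.floor(n / 10);
    }
    return sum;
}

function castOutNines(n) {
    return digitSum(n) % 9;
}

function checkAddition(addends, sum) {
    var remainder = addends.reduce(function (acc, n) {
        return (acc + castOutNines(n)) % 9;
    }, 0);
    return remainder === castOutNines(sum);
}

checkAddition([1242, 3489, 4731], 9462); //true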

P.S: Teach this to your kids…

More scenarios

How many books can a 1.44MB floppy disk hold?

Assume a book has 250 pages, with each page containing 250 words and an average word length of 5 characters. This gives an average of 312,500 characters per book.

A character can be stored in a byte; thus a book could be stored in about 0.3125 MB (~0.3 MebiBytes (MiB)). 4 such books will easily fit on a 1.44MB floppy disk. What about a 2GB memory stick? That can hold about 6,400 books (several lifetimes of reads).

Can I physically move data faster than 100MB/s Ethernet?

How long does it take to transfer 1 TeraByte of data via a 100MB/s Ethernet link?

1 TeraByte = 1,000,000 MegaBytes; thus the Ethernet link would take 1,000,000 / 100 = 10,000 seconds for a successful corruption-free transfer, which is about 2.78 hours. If physically moving a 1 TeraByte drive to the new location takes 30 minutes, then it makes more sense to do that.

Need to move 10 Terabytes? Maybe a courier would be faster…

Conclusion

This post aims to show the power of back-of-the-envelope calculations and how they enable quick, reasonably accurate estimates.


How to implement the Y-combinator in JavaScript


This post provides a very simple step-by-step implementation of the Y-combinator in JavaScript. You should be able to implement the Y-combinator in your language of choice after reading this post; as you’ll see – it’s that easy.

What is a combinator?

According to Wikipedia,

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

The emphasized text highlights the most interesting part of the definition – combinators allow functions to be defined without variables. Imperative programming relies heavily on variables and trying to eschew variables can be a mind-stretching exercise.

Show me the code!

The following code snippet is a Y-combinator example of the factorial function in JavaScript.

var yCombFact = function (number) {
    return (function (fn) {
        return fn(fn, number);
    })(function(f, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * f(f, (n - 1));
        }
    });
};

yCombFact(5);
//120

Looks abstruse, right? No worries – let’s break it down.

Two major things

There are two major concepts that help drive home the understanding of the Y-combinator:

  1. No variables allowed – this implies we have to find a way to write the factorial function without using variables.
  2. Invoking the no-variable function defined in 1, again without using variables.

Part 1: Rewriting the factorial function without variables

Here is the factorial function definition. Can you spot the variable usage in it?

var factorial = function(n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

factorial(5);
//120

The expression n * factorial(n - 1) only succeeds because it can find the factorial variable in scope; without it, the factorial function would not be recursive. But remember, we are trying to do away with all variables.

The workaround is to pass in the variable reference as a function parameter. In the factorial example, recursion can then be achieved by using the placeholder parameter as the reference. The no-variable-factorial function looks like the following:

var noVarFactorial = function(fn, n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * fn(fn, (n - 1));
    }
}

noVarFactorial(noVarFactorial, 5);
//120

The new definition works exactly like the old one but without the internal dependency on the factorial variable. Rather, recursion succeeds by relying on the ‘injected’ parameter and the computer is none the wiser.

Part 2: Invoking the no-variable function without variables

We have rewritten the factorial function to avoid variables; however, we still need to store the factorial function in a variable before invoking it:


var factorial = ...;

factorial(factorial, 5);

The trick? Functions to the rescue again! Let’s create a factorialCalculator function that uses the noVarFactorial definition above.

function factorialCalculator(n) {
    //as defined earlier
    var noVarFactorial = ...;
    return noVarFactorial(noVarFactorial, n);
}

factorialCalculator(5);
//120

The noVarFactorial name has to go since we want to avoid variables. And how do we achieve this? Yes, functions once more. So let’s create a wrapper function inside factorialCalculator that invokes noVarFactorial.

function factorialCalculator(n) {
    var wrapper = function (noVarFact) {
        return noVarFact(noVarFact, n);
    }
    return wrapper(noVarFactorial);
}

factorialCalculator(5);
//120

Unfortunately, the wrapper function has created another wrapper variable, and this has to be eliminated too. For a complete implementation, the two variables (wrapper and noVarFact) have to go.

It’s now time to leverage language-specific idioms to achieve this. JavaScript has the IIFE idiom, which allows you to immediately invoke a function (read about it here). Using it, we can eliminate the need for the wrapper variable thus:


function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(noVarFactorial);
}

factorialCalculator(5);
//120

Combining all the pieces

The last thing is to insert the noVarFactorial definition so that it is no longer a variable in scope. Just as we do in mathematics, we can ‘substitute’ the value in. The final piece is then:

function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}

factorialCalculator(5);
//120

And that, my friends, is the Y-combinator in all its glory. I have decided to leave the variable names as they are to make it all clear, but here is the standard format so you know it when you see it:


function factorialCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}
factorialCalculator(5);

Conclusion

The Y-combinator is quite easy to understand – it requires understanding function invocation patterns, variable substitution by parameters and higher-order functions. As an exercise, can you try implementing the Fibonacci function using the Y-combinator approach? Better still, why not create a Y-combinator function that accepts any function matching the fn(fn, n) signature?
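
Here is one possible sketch of that second exercise (the names are mine, and it answers the Fibonacci exercise too):

function yComb(fn) {
    return function (n) {
        return fn(fn, n);
    };
}

var fibonacci = yComb(function (fn, n) {
    return n <= 1 ? n : fn(fn, n - 1) + fn(fn, n - 2);
});

fibonacci(10); //55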


JS and Scheme, the similarities


So I have been reading the awesome SICP book for some time now and the striking thing is the resemblance between JS and Scheme in some areas. Not surprising considering that Scheme was one of the major influences for the JS language. Thus, another reason to learn more languages to become a better programmer.

1. begin and comma operators

Scheme has the begin operator which allows you to evaluate a series of expressions and return the value of the last evaluated expression. This immediately struck me as being related to the JS comma operator which does exactly the same thing!

Code Samples


//JS
var x = (1, 2, 3, 4, 5); //the parentheses are required here

x; //5

//Scheme
(define a
    (begin 1 2 3 4 5))

a ;5

One difference though: I find begin more useful in Scheme than its JS comma counterpart.

2. Type coercion

Due to JavaScript’s loose typing and implicit coercion, values can be truthy or falsy (e.g. undefined, null and 0 are all falsy). Idiomatic JavaScript leverages this coercion and that’s why you see expressions like the one below:


if(value) {

   console.log("Value is truthy!");

}

Scheme behaves in a similar way too – values are coerced albeit with more rigidity than JS.

(if null
   (display "Null coerced to true!")
   3)
; Null coerced to true!

3. Lambdas, first-class functions, closures and (maybe) lexical scope

Some say JavaScript helped fuel the widespread adoption of lambdas in mainstream languages. It made people see the value of the hidden gem which hitherto had been languishing in the murky depths of academia.

Scheme and JavaScript both support first-class functions, closures and lambdas. Although there is no lambda keyword in JS, anonymous functions essentially do the exact same thing. The introduction of let in ES6 should bring JS on par with Scheme’s let.

And that’s about it for now.

How to write a Promise/A+ compatible library


I decided to write Adehun after the series of promise posts. Adehun means ‘Promise’ in Yoruba, the language of my West African tribe. After a lot of rewrites, I can say Alhamdulillaah, Adehun passes all the compliance tests of the promises-tests repo.

And here is how I did it.

1. Promise States & Validity

An object mapping the various states to integers and an isValidState function to verify that only valid states are accepted.

2. Utils

A couple of helper functions including runAsync (to run functions asynchronously), isObject, isFunction and isPromise helper functions.

3. The Transition function – Gatekeeper for State Transition 

A gatekeeper function: it ensures that state transitions occur only when all required conditions are met.

If conditions are met, this function updates the promise’s state and value. It then triggers the process function for further processing.

The process function carries out the right action based on the transition (e.g. pending to fulfilled) and is explained later.


function transition (state, value) {
  if (this.state === state ||
    this.state !== validStates.PENDING ||
    !isValidState(state)) {
      return;
    }

  this.value = value;
  this.state = state;
  this.process();
}

4. The Then function

The then function takes in two optional arguments (the onFulfilled and onRejected handlers) and must return a new promise. Two major requirements:

1. The base promise (the one on which then is called) needs to create a new promise using the passed in handlers; the base also stores an internal reference to this created promise so it can be invoked once the base promise is fulfilled/rejected.

2. If the base promise is settled (i.e. fulfilled or rejected), then the appropriate handler should be called immediately. Adehun.js handles this scenario by calling process in the then function.

function then (onFulfilled, onRejected) {
 var queuedPromise = new Adehun();
 if (Utils.isFunction(onFulfilled)) {
   queuedPromise.handlers.fulfill =
                        onFulfilled;
 }

 if (Utils.isFunction(onRejected)) {
   queuedPromise.handlers.reject =
                        onRejected;
 }

 this.queue.push(queuedPromise);
 this.process();

 return queuedPromise;
}

5. The Process function – Processing Transitions

This is called after state transitions or when the then function is invoked. Thus it needs to check for pending promises since it might have been invoked from the then function.

Process runs the Promise Resolution procedure on all internally stored promises (i.e. those that were attached to the base promise through the then function) and enforces the following Promise/A+ requirements:

1. Invoking the handlers asynchronously using the Utils.runAsync helper (a thin wrapper around setTimeout (setImmediate will also work)).

2. Creating fallback handlers for the onFulfilled and onRejected handlers if they are missing.

3. Selecting the correct handler function based on the promise state e.g. fulfilled or rejected.

4. Applying the handler to the base promise’s value. The result of this operation is passed to the Resolve function to complete the promise processing cycle.

5. If an error occurs, then the attached promise is immediately rejected.


function process () {
 var that = this,
     fulfillFallBack = function (value) {
       return value;
     },
     rejectFallBack = function (reason) {
       throw reason;
     }; 

 if (this.state === validStates.PENDING) {
   return;
 }

 Utils.runAsync(function () {
   while (that.queue.length) {
     var queuedP = that.queue.shift(),
     handler = null,
     value;

     if(that.state===validStates.FULFILLED){
       handler = queuedP.handlers.fulfill ||
                 fulfillFallBack;
     }
     if(that.state===validStates.REJECTED){
       handler = queuedP.handlers.reject ||
                 rejectFallBack;
     }

     try {
       value = handler(that.value);
     } catch (e) {
       queuedP.reject(e);
       continue;
     }

   Resolve(queuedP, value);
  }
 });
}

6. The Resolve function – Resolving Promises

This is probably the most important part of the promise implementation since it handles promise resolution. It accepts two parameters – the promise and its resolution value.

While there are lots of checks for various possible resolution values, the interesting resolution scenarios are two – those involving a promise being passed in and those involving a thenable (an object with a then function).

1. Passing in a Promise value

If the resolution value is another promise, then the promise must adopt this resolution value’s state. Since this resolution value can be pending or settled, the easiest way to do this is to attach a new then handler to the resolution value and handle the original promise therein. Whenever it settles, then the original promise will be resolved or rejected.

2. Passing in a thenable value

The catch here is that the thenable value’s then function must be invoked only once (a good use for the once wrapper from functional programming). Likewise, if the retrieval of the then function throws an Exception, the promise is to be rejected immediately.

As before, the then function is invoked with functions that ultimately resolve or reject the promise; the difference here is the called flag, which is set on the first call and turns subsequent calls into no-ops.

function Resolve(promise, x) {
  if (promise === x) {
    var msg = "Promise can't be value";
    promise.reject(new TypeError(msg));
  }
  else if (Utils.isPromise(x)) {
    if (x.state === validStates.PENDING){
      x.then(function (val) {
        Resolve(promise, val);
      }, function (reason) {
        promise.reject(reason);
      });
    } else {
      promise.transition(x.state, x.value);
    }
  }
  else if (Utils.isObject(x) ||
           Utils.isFunction(x)) {
    var called = false,
        thenHandler;

    try {
      thenHandler = x.then;

      if (Utils.isFunction(thenHandler)){
        thenHandler.call(x,
          function (y) {
            if (!called) {
              Resolve(promise, y);
              called = true;
            }
          }, function (r) {
            if (!called) {
              promise.reject(r);
              called = true;
            }
       });
     } else {
       promise.fulfill(x);
       called = true;
     }
   } catch (e) {
     if (!called) {
       promise.reject(e);
       called = true;
     }
   }
 }
 else {
   promise.fulfill(x);
 }
}

7. The Promise Constructor

And this is the one that puts it all together. The fulfill and reject functions used in the snippets above are syntactic sugar that transition the promise into the fulfilled or rejected state.


var Adehun = function (fn) {
 var that = this;

 this.value = null;
 this.state = validStates.PENDING;
 this.queue = [];
 this.handlers = {
   fulfill : null,
   reject : null
 };

 if (fn) {
   fn(function (value) {
     Resolve(that, value);
   }, function (reason) {
     that.reject(reason);
   });
 }
};
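
Here is a quick hypothetical usage example; it assumes then, fulfill and reject are attached to Adehun’s prototype (that wiring is not shown above):

var promise = new Adehun(function (resolve, reject) {
  setTimeout(function () {
    resolve(42);
  }, 100);
});

promise.then(function (value) {
  console.log(value); //42
});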

I hope this helped shed more light on the way promises work.

AdehunJS on Github.


SICP Section 2.2: New ideas and thoughts about programming


Here are 5 points to think about, hopefully they’ll trigger an ‘aha’ moment.

1. Leveraging the ‘Closure’ concept in programming

I am not talking about the ‘closure’ concept from programming (the one that involves free variables); this refers to the ‘closure’ concept from mathematics and the power it brings to programming languages.

Mathematically, a set is closed under an operation if carrying out that operation on elements of the set always gives a result that is also in the set. For example, the set of positive integers is closed under addition but not under subtraction (1 – 3 = -2, which is not positive). Thus you can infinitely add positive numbers without worrying about ending up with non-positive numbers.

Let’s take this to programming: closed operations allow chaining because the result will always be a valid value for further operations. Does this ring a bell? Think fluent programming, think jQuery chains. This simple concept allows very complex actions by leveraging simple data and procedures.

2. Simple it might seem, difficult it might be

I initially thought Ex 2.18 would be dead easy; it was simple: reverse a Scheme list. I realized my folly after spending the better part of an hour battling the task. Unlike pointer-based lists in C-like languages, the Scheme list is a chain of pairs: each pair points to another pair and so on. This ruled out walking down the list and reversing pointer directions.

Recursion was tricky since getting the rest of a list pair (i.e. (cdr list)) brought back the entire sublist instead of just one element! The reversal task involved the following restrictions:

  • No variables allowed
  • Only the top element of a sublist can be retrieved while walking down the list
  • The procedure should recursively construct a new list simultaneously
  • Retrieval of elements at arbitrary positions is difficult; although possible, the absence of variables makes this useless.

My first approach worked but created the wrong data structure; Alhamdulillaah I eventually got the right solution. An elegant solution I found online solved the reversal problem simply: reversing a list is equivalent to appending the first element of the list to the reversal of the remaining elements.
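
The same idea translates into a quick JavaScript sketch (using arrays instead of Scheme pairs):

function reverse(list) {
    if (list.length === 0) {
        return [];
    }
    //reverse of the rest, with the first element appended at the end
    return reverse(list.slice(1)).concat([list[0]]);
}

reverse([1, 2, 3]); //[3, 2, 1]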

3. Languages influence thinking and problem modelling

Languages play a big role in the way we think and write code; most of us see programming tasks through an ‘imperative’ pair of glasses. Alhamdulillah, the SICP book is giving me a new pair of glasses. Don’t we all need a new pair of glasses every now and then?

Imagine a simple task: calculate the sum of the squares of the odd numbers in an array. The snippets below show both the imperative and functional styles of solving the problem.

function isOdd(num) {
    return (num % 2) === 1;
}

function square(num) {
    return num * num;
}

//imperative style
function sumOddSq(arr) {
  var sumOddSq = 0;
  for(var i=0, l=arr.length; i < l; i++) {
      if(isOdd(arr[i])) {
          sumOddSq += square(arr[i]);
      }
  }
  return sumOddSq;
}

//functional style
function sumOddSq2(arr) {
  return arr.filter(isOdd)
            .map(square)
            .reduce(function (acc, val) {
                return acc + val;
            }, 0);
}

sumOddSq([1,2,3]);   //10
sumOddSq2([1,2,3]);  //10

The imperative approach involves walking through the array, checking for odd numbers, squaring those odd numbers and updating the sum as you go along. Using a functional programming approach, the task involves filtering the sequence for odd numbers, mapping the filtered numbers to their squares and then reducing these squares to a sum.

Functional programming relies heavily on the map, reduce and filter concepts and most problems can be solved based on these building blocks. For tasks involving non-list-like structures, converters can be used to create sequences and also generate structures from list results.

Did you notice? sumOddSq2 chains array operations; the map and filter operations can be said to be ‘closed’ over the set of arrays. Thus the result of either operation is always an array and can be reused immediately (reduce, in contrast, collapses the array into a single value).

4. Software engineering is still engineering

Even though software engineers manipulate bits and not ‘real’ hardware, general engineering concepts still apply. There are a couple of interesting similarities; for example, in electronics engineering, the diode can be viewed as an if statement while hardware filters can be viewed as ‘filter’ logic. The electronics engineer uses a filter circuit, the software engineer uses a filter function. Both combine such parts to build even more complex abstractions.

Another example is the computer system and the various dependent layers; a simplified complexity order is shown below:

Transistors -> Logic gates  -> Circuits -> Processors -> Computers -> Distributed Systems

The same applies to software engineering, an application can be seen as:

Values -> Methods -> Classes -> Models -> Subsystems -> Modules -> Applications

We all start from something simple and progressively add complexity, each layer being shielded from the layer below it. A computer engineer can swap processors without breaking a computer; so should a software engineer be able to swap out layers in his code without bringing down the application.

5. Recursion

The 8 queens problem is one that a lot of people are familiar with and, as you guessed, there was an exercise on it. The solution was quite simple: recursively build the board starting out with one queen; as the board grows, filter out positions that have queens attacking each other; finally, return the list of all positions that have safe queens.

The functional-programming-styled solution involved enumerating all positions, creating a safe? function to determine valid boards, passing this function to filter and then outputting the results. It is great to see how functions can be composed and how the basic functional programming support for map, reduce and filter enables even higher levels of complexity.

I must admit I struggled a lot with this problem as it was difficult to understand and debug – walking up the call stack did expose some of its intricacies. It’s amazing how simple, and yet how powerful, the results of combining these concepts are.

Conclusion

And that’s about it. Next is section 2.3 with about 20 questions; I skipped about 2 or 3 questions in 2.2 since I didn’t really feel they were worth solving. I hope I can get 2.3 done by mid January insha Allaah and create another post then!


The Differences between jQuery Deferreds and the Promises/A+ spec


A lot of developers use jQuery Deferreds to achieve promise-like behaviour. Since deferreds work for most scenarios, many do not know that jQuery deferreds are not compliant with the Promises/A+ spec. Surprised? Well, there are probably other promise implementations that fall short of the spec too.

The schism is minor and might not really matter if promise libraries are not mixed. However, it is definitely good to know the difference – you never know when it’ll come in handy. So what’s the big issue and how does it affect developers?

The first difference is in the implementation of then: according to the Promises/A+ spec, then MUST return a promise, regardless of what happens in its onFulfilled and onRejected handlers. Thus explicit reject calls, exceptions and syntax errors will all lead to a rejected promise. jQuery deferreds have a different view of the world altogether – unhandled errors will bubble up until they are caught or reach window.onerror.

Let’s examine both scenarios; first, the promise:

//dummy resolved promise
var p1 = Promise.resolve();

var p2 = p1.then(function() {
    throw new Error("Exception!");
});

console.log(p2);
//Promise {[[PromiseStatus]]: "rejected",
//[[PromiseValue]]: Error: Exception!}

And now, jQuery deferreds

var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
    throw Error('Exception!');
});

foo.resolve();
//Uncaught -> Error: Exception!

bar.state();
//pending

Another minor difference is the then function’s arity: the Promises/A+ specification says then should be dyadic while the Deferred’s then function is triadic. The jQuery implementation probably goes all the way back to the first promise proposal.

//Promise/A+
jsPromise.then(onresolve, onreject);

//jQuery Deferreds
deferred.then(onresolve,
              onreject,
              onprogress);

Why should I care?
Assume you want to try a potentially disruptive operation when a promise resolves. If you’re using a Promises/A+ compliant library, all you have to do is check for rejected promises – the resolution state and value will contain the information about success or failure. This is simple (there is no need to explicitly handle errors) and consistent (you use the asynchronous promise style all through).

Deferreds will require you to explicitly handle all failures (e.g. by using try-catch blocks). This leads to a weird mixture of asynchronous (promise-style) and synchronous (error-handling) programming styles. Moreover, if the error is unhandled, you can bid goodbye to all queued-up chained operations.

I am not going to say which approach is better – that’s your decision.

Yes! Workarounds

There are two workarounds: converting deferreds to promises, or ensuring the deferred’s then handler returns a promise.

The promise API will handle ‘thenables’ (objects with a then function) as promises, so mixing different promise implementations is OK. Deferreds can be converted to promises (most libraries expose methods to do this).
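
For example, a native ES6 Promise will adopt a deferred since deferreds are thenables (a sketch, assuming a native Promise implementation is available):

var deferred = new jQuery.Deferred();

//Promise.resolve adopts the deferred as a thenable
var promise = Promise.resolve(deferred);

promise.then(function (value) {
  console.log(value); //'done'
});

deferred.resolve('done');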

Ensuring deferreds always return promises involves wrapping error-throwing operations in try/catch handlers and rejecting the returned promise when exceptions occur.

Let’s see a code example below:

var log = console.log.bind(console);
var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
 var tmp = new jQuery.Deferred();
 try {
    throw Error('Exception!');
 }
 catch (e) {
    tmp.reject();
 }
 return tmp.promise();
});
bar.fail(function (val) {
 log('Exception thrown and handled');
});
foo.resolve();

In case you are wondering, the jQuery promise derives from the jQuery deferred and has the same issues, while fail is syntactic sugar for handling promise failures. The promises-tests repo can be used to evaluate implementations for Promises/A+ compliance :).

tl;dr?

jQuery deferreds are not Promises/A+ compliant.