The Differences between jQuery Deferreds and the Promises/A+ spec


A lot of developers use jQuery Deferreds to achieve promise-like behaviour. Since deferreds work for most scenarios, many do not realise that jQuery deferreds are not compliant with the Promises/A+ spec. Surprised? Well, there are probably other promise implementations that fall short of the spec too.

The schism is minor and might not really matter if promise libraries are not mixed. However, it is definitely good to know the difference – you never know when it’ll come in handy. So what’s the big issue and how does it affect developers?

The first difference is in the implementation of then: according to the Promises/A+ spec, then MUST return a promise regardless of what happens in its onFulfilled and onRejected handlers. Thus explicit reject calls and exceptions thrown inside the handlers will all lead to a rejected promise. jQuery deferreds have a different view of the world altogether – unhandled exceptions will bubble up until they are caught or reach window.onerror.

Let's examine both scenarios, starting with the native promise:

//dummy resolved promise
var p1 = Promise.resolve();

var p2 = p1.then(function() {
    throw new Error("Exception!");
});

console.log(p2);
//Promise {[[PromiseStatus]]: "rejected",
//[[PromiseValue]]: Error: Exception!}

And now, the jQuery deferred:

var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
    throw Error('Exception!');
});

foo.resolve();
//Uncaught -> Error: Exception!

bar.state();
//pending

Another minor difference is the then function's arity: the Promises/A+ specification says then should be dyadic while the deferred's then function is triadic. The jQuery implementation probably goes all the way back to the first promise proposal.

//Promise/A+
jsPromise.then(onresolve, onreject);

//jQuery Deferreds
deferred.then(onresolve,
              onreject,
              onprogress);

Why should I care?
Assume you want to try a potentially disruptive operation when a promise resolves. If you're using a Promises/A+ compliant library, all you have to do is check for rejected promises – the resolution value will carry the information about success or failure. This is simple, since there is no need to explicitly handle errors, and consistent, since you use the asynchronous promise style all through.

Deferreds will require you to explicitly handle all failures (e.g. by using try-catch blocks). This leads to a weird mixture of the asynchronous (promise-style) and synchronous (error-handling) programming styles. Moreover, if the error is unhandled, you can say goodbye to all queued-up chained operations.

I am not going to say which approach is better – that’s your decision.

Yes! Workarounds

There are two workarounds: converting deferreds to promises or ensuring the deferred's then handler always returns a promise.

The Promise API treats 'thenables' (objects with a then function) as promises, so mixing different promise implementations is OK. Deferreds can be converted to promises (most libraries expose methods to do this).
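
For example, here is a minimal sketch (assuming a browser with native Promise support; the variable names are just illustrative) that assimilates a jQuery deferred into a native promise chain via Promise.resolve:

//a jQuery deferred that will resolve later
var dfd = new jQuery.Deferred();

//Promise.resolve treats the thenable returned by .promise() as a promise
var p = Promise.resolve(dfd.promise());

p.then(function (value) {
    console.log('resolved with', value);
});

dfd.resolve(42);
//resolved with 42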

The second workaround – ensuring deferred handlers always return promises – involves wrapping error-throwing operations in try/catch blocks and rejecting the returned promise when an exception occurs.

Let's see a code example:

var log = console.log.bind(console);
var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
    var tmp = new jQuery.Deferred();
    try {
        throw Error('Exception!');
    }
    catch (e) {
        tmp.reject();
    }
    return tmp.promise();
});
bar.fail(function (val) {
    log('Exception thrown and handled');
});
foo.resolve();

In case you are wondering, the jQuery promise derives from the jQuery deferred and has the same issues, while fail is syntactic sugar for handling promise failures. The promises-tests repo can be used to evaluate implementations for Promises/A+ compliance :).

tl;dr?

jQuery deferreds are not Promises/A+ compliant.


A peek into JavaScript’s Array.prototype.map and jQuery.map


The map function comes from JavaScript's functional programming roots: it applies a function to every array element and returns a new array of the results without mutating the original array. So let's look at the native JS and jQuery map implementations.

Array.prototype.map

The signature for the native JS implementation is:

array.map(callback, thisObject)

The callback is the transforming function that turns elements of array into new elements, while thisObject becomes this inside the callback (some cool applications exist). Most browsers support Array.prototype.map (support was added in JavaScript 1.6), however a few older browsers still do not.

jQuery’s map

The signature for the jQuery implementation is:

$.map(array, callback, arg)

The array and callback parameters mean the same as above, while the arg parameter allows you to pass extra arguments into the function (I still haven't found a use for this yet). Unfortunately, this inside the callback refers to the global object (window); if you need to get around that, you can wrap the callback in a $.proxy call.
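
For instance, here is a small sketch that preserves this inside the callback with $.proxy (the counter object and its names are made up for illustration):

var counter = {
    total: 0,
    add: function (number) {
        this.total += number; //'this' is the counter thanks to $.proxy
    }
};

//without $.proxy, 'this' inside add would be the global object
$.map([1, 2, 3], $.proxy(counter.add, counter));

console.log(counter.total); //logs 6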

What’s the Callback?

The callback is triadic in the native implementation; the three parameters are the array element, the element's index and the entire array (why the whole array is needed still puzzles me). jQuery's callback is dyadic – it only receives the element and its index.

jQuery callback example

var numbers = [1,2,3,4],
    squareNumbers = function (number) {
        return number * number;
    },
    squares = $.map(numbers, squareNumbers);

console.log(squares);//logs [1,4,9,16]

Array.prototype.map callback example

var numbers = [1,2,3,4],
    squares = numbers.map(squareNumbers);

console.log(squares);//logs [1,4,9,16]

Using element indices

If element indices matter to you, take into consideration the subtle differences between the two implementations.

jQuery example

jQuery's map method always returns a flattened array which does not contain null/undefined values.

var numbers = [1,2,3,4],
    getAllEvenIndices = function(number, indexInArray){
        if(indexInArray % 2 === 0) return number;
    },
    evenIndexedNumbers = $.map(numbers, getAllEvenIndices);

console.log(evenIndexedNumbers); //logs [1,3]

Native JS map example

The native implementation does not filter out undefined values.

var numbers = [1,2,3,4],
    evenIndexedNumbers = numbers.map(getAllEvenIndices);

console.log(evenIndexedNumbers);
//logs [1, undefined, 3, undefined]

Can I use Objects?

Surprisingly, yes! You can call jQuery.map on an object. For example, if you have a JSON payload coming in from the server and you want an array of its values or keys, it is simple to do:

var payload = { id : 1, username : "xyz", points : 10},
    retrieveKeys = function (value, key) {
        return key;
    },
    payloadKeys = $.map(payload, retrieveKeys);

console.log(payloadKeys);//logs ["id", "username", "points"]

I don’t know of any simple way to do this using the native JS Array.prototype.map (I don’t know if it is even possible, maybe with some JS kung-fu. :) ).

So why use jQuery's map if the language supports it natively? Here are a couple of reasons: the jQuery version strips out both undefined and null values and will work fine in all browsers (IE7 and IE8 do not have native support). But you can always write a polyfill too…
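
A rough polyfill sketch for older browsers could look like this (a simplified version; the real ES5 shim does more argument checking):

if (!Array.prototype.map) {
    Array.prototype.map = function (callback, thisObject) {
        var result = [];
        for (var i = 0; i < this.length; i++) {
            //call the transforming function with (element, index, array)
            result[i] = callback.call(thisObject, this[i], i, this);
        }
        return result;
    };
}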

Here’s a puzzler: what would the snippet below return?

var result = $.map([[1,2], [3,6]], function(elem) {
    return elem;
});

Did you enjoy this post? Check out my other posts on JS, JS Events and JS functional Programming.

EmberJS vs Backbone


Although I have never tried out the Backbone framework, I reviewed it some time ago when choosing a JS framework to use. I wrote this last year, so if anything has changed please let me know.

EmberJS

Strengths

  • Allows developers to control the entire page at runtime and not just small sections.
  • Two way data binding and computed properties.
  • Auto-updating templates.
  • Reduces the amount of boilerplate code developers have to write.
  • Well-designed framework.
  • Better suited to really complex applications.
  • Good documentation and really strongly-knit community.

Weaknesses

  • Rigid conventions.
  • Small community.
  • Lacks a data persistence layer although the sophisticated data access library Ember.data (still in development) looks very promising.
  • Quite large ~ 37Kb.

Backbone

Strengths

  • Fast, small and compact – one simple file.
  • Small impact on architecture and/or file layout.
  • Can be embedded in small sections of a webpage.
  • Persistence layer synchronization support over REST.
  • Easier to learn for people who already know jQuery.
  • Strong community and very popular.

Weaknesses

  • Issues with memory management sometimes occur – e.g. zombie views.
  • Users have to write more boilerplate code.
  • Doesn’t scale easily – complexity grows somewhat linearly.


EmberJS: The Rant


So I started on EmberJS some time last year, after spending an inordinate amount of time trying to design a prototype with people located all across the world. After several dreary, demanding iterations and lots of work, we finally agreed on an implementation.

One of the dev team members suggested using EmberJS or Backbone. Based on his review, Backbone was the easier choice: it had more support, good documentation, lots of tutorials, books and thousands of StackOverflow questions (a good measure of tool/framework/concept popularity).

My own review confirmed these but I stubbornly stuck with EmberJS – I don’t know why, maybe I like challenges or maybe I just wanted to be different. Furthermore, my review revealed a few advantages of using EmberJS over Backbone so I dived into it.

I soon came to regret my decision; after a couple of tutorials, I seemed to be getting nowhere closer to understanding the framework. The problem was the volatile nature of EmberJS: most tutorials would only work if you used the 'matching' framework release. The Ember team was probably working themselves to death to get it stable; however, for me, the framework was challenging, frustrating and annoying to use.

Alhamdulilah, I finally got it to work – my first attempt felt ugly and inelegant (add whatever else you like) but it did work and that was good enough for me. I soon fell in love with all the extra goodies and automatic help it provided; my colleague was soon inundated by my fawning – he most probably got tired of hearing me say 'EmberJS is coooooooool'.

Organizing the application proved to be another challenge; I had a huge, ugly proliferation of files – controllers, templates, views etc. RequireJS, an AMD loader, provided a solution but not before nearly driving me crazy: I inadvertently swapped some parameters in a module specification and had to spend the better part of an hour trying to understand why the login object was a registration object. (Imagine trying to find out why a dog is a cat.)

So after weeks of development and finally getting the app nicely running, I got another shocker. There was a new release of EmberJS and guess what? It was not backward compatible! To the credit of the team though, the new release was awesome, had excellent documentation and was much easier to use, but all I saw was that I'd need to rework everything. All my sweat, effort and tears were going to go down the drain just like that! No way!! I stuck with the old way and the old API – it was much easier for me, and upgrades aren't compulsory.

I have learnt at least two things: one, if you use code that is in rapid flux then you are O.Y.O (ON your OWN) and two, stop testing the waters with your feet, just jump in! If you want to try something new, go get it done!

Back to the story; we did a system redesign again (yes, for the umpteenth time). I believe we are in good shape to do a great job now insha Allaah and yes we are upgrading to the latest EmberJS release – we have some veterans now.

Enough said, I have to get back to work :)

Asynchronous Module Definition (AMD)


AMD (no, not the chip maker) stands for Asynchronous Module Definition – a cool new way of loading scripts. AMD attempts to solve some of the limitations of the orthodox approach, where script files are ordered by their dependencies (e.g. files depending on jQuery have to be specified after the jQuery source file is included).

Browsers, however, load script files synchronously (see image below) and a project having a large number of scripts will take more time to load; clearly, this approach will not scale with project size. Furthermore, subtle dependencies in pre-ordered scripts might cause weird bugs and writing unit tests will be difficult. To add insult to injury, manually adding script tags to files is simultaneously boring and error-prone.

Although it is possible to load JavaScript through AJAX and call eval on the returned script, this approach poses a couple of problems: it is difficult to debug such applications as new code is injected into files, there are performance concerns (from calling eval) and restrictions exist on cross-domain AJAX calls.

The other obvious alternative would be to have a single script file containing all the source code for the project. While this might be a good idea for production, you'll agree with me that having a single 200k-line file in development is no good.

AMD to the Rescue!

Most languages have built-in code organization constructs – such as Java's famous import and package. Unfortunately, JavaScript has no such thing and developers have been tormented by the perils of messy, disorganized code (bad code shows no mercy and respects no authority when it is on its turf). AMD emerged as a response to this – albeit after a lot of trials, mistakes and hard work by devs.

AMD allows developers to define modules and their dependencies so that everything can be asynchronously loaded and resolved. The benefits of this approach include improved performance and load times, decoupling of concerns, easier testing and the restoration of sanity to huge web projects.

AMD also allows you to bypass global variables, since module dependencies are passed as parameters to the factory function. By wrapping a definition in an IIFE (Immediately Invoked Function Expression, pronounced 'iffy'), it is possible to avoid polluting the global scope entirely. Refactoring has never been easier: all that needs to be changed is the offending file.
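
As an illustrative sketch (assuming a loader like RequireJS; the 'cart' and 'jquery' module ids are just placeholders and depend on your loader configuration):

//cart.js - defines a module that depends on jQuery
define(['jquery'], function ($) {
    var items = [];
    return {
        add: function (item) { items.push(item); },
        count: function () { return items.length; }
    };
});

//main.js - loads the module asynchronously and uses it
require(['cart'], function (cart) {
    cart.add('book');
    console.log(cart.count()); //logs 1
});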

And all this is not limited to JavaScript files alone! Developers can leverage AMD plugins to load other types of resources too, e.g. text files, CSS files etc.

AMD Visualized. Source: Wikipedia

How It Works

The AMD style requires you to define each module discretely (quite similar to the Java style of standalone class files). While this might be overkill for small projects, some form of code organization is a MUST for medium-to-large projects to avoid getting stuck in a tarpit of code.

I am currently using an MVC framework for front-end development coupled with RequireJS (a popular AMD loader). This means that each controller, template, view and route is defined in a separate file; the benefits of this are obvious – navigating through code is easier and ripple effects are minimized.

However, there is a performance tradeoff: loading 20+ files instead of one is terribly inefficient; in my case, it takes about 1-3s for everything to load (which is almost sacrilege in web dev). The good news is that there are optimizers that compress and obfuscate the code and give you that single file.

AMD loaders

Here are a couple of AMD loaders: RequireJS, curl.js, almond and inject.

What is the future of AMD?

AMD might be overkill for small projects but as web applications continue to evolve, it should become a mainstay – the upcoming ECMAScript 6 (i.e. Harmony) will support modules. I believe it is pretty cool and it is worth learning if you are a web developer.

Disclaimer

No processors were hurt in the production of this piece (well, mine was subjected to some work though). :D

Design Patterns: PubSub Explained


I actually wanted to write about PubSub alone – it's a fascinating design pattern to me – but then the thought occurred: why not write a design patterns series? It'll be good knowledge for me and, hopefully, good information for you. So here goes the first: PubSub.

Introduction

The pattern works using a middleman: an agent that bridges publishers and subscribers. Publishers are objects that fire events when they finish some processing, and subscribers are objects that wish to be notified when a publisher is done – i.e. they are interested in the work of publishers.

A good example is that of a radio station where people tune in to their favourite programs. The publisher has no knowledge of the subscribers or what programs they are listening to; he only needs to publish his program. Subscribers too have no way of knowing what goes on during program production; when their favourite program is running, they can respond by tuning in or informing a friend.

PubSub achieves very loose coupling: instead of looking for ways to link up two discrete systems directly, you can have one hand off messages and the other consume them.

Advantages

  • Loose coupling

Publishers do not need to know about the number of subscribers, what topics a subscriber is listening to or how subscribers work; they can work independently and this allows you to develop both separately without worrying about ripple effects, state or implementation.

  • Scalability

PubSub allows systems to scale: publishers and subscribers can be added or removed independently of one another (although, as noted below, very heavy message traffic can still cause problems).

  • Cleaner Design

To make the best use of PubSub, you have to think deeply about how the various components will interact and this usually leads to a clean design because of the emphasis on decoupling and looseness.

  • Flexibility

You don't need to worry about how the various parts will fit together; just make sure each one honours one contract or the other, i.e. publisher or subscriber.

  • Easy Testing

You can easily figure out if a publisher or subscriber is getting the wrong messages.

Disadvantages

PubSub’s greatest strength – decoupling – is also its biggest disadvantage.

  • The middleman might not notify the system of message delivery status; so there is no way to know of failed or successful deliveries. Tighter coupling is needed to guarantee this.
  • Publishers have no knowledge of the status of the subscriber and vice versa. How can you be sure everything is alright on the other end? You never can say…
  • As the number of subscribers and publishers increases, the growing volume of messages being exchanged can lead to instability in this architecture; it buckles under load.
  • Intruders (malicious publishers) can invade the system and breach it; this can lead to bad messages being published and subscribers having access to messages that they shouldn’t normally receive.
  • Updating relationships between subscribers and publishers can be a thorny issue – after all, they don't know about each other.
  • The need for a middleman/broker, message specification and participant rules adds some more complexity to the system.

Conclusion

There are no silver bullets but this is one excellent way of designing loosely coupled systems. The same concepts drive RSS, Atom and PubSubHubbub.

PubSub Example (JavaScript)
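
Here is a minimal sketch of the pattern (the pubsub object and its subscribe/publish methods are illustrative, not taken from any particular library):

var pubsub = (function () {
    var topics = {}; //topic name -> array of subscriber callbacks

    return {
        subscribe: function (topic, callback) {
            (topics[topic] = topics[topic] || []).push(callback);
        },
        publish: function (topic, data) {
            (topics[topic] || []).forEach(function (callback) {
                callback(data);
            });
        }
    };
})();

//a subscriber tunes in to the 'news' topic
pubsub.subscribe('news', function (story) {
    console.log('received: ' + story);
});

//the publisher fires an event without knowing who is listening
pubsub.publish('news', 'PubSub is cool');
//received: PubSub is cool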

Events in JavaScript


JavaScript events are created in response to user actions such as clicks, mouse moves or key presses; not all events are triggered by user actions though – some fire automatically, such as the page load event. JavaScript's event model allows developers to write event handlers which respond to these events and provide the interactivity we have come to love.

Most JavaScript code is written around events; when an event is fired, your handling code is triggered and a response is created for that action. For example, when a user clicks a send button, an AJAX call is initiated and the page is updated with new data. Or when you like this blog post, event handling is also involved. :)

Event models were created while the browser wars were raging – hence it's not surprising that both parties chose incompatible models; the W3C model came along later. Due to these inconsistencies, it can be quite difficult to get event handlers to work across all browsers; the good news is that a lot of libraries (e.g. the ubiquitous jQuery) handle the gory details for you.

Whose Event is it?

Suppose an element and one of its parents have event handlers for the same event (say a click or hover event), which one should fire first? The parent? The child?

As you'd expect, the Microsoft and Netscape models differ: the Microsoft approach is called event bubbling, i.e. events start from the child and propagate up to the parent. The Netscape approach is called event capturing: the parent's handler is triggered first and then the event is passed on down to the child.

Assuming a click event happens on the click area below:

<div>
    <span>Click area </span>
</div>

Capture mode: The div’s click handlers will be triggered first followed by the span’s handlers.

Bubble mode: The span’s click handlers will be triggered first and then the div’s handlers will fire.

The W3C specification supports both approaches: events are first ‘captured’ progressively until the target element is reached; next, the events bubble up from the target element. I wonder why this two-stage approach is preferred; sounds like more work to me.

Catch that event!

The addEventListener(type, listener, useCapture) method is the way to go about it: you specify the event type, the handler function and whether to use capture or bubble mode as the three parameters. Passing false as the third argument selects bubbling mode while passing true selects capture mode.

You can also stop events from bubbling or propagating beyond a certain element. Just call event.stopPropagation()  (W3C browsers) or set event.cancelBubble = true (for IE < 9) in the event handler.
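
A quick sketch of both ideas (assuming a W3C-compliant browser and that the div/span markup above is the only such pair on the page):

var div = document.querySelector('div');
var span = document.querySelector('span');

//third argument false -> handle the event in the bubbling phase
span.addEventListener('click', function (event) {
    console.log('span clicked');
    event.stopPropagation(); //the div's handler will never fire
}, false);

div.addEventListener('click', function () {
    console.log('div clicked');
}, false);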

Whose event art thou?

To access the original source of the event, i.e. the element on which the event was triggered, you use event.target (Netscape-y browsers) or event.srcElement (IE-y browsers). What if it's being captured or bubbling? It doesn't matter – this reference always points to the culprit.

However, say your event is bubbling up or capturing down ( I just invented that ), and you want to check the current element, you can use the event’s currentTarget property. This has a reference to the HTML element that the event is being handled by, for example a parent element. Unfortunately, this is not supported in the Microsoft event model.

To Capture or to Bubble?

Capturing has been shown to have a slight performance advantage over bubbling while bubbling can be used to cut the number of event handlers you need. For example, you can put a handler on a container that will capture all events on child elements when they bubble up. I like this approach because it saves you from having to add new listeners whenever a new child is added to the container.
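
For example, a single delegated handler on a list container could look like this (the markup and the 'menu' id are made up for illustration):

//<ul id="menu"><li>Home</li><li>About</li></ul>
var menu = document.getElementById('menu');

//one listener handles clicks for every current and future <li>
menu.addEventListener('click', function (event) {
    var target = event.target || event.srcElement;
    if (target.nodeName === 'LI') {
        console.log('clicked: ' + target.textContent);
    }
}, false);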

Excellent Event Model Demo

You most probably won’t need to write your event handling code these days; but knowing how things work is always good, not so?

Do you like this post? Check out my other posts on JavaScript, its functional programming parts and a book review.

The language series: JavaScript


I was pretty amazed to see a JavaScript library for Arduino last year; it's common knowledge that the language powers countless web sites, mobile applications and even Windows 8 apps, but Arduino? Mind-blowing. The ever-growing need for powerful web experiences propels the adoption and development of this remarkable language.

JavaScript was influenced by C (syntax), Java (naming conventions), Scheme (functional programming and lambdas) and Self (prototypical inheritance). The rushed development and deployment of the language explains some of its flaws; however, it exposes a powerful and beautiful core once you know it well enough to fully leverage prototypical inheritance, functional programming and its unique development idioms.

My JavaScript Story

I started learning JavaScript in 2011 after copy-pasting a gazillion snippets off the internet (yes, I was guilty of that). My first read was jqfundamentals and then Eloquent JavaScript; both are excellent books by the way. However, I learnt nearly all the JavaScript I now build on during a 12-week internship; it was an awesome but sometimes grueling experience – I had to read Douglas Crockford's book, dig into jQuery and follow first-rate dev methods and practices.

The bad parts

  • What is this? (Pun intended). It can refer to the window object or the object instance it was called on. Be careful with this. :D
  • The + operator can mean string concatenation or numeric addition depending on its operands, while – always coerces to numbers. See below for examples.
  • The == and != operators carry out implicit type coercion, you have to use the === and !== operators which check for type and value instead.
  • The evil eval has led to security issues; a few good uses exist, but you should still avoid it.
  • The parser automatically inserts semi-colons after statements, which can lead to weird bugs. Make sure you add semi-colons where they should be.
  • Global variables: forgot to declare a variable properly? No problem, JavaScript stuffs it in the global object. Good luck finding these.
  • Inconsistencies in browser support.

The Good parts

  • First-class functions and lambdas; whoot!
  • Prototypical inheritance; a powerful object model as objects are not limited to being instances of a single class (see the sketch after this list).
  • Loose typing boosts expressiveness and ease of use.
  • Cross-platform and widely-used; in fact you’re using JavaScript now.
  • The good subset of the language is really beautiful and powerful.
  • Closures are pretty cool.
  • Lots of support, there seems to be a never-ending stream of new fancy libraries, frameworks and utilities.
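
As a small sketch of what prototypical inheritance looks like in practice (assuming an ES5 environment with Object.create; the animal/cat objects are made up for illustration):

var animal = {
    describe: function () {
        return this.name + ' says ' + this.sound;
    }
};

//cat is linked directly to the animal object - no class involved
var cat = Object.create(animal);
cat.name = 'cat';
cat.sound = 'meow';

console.log(cat.describe()); //logs "cat says meow"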

Weird JavaScript

Now some of these are truly incredible; if you don’t believe me, fire up the console in your browser and type them in!

  • '5' + 3 = '53' while '5' - 3 = 2.
  • “” == 0 and 0 == “0” evaluate to true but “0” == “” is false, shouldn’t equality be transitive?
  • 0.1 + 0.2 !== 0.3; in my browser, the result is 0.30000000000000004; however I really don't trust floating point operations anyway as computers have issues with them. The best thing is to always use some epsilon value as a buffer range.
  • typeof([]) === typeof({}) === typeof(null) === “object”.
  • [] + {} = “[object Object]” but {} + [] = 0!
  • NaN !== NaN

How Most People learn JavaScript

A lot of people get into JavaScript by copying useful code chunks on the internet and plunking them into their codebases (I still haven’t met anyone who explicitly chose to learn JavaScript). Afterwards, they need more control and expressive power so they decide to integrate and leverage a library (most probably jQuery). Ultimately, most people find themselves learning more about their preferred library.

Next, they go ahead to learn the language itself by reading a couple of books about it and attending conferences. The JS enlightenment comes in handy in understanding the quirky behaviour of the language and makes it easier to use big frameworks and libraries (e.g. EmberJS, Backbone, etc.). In the long run, motivated programmers eventually start writing their own plugins and frameworks.

Tip

Great tools like JSHint and JSLint help spot potential pitfalls and save you from tearing out all your hair in frustration, agony and annoyance. They also reduce the chance that there’ll be some irate developer in the future looking for ways to hurt you or cursing you every single day.

Rating

8.1/10

An easy language to use – most programmers actually started out by copying code snippets without fully understanding the language and slowly grew to learn it. It runs on the client side and the server side, and there are wrappers for most devices. JavaScript is a dynamic, prototypical, loosely typed functional programming language with some 'swagger' at its core.

Did you enjoy this post? Check out my earlier posts on C, Python, Java and PHP.

learning jQuery


I stumbled upon Rebecca Murphey's jqfundamentals and found it to be a gentle introduction to jQuery. Well, I use jQuery a lot – well, let's just say I copy prefabricated solutions – so I felt it would do no harm to learn how to write jQuery myself. So far it's been lovely: Rebecca's piece is great and I must confess I'm impressed by JavaScript's capabilities.

There are exercises at the end of some of the chapters in the book; I tried out some and got exposed to the raw power of jQuery. One of the tasks involved removing the label for a search input from the DOM and setting that label’s text as the search input’s value. It also involved clearing the search input’s text whenever it was in focus and resetting the text if the user didn’t type in anything.

Here’s my code…

$(document).ready(function () {
    var $search = $('#search'); //get the search form
    var $input = $search.find('input.input_text'); //the search input
    var label = $search.find('label').remove().text(); //remove the label, keeping its text

    $input
        .val(label)
        .addClass('hint')
        .bind('focus', function () {
            $(this).removeClass('hint').val('');
        })
        .bind('blur', function () {
            //if the user didn't enter any text, reset the hint text
            if (!$.trim($(this).val())) {
                $(this).val(label).addClass('hint');
            }
        });
});

Well, I know this isn’t much but it’s a start – I’m still a novice.

If you're interested in learning, do check out her site: jqfundamentals.com. If you'd like to learn about software engineering, try watching the CS**** videos on Stanford's YouTube channel – they've got lots of interesting stuff that'll boost your knowledge.

Next week, I’ll write about what I learnt again insha Allah ( God willing ) :D