Debugging, JavaScript

How to watch variables in JavaScript


We all have to track variables while debugging; generally the easier it is to monitor changes, the faster bugs can be detected and fixed.

Web developer tools expose various methods for tracking changes in variable values. There are a couple of drawbacks (e.g. non-uniform support across platforms) but again, half a loaf is better than none :).

Curious about such methods? Let’s take a walk…

1. Chrome’s Object.observe

First on the list is Chrome’s Object.observe static method. The observe/unobserve functions are static – they can only be called on Object and do not exist on object instances. Let’s take an example:

var obj = {};
Object.observe(obj, function(changes) {
    console.log(changes[0]);
});

obj.val = "Hello";

// {  name: "val",
//    object: { val: "Hello" },
//    type: "add" }

That was pretty cool right? How about observing primitives?

var val = 2;

Object.observe(val, function(changes) {
    console.log(changes[0]);
});

//Console
//TypeError: Object.observe cannot
//observe non-object

Disappointed? Maybe we should check out Firefox’s watch function then…

TIP: Chrome also has experimental Array.observe and Array.unobserve methods. Don’t use in production!

2. Firefox’s Object.prototype.watch method

Firefox’s watch differs from Chrome’s observe because it exists on object instances; thus any object can watch for changes to its own properties. Confused? Let’s see an example:

window.val = 2;
//equal to var val = 2;

window.watch("val",function(id, old, cur) {
 console.log("Changed property: ", id);
 console.log("Original val: ", old);
 console.log("New val: ", cur);
});

val = 4;

// Changed property: val
// Original val: 2
// New val: 4

Advantages? Well, you can now watch primitive values. I also like the fact that object instances can watch their own properties for changes rather than having a global static watcher. But then again, it’s up to you :).

3. Can I haz polyfill?

A few browsers (including IE) do not have support for watch or observe; sometimes you do not have the free choice of selecting a browser, so let’s move on to finding an implementation that can be used as a polyfill and work across browsers.

First, let’s talk about getters and setters in JS; JavaScript lets you define getter and setter functions that are invoked when you try to retrieve a property’s value or set it.

var obj = {};

Object.defineProperty(obj, "name", {
    get : function(){ return this._name;},
    set: function(val){  this._name = val;}
});

obj.name;
//undefined

obj.name = "name";
obj.name;
//name

TIP: Can you figure out the bug in the following snippet?

var obj = {};

Object.defineProperty(obj, "name", {
    get : function(){ return this.name;},
    set: function(val){  this.name = val;}
});

obj.name;
//What happens?

Just as you thought – how about using this concept to fix the issue? Browsers have not always implemented the getter/setter syntax uniformly; you could write a façade to hide the differences but again there is ugliness lurking around under the covers.

But great news! There is an excellent five-year-old polyfill by Eli Grey and it might meet your needs. Here is the gist – let’s dive into its brilliance.

if (!Object.prototype.watch) {
 Object.defineProperty(
   Object.prototype,
   "watch", {
     enumerable: false,
     configurable: true,
     writable: false,
     value: function (prop, handler) {
       var old = this[prop];
       var cur = old;
       var getter = function () {
          return cur;
       };
       var setter = function (val) {
        old = cur;
        cur =
          handler.call(this,prop,old,val);
        return cur;
       };

       // can't watch constants
       if (delete this[prop]) {
        Object.defineProperty(this,prop,{
            get: getter,
            set: setter,
            enumerable: true,
            configurable: true
        });
       }
    }
 });
}

How does it work?

The polyfill relies on JavaScript’s accessor support (surprised that JS has getters and setters? Yes, there are subtle pockets of beauty in the language). The Object.prototype.watch polyfill accepts a property to watch and a handler to call when that property changes.

The polyfill uses a beautiful trick – it redefines the watched property in place as an accessor; this ensures that the handler is always invoked when that value changes.

The trick to understanding the polyfill is the inner Object.defineProperty call. The property to be watched is first deleted to verify it is not a constant, but this deletion also removes it from the object. To counteract this, Eli brilliantly redefines the property on the same object using Object.defineProperty. The getter is a plain getter while the setter is more involved – whenever the watched property is assigned a new value, the setter runs, invokes the handler (passing the property name, the old value and the new value) and stores the handler’s return value as the current value. Neat huh?
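To see the polyfill in action, here is a quick usage sketch (note that whatever the handler returns becomes the stored value, so remember to return the new value):

var account = { balance: 10 };

account.watch("balance", function (prop, oldVal, newVal) {
  console.log("Changed property:", prop, oldVal, "->", newVal);
  return newVal; // the handler's return value becomes the stored value
});

account.balance = 25;
// Changed property: balance 10 -> 25
account.balance;
// 25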

Conclusion

Tip: A good way to debug is to use the debugger statement with an observed object as shown below:

Object.observe(obj,
   function() { debugger; });

This will cause the debugger to kick in whenever obj changes and this helps a lot in tracking down bugs.

1. Eli’s polyfill didn’t fire in a deterministic manner when I tried it on Chrome but it did work.

2. Remember there are going to be performance implications but again this assumes you are in debugging mode :)

And that’s it! Feel free to copy Eli’s implementation and use it in your code.

SICP

SICP Review: Sections 3.1 & 3.2


Here are my thoughts on Sections 3.1 and 3.2 of my SICP journey; the deeper I go in the book, the more I appreciate the effort, style and work the authors put into it. Each section builds on earlier sections and it is amazing how it forces you to see software development from a new angle. Enjoy…

1. Perception and Design

Our perceptions of the real world influence the way we model objects in software development. The example in the book uses two objects – a rational number and a bank account. Modifications of a rational number always generate a new rational number that is different from the original (e.g. adding 1/2 to 3/8).  We do not ‘view’ rational numbers as being capable of changing – i.e. the value of 3/8 is always 3/8.

Conversely, a bank account object has a balance that can ‘change’; withdrawals and deposits change this value even though we still say it’s the same bank account object! There appears to be a double standard, right? If the internal details (i.e. the numerator or denominator) of a rational number change, we end up with a new rational number; however if a bank account balance changes we still have the same bank account object.

Strange, isn’t it? The way we think does influence the way we write code.

2. Environment

The environment (context) is very important in programming languages even though we just sometimes forget about it. Every language has some global environment that determines what expressions, values and parameters mean. If you change the environment, then it becomes highly possible that the expression has a different meaning.

Let’s take an example from human languages: the word ‘wa’ means ‘come’ in Yoruba but can mean ‘and’ in Arabic. The same applies to programming languages – have you ever wondered why and where the + operator (a free variable) is defined? It has to be in the global environment so the programming language knows what it means. Now if you overrode ‘+’ (why you’d do this beats me), then it could mean something else.

3. Execution Order 

Exercise 3.08 was pretty fun; it was a simple ask – write a procedure f such that evaluating (+ (f 0) (f 1)) returns 0 if the arguments to + are evaluated from left to right but returns 1 if they are evaluated from right to left.

While the exercise is trivial and probably just an academic exercise, it shows how assignments can influence program execution. My solution uses a closure to track the number of calls and returns a unary lambda. The lambda returns the value of its argument on its first call and a zero on subsequent calls.

Thus if the execution order is left-to-right, the first call gets 0 (and returns it) and further calls return 0, leading to (+ 0 0); if it is right-to-left, the first call gets 1 (and returns it) and further calls evaluate to 0, giving (+ 0 1).
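Here is a rough JavaScript analog of the closure idea (my actual solution is in Scheme; note that JavaScript itself always evaluates operands left to right, so in JS the result will always be 0):

// f returns its argument on the first call and 0 on every subsequent call,
// so f(0) + f(1) reveals the order in which the operands were evaluated.
var f = (function () {
  var called = false;
  return function (x) {
    if (called) {
      return 0;
    }
    called = true;
    return x;
  };
})();

f(0) + f(1);
// 0 if the left operand was evaluated first, 1 if the right one was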

Miscellaneous

JS and Scheme, the similarities


So I have been reading the awesome SICP book for some time now and the striking thing is the resemblance between JS and Scheme in some areas. Not surprising considering that Scheme was one of the major influences for the JS language. Thus, another reason to learn more languages to become a better programmer.

1. begin and comma operators

Scheme has the begin operator which allows you to evaluate a series of expressions and return the value of the last evaluated expression. This immediately struck me as being related to the JS comma operator which does exactly the same thing!

Code Samples


//JS

var x = (1, 2, 3, 4, 5);

x; //5

;; Scheme

(define a
    (begin 1 2 3 4 5))

a ; 5

One difference though is the fact that I find begin more useful in Scheme than its JS comma counterpart.

2. Type coercion

Due to JavaScript’s loose typing and implicit coercion, values can be truthy or falsy (e.g. undefined, null and 0 are all falsy). Idiomatic JavaScript leverages this coercion and that’s why you see expressions like the one below:


if(value) {
   console.log("Value is truthy!");
}

Scheme behaves in a similar way too – values are coerced albeit with more rigidity than JS.

(if null
   (display "Null coerced to true!")
   3)
; Null coerced to true!

3. Lambdas, first-class functions, closures and maybe lexical Scope

Some say JavaScript helped fuel the widespread adoption of lambdas in mainstream languages. It made people see the value of the hidden gem which hitherto had been languishing in the murky depths of academia.

Scheme and JavaScript both share first-class function support, closures and lambdas. Although there is no lambda keyword in JS, anonymous functions essentially do the exact same thing. The introduction of let in ES6 should bring JS to par with Scheme’s let.

And that’s about it for now.

SICP

SICP Section 2.5


So four months after I started and more than 90 solved exercises, I can say Alhamdulillaah chapter 2 is done! Section 2.5 was one of the most challenging so far; the exercises revolved around building large easily extensible software systems. And here are the thoughts again :)

1. Coercion

The section revealed the importance of coercion in software development by building a maths system for various numeric types (e.g. integers, rational, real and complex numbers). Since Scheme is not an OOP language, procedures were used to encapsulate each numeric type’s functionality and ‘install’ it into the system. A generic apply function then routes calls to the matching number type’s interface.

The challenge was in handling mixed operations (e.g. adding an integer to a complex number or vice versa). Since numeric types are progressively subsets of one another (i.e. an integer is also a complex number but not vice versa), it is possible to build the following hierarchy of types.

integer -> rational -> real -> complex

By leveraging this hierarchy, it becomes possible to do arithmetic operations on a mixed variety of number types. The system just needs to ‘raise’ all the numbers to the topmost type in the hierarchy and then invoke the arithmetic operation.

Let’s take it one bit further; if ‘raise’ is possible, then surely ‘demote’ can be done too! A good application of this is ensuring that the results of ‘raised’ operations are in their simplest terms. For example, the result of adding the mixed number types 1, 1 – i, 2.0 and -1 + i is 3 + 0i. The result can be progressively reduced to the integer 3 as shown below:

3 + 0i -> 3.0 -> 3 / 1 -> 3

And the icing on the cake? There is no need to write a demote function! The earlier raise procedure should demote number types if its hierarchy of types is reversed.

This assumes the system already supports two-way converters at each boundary of the hierarchy (e.g. integer->rational and rational->integer). There is no need for an integer->real converter since integers can be promoted to rational numbers and then to real numbers.
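Here is a minimal JavaScript sketch of the raise idea, assuming a hypothetical tower where each level knows how to promote itself one step up (the names and representations are made up for illustration):

// Each level of the tower knows how to raise itself one step up.
var raiseOneStep = {
  integer:  function (n) { return { type: "rational", num: n.value, den: 1 }; },
  rational: function (r) { return { type: "real", value: r.num / r.den }; },
  real:     function (x) { return { type: "complex", re: x.value, im: 0 }; }
};

// Raise a value up the tower until it reaches the target type.
// Reversing the converters (complex -> real -> rational -> integer)
// gives you 'demote' for free.
function raiseTo(value, targetType) {
  while (value.type !== targetType) {
    value = raiseOneStep[value.type](value);
  }
  return value;
}

raiseTo({ type: "integer", value: 3 }, "complex");
// { type: "complex", re: 3, im: 0 }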

Neat huh?

2. Extending a system

Assume the system above has to support polynomial arithmetic. It turns out it is quite simple to do – why not view polynomials as another layer in the hierarchy of types? This new hierarchy is shown below:

integer -> rational -> real -> complex -> polynomial

An interesting abstraction isn’t it? It works too once all the interfaces are implemented. Designing software is indeed art…

Some extra learning points came from the representation of polynomials: the sparse and dense approaches both had their pros and cons. A better fix was to enable the system to support both representation formats (simple trick: convert one form to the other and everything should be all good :) ).

Another interesting trick was polynomial subtraction; there is really no need to implement subtract if you already have addition implemented. All you have to do is add ‘negated’ versions of each polynomial term. Simple but powerful.
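A tiny JavaScript sketch of the subtraction trick, using a dense coefficient-list representation (an assumption for illustration; the book juggles both sparse and dense forms):

// Dense representation: [a0, a1, a2, ...] means a0 + a1*x + a2*x^2 + ...
function addPoly(p, q) {
  var result = [];
  for (var i = 0; i < Math.max(p.length, q.length); i++) {
    result.push((p[i] || 0) + (q[i] || 0));
  }
  return result;
}

function negatePoly(p) {
  return p.map(function (coeff) { return -coeff; });
}

// Subtraction falls out for free: add the negated polynomial.
function subtractPoly(p, q) {
  return addPoly(p, negatePoly(q));
}

subtractPoly([1, 2, 3], [0, 1, 1]);
// [1, 1, 2] i.e. 1 + x + 2x^2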

On to chapter 3 insha Allaah!

JavaScript, Languages, Miscellaneous, Software Development

How to write a Promise/A+ compatible library


I decided to write Adehun after the series of promise posts. Adehun means ‘Promise’ in Yoruba, the language of my West African tribe. After a lot of rewrites, I can say Alhamdulillaah, Adehun passes all the Promises/A+ compliance tests.

And here is how I did it.

1. Promise States & Validity

An object mapping the various states to integers and an isValidState function to verify that only valid states are accepted.
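Something along these lines (a sketch – Adehun’s actual constants may differ slightly):

// The three promise states mapped to integers
var validStates = {
  PENDING: 0,
  FULFILLED: 1,
  REJECTED: 2
};

// Only the three states above are acceptable
function isValidState(state) {
  return state === validStates.PENDING ||
         state === validStates.FULFILLED ||
         state === validStates.REJECTED;
}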

2. Utils

A couple of helper functions including runAsync (to run functions asynchronously), isObject, isFunction and isPromise helper functions.
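A sketch of what these helpers might look like (assumed implementations; only runAsync’s setTimeout trick is described in the post):

var Utils = {
  // run a function asynchronously (setImmediate would work too)
  runAsync: function (fn) {
    setTimeout(fn, 0);
  },
  isFunction: function (val) {
    return typeof val === "function";
  },
  isObject: function (val) {
    return val !== null && typeof val === "object";
  },
  isPromise: function (val) {
    // Adehun is the promise constructor defined further down
    return val instanceof Adehun;
  }
};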

3. The Transition function – Gatekeeper for State Transition 

This is the gatekeeper function; it ensures that state transitions occur only when all the required conditions are met.

If conditions are met, this function updates the promise’s state and value. It then triggers the process function for further processing.

The process function carries out the right action based on the transition (e.g. pending to fulfilled) and is explained later.


function transition (state, value) {
  if (this.state === state ||
    this.state !== validStates.PENDING ||
    !isValidState(state)) {
      return;
    }

  this.value = value;
  this.state = state;
  this.process();
}

4. The Then function

The then function takes in two optional arguments (the onFulfilled and onRejected handlers) and must return a new promise. Two major requirements:

1. The base promise (the one on which then is called) needs to create a new promise using the passed in handlers; the base also stores an internal reference to this created promise so it can be invoked once the base promise is fulfilled/rejected.

2. If the base promise  is settled (i.e. fulfilled or rejected), then the appropriate handler should be called immediately. Adehun.js handles this scenario by calling process in the then function.

function then (onFulfilled, onRejected) {
 var queuedPromise = new Adehun();
 if (Utils.isFunction(onFulfilled)) {
   queuedPromise.handlers.fulfill =
                        onFulfilled;
 }

 if (Utils.isFunction(onRejected)) {
   queuedPromise.handlers.reject =
                        onRejected;
 }

 this.queue.push(queuedPromise);
 this.process();

 return queuedPromise;
}

5. The Process function – Processing Transitions

This is called after state transitions or when the then function is invoked. Thus it needs to check for pending promises since it might have been invoked from the then function.

Process runs the Promise Resolution procedure on all internally stored promises (i.e. those that were attached to the base promise through the then function) and enforces the following Promise/A+ requirements:

1. Invoking the handlers asynchronously using the Utils.runAsync helper (a thin wrapper around setTimeout (setImmediate will also work)).

2. Creating fallback handlers for the onFulfilled and onRejected handlers if they are missing.

3. Selecting the correct handler function based on the promise state e.g. fulfilled or rejected.

4. Applying the handler to the base promise’s value. The value of this operation is passed to the Resolve function to complete the promise processing cycle.

5. If an error occurs, then the attached promise is immediately rejected.


function process () {
 var that = this,
     fulfillFallBack = function (value) {
       return value;
     },
     rejectFallBack = function (reason) {
       throw reason;
     }; 

 if (this.state === validStates.PENDING) {
   return;
 }

 Utils.runAsync(function () {
   while (that.queue.length) {
     var queuedP = that.queue.shift(),
     handler = null,
     value;

     if(that.state===validStates.FULFILLED){
       handler = queuedP.handlers.fulfill ||
                 fulfillFallBack;
     }
     if(that.state===validStates.REJECTED){
       handler = queuedP.handlers.reject ||
                 rejectFallBack;
     }

     try {
       value = handler(that.value);
     } catch (e) {
       queuedP.reject(e);
       continue;
     }

     Resolve(queuedP, value);
   }
 });
}

6. The Resolve function – Resolving Promises

This is probably the most important part of the promise implementation since it handles promise resolution. It accepts two parameters – the promise and its resolution value.

While there are lots of checks for various possible resolution values, the interesting resolution scenarios are two – those involving a promise being passed in and those involving a thenable (an object or function with a then method).

1. Passing in a Promise value

If the resolution value is another promise, then the promise must adopt this resolution value’s state. Since this resolution value can be pending or settled, the easiest way to do this is to attach a new then handler to the resolution value and handle the original promise therein. Whenever it settles, then the original promise will be resolved or rejected.

2. Passing in a thenable value

The catch here is that the thenable value’s then function must be invoked  only once (a good use for the once wrapper from functional programming). Likewise, if the retrieval of the then function throws an Exception, the promise is to be rejected immediately.

Like before, the then function is invoked with functions that ultimately resolve or reject the promise; the difference here is the called flag, which is set on the first call and turns subsequent calls into no-ops.

function Resolve(promise, x) {
  if (promise === x) {
    var msg = "Promise can't be value";
    promise.reject(new TypeError(msg));
  }
  else if (Utils.isPromise(x)) {
    if (x.state === validStates.PENDING){
      x.then(function (val) {
        Resolve(promise, val);
      }, function (reason) {
        promise.reject(reason);
      });
    } else {
      promise.transition(x.state, x.value);
    }
  }
  else if (Utils.isObject(x) ||
           Utils.isFunction(x)) {
    var called = false,
        thenHandler;

    try {
      thenHandler = x.then;

      if (Utils.isFunction(thenHandler)){
        thenHandler.call(x,
          function (y) {
            if (!called) {
              Resolve(promise, y);
              called = true;
            }
          }, function (r) {
            if (!called) {
              promise.reject(r);
              called = true;
            }
        });
      } else {
        promise.fulfill(x);
        called = true;
      }
    } catch (e) {
      if (!called) {
        promise.reject(e);
        called = true;
      }
    }
  }
  else {
    promise.fulfill(x);
  }
}

7.  The Promise Constructor

And this is the one that puts it all together. The fulfill and reject functions are syntactic sugar; they simply transition the promise into the fulfilled or rejected state.


var Adehun = function (fn) {
 var that = this;

 this.value = null;
 this.state = validStates.PENDING;
 this.queue = [];
 this.handlers = {
   fulfill : null,
   reject : null
 };

 if (fn) {
   fn(function (value) {
     Resolve(that, value);
   }, function (reason) {
     that.reject(reason);
   });
 }
};
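For completeness, here is a sketch of how the pieces might be wired up and what the fulfill and reject helpers could look like – this part is not shown above, so treat it as an assumption rather than Adehun’s exact code:

// Assumed wiring: the transition, then and process functions above
// are attached to the prototype so instances can call them.
Adehun.prototype.transition = transition;
Adehun.prototype.then = then;
Adehun.prototype.process = process;

// fulfill and reject simply transition the promise to the target state.
Adehun.prototype.fulfill = function (value) {
  this.transition(validStates.FULFILLED, value);
};

Adehun.prototype.reject = function (reason) {
  this.transition(validStates.REJECTED, reason);
};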

I hope this helped shed more light into the way promises work.

AdehunJS on Github.

Liked this post? Here are a couple more exciting posts!

1. The Differences between jQuery Deferreds and the Promises/A+ spec

2. Programming Language Type Systems II

3. Three Important JavaScript Concepts

SICP

SICP Sections 2.3 & 2.4: Thoughts and Ideas


1. Top-Down Design

Most of the problems in the SICP book are solved in a top-down way with lower-level details deferred until needed. The focus on high-level details makes for expressive, flexible code since the design is based on well-defined interfaces rather than implementations. Consequently, swapping and improving implementations is a cinch – the dependency on high-level interfaces shields consumers from tight coupling to details.

Picture a calculus system built on integer operations (e.g. differentiation); as expected, these higher-order functions need support for addition/subtraction/division operators. A top-down design will implicitly assume the add/subtract operators work appropriately and focus on the higher-level operators first. These low-level details can even be mocked and the system unit tested to prove the logic is accurate.

The benefit of such a delayed implementation comes when the system needs to be extended to support complex numbers – the higher level operations are still the same, only the low level add/subtract operators need to be changed. Once done, the entire system should work as before.

Design is subjective and this is one approach; there might be better ways for other scenarios but again this can be tried and leveraged.

2. Generics: Data-directed vs Message-Passing Styles

Generics are useful for maintaining and extending existing systems.

The data-directed style maintains a huge table of known procedures and looks up desired operations based on keys. Extending such systems involves adding the new function signatures to the table; this allows the entire system to start using the newly added functions (sounds like the adapter pattern).
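A toy JavaScript illustration of the data-directed style – operations live in a table keyed by operation and type, and a generic apply looks them up (all names here are hypothetical):

// Table of procedures keyed by (operation, type)
var table = {};

function put(op, type, fn) {
  table[op + "/" + type] = fn;
}

function applyGeneric(op, type, arg) {
  var fn = table[op + "/" + type];
  if (!fn) { throw new Error("No " + op + " for type " + type); }
  return fn(arg);
}

// Extending the system simply means adding rows to the table
put("area", "circle", function (c) { return Math.PI * c.radius * c.radius; });
put("area", "square", function (s) { return s.side * s.side; });

applyGeneric("area", "circle", { radius: 2 });
// 12.566...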

The downsides include the extra overhead of a procedures’ table, the need for consumers to know about available operations and the enforcement of contracts in the system. However, benefits such as flexibility, encapsulation and ease of accretion of new features can’t be ignored.

Message passing involves sending messages to objects to trigger the right operations. Sounds familiar? Yes, it does sound like OOP and is one of its major tenets. An object keeps a list of possible operations and invokes the right method based on the message it receives (think of invoking an instance method in an OOP language as sending a ‘message’ to the object e.g. a.b() ).

Languages like Python and JavaScript allow for a clearer example of the message passing metaphor since they allow creating hashtables of functions. With such a hashtable, you can invoke a function by sending a ‘message’ to such functions e.g. a[“b”]().
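And a toy version of the message-passing flavour – the object itself carries its operations and you ‘send’ it a message by keying into its hashtable of functions (again, hypothetical names):

function makeCircle(radius) {
  return {
    area: function () { return Math.PI * radius * radius; },
    perimeter: function () { return 2 * Math.PI * radius; }
  };
}

var circle = makeCircle(2);
circle.area();    // sending the 'area' message: a.b()
circle["area"](); // the same message via the hashtable syntax: a["b"]()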

The question thus comes to mind, how do programming languages handle their primitive operations at a lower level? Do they do message passing? Keep tables?

3. Context

The importance of context and how it affects semantics in all languages (both human and programming).

Using the example from the book: three is equal to one plus two; however ‘three’ is not equal to “one plus two”. The context determines what is meant and humans have long used this to create various literary forms such as puns, sarcasm etc.

The same applies to programming: context determines whether your program will run perfectly or have subtle but difficult-to-find errors. One obvious example is the JavaScript that = this pattern, shown below.
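A quick sketch of that pattern – the meaning of this depends on the calling context, so we capture it in a variable the inner function can safely close over:

function Counter() {
  this.count = 0;
  var that = this; // capture the current context

  setInterval(function () {
    // `this` inside this callback is not the Counter instance,
    // so we use the captured `that` instead
    that.count++;
  }, 1000);
}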

4. Huffman Encoding

Some computer science and maths, eh? The Huffman encoding scheme allows you to use much less space, provided its conditions are met.

5. Understanding the problem deeply before writing any code

Coding should be simple. Unfortunately, a shallow understanding of the problem being solved makes it a difficult task.

Spending a good chunk of time to really understand the programming task at hand has great benefits: it leads to well-thought-out designs, simple models (simple is difficult!) and fewer bugs. It also reduces the chances of providing the wrong solution. It makes for a more fulfilling experience too – what beats the thrill of having zero compile errors on the first attempt?

If you liked this post then do check out a couple more exciting discoveries from my SICP journey:

1. SICP Section 2.2: New ideas and thoughts about programming
2. SICP Section 2.1 : Thoughts
3. SICP Section 1.3 : Thoughts and Concepts

JavaScript, Languages

Manipulating lists with JavaScript Array methods


Lists of all sorts (e.g. grocery, to-do, grades etc.) are a normal facet of our daily lives. Their ubiquity extends to software engineering, as many programming tasks require manipulating lists. This post exposes new ways of doing popular list manipulation tasks in JavaScript and a little more too.

Be warned! JavaScript array methods routinely expose amazing surprises. Other languages generally tend not to have such hidden surprises, but well, don’t you just love the extra excitement from JavaScript? After all, it’s the language we all love to hate ;).

1. Appending arrays

One of the most common tasks is appending an array to the end of another array; Array.prototype.concat works but forces you to create a temporary variable or overwrite one of the arrays.

A clever use of Array.prototype.push allows you to get around both limitations.

var a = [1,2,3];
var b = [4,5,6];
//requires temp value or overwriting
a = a.concat(b);

//using push
a.push.apply(a, b);
//similar variations
[].push.apply(a,b);
Array.prototype.push.apply(a,b);

How does this work? The Array.prototype.push method will append its arguments to the end of the array it’s called on.

a=[];
a.push(1,2,3);
a;
//[1,2,3]

The Function.prototype.apply method invokes a function with a chosen this value and an array (or array-like object) of arguments. Apply thus allows you to invoke a function on any object with whatever parameters you want.

The problem is thus solved by applying push to a and providing b as the arguments to be pushed. The snippet above is equivalent to the call below:

a.push(b1, b2, ..., bn);

This trick can be generalized and used anywhere once the second array can be transformed into an arguments object. The Math.max and Math.min methods, which return the max/min value of a set, provide another example.

Using the trick above, an array’s max value can be found by applying the Math.max method on the array.

var a = [1,2,3,4,5];
var max = Math.max.apply(Math, a);

max;
//5

Note: the first argument to apply sets the this value inside the function; Math.max does not actually rely on this, so passing Math (or even null) works here. Isn’t this neater than iterating over the array to find the max value?

Tip: Apply differs from Call – apply expects an array of arguments while call expects them listed individually; if call were used above, b would be inserted as a single element. Can you figure out how to use Call to achieve the same results?

2. Pushing into objects

Do you know Array.prototype.push works on non-array objects too? Huh? Didn’t I warn about JS Array methods earlier on?

Push is a generic function and provided the object it is applied to meets its requirements it’ll work; otherwise, tell me when you find out.

An example is shown below (anyone that does this in real-life code should be barred from touching code for weeks…).

var obj = {};

[].push.apply(obj, [1]);
Array.prototype.push.apply(obj, [2,3,4]);
obj;
//{0: 1, 1: 2, 2: 3, 3: 4, length: 4}

How did the length property get there?! The ES5 push spec explains this: the 6th step of the push algorithm puts the ‘length’ key and the new element count as a key-value pair onto the object push was called on. Do you now know why it is [].length and not [].length()?

The tale continues, how about splicing some lists?

3. Merging arrays and removing elements at specified locations

The Array.prototype.splice method is probably mostly used for removing elements from JS arrays. Why it’s called splice and not remove is baffling but since it allows you to simultaneously insert elements I guess that name works. You can splice rope strands huh…

You can actually ‘delete’ array entries using the delete operator; however, delete leaves a hole behind – reading the deleted index gives undefined, which is typically not what you want in most scenarios.

SideTip: The delete operator accepts expressions as operands and will evaluate such expression operands! E.g. delete (b=4) works!!

var a = [1,2,3];
delete a[1];
a;
//[1, undefined, 3]
a.length;
//3

Fix? Let’s splice!

var a = [1,2,3];
var removed = a.splice(1,1);
removed;
//[2]
a;
//[1,3]

The splice operation can also be used to insert an array’s elements into another array at a certain index, how? You probably guessed that now – it is the apply trick again.

var a =[1,2,6];
var b = [3,4,5]

a.splice.apply(a, [].concat(2, 0, b));
a;
//[1,2,3,4,5,6]

Confused about the need for [].concat? Apply expects an array of parameters that are then passed through as the function’s arguments. The splice call to merge the arrays needs: 2, the position to insert at; 0, the number of elements to remove; and b’s elements, the values to be merged in.

The concat call translates into [2,0,3,4,5] and this is equivalent to the call below:

a.splice(2,0,3,4,5);

4. Truncating and Emptying arrays

var a = [1,2,3];

//Truncating
a.length = 2;
a;
//[1,2]

//Or filling up arrays with undefined
a.length = 3;
a;
//[1,2,undefined]

//Emptying the array
a.length = 0;
a;
//[]

The ES5 spec explains this: once length is set to a value less than the current length, all elements at indices beyond the new length are deleted.

So why did I write the post?

1. So you know what to do if you ever run into these ‘smart’ pieces of code

2. To expose some of the other side of JS.

Hope you liked it, here are some more fun posts ;)

1. Three Important JavaScript Concepts
2. Understanding Tail Recursion
3. Understanding Partial Application and Currying
