How to implement the Y-combinator in JavaScript


This post provides a very simple step-by-step implementation of the Y-combinator in JavaScript. You should be able to implement the Y-combinator in your language of choice after reading this post; as you’ll see – it’s that easy.

What is a combinator?

According to Wikipedia,

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

The key part of the definition is that combinators allow functions to be defined without variables. Imperative programming relies heavily on variables, and trying to eschew variables can be a mind-stretching exercise.
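For example, here is the classic compose combinator in JavaScript (my own illustration, not from the Wikipedia article) – notice that it routes values purely through parameters and never reaches for an outside variable:

var compose = function (f, g) {
    return function (x) {
        return f(g(x));
    };
};

var addOne = function (x) { return x + 1; };
var double = function (x) { return x * 2; };

compose(double, addOne)(3);
//8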

Show me the code!

The following code snippet is a Y-combinator example of the factorial function in JavaScript.

var yCombFact = function (number) {
    return (function (fn) {
        return fn(fn, number);
    })(function(f, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * f(f, (n - 1));
        }
    });
};

yCombFact(5);
//120

Looks abstruse, right? No worries – let’s break it down.

Two major things

There are two major concepts that help drive home the understanding of the Y-combinator:

  1. No variables allowed – this implies we have to find a way to write the factorial function without using variables.
  2. Invoking the no-variable function defined in step 1, again without using variables.

Part 1: Rewriting the factorial function without variables

Here is the factorial function definition. Can you spot the variable usage in it?

var factorial = function(n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

factorial(5);
//120

The expression n * factorial(n - 1) only succeeds because it can find the variable factorial in scope; without it, the factorial function would not be recursive. But remember, we are trying to do away with all variables.

The workaround is to pass in the variable reference as a function parameter. In the factorial example, recursion can then be achieved by using the placeholder parameter as the reference. The no-variable-factorial function looks like the following:

var noVarFactorial = function(fn, n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * fn(fn, (n - 1));
    }
}

noVarFactorial(noVarFactorial, 5);
//120

The new definition works exactly like the old one but without the internal dependency on the factorial variable. Rather, recursion succeeds by relying on the ‘injected’ parameter and the computer is none the wiser.

Part 2: Invoking the no-variable function without variables

We have rewritten the factorial function to avoid variables; however, we still need to store the function in a variable before invoking it:


var factorial = ...;

factorial(factorial, 5);

The trick? Functions to the rescue again! Let’s create a factorialCalculator function that uses the noVarFactorial definition above.

function factorialCalculator(n) {
    //as defined earlier
    var noVarFactorial = ...;
    return noVarFactorial(noVarFactorial, n);
}

factorialCalculator(5);
//120

The noVarFactorial name has to go since we want to avoid variables. And how do we achieve this? Yes, functions once more. So let’s create a wrapper function inside factorialCalculator that invokes noVarFactorial.

function factorialCalculator(n) {
    var wrapper = function (noVarFact) {
        return noVarFact(noVarFact, n);
    }
    return wrapper(noVarFactorial);
}

factorialCalculator(5);
//120

Unfortunately, the wrapper function has created yet another variable, and it has to be eliminated too. For a complete implementation, the two variables (wrapper and noVarFact) have to go.

It’s now time to leverage language-specific idioms to achieve this. JavaScript has the IIFE (immediately invoked function expression) idiom, which allows you to invoke a function immediately after defining it. Using it, we can eliminate the need for the wrapper variable:


function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(noVarFactorial);
}

factorialCalculator(5);
//120

Combining all the pieces

The last thing is to inline the noVarFact definition so that it no longer exists as a variable in scope. Just as we do in mathematics, we can simply ‘substitute’ the value in. The final piece is then:

function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}

factorialCalculator(5);
//120

And that, my friends, is the Y-combinator in all its glory. I have decided to leave the variable names as they are to make it all clear, but here is the standard format so you know it when you see it:


function factorialCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}
factorialCalculator(5);

Conclusion

The Y-combinator is quite easy to understand – it requires understanding function invocation patterns, substitution of variables with parameters, and higher-order functions. As an exercise, can you try implementing the Fibonacci function using the Y-combinator approach? Better still, why not create a Y-combinator function that accepts any function matching the fn(fn, n) signature? One possible sketch of both follows.
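Here is a hypothetical helper (the yCombinator name and structure are mine – one of several possible shapes) that wraps any function matching the fn(fn, n) signature, with factorial and Fibonacci as test drives:

function yCombinator(fn) {
    return function (n) {
        //self-application: pass fn to itself, exactly as in the post
        return (function (f) {
            return f(f, n);
        })(fn);
    };
}

var fact = yCombinator(function (f, n) {
    return n <= 1 ? 1 : n * f(f, n - 1);
});

var fib = yCombinator(function (f, n) {
    return n <= 1 ? n : f(f, n - 1) + f(f, n - 2);
});

fact(5);
//120

fib(10);
//55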

Related Posts

SICP Section 3.3 – 3.5 : Found a bug in memq


1. Is memq broken?

memq is a built-in list search function; it finds the first occurrence of a key in a list and returns the sublist starting from that key.


(memq 3 (list 1 2 3 4))
; '(3 4)

(memq 5 (list 1 2 3 4))
; #f

Now that you know what memq does, let’s look at some weird behaviour.

(define x (list 1 2))
(define a (list 3 4))

; append x to a
(set! a (append a x))
(memq x a)
; #f -> x is not in a

Building on that foundation leads to the following conundrum

(define x '(1 2 3))

; create a cycle: the cdr of x now points back to x itself
(set-cdr! x x)

; is x in x?
(memq x x)

; never-ending loop

memq tests whether the key exists in the list and if it does, then it returns the list starting from the first occurrence of the key. But what is the first occurrence in a cyclic list? Could this be the bug?

2. Algorithms, algorithms

  • Horner’s algorithm

This is useful for calculating polynomial values at discrete points; for example, given a polynomial function, f(x) = 7x³ + 4x² + 4; what is f(3)? A potential application (and possible interview question too) is to convert string values into numbers – 1234 is the value of x³ + 2x² + 3x + 4 when x is 10.


//assuming polynomial is represented from lowest power to highest

//i.e. 1234 -> [4, 3, 2, 1]

function horner (poly, base) {
    // Horner's rule: start from the highest-power coefficient and
    // repeatedly fold in the next one: val = (val * base) + coefficient
    var val = 0;
    for(var i = poly.length - 1; i >= 0; i--) {
        val = (val * base) + poly[i];
    }
    return val;
}

horner([4,3,2,1], 10);
//1234

  • Fast exponentiation

Because going twice as fast is more fun than going fast.


function exponent (base, power) {
    var val = 1;
    while(power > 0) {
        val = val * base;
        power = power - 1;
    }
    return val;
}

Now, let’s look at fast exponentiation.

function fastExponent(base, power) {
    if(power === 0) {
        return 1;
    }

    //handle odd powers
    if((power % 2) === 1) {
       return base * fastExponent(base, (power - 1));
    }

    //even powers: compute base^(power/2) once and square it
    var part = fastExponent(base, (power / 2));
    return part * part;
}

fastExponent(10,3)
//1000

Fast exponentiation runs in Ο(log N) time while the naive version is Ο(N). The same halving concept can be reapplied to similar scenarios; see the sketch below.
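For instance, here is a hypothetical fastMultiply (my own illustration) that applies the same halving idea to repeated addition:

function fastMultiply(a, b) {
    if(b === 0) {
        return 0;
    }

    //handle odd multipliers
    if((b % 2) === 1) {
        return a + fastMultiply(a, b - 1);
    }

    //even multipliers: compute a * (b/2) once and double it
    var half = fastMultiply(a, b / 2);
    return half + half;
}

fastMultiply(7, 6);
//42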

3. Streams

Functional programming offers many advantages but one potential downside is performance and needless calculation. For example, while imperative programming offers quick-exit constructs (e.g. break, continue), functional constructs like filter, map and reduce have no such counterpart – the entire list has to be processed even if only the first few items are needed.

Streams offer an elegant solution to this issue by performing computations just in time. Data is lazily evaluated, which makes it possible to easily (and beautifully) represent infinite lists. Inshaaha Allaah I should explain this concept in an upcoming post; it’s a beautiful, elegant and powerful idea.
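To give a taste, here is a minimal thunk-based stream sketch in JavaScript (my own illustration – SICP builds the same idea in Scheme): the tail is a function that is evaluated only on demand, which is what makes infinite lists representable:

function consStream(head, tailThunk) {
    //tailThunk is a function; calling it produces the rest of the stream
    return { head: head, tail: tailThunk };
}

//the infinite stream of integers starting from n
function integersFrom(n) {
    return consStream(n, function () {
        return integersFrom(n + 1);
    });
}

var naturals = integersFrom(1);
naturals.head;
//1
naturals.tail().head;
//2 - computed only when asked for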

Related Posts on my SICP adventures

  1. SICP Review: Sections 3.1 & 3.2
  2. SICP Section 2.5

The Effective Programmer – 3 tips to maximize impact


Effectiveness, (noun) : the degree to which something is successful in producing a desired result; success.

Over the years, I have tried experiments, read books and watched several talks in a bid to improve my effectiveness. After a series of burnout and recovery cycles, I finally have a 3-pronged approach that seems to serve me well.

1. Learn to estimate well

2. Use the big picture to seek opportunities

3. Continuous Improvement

Let’s discuss these three.

1. Estimation – the bane of software development

Reliable coding estimates accurately forecast when feature work will be done. But when is a feature done? Is it when it is code complete? Test complete? Or deployed? Most developers wrongly equate code complete with test complete or deployment ready. This explains arbitrary estimates like “Oh… I’ll be done in 2 hours”; such estimates typically miss the mark by wide margins due to compounding errors. Let’s take a simple bug fix scenario at a fictitious software engineering company.

  • Bug is assigned to developer John SuperSmartz
  • John SuperSmartz reads the bug description, sets up his environment and reproduces it
  • He identifies the cause but does some light investigation to find any related bugs (he’s a good engineer)
  • John designs, implements and verifies the fix
  • Gets code review feedback and then checks in

Any of the intermediate steps can take longer than estimated (e.g. code reviews might expose design flaws, check-ins might be blocked by a bad merge, new bugs might be discovered in step 3, etc.). Without such an explicit breakdown, it becomes difficult to give proper estimates. Don’t you now think the 2-hour estimate is too optimistic?

Personally, I use kanbanFlow (I love their Kanban + pomodoro integration) to decompose work into small achievable 25-minute chunks. For example, I might break down some feature work into 8 pomodoros as follows:

  • Requirements clarification – 1 pomodoro
  • Software design and test scenario planning – 2 pomodoros
  • Coding (+ unit tests) – 3 pomodoros
  • Testing and code reviews – 1 pomodoro
  • Check-in + estimation review – 1 pomodoro

Some of the things I have learnt from using this approach:

  • I grossly underestimate feature work – the good side though is that this planning enables me to improve over time
  • I know when to start looking for help – as soon as a task exceeds its planned estimate, I start considering alternative approaches or seeking the help of a senior technical lead
  • Finally, it enables me to make more accurate forecasts – e.g. I can fix x bugs per week…

2. See the big picture

A man running around in circles covers a lot of distance but has little displacement. In optimal scenarios, distance covered equals displacement while in the worst scenario, it is possible to cover an infinite distance and have a displacement of zero.

Imagine working for several days on a feature and then discovering major design flaws that necessitates a system rewrite; a lot of distance has been covered but there has been little displacement. Working on non-essential low-impact tasks that no one cares about is neither efficient nor effective. Sure they might scratch an itch but always remember that the opportunity cost is quite high; the lost time could have been invested in higher priority tasks with a larger ROI.

Whales periodically surface for air and then get back into the water to do their business; similarly, engineers should periodically verify that their priorities align with the company’s goals. It’s possible to get carried away by the deluge of never-ending feature requests and bug fixes; an occasional step back is needed to grasp the whole picture. Here are sample questions to ask:

  • What are the team’s goals?
  • Does your current work align with company goals?
  • Are your skills current, or are they becoming obsolete?
  • Are there opportunities for improvement?

Personally I try to create 3 to 4 high-impact deliverables at the beginning of each week and then focus on achieving these. Of course, such forecasts rely heavily on productivity estimates.

3. Continuous Improvement

Athletes consistently hold practice sessions even if they don’t want to because it’s essential to staying on top of their game. The same applies to pretty much any human endeavor – a dip in momentum typically leads to some loss in competitive edge. The software engineering field, with its rapidly evolving landscape, is even more demanding – developers have to continuously and relentlessly learn to stay relevant.

Staying relevant requires monitoring industry trends via blogs, conferences and newsletters. There are a ton of resources out there and it’s impossible to follow every single one; rather, it is essential to separate the wheat from the chaff and follow a select high-quality few.

Learning and experimentation with new technologies naturally follows from keeping abreast of developments. A developer might decide to learn more about the software stack his company uses, logic programming or even computer science theory. Even if his interests are totally unrelated to his day-to-day job, independent learning would expose him to new (possibly better) ways of solving problems, broaden his capabilities and might even open up new opportunities. I prefer learning established CS concepts to diving into every new db-data-to-user-moving framework.

Opportunities abound such as learning IDE shortcut keys, terminal commands, automating mundane tasks and so on. Ideally you want to start simple by selecting the area with the highest impact-to-effort ratio and then dedicating a few minutes to it daily. Over time, the benefits start to pay off.

And that’s about it! Do you have other suggestions?

Like this post? Follow me on Twitter or read some of my top posts.

1. Code is poetry: 5 steps to bulletproof code

2. So you want to become a better programmer

World Class Nigerian Software Engineering: Are we there yet?


Jason of Iroko recently announced mouth-watering offers for developers, and this triggered a long discussion about software engineer pay in Nigeria. The discussions on techCabal’s radar got me thinking about software development in Nigeria: do we have enough world-class talent, or could we be better?

The software engineering field has spawned a wide variety of roles: product managers specialize in product design and customer interaction, software engineers write code and ensure quality while devOps folks handle deployments and infrastructural issues. Interactions between these fields lead to the delivery of great software. More importantly though, these distinct fields are essential for specialization, which is critical for growth.

At small software firms and startups all over the world; engineers are expected to be responsible for the whole gamut of software development – UI design, development, testing, deployment and customer support. Not to say this is bad; sometimes there is a genuine need for generalists and developers can acquire great skills by working in such places. However, as the popular saying goes, jack of all trades, master of none. I doubt if it is possible to be well-versed in all the numerous fields of software engineering; moreover standard practices like pair programming, build-measure-learn and one-click deploy weren’t created by generalists.

Building software involves a lot more than cranking out code and shipping software artifacts; there is a need for thought leaders advocating best practices and innovating new approaches. And this is why specialization is essential. Now, looking at the Nigerian software scene, can we have world-class experts without specializing and going deep? In recent years, the number of start-ups, incubators and software shops in Nigeria has ballooned and demand for engineers has gone up. However despite the large number of excellent Nigerian software engineers, we still have a disproportionately small number of thought leaders.

Some might argue that there is a dearth of thought leaders and specialization because the Nigerian software industry is young. I disagree, rather I think we have lots of engineers with significant experience shipping high quality software and meeting deadlines. Writing software for the Nigerian market is fraught with challenges and surely there must be some best practices to share, unfortunately, we do not hear about their stories. For example, why isn’t Konga publishing on their tech blog? How about the Terragon tech gurus writing white papers about Adrenaline? Nairaland? Such efforts drive technology adoption, improve the entire field and might even bring in new talent!

Let’s talk about change…

Sadly most computer science graduates do not know how to write code; in contrast, fresh graduates in other places complete non-trivial projects before graduation. This gap puts a significant drain on firms who have to invest heavily in training and mentoring fresh employees until they are proficient.

The educational sector has to be fixed; a catch-them-young scheme aimed at motivating undergraduates and secondary school students should help ensure a good supply of trained developers. The great work being done by Andela and CTI is worthy of mention: they are creating realistic environments and this is a step in the right direction.

Top companies, incubators and the existing thought leaders can accelerate growth by creating conferences, meet-ups and talks. These will provide opportunities to share ideas, drive networking between potential employers/employees and increase collaboration. Furthermore, these create opportunities for upcoming engineers.

Developers need to up their game too – it’s just not enough to be the best programmer out there; we need to contribute to the community too. Do something – create a tech blog, give a talk at a meet-up or mentor upcoming developers. It also involves being open with ideas, learning on every project and driving good practices at and outside work. I’d expect programmers to be more ambitious and aim to change the world (enough of apps that move data between databases and devices). Seek challenges, learn a lot (e.g. computer science, entrepreneurship, product design, methodologies, testing etc) and then inspire people.

Our ‘unique’ environment involves piracy, intellectual property disregard and tough economic conditions; notwithstanding, I believe we have the potential and can do it. Rome wasn’t built in a day and creating a world-class industry will take time. However, if we try, we might get there faster. Hopefully someday soon we’ll have respected Nigerian thought leaders actively pushing the boundaries of software development. Our very own Brendan Eichs, Joel Spolskys and Bob Martins…

Let’s all help create a thriving Nigerian software industry – one with international acclaim. And then we can aim to get the 6-digit dollar salaries too…

This post first appeared on TechCabal.

Related

1. Opennigeria… the time is now!

Code is Poetry : 5 steps to bulletproof code


I recently read a Quora post about a meticulous cleaner who would not let the smallest blemish evade him – he’d stop, fix the flaw and then step back to admire his work. The narrator admitted that the cleaner’s platform was one of the neatest he had ever seen around the world.

Moral of the story? The same principles apply to software development – programmers have to love their craft and put their best into making it stand out. Going the extra mile in simplifying complex code, removing dead code or writing unit tests helps to increase the life of a code base. These actions might appear to impede development speed but again, what is the definition of slow? As a former colleague used to say: “we sometimes have to slow down to go faster”.

Poorly written software always comes back to haunt developers in unimaginable ways; such code will effortlessly conjure monster bugs of all types (e.g. heisenbugs, mutatorbugs, linkerbugs and more). Removing bugs in bad software is also a nightmare as fixes might spawn even more abominable bug monstrosities (without unit tests, how do you verify?). Which would you prefer: disappointed customers, working weekends and eventually rewriting the same code, or doing it well upfront? The latter does sound faster to me…

Enough ranting, how about a couple of suggestions on how to write bulletproof code?

1. Know what you’re doing

Have you ever had to work on impossibly cryptic code? To all programmers who disparage others who do not get ‘smart’ code, here is a beautiful quote from E. F. Schumacher:

“Any intelligent fool can make things bigger and more complex… It takes a touch of genius – and a lot of courage to move in the opposite direction.”

Most ‘smart’ code pieces are indicative of shallow problem domain knowledge. Most times, these pieces can be simplified by taking away pieces until the problem the author was trying to solve becomes clear.

The first thing you want to do when approaching a problem is to stop and think deeply. Stop! Don’t even start that IDE!! Do you understand the problem thoroughly? Do you know what issues might arise? How about testing and completion criteria? Once brainstorming is done, then find the simplest, fastest and most effective way to clearly convey your solution. Always remember that programming languages are vehicles for expressing your thoughts.

Without fully understanding the task, it is possible to end up with over-complicated code or even worse: ‘smart’ code. Please do not create breeding grounds for evil bugs – developers have better things to do with productive hours. Simplicity is the true smartness…

2. State modification is the root of most evil

Functional programming purists tout FP as being less susceptible to bugs because it avoids modifying state. The best functions are like mathematical functions, e.g. y = x². Why are they good? Well, they always give the same results for the same inputs and have no hidden internal state lying around.

I like using functions a lot; composing them into larger pieces and building up systems based on them. Such systems rely heavily on piping – function outputs are passed to other functions as inputs. Clearly designed functions are great as they communicate intent (e.g. square vs x²), make it easy to follow the author’s thought process and provide simplifying abstractions.

Watch out for the double-edged sword of modifying state in functions; it is the root of most evil bugs. Wrong assumptions about state changes, obscure manipulations (e.g. inadvertently modifying loop invariants) and improper naming inevitably lead to trouble.

Whenever possible, use a functional programming style that relies on stateless functions. Hopefully this saves you a lot of debugging pain – you can thank me later…
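To make the contrast concrete, here is a small example of my own – the stateful version’s output depends on hidden external state, while the stateless one always maps the same inputs to the same output:

//stateful: the result depends on hidden external state
var total = 0;
function addToTotal(x) {
    total += x;
    return total;
}

addToTotal(2);
//2
addToTotal(2);
//4 - same input, different output

//stateless: same inputs, same output - always
function add(a, b) {
    return a + b;
}

add(2, 2);
//4
add(2, 2);
//4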

3. Clean up as you go

What is your typical reaction when you come across horrible code? Do you sidestep the mess (I didn’t write it…) or do you attempt to fix it?

Just as people are more likely to drop trash on a dirty sidewalk compared to a clean one, developers are also more likely to write horrible code if the original source code is bad. This attitude is one to avoid at all costs; if you can’t invest in cleaning up the dirt, at least don’t add to it! Remember the boy scouts’ rule – always leave the park better than you found it.

Ideally you should clean up the code as you work in it – it shows that you care and helps prolong the life of the code base. If you need to strike a balance between adding new features and fixing technical debt (which might not bring new business value), then carve out a small block of time (e.g. 30 minutes).

If you want to know why this is important, then you should read about the broken windows theory (not yet, finish the post).

4. Love your craft

The best craftsmen are immensely proud of their work and achievements; they try to show it off to everyone and won’t stop talking about it (even if we do not want to hear about it).

You should be proud of your code. Seek feedback; what looks beautiful to you might be terrible code – perspective, as they say, is subjective. Don’t get defensive and hurt over review feedback too – it’s the code dude! Not you!! So swallow your pride, learn and improve the code. Growth involves some pain.

Spend the time to think about elegance, simplicity and brilliance; the extra time and effort pays off. Which would you prefer of the two outcomes?

  • Creating a masterpiece in 6 hours
  • Creating a ‘hacker-piece’ in 2 hours and then spending 10 hours in maintenance (bug fixes, extensions and changes)

It’s a trade-off; I don’t like hunting down bugs – I do it if I have to but I’d rather not…

5. Break your software yourself

There was a dramatic moment during the début of the Microsoft Surface – the presenter dropped it (intentionally) and everyone gasped! Why would anyone want to drop a tablet? There are countless woeful tales of smashed screens, internal damage and memory loss. However, the presenter knew the product’s limits and knew he could do that. Can you break your own software? Do you know its limits?

Go ahead! Break it!! How else would you know its limits? Nearly every piece of code can be broken – a function that calculates 1 + 1 will give a wrong result on a computer with only 1-bit of memory (yeah, I wonder how you’ll even get such a computer). 

It’s a shame if customers find obvious bugs that should never have gotten into production. Customers who run into show-stopper bugs are typically not happy and unhappy customers can consider alternatives. Broken software can cause huge losses (goodwill, money, credibility, tarnished brand images etc) so you do not want to take this lightly – unless you fancy going out of business.

In these days of web applications, your customers will do a lot of testing for you – the more users you have, the more weird scenarios your software will be put through. Do not obsess over the most extreme cases, but the normal, expected paths have to work well.

So try to break your code yourself before you call it done. This makes it more robust and you get to know of the corner cases. Even if you do not fix them, such knowledge enables you to handle customer complaints.

Inshaaha Allaah I hope to write about how to break software some time soon.

Now go and write some better code…

Related

How to watch variables in JavaScript


We all have to track variables while debugging; generally the easier it is to monitor changes, the faster bugs can be detected and fixed.

Web developer tools expose various methods for tracking changes in variable values. There are a couple of drawbacks (e.g. non-uniform support across platforms) but again, half bread is better than none :).

Curious about such methods? Let’s take a walk…

1. Chrome’s Object.observe

First on the list is Chrome’s Object.observe static method. The observe/unobserve functions are static – they can only be called on Object and do not exist on object instances. Let’s take an example:

var obj = {};
Object.observe(obj, function(changes) {
    console.log(changes[0]);
});

obj.val = "Hello"

// {  name: "val",
//    object: { val: "Hello" },
//    type: "add" }

That was pretty cool right? How about observing primitives?

var val = 2;

Object.observe(val, function(changes) {
    console.log(changes[0]);
});

//Console
//TypeError: Object.observe cannot
//observe non-object

Disappointed? Maybe we should check out Firefox’s watch function then…

TIP: Chrome also has experimental Array.observe and Array.unobserve methods. Don’t use them in production!

2. Firefox’s Object.prototype.watch method

Firefox’s watch differs from Chrome’s observe because it exists on object instances; thus any object can watch for changes to its own properties. Confused? Let’s see an example:

window.val = 2;
//equal to var val = 2;

window.watch("val",function(id, old, cur) {
 console.log("Changed property: ", id);
 console.log("Original val: ", old);
 console.log("New val: ", cur);
});

val = 4;

// Changed property: val
// Original val: 2
// New val: 4

Advantages? Well, you can now watch primitive values. I also like the fact that object instances can watch their own properties for changes rather than relying on a global static watcher. But then again, it’s up to you :).

3. Can I haz polyfill?

A few browsers (including IE) support neither watch nor observe; sometimes you do not have the luxury of choosing a browser, so let’s look for an implementation that can be used as a polyfill across browsers.

First, let’s talk about getters and setters in JS. JavaScript has the concept of getters and setters, which allow you to define functions that are invoked when you retrieve a value or set it.

var obj = {};

Object.defineProperty(obj, "name", {
    get : function(){ return this._name;},
    set: function(val){  this._name = val;}
});

obj.name;
//undefined

obj.name = "name";
obj.name;
//name

TIP: Can you figure out the bug in the following snippet?

var obj = {};

Object.defineProperty(obj, "name", {
    get : function(){ return this.name;},
    set: function(val){  this.name = val;}
});

obj.name;
//What happens?

Just as you thought – the getter calls this.name, which invokes the getter again and recurses until the stack overflows (the setter has the same flaw; a differently named backing field such as _name avoids it). So how about using this getter/setter concept to implement watching ourselves? Unfortunately, various browsers implement the getter/setter syntax differently; IE and Chrome each expect their own format. You could write a façade to hide this but again there is ugliness lurking under the covers.

But great news! There is an excellent five-year-old polyfill by Eli Grey and it might meet your needs. Here is the gist – let’s dive into its brilliance.

if (!Object.prototype.watch) {
 Object.defineProperty(
   Object.prototype,
   "watch", {
     enumerable: false,
     configurable: true,
     writable: false,
     value: function (prop, handler) {
       var old = this[prop];
       var cur = old;
       var getter = function () {
          return cur;
       };
       var setter = function (val) {
        old = cur;
        cur =
          handler.call(this,prop,old,val);
        return cur;
       };

       // can't watch constants
       if (delete this[prop]) {
        Object.defineProperty(this,prop,{
            get: getter,
            set: setter,
            enumerable: true,
            configurable: true
        });
       }
    }
 });
}

How does it work?

The function assumes knowledge of JavaScript’s accessor support (surprised that JS has getters and setters? Yes, there are subtle pockets of beauty in the language). The Object.prototype.watch polyfill accepts a property to watch and a handler to call when that property changes.

The polyfill uses a beautiful trick – it redefines the watched property in the same scope; this ensures that the handler is always invoked when the value changes.

The trick to understanding the polyfill is in the Object.defineProperty call. The property to be watched is first deleted to verify that it is not a constant, but this deletion also removes it from the object. To counteract this, Eli brilliantly redefines the property in the same scope using Object.defineProperty. The getter is plain while the setter is more involved – whenever the watched property is set to a new value, the setter invokes the handler and stores its return value as the new value. Neat huh?
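Here is a quick usage sketch (the object and property names are mine): the handler receives the property name, the old value and the new value, and whatever it returns becomes the stored value:

var person = { name: "Ada" };

person.watch("name", function (prop, oldVal, newVal) {
    console.log(prop + ": " + oldVal + " -> " + newVal);
    //the handler's return value becomes the stored value
    return newVal;
});

person.name = "Grace";
//name: Ada -> Grace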

Conclusion

Tip: A good way to debug is to combine the debugger statement with an observed object, as shown below:

var watched = { value: 1 };
Object.observe(watched, function () {
    debugger;
});

This will cause the debugger to kick in whenever the value changes and this helps a lot in tracking down bugs.

1. Eli’s polyfill didn’t fire deterministically when I tried it on Chrome, but it did work.

2. Remember there are going to be performance implications but again this assumes you are in debugging mode :)

And that’s it! Feel free to copy over Eli’s implementation and use it in your code.

SICP Review: Sections 3.1 & 3.2


Here are my thoughts on Sections 3.1 and 3.2 of my SICP journey; the deeper I go in the book, the more I appreciate the effort, style and work the authors put into it. Each section builds on earlier sections and it is amazing how it forces you to see software development from a new angle. Enjoy…

1. Perception and Design

Our perceptions of the real world influence the way we model objects in software development. The example in the book uses two objects – a rational number and a bank account. Modifications of a rational number always generate a new rational number that is different from the original (e.g. adding 1/2 to 3/8). We do not ‘view’ rational numbers as being capable of changing – i.e. the value of 3/8 is always 3/8.

Conversely, a bank account object has a balance that can ‘change’; withdrawals and deposits change this value even though we still say it’s the same bank account object! There appears to be a double standard, right? If the internal details (i.e. the numerator or denominator) of a rational number change, we end up with a new rational number; however, if a bank account balance changes we still have the same bank account object.

Strange, isn’t it? The way we think does influence the way we write code.

2. Environment

The environment (context) is very important in programming languages even though we just sometimes forget about it. Every language has some global environment that determines what expressions, values and parameters mean. If you change the environment, then it becomes highly possible that the expression has a different meaning.

Let’s take an example from human languages: the word ‘wa’ means ‘come’ in Yoruba but can mean ‘and’ in Arabic. The same applies to programming languages; have you ever wondered why and where the + (a free variable) is defined? It has to be in the global environment so the programming language knows what it means. Now if you overrode ‘+’ (why you’d do this beats me), then it could mean something else.

3. Execution Order 

Exercise 3.08 was pretty fun; it was a simple ask – write a procedure such that evaluating (+ (f 0) (f 1)) returns 0 if the arguments to + are evaluated from left to right but returns 1 if they are evaluated from right to left.

While the exercise is trivial and probably just an academic exercise, it shows how assignments can influence program execution. My solution uses a closure to track the number of calls and returns a unary lambda. The lambda returns the value of its argument on its first call and a zero on subsequent calls.

Thus if the execution order is left-to-right, then the first call will have a parameter of 0 and further calls will return 0 leading to (+ 0 0) while if it was right-to-left, then the first call will have a value of 1 and further calls will evaluate to 0.
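Here is a JavaScript sketch of the idea described above (the exercise itself is in Scheme; this translation and its names are mine):

var f = (function () {
    var calls = 0;
    return function (x) {
        calls += 1;
        //return the argument on the first call, zero afterwards
        return (calls === 1) ? x : 0;
    };
})();

//left-to-right: f(0) + f(1) = 0 + 0 = 0
//right-to-left: f(1) + f(0) = 1 + 0 = 1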

JS and Scheme, the similarities


So I have been reading the awesome SICP book for some time now and the striking thing is the resemblance between JS and Scheme in some areas. Not surprising, considering that Scheme was one of the major influences on the JS language – and another reason why learning more languages makes you a better programmer.

1. begin and comma operators

Scheme has the begin operator which allows you to evaluate a series of expressions and return the value of the last evaluated expression. This immediately struck me as being related to the JS comma operator which does exactly the same thing!

Code Samples


//JS
var x = (1, 2, 3, 4, 5);
x; //5

;; Scheme
(define a
  (begin 1 2 3 4 5))

a ; 5

One difference though is the fact that I find begin more useful in Scheme than its JS comma counterpart.

2. Type coercion

Due to JavaScript’s loose typing and implicit coercion, values can be truthy or falsy (e.g. undefined, null or 0). Idiomatic JavaScript leverages this coercion, which is why you see expressions like the one below:


if(value) {
    console.log("Value is truthy!");
}

Scheme behaves in a similar way too – values are coerced albeit with more rigidity than JS.

(if null
   (display "Null coerced to true!")
   3)
; Null coerced to true!

3. Lambdas, first-class functions, closures and maybe lexical Scope

Some say JavaScript helped fuel the widespread adoption of lambdas in mainstream languages. It made people see the value of the hidden gem which hitherto had been languishing in the murky depths of academia.

Scheme and JavaScript share support for first-class functions, closures and lambdas. Although there is no lambda keyword in JS, anonymous functions do exactly the same thing. The introduction of let in ES6 should also bring JS to par with Scheme’s let.

And that’s about it for now.

SICP Section 2.5


So, four months after I started and more than 90 solved exercises later, I can say Alhamdulillaah chapter 2 is done! Section 2.5 was one of the most challenging so far; the exercises revolved around building large, easily extensible software systems. And here are the thoughts again :)

1. Coercion

The section revealed the importance of coercion in software development by building a maths system for various numeric types (e.g. integer, rational, real and complex numbers). Since Scheme is not an OOP language, procedures were used to encapsulate each numeric type’s functionality and ‘install’ it into the system. A generic apply function routes calls to the matching number type’s interface.

The challenge was in handling mixed operations (e.g. adding an integer to a complex number or vice versa). Since numeric types are progressively subsets of one another (e.g. every integer is also a complex number but not vice versa), it is possible to build the following hierarchy of types:

integer -> rational -> real -> complex

By leveraging this hierarchy, it becomes possible to do arithmetic operations on a mixed variety of number types. The system just needs to ‘raise’ all the numbers to the topmost type in the hierarchy and then invoke the arithmetic operation.

Let’s take it a bit further; if ‘raise’ is possible, then surely ‘demote’ can be done too! A good application of this is ensuring that the results of ‘raised’ operations are in their simplest terms. For example, the result of adding the mixed number types 1, 1 - i, 2.0 and -1 + i is 3 + 0i, which can be progressively reduced to the integer 3 as shown below:

3 + 0i -> 3.0 -> 3/1 -> 3

And the icing on the cake? There is no need to write a demote function! The earlier raise procedure should demote number types if its hierarchy of types is reversed.

This assumes the system already supports two-way converters across each boundary of the hierarchy (e.g. integer->rational and rational->integer). There is no need for an integer->real converter since integers can be promoted to rational numbers and then to real numbers, as the sketch below shows.
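A hypothetical JavaScript sketch of raise (the book does this in Scheme; this representation and the names are mine): each converter knows only its immediate neighbour, and raiseTo simply walks the tower:

var raisers = {
    integer: function (v) {
        return { type: "rational", num: v.value, den: 1 };
    },
    rational: function (v) {
        return { type: "real", value: v.num / v.den };
    },
    real: function (v) {
        return { type: "complex", re: v.value, im: 0 };
    }
};

function raiseTo(v, targetType) {
    //promote one level at a time until we reach the target type
    while (v.type !== targetType) {
        v = raisers[v.type](v);
    }
    return v;
}

raiseTo({ type: "integer", value: 3 }, "complex");
//{ type: "complex", re: 3, im: 0 }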

Neat huh?

2. Extending a system

Assume the system above has to support polynomial arithmetic. It turns out to be quite simple – why not view polynomials as another layer in the hierarchy of types? The new hierarchy is shown below:

integer -> rational -> real -> complex -> polynomial

An interesting abstraction isn’t it? It works too once all the interfaces are implemented. Designing software is indeed art…

Some extra learning points were the representation of polynomials: the sparse and dense approaches both had their pros and cons. A better fix was to enable the system to support both representation formats (simple trick: convert one form to the other and everything should be all good :) ).

Another interesting trick was polynomial subtraction; there is really no need to implement subtract if you already have addition implemented. All you have to do is add ‘negated’ versions of each polynomial term. Simple but powerful.

On to chapter 3 insha Allaah!

How to write a Promise/A+ compatible library


I decided to write Adehun after the series of promise posts. Adehun means ‘Promise’ in Yoruba, the language of my West African tribe. After a lot of rewrites, I can say Alhamdulillaah,  Adehun passes all the compliance tests of the promise/tests repo.

And here is how I did it.

1. Promise States & Validity

An object mapping the various states to integers and an isValidState function to verify that only valid states are accepted.
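A minimal sketch of what this could look like (an assumption on my part, matching the names the later snippets use):

var validStates = {
    PENDING: 0,
    FULFILLED: 1,
    REJECTED: 2
};

function isValidState(state) {
    return state === validStates.PENDING ||
           state === validStates.FULFILLED ||
           state === validStates.REJECTED;
}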

2. Utils

A couple of helper functions including runAsync (to run functions asynchronously), isObject, isFunction and isPromise helper functions.
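Again, a minimal assumed sketch (isPromise checks against the Adehun constructor defined at the end of this post):

var Utils = {
    runAsync: function (fn) {
        setTimeout(fn, 0);
    },
    isFunction: function (val) {
        return typeof val === "function";
    },
    isObject: function (val) {
        return val !== null && typeof val === "object";
    },
    isPromise: function (val) {
        return val instanceof Adehun;
    }
};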

3. The Transition function – Gatekeeper for State Transition 

Gatekeeper function; ensures that state transitions occur when all required conditions are met.

If conditions are met, this function updates the promise’s state and value. It then triggers the process function for further processing.

The process function carries out the right action based on the transition (e.g. pending to fulfilled) and is explained later.


function transition (state, value) {
  if (this.state === state ||
    this.state !== validStates.PENDING ||
    !isValidState(state)) {
      return;
    }

  this.value = value;
  this.state = state;
  this.process();
}

4. The Then function

The then function takes in two optional arguments (onFulfilled and onRejected handlers) and must return a new promise. Two major requirements:

1. The base promise (the one on which then is called) needs to create a new promise using the passed in handlers; the base also stores an internal reference to this created promise so it can be invoked once the base promise is fulfilled/rejected.

2. If the base promise is settled (i.e. fulfilled or rejected), then the appropriate handler should be called immediately. Adehun.js handles this scenario by calling process in the then function.

function then (onFulfilled, onRejected) {
 var queuedPromise = new Adehun();
 if (Utils.isFunction(onFulfilled)) {
   queuedPromise.handlers.fulfill =
                        onFulfilled;
 }

 if (Utils.isFunction(onRejected)) {
   queuedPromise.handlers.reject =
                        onRejected;
 }

 this.queue.push(queuedPromise);
 this.process();

 return queuedPromise;
}

5. The Process function – Processing Transitions

This is called after state transitions or when the then function is invoked. Thus it needs to check for pending promises since it might have been invoked from the then function.

Process runs the Promise Resolution procedure on all internally stored promises (i.e. those that were attached to the base promise through the then function) and enforces the following Promise/A+ requirements:

1. Invoking the handlers asynchronously using the Utils.runAsync helper (a thin wrapper around setTimeout; setImmediate would also work).

2. Creating fallback handlers for the onFulfilled and onRejected handlers if they are missing.

3. Selecting the correct handler function based on the promise state e.g. fulfilled or rejected.

4. Applying the handler to the base promise’s value. The result of this operation is passed to the Resolve function to complete the promise processing cycle.

5. If an error occurs, then the attached promise is immediately rejected.


function process () {
 var that = this,
     fulfillFallBack = function (value) {
       return value;
     },
     rejectFallBack = function (reason) {
       throw reason;
     }; 

 if (this.state === validStates.PENDING) {
   return;
 }

 Utils.runAsync(function () {
   while (that.queue.length) {
     var queuedP = that.queue.shift(),
     handler = null,
     value;

     if(that.state===validStates.FULFILLED){
       handler = queuedP.handlers.fulfill ||
                 fulfillFallBack;
     }
     if(that.state===validStates.REJECTED){
       handler = queuedP.handlers.reject ||
                 rejectFallBack;
     }

     try {
       value = handler(that.value);
     } catch (e) {
       queuedP.reject(e);
       continue;
     }

     Resolve(queuedP, value);
   }
 });
}

6. The Resolve function – Resolving Promises

This is probably the most important part of the promise implementation since it handles promise resolution. It accepts two parameters – the promise and its resolution value.

While there are lots of checks for various possible resolution values, the interesting resolution scenarios are two – those involving a promise being passed in and those involving a thenable (an object or function with a then method).

1. Passing in a Promise value

If the resolution value is another promise, then the promise must adopt this resolution value’s state. Since this resolution value can be pending or settled, the easiest way to do this is to attach a new then handler to the resolution value and handle the original promise therein. Whenever it settles, then the original promise will be resolved or rejected.

2. Passing in a thenable value

The catch here is that the thenable value’s then function must be invoked only once (a good use for the once wrapper from functional programming). Likewise, if the retrieval of the then function throws an exception, the promise is to be rejected immediately.

Like before, the then function is invoked with functions that ultimately resolve or reject the promise, but the difference here is the called flag, which is set on the first call and turns subsequent calls into no-ops.

function Resolve(promise, x) {
  if (promise === x) {
    var msg = "Promise can't be value";
    promise.reject(new TypeError(msg));
  }
  else if (Utils.isPromise(x)) {
    if (x.state === validStates.PENDING){
      x.then(function (val) {
        Resolve(promise, val);
      }, function (reason) {
        promise.reject(reason);
      });
    } else {
      promise.transition(x.state, x.value);
    }
  }
  else if (Utils.isObject(x) ||
           Utils.isFunction(x)) {
    var called = false,
        thenHandler;

    try {
      thenHandler = x.then;

      if (Utils.isFunction(thenHandler)){
        thenHandler.call(x,
          function (y) {
            if (!called) {
              Resolve(promise, y);
              called = true;
            }
          }, function (r) {
            if (!called) {
              promise.reject(r);
              called = true;
            }
       });
     } else {
       promise.fulfill(x);
       called = true;
     }
   } catch (e) {
     if (!called) {
       promise.reject(e);
       called = true;
     }
   }
 }
 else {
   promise.fulfill(x);
 }
}

7. The Promise Constructor

And this is the one that puts it all together. The fulfill and reject functions are syntactic sugar – thin wrappers that transition the promise to the fulfilled or rejected state respectively.


var Adehun = function (fn) {
 var that = this;

 this.value = null;
 this.state = validStates.PENDING;
 this.queue = [];
 this.handlers = {
   fulfill : null,
   reject : null
 };

 if (fn) {
   fn(function (value) {
     Resolve(that, value);
   }, function (reason) {
     that.reject(reason);
   });
 }
};

I hope this helped shed more light into the way promises work.

AdehunJS on Github.

Liked this post? Here are a couple more exciting posts!

1. The Differences between jQuery Deferreds and the Promises/A+ spec

2. Programming Language Type Systems II

3. Three Important JavaScript Concepts