JavaScript, Languages, Miscellaneous, Software Development

How to write a Promise/A+ compatible library


I decided to write Adehun after the series of promise posts. Adehun means ‘Promise’ in Yoruba, the language of my West African tribe. After a lot of rewrites, I can say Alhamdulillaah, Adehun passes all the compliance tests of the promises-tests repo.

And here is how I did it.

1. Promise States & Validity

An object maps the various states to integers, and an isValidState function verifies that only valid states are accepted.
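A minimal sketch of what these might look like (the names follow the article; the actual Adehun source may differ slightly):

var validStates = {
  PENDING: 0,
  FULFILLED: 1,
  REJECTED: 2
};

function isValidState (state) {
  return state === validStates.PENDING ||
         state === validStates.FULFILLED ||
         state === validStates.REJECTED;
}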

2. Utils

A couple of helpers: runAsync (to run functions asynchronously) plus the isObject, isFunction and isPromise checks.
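A hedged sketch of these helpers (the real implementations may differ; runAsync could use setImmediate where available):

var Utils = {
  runAsync: function (fn) {
    setTimeout(fn, 0);
  },
  isFunction: function (val) {
    return typeof val === "function";
  },
  isObject: function (val) {
    return val !== null && typeof val === "object";
  },
  isPromise: function (val) {
    return val instanceof Adehun;
  }
};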

3. The Transition function – Gatekeeper for State Transition 

This gatekeeper function ensures that state transitions only occur when all the required conditions are met.

If conditions are met, this function updates the promise’s state and value. It then triggers the process function for further processing.

The process function carries out the right action based on the transition (e.g. pending to fulfilled) and is explained later.


function transition (state, value) {
  if (this.state === state ||
    this.state !== validStates.PENDING ||
    !isValidState(state)) {
      return;
    }

  this.value = value;
  this.state = state;
  this.process();
}

4. The Then function

The then function takes in two optional arguments (onFulfill and onReject handlers) and must return a new promise. Two major requirements:

1. The base promise (the one on which then is called) needs to create a new promise using the passed in handlers; the base also stores an internal reference to this created promise so it can be invoked once the base promise is fulfilled/rejected.

2. If the base promise is settled (i.e. fulfilled or rejected), the appropriate handler should be called immediately. Adehun.js handles this scenario by calling process in the then function.

function then (onFulfilled, onRejected) {
 var queuedPromise = new Adehun();
 if (Utils.isFunction(onFulfilled)) {
   queuedPromise.handlers.fulfill =
                        onFulfilled;
 }

 if (Utils.isFunction(onRejected)) {
   queuedPromise.handlers.reject =
                        onRejected;
 }

 this.queue.push(queuedPromise);
 this.process();

 return queuedPromise;
}

5. The Process function – Processing Transitions

This is called after state transitions or when the then function is invoked. Thus it needs to check for pending promises since it might have been invoked from the then function.

Process runs the Promise Resolution procedure on all internally stored promises (i.e. those that were attached to the base promise through the then function) and enforces the following Promise/A+ requirements:

1. Invoking the handlers asynchronously using the Utils.runAsync helper (a thin wrapper around setTimeout (setImmediate will also work)).

2. Creating fallback handlers for the onFulfilled and onRejected handlers if they are missing.

3. Selecting the correct handler function based on the promise state e.g. fulfilled or rejected.

4. Applying the handler to the base promise’s value. The result of this operation is passed to the Resolve function to complete the promise processing cycle.

5. If an error occurs, then the attached promise is immediately rejected.


function process () {
  var that = this,
      fulfillFallBack = function (value) {
        return value;
      },
      rejectFallBack = function (reason) {
        throw reason;
      };

  if (this.state === validStates.PENDING) {
    return;
  }

  Utils.runAsync(function () {
    while (that.queue.length) {
      var queuedP = that.queue.shift(),
          handler = null,
          value;

      if (that.state === validStates.FULFILLED) {
        handler = queuedP.handlers.fulfill ||
                  fulfillFallBack;
      }
      if (that.state === validStates.REJECTED) {
        handler = queuedP.handlers.reject ||
                  rejectFallBack;
      }

      try {
        value = handler(that.value);
      } catch (e) {
        queuedP.reject(e);
        continue;
      }

      Resolve(queuedP, value);
    }
  });
}

6. The Resolve function – Resolving Promises

This is probably the most important part of the promise implementation since it handles promise resolution. It accepts two parameters – the promise and its resolution value.

While there are lots of checks for the various possible resolution values, the two interesting scenarios are those involving a promise being passed in and a thenable (an object with a then function).

1. Passing in a Promise value

If the resolution value is another promise, then the promise must adopt this resolution value’s state. Since this resolution value can be pending or settled, the easiest way to do this is to attach a new then handler to the resolution value and handle the original promise therein. Whenever it settles, then the original promise will be resolved or rejected.

2. Passing in a thenable value

The catch here is that the thenable value’s then function must be invoked only once (a good use for the once wrapper from functional programming). Likewise, if the retrieval of the then function throws an exception, the promise is to be rejected immediately.

Like before, the then function is invoked with functions that ultimately resolve or reject the promise; the difference here is the called flag, which is set on the first call and turns subsequent calls into no-ops.

function Resolve(promise, x) {
  if (promise === x) {
    var msg = "Promise can't be value";
    promise.reject(new TypeError(msg));
  }
  else if (Utils.isPromise(x)) {
    if (x.state === validStates.PENDING) {
      x.then(function (val) {
        Resolve(promise, val);
      }, function (reason) {
        promise.reject(reason);
      });
    } else {
      promise.transition(x.state, x.value);
    }
  }
  else if (Utils.isObject(x) ||
           Utils.isFunction(x)) {
    var called = false,
        thenHandler;

    try {
      thenHandler = x.then;

      if (Utils.isFunction(thenHandler)) {
        thenHandler.call(x,
          function (y) {
            if (!called) {
              Resolve(promise, y);
              called = true;
            }
          }, function (r) {
            if (!called) {
              promise.reject(r);
              called = true;
            }
          });
      } else {
        promise.fulfill(x);
        called = true;
      }
    } catch (e) {
      if (!called) {
        promise.reject(e);
        called = true;
      }
    }
  }
  else {
    promise.fulfill(x);
  }
}

7.  The Promise Constructor

And this is the one that puts it all together. The fulfill and reject functions are syntactic sugar; they simply transition the promise into the fulfilled and rejected states (a possible sketch is shown after the constructor).


var Adehun = function (fn) {
 var that = this;

 this.value = null;
 this.state = validStates.PENDING;
 this.queue = [];
 this.handlers = {
   fulfill : null,
   reject : null
 };

 if (fn) {
   fn(function (value) {
     Resolve(that, value);
   }, function (reason) {
     that.reject(reason);
   });
 }
};
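The fulfill and reject helpers referenced throughout (e.g. promise.fulfill(x) in Resolve) are not shown above; assuming they simply delegate to transition, a plausible sketch is:

Adehun.prototype.fulfill = function (value) {
  this.transition(validStates.FULFILLED, value);
};

Adehun.prototype.reject = function (reason) {
  this.transition(validStates.REJECTED, reason);
};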

I hope this helped shed more light on the way promises work.

AdehunJS on Github.

Liked this post? Here are a couple more exciting posts!

1. The Differences between jQuery Deferreds and the Promises/A+ spec

2. Programming Language Type Systems II

3. Three Important JavaScript Concepts

SICP

SICP Sections 2.3 & 2.4: Thoughts and Ideas


1. Top-Down Design

Most of the problems in the SICP book are solved in a top-down way, with lower-level details deferred until needed. The focus on high-level design makes for expressive, flexible code since consumers program against well-defined interfaces rather than implementations. Consequently, swapping and improving implementations is a cinch – the dependency on high-level interfaces shields consumers from tight coupling to details.

Picture a calculus system for integer operations (e.g. differentiation); as expected, these higher order functions need support for addition/subtraction/division operators. A top-down design will implicitly assume the add/subtract operators will work appropriately and focus on the higher level operators first. These low-level details can even be mocked and the system unit tested to prove logic accuracy.

The benefit of such a delayed implementation comes when the system needs to be extended to support complex numbers – the higher level operations are still the same, only the low level add/subtract operators need to be changed. Once done, the entire system should work as before.
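A tiny JavaScript sketch of the idea (the names and the complex-number representation here are made up for illustration):

function makeSum (add) {
  return function (terms) {
    return terms.reduce(add);
  };
}

//integer-only system
var intSum = makeSum(function (a, b) { return a + b; });
intSum([1, 2, 3]);
//6

//extended later: only the low-level operator changes
var complexSum = makeSum(function (a, b) {
  return { re: a.re + b.re, im: a.im + b.im };
});
complexSum([{ re: 1, im: 2 }, { re: 3, im: 4 }]);
//{re: 4, im: 6}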

Design is subjective and this is one approach; there might be better ways for other scenarios but again this can be tried and leveraged.

2. Generics: Data-directed vs Message-Passing Styles

Generics are useful for maintaining and extending existing systems.

The data-directed style maintains a huge table of known procedures and looks up desired operations based on keys. Extending such systems involves adding the new function signatures to the table; this allows the entire system to start using the newly added functions (sounds like the adapter pattern).

The downsides include the extra overhead of a procedures’ table, the need for consumers to know about available operations and the enforcement of contracts in the system. However, benefits such as flexibility, encapsulation and ease of accretion of new features can’t be ignored.

Message passing involves sending messages to objects to trigger the right operations. Sounds familiar? Yes, it does sound like OOP and is one of its major tenets. An object keeps a list of possible operations and invokes the right method based on the message it receives (think of invoking an instance method in an OOP language as sending a ‘message’ to the object e.g. a.b() ).

Languages like Python and JavaScript allow for a clearer example of the message passing metaphor since they allow creating hashtables of functions. With such a hashtable, you can invoke a function by sending a ‘message’ to such functions e.g. a[“b”]().
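A small JavaScript sketch contrasting both styles (the shape example and names are purely illustrative):

//data-directed: a table keyed by operation and type
var table = {
  "area:circle": function (c) { return Math.PI * c.radius * c.radius; },
  "area:square": function (s) { return s.side * s.side; }
};

function dispatch (op, obj) {
  return table[op + ":" + obj.type](obj);
}

dispatch("area", { type: "square", side: 3 });
//9

//message passing: the object itself carries its operations
function makeSquare (side) {
  return {
    area: function () { return side * side; }
  };
}

makeSquare(3).area();
//9 - 'sending' the area message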

The question thus comes to mind, how do programming languages handle their primitive operations at a lower level? Do they do message passing? Keep tables?

3. Context

The importance of context and how it affects semantics in all languages (both human and programming).

Using the example from the book: three is equal to one plus two; however, ‘three’ is not equal to “one plus two”. The context determines what is meant and humans have long used this to create various literary forms such as puns, sarcasm etc.

The same applies to programming; context determines whether your program will run perfectly or have subtle but difficult-to-find errors. One obvious example is the JavaScript that = this pattern.
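A quick sketch of that pattern (the counter object is a made-up example):

var counter = {
  count: 0,
  startBroken: function () {
    setTimeout(function () {
      this.count++; //'this' no longer refers to counter here
    }, 100);
  },
  startFixed: function () {
    var that = this; //capture the right context
    setTimeout(function () {
      that.count++; //works as intended
    }, 100);
  }
};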

4. Huffman Encoding

Some computer science and maths eh; the Huffman encoding scheme allows you to use much less space, provided some symbols occur far more frequently than others.

5. Understanding the problem deeply before writing any code

Coding should be simple. Unfortunately, a shallow understanding of the problem being solved makes it a difficult task.

Spending a good chunk of time to really understand the programming task at hand has great benefits: it leads to well-thought-out designs, simple models (simple is difficult!) and fewer bugs. It also reduces the chances of providing the wrong solution. It makes for a more fulfilling experience too – what beats the thrill of having zero compile errors on the first attempt?

If you liked this post then do check out a couple more exciting discoveries from my SICP journey:

1. SICP Section 2.2: New ideas and thoughts about programming
2. SICP Section 2.1 : Thoughts
3. SICP Section 1.3 : Thoughts and Concepts

JavaScript, Languages

Manipulating lists with JavaScript Array methods


Lists of all sorts (e.g. grocery, to-do, grades etc) are a normal facet of our daily lives. Their ubiquity extends to software engineering, as many programming tasks require manipulating lists. This post exposes new ways of doing popular list manipulation tasks in JavaScript and a little more too.

Be warned! JavaScript array methods routinely expose amazing surprises. Other languages generally tend not to have such hidden surprises, but well, don’t you just love the extra excitement from JavaScript? After all, it’s the language we all love to hate ;).

1. Appending arrays

One of the most common tasks is appending an array to the end of another array; Array.prototype.concat works but forces you to create a temporary variable or overwrite one of the arrays.

A clever use of Array.prototype.push gets around both limitations.

var a = [1,2,3];
var b = [4,5,6];
//requires temp value or overwriting
a = a.concat(b);

//using push
a.push.apply(a, b);
//similar variations
[].push.apply(a,b);
Array.prototype.push.apply(a,b);

How does this work? The Array.prototype.push method will append its arguments to the end of the array it’s called on.

a=[];
a.push(1,2,3);
a;
//[1,2,3]

The Function.prototype.apply method invokes a function with a given this value and an array (or array-like object) as its arguments. Apply allows you to invoke a function on any object with whatever parameters you want.

The problem is thus solved by applying push to a and providing b as the array of arguments to be pushed. The code snippet is the same as shown below:

a.push(b1, b2, ..., bn);

This trick can be generalized and used anywhere once the second array can be transformed into an arguments object. The Math.max and Math.min methods, which return the max/min value of a set, provide another example.

Using the trick above, an array’s max value can be found by applying the Math.max method on the array.

var a = [1,2,3,4,5];
var max = Math.max.apply(Math, a);

max;
//5

Note that the first argument to apply is Math here; since Math.max does not actually rely on its this value, null would work just as well. Isn’t this neater than iterating over the array to find the max value?

Tip: apply expects an array of arguments while call expects them listed individually; if call were used with the array b, then b gets inserted as a single element. Can you figure out how to use call to achieve the same results?
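For illustration (this is not the answer to the challenge), here is what happens when call is handed the array directly:

var a = [1,2,3];
var b = [4,5,6];

[].push.call(a, b);
a;
//[1, 2, 3, [4, 5, 6]] - b went in as a single element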

2. Pushing into objects

Do you know Array.prototype.push works on non-array objects too? Huh? Didn’t I warn about JS Array methods earlier on?

Push is a generic function and provided the object it is applied to meets its requirements it’ll work; otherwise, tell me when you find out.

An example is shown below (anyone that does this in real-life code should be barred from touching code for weeks…).

var obj = {};

[].push.apply(obj, [1]);
Array.prototype.push.apply(obj, [2,3,4]);
obj;
//{0: 1, 1: 2, 2: 3, 3: 4, length: 4}

How did the length property get there?! The ES5 push spec explains this: the 6th step of the push algorithm puts a length property (holding the new element count) on the object push was called on. Do you now know why it is [].length and not [].length()?

The tale continues, how about splicing some lists?

3. Merging arrays and removing elements at specified locations

The Array.prototype.splice method is probably mostly used for removing elements from JS arrays. Why it’s called splice and not remove is baffling but since it allows you to simultaneously insert elements I guess that name works. You can splice rope strands huh…

You can actually ‘delete’ array entries using the delete operator; however, delete leaves a hole at the deleted element’s index (it reads back as undefined), which is typically not what you want in most scenarios.

SideTip: The delete operator accepts expressions as operands and will evaluate such expression operands! E.g. delete (b=4) works!!

var a = [1,2,3];
delete a[1];
a;
//[1, undefined, 3]
a.length;
//3

Fix? Let’s splice!

var a = [1,2,3];
var removed = a.splice(1,1);
removed;
//[2]
a;
//[1,3]

The splice operation can also be used to insert an array’s elements into another array at a certain index. How? You probably guessed it by now – it is the apply trick again.

var a =[1,2,6];
var b = [3,4,5]

a.splice.apply(a, [].concat(2, 0, b));
a;
//[1,2,3,4,5,6]

Confused about the need for [].concat? Apply expects an array of parameters that match the function’s signature; these are passed through as the arguments. The splice call to merge the arrays needs 2, the position to insert at; 0, the number of elements to remove; and b, the array of elements to be merged.

The concat call translates into [2,0,3,4,5] and this is equivalent to the call below:

a.splice(2,0,3,4,5);

4. Truncating and Emptying arrays

var a = [1,2,3];

//Truncating
a.length = 2;
a;
//[1,2]

//Or padding with undefined
a.length = 3;
a;
//[1,2,undefined]

//Emptying the array
a.length = 0;
a;
//[]

The ES5 spec explains this: once length is set to a value less than the original length, all elements at indices beyond the new length are deleted.

So why did I write the post?

1. So you know what to do if you ever run into these ‘smart’ pieces of code

2. To expose some of the other side of JS.

Hope you liked it, here are some more fun posts ;)

1. Three Important JavaScript Concepts
2. Understanding Tail Recursion
3. Understanding Partial Application and Currying

Software Development, Tools

7 Cool tricks with Chrome DevTools


1. $_

$_ holds the value of the last evaluated expression and is similar to the ‘_’ variable in Python’s REPL. However _ holds the last ‘non-None’ value while $_ holds the value of the last evaluated expression even if it is undefined.

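For example:

2 + 2;
//4

$_ + 1;
//5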

2. $() and $$() selectors

$() selects the first matching DOM element while $$() selects all matching DOM elements. Quite useful if jQuery is missing.

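For example, on any page with links:

//first matching element
$("a");

//all matching elements
$$("a");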

Tip: $() and $$() are aliases for document.querySelector() and document.querySelectorAll() respectively; if either of them gets overridden, simply re-assign the function references (remember to bind them to document). You can even create your own shortcuts!

$ = document.querySelector.bind(document);
slct = document.querySelector.bind(document);

3. Console.table

Very good for generating a nice view of an object in the console. Saves you from having to click about. It works with non-uniform object types too.

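A quick illustration (the sample data is made up):

var people = [
  { name: "Ade", role: "dev" },
  { name: "Bola", role: "ops" }
];

console.table(people);
//renders a sortable table with index, name and role columns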

4. DOM Element event listeners

There are two ways to do this:

1. Select an element in the Elements panel and then open the Event Listeners tab.

2. Select the element in the console and call getEventListeners on the object

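For example, with an element selected in the Elements panel:

getEventListeners($0);
//e.g. {click: Array[1]}

//or grab the element directly in the console
getEventListeners(document.querySelector("button"));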

5. $0 – $4

These refer to the last five inspected DOM elements, with $0 being the most recently selected one. These shortcuts are handy references to pre-selected DOM elements.

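For example, after inspecting an element in the Elements panel:

$0;           //the most recently inspected element
$1;           //the one inspected before it
$0.classList; //poke at it straight from the console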

6. Drag Drop in Elements View

It is possible to re-order the DOM structure using drag-and-drop gestures in the Elements panel. No need to get your hands dirty anymore – just drag and drop.

7. copy

Useful for directly copying JavaScript objects to the clipboard. Call copy on the object (as shown below) and paste wherever you want it.

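For example (the object here is made up):

var settings = { theme: "dark", fontSize: 14 };
copy(settings);
//the JSON representation is now on the clipboard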

And that’s it! Hope you learnt one or two new tricks to enhance your development.

Learning, Miscellaneous, SICP

SICP Section 2.2: New ideas and thoughts about programming


Here are 5 points to think about, hopefully they’ll trigger an ‘aha’ moment.

1. Leveraging the ‘Closure’ concept in programming

I am not talking about the ‘closure’ concept from programming (the one that involves free variables). This refers to the ‘closure‘ concept from mathematics and the power it brings to programming languages.

Mathematically, a set is closed under an operation if carrying out that operation on elements of the set always gives a result that is also in the set. For example, the set of positive integers is closed under addition but not under subtraction (1 – 3 = -2, which is not positive). Thus you can infinitely add positive numbers without worrying about ending up with non-positive numbers.

Let’s take this to programming: closed operations allow chaining because the result will always be a valid value for further operations. Does this strike a bell? Think fluent programming, think jQuery chains. This simple concept allows very complex actions by leveraging simple data and procedures.

2. Simple it might seem, difficult it might be

I initially thought Ex 2.18 would be dead easy; it was simple: reverse a Scheme list. I realized my folly after spending the better part of an hour battling the task. Unlike pointer-based lists in C-like languages, the Scheme list is a chain of pairs, each pair pointing to another pair and so on. This ruled out walking down the list and reversing pointer directions.

Recursion was tricky since getting the second half of a list pair (i.e. (cdr list)) brought back the entire sublist instead of just the next element! The reversal task involved the following restrictions:

  • No variables allowed
  • Only the top element of a sublist can be retrieved while walking down the list
  • The procedure should recursively construct a new list simultaneously
  • Retrieval of elements at arbitrary positions is difficult; although possible, the absence of variables makes this useless.

My first approach worked but created a wrong data structure; Alhamdulillaah I eventually got the right solution. An elegant solution I found online solved the reversal problem simply – reversing a list is equivalent to appending the first element of the list to the reversal of the remaining elements.
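The same idea translates readily to JavaScript arrays (a sketch of the concept, not the Scheme solution itself):

//reverse(list) = reverse(rest of list) followed by the first element
function reverse (list) {
  if (list.length === 0) {
    return [];
  }
  return reverse(list.slice(1)).concat(list[0]);
}

reverse([1, 2, 3]);
//[3, 2, 1]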

3. Languages influence thinking and problem modelling

Languages play a big role in the way we think and write code; most of us see programming tasks through an ‘imperative’ pair of glasses. Alhamdulillah the SICP book is giving me a new pair of glasses. Don’t we all need a new pair of glasses every now and then?

Imagine a simple task: calculate the sum of the squares of the odd numbers in an array. The snippets below show both the imperative and functional styles of solving the problem.

function isOdd(num) {
    return (num % 2) === 1;
}

function square(num) {
    return num * num;
}

//imperative style
function sumOddSq(arr) {
  var sumOddSq = 0;
  for(var i=0, l=arr.length; i < l; i++) {
      if(isOdd(arr[i])) {
          sumOddSq += square(arr[i]);
      }
  }
  return sumOddSq;
}

//functional style
function sumOddSq2(arr) {
  return arr.filter(isOdd)
            .map(square)
            .reduce(function (acc, val) {
                return acc + val;
            }, 0);
}

sumOddSq([1,2,3]);   //10
sumOddSq2([1,2,3]);  //10

The imperative approach involves walking through the array, checking for odd numbers, squaring those odd numbers and updating the sum as you go along. Using a functional programming approach, the task involves filtering the sequence for odd numbers, mapping the filtered numbers to their squares and then reducing these squares to a sum.

Functional programming relies heavily on the map, reduce and filter concepts and most problems can be solved based on these building blocks. For tasks involving non-list-like structures, converters can be used to create sequences and also generate structures from list results.

Did you notice? sumOddSq2 chains array operations; the map and filter operations are ‘closed’ over the set of arrays, so their results are always arrays that can be reused immediately, while reduce finally collapses the array into a single value.

4. Software engineering is still engineering

Even though software engineers manipulate bits and not ‘real’ hardware, general engineering concepts still apply. There are a couple of interesting similarities; for example, in electronics engineering, the diode can be viewed as an if statement while filters can be viewed as ‘filter’ logic. The electronics engineer uses a physical filter, the software engineer uses a filter function. Both combine these to build even more complex abstractions.

Another example is the computer system and the various dependent layers; a simplified complexity order is shown below:

Transistors -> Logic gates  -> Circuits -> Processors -> Computers -> Distributed Systems

The same applies to software engineering, an application can be seen as:

Values -> Methods -> Classes -> Models -> Subsystems -> Modules -> Applications

We all start from something simple and progressively add complexity, each layer being shielded from the layer below it. A computer engineer can swap processors without breaking a computer; so should a software engineer be able to swap out layers in his code without bringing down the application.

5. Recursion

The 8 queens problem is one that a lot of people are familiar with and as you guessed, there was an exercise on that. The solution was quite simple – recursively build the board starting out with one queen. As the board grows, filter out positions that have queens attacking each other, finally return the list of all positions that have safe queens.

The functional-programming styled solution involved enumerating all positions, creating a safe? function to determine a valid board, passing this function to the filter and then outputting the results. It is great to see how functions can be composed and how the basic functional programming support for map, reduce and filter enable even higher levels of complexity.

I must admit I struggled a lot with this problem as it was difficult to understand and debug – walking up the call stack did expose some of its intricacies, but it’s amazing how simple yet powerful the results of combining these simple concepts are.

Conclusion

And that’s about it. Next is section 2.3 with about 20 questions; I skipped about 2 or 3 questions in 2.2 since I didn’t really feel they were worth solving. I hope I can get 2.3 done by mid January insha Allaah and create another post then!

Related Posts

Here are a couple more of my SICP-themed posts

1. SICP Section 2.1 : Thoughts

2. 5 things I have gained from SICP: Section 1.2

3. SICP Section 1.3 : Thoughts and Concepts

Miscellaneous

The Differences between jQuery Deferreds and the Promises/A+ spec


A lot of developers use jQuery Deferreds to achieve promise-like behaviour. Since deferreds work for most scenarios, many do not realize that jQuery deferreds are not compliant with the Promise/A+ spec. Surprised? Well, there are probably other promise implementations that fall short of the spec too.

The schism is minor and might not really matter if promise libraries are not mixed. However, it is definitely good to know the difference – you never know when it’ll come in handy. So what’s the big issue and how does it affect developers?

The first difference is in the implementation of then; according to the Promise/A+ spec, then MUST return a promise regardless of what happens in its onFulfilled and onRejected handlers. Thus explicit reject calls, thrown exceptions or runtime errors will all lead to a rejected promise. jQuery deferreds have a different view of the world altogether – unhandled errors will bubble up until they are caught or reach window.onerror.

Let’s examine both scenarios; first the promise:

//dummy resolved promise
var p1 = Promise.resolve();

var p2 = p1.then(function() {
    throw new Error("Exception!");
});

console.log(p2);
//Promise {[[PromiseStatus]]: "rejected",
//[[PromiseValue]]: Error: Exception!}

And now, jQuery deferreds

var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
    throw Error('Exception!');
});

foo.resolve();
//Uncaught -> Error: Exception!

bar.state();
//pending

Another minor difference is the then function’s arity: the Promise/A+ specification says then should be dyadic while the Deferred’s then function is triadic. The jQuery implementation probably goes all the way back to the first promise proposal.

//Promise/A+
jsPromise.then(onresolve, onreject);

//jQuery Deferreds
deferred.then(onresolve,
              onreject,
              onprogress);

Why should I care?
Assume you want to try a potentially disruptive operation when a promise resolves. If you’re using a Promise/A+ compliant library, all you have to do is check for rejected promises – the rejection value will contain the information about the failure. This is simple, as there is no need to explicitly handle errors, and consistent, as you use the asynchronous promise style all through.

Deferreds will require you to explicitly handle all failures (e.g. by using try-catch blocks). This leads to a weird mixture of the asynchronous (promise-style) and synchronous (error-handling) programming styles. Moreover, if the error is unhandled, you can bid bye to all queued-up chained operations.

I am not going to say which approach is better – that’s your decision.

Yes! Workarounds

There are two workarounds: converting deferreds to Promises or ensuring the deferred then handler returns a promise.

The promise API will handle ‘thenables‘ (objects with a then function) as promises so mixing between different promise implementations is OK. Deferreds can be converted to promises (most libraries expose methods to do this).
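For example, assuming a native Promise implementation is available, Promise.resolve will assimilate a deferred because it is a thenable:

var deferred = new jQuery.Deferred();
var native = Promise.resolve(deferred);

var result = native.then(function () {
  throw new Error('Exception!');
});
//result is a Promise/A+ promise and will be
//rejected when the handler throws

deferred.resolve();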

To ensure deferreds always return promises, wrap error-throwing operations in try/catch blocks and return a promise that is rejected when exceptions occur.

Let’s see a code example of this second approach below:

var log = console.log.bind(console);
var foo = new jQuery.Deferred();
var bar = foo.then(function (rslv) {
 var tmp = new jQuery.Deferred();
 try {
    throw Error('Exception!');
 }
 catch (e) {
    tmp.reject();
 }
 return tmp.promise();
});
bar.fail(function (val) {
 log('Exception thrown and handled');
});
foo.resolve();

In case you are wondering, the jQuery promise derives from the jQuery deferred and has the same issues, while fail is syntactic sugar for handling promise failures. The promises-tests repo can be used to evaluate implementations for Promises/A+ compliance :).

tl;dr?

jQuery deferreds are not Promise/A+ compliant.

Learning, SICP

SICP Section 2.1 : Thoughts


Alhamdulillaah I wrapped up this section a few days ago – 13 of the solutions are here (if you are curious enough); the other three need proofs and I wasn’t in the ‘mood’ for too much interval arithmetic. So what did I learn?

1. More Scheme

This chapter introduced the concept of cons, car and cdr for manipulating pairs, which in turn can be used to create just about any data structure.

  • cons – construct; allows you to build a pair from two arguments
  • car – contents of address part of register, returns the first element of a cons pair
  • cdr – contents of decrement part of register, returns the second element of a cons pair
  • cadr – contents of address part of contents of decrement part of register; this is equal to (car (cdr value)); you can do things like caadr too – every extra a represents a car while a d represents cdr. Thus (caadr x) is equal to (car (car (cdr x)))

2. Abstraction

Abstractions provide elegant ways of solving problems. They allow the programmer to think at a ‘higher’ level without worrying too much about implementation details. This leads to a cleaner design and more flexible code.

Let’s take the ‘xor’ logic operator as an example; assuming you need to add negative integer support to a positive-integer-only multiplication system, there are two ways to do this:

  • Evaluate every possible combination, have four if/else blocks and then assign the right sign afterwards
  • Leverage the xor operation.

Here is the xor table:

x   y   x xor y
+   +   +
+   -   -
-   +   -
-   -   +

This matches the four scenarios for multiplying two numbers. Thus, you can create a wrapper around the existing system for multiplication and then use the xor operator to find the sign of the output. Worth a thought isn’t it?
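A minimal JavaScript sketch of the wrapper idea (multiplyPositive stands in for the assumed positive-only system):

//existing positive-integer-only routine (stand-in)
function multiplyPositive (a, b) {
  return a * b;
}

function multiply (a, b) {
  var negative = (a < 0) !== (b < 0); //xor of the signs
  var magnitude = multiplyPositive(Math.abs(a), Math.abs(b));
  return negative ? -magnitude : magnitude;
}

multiply(-3, 4);
//-12
multiply(-3, -4);
//12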

3. Architecture

There was a brief mention of the onion layer style of architecture which allows you to build software in layers – its main advantage is the ability to swap out any layer without ‘breaking’ the system.

Each level relies on the layer below it through a well-defined interface; you can swap layers as you think fit, you just have to follow the interface specifications. Another step in separating between data abstraction and implementation.

4. Church Numbers – a new numbering system?

The Church encoding system is a way of representing numbers and their operators in lambda calculus. The book introduces the Church zero and add-1 procedures first; I thought the zero procedure would evaluate to the value zero but was shocked to realize it was a procedure. Alhamdulillaah I finally got to understand it after reading Bill’s excellent post.

The main thing to know is this: It is possible to represent numbers even without using numbers!

Assuming a theoretical language that has only functions and no inherent support for numbers whatsoever; how do you add or represent numbers? Surprisingly this is easy to do – you could model numbers by counting the number of times you apply a function. This is what the Church numbers do – they apply input functions for a number of times that corresponds to the expected integer value.

The zero procedure takes in a procedure (which itself accepts another procedure) and calls it ‘zero’ times (i.e. never); similarly, the one procedure evaluates its input once (that’s one, right?). Generalizing these allows you to do addition and subtraction. While it was good to learn about lambda calculus, I do wonder how real-number arithmetic would be represented…
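A JavaScript sketch makes the counting idea concrete (the names are mine, not the book’s):

//zero applies f to x zero times; addOne applies f one extra time
var zero = function (f) {
  return function (x) { return x; };
};

var addOne = function (n) {
  return function (f) {
    return function (x) { return f(n(f)(x)); };
  };
};

//recover a plain integer by counting how many times f is applied
function toInt (n) {
  return n(function (count) { return count + 1; })(0);
}

var one = addOne(zero);
var two = addOne(one);

toInt(two);
//2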

5. The woes of Interval Arithmetic

There were a couple of exercises on interval arithmetic; what struck me about this was the way small changes in calculation could lead to huge differences in results. Two algebraically equivalent formulas could lead to varying results due to overlooked ‘assumptions’.

For example, if for some reason x / x is not equal to one (which can happen in interval arithmetic), then every operation that includes this division (implicitly or explicitly) can deviate from accurate results. The more ‘precision-losing’ operations are carried out, the more significant the deviation.

And that’s about it again. Section 2.2 has about 36 exercises and I hope to complete it by December insha Allaah. And yes, insha Allaah I’ll write about my thoughts again…
