Influential Books for programmers


I try to read a lot of books. Over the years, my ‘taste’ for books has been refined and some of my criteria are listed below.

Book Impact Scale

  1. Length: The 200 – 300 page range is just about right for a technical book. Longer books contain a lot of fluff and repetition. Sometimes I get the feeling that the author had to meet some minimum page count.
  2. Thought-provoking: I love technical books that challenge the way I think, enable me to see things in alternative light and exercise my ‘thinking’ muscles. Isn’t that the goal for reading after all? To broaden horizons and learn alternative viewpoints.
  3. Inclusion of challenges: I like books that provide practice exercises. This cements the new ideas – it is easy to ‘think’ you already know something when you don’t. Solving a problem is the litmus test for understanding.
  4. Enduring: Every field has its ‘classics’ and they exist for a good reason too. If a 30-year-old book is still in print, has lots of good reviews and teaches fundamentals, then chances are high it is a GOOD book. Add bonus points for language-agnostic themes.
  5. Simple: Most books fall short in knowledge delivery by employing obscure terms. This might be due to a mistaken assumption about readers’ expected skill levels. Clear, straightforward explanations make for easy reading.

Now, here come the books (listed in no particular order).

1. The Pragmatic Programmer

Excellent read. Medium-sized at 352 pages but bursting at the seams with sage advice. The authors used lots of interesting stories to drive home their points. Some of the lessons that have stuck with me include:

  • Coincidental programming – you don’t know why or how the code ‘works’. If adding that statement makes it ‘work’, then it might also cause bugs you can’t explain.
  • Learning:
    • Learn a new language every year
    • Master a single editor (got me to learn vim)
    • Become a command line expert
  • The broken windows theory – quickly leads to software rot
  • Programming close to the problem domain (SICP book did this too)
  • Being proud of your work – craftsmen are supposed to be proud of their work.
  • Bringing about the change you want
  • Test your software else your users will

Definitely influenced me in a big way and probably due for a reread now.

2. Structure and Interpretation of Computer Programs (SICP)

A master classic – one for anyone who wants to be a master programmer and has the patience to complete all the exercises. Reading the book without solving the questions would make it difficult to fully absorb the goodness.

The authors painstakingly organized the book – the pages flow so smoothly that the increase in complexity is almost imperceptible. You get exposed to a huge slew of programming paradigms (logic, functional, OOP), build cool stuff (an 8-queens solver, a calculus solver, an interpreter and a compiler) and also learn some CS concepts (streams, Ackermann functions, Huffman encoding etc.).

The biggest change for me is getting to understand that programming is all about solving the right problems using the right abstractions and in a logical and well-ordered manner. Languages don’t matter that much – they are just tools for shaping processes and conveying our thoughts. Java, JavaScript, Python, C, C++ are all beautiful languages. If you don’t know how to break down problems, evaluate alternatives and create solution pipelines, then it doesn’t matter what tool you use – the end result may work but won’t be a masterpiece.

Give a master carpenter the minimum tools he needs and he can create something beautiful. Give me a fully stocked wood workshop and while I can craft a table it probably would not come close to what the master did with his limited tools. My table might work but I probably won’t get any appreciation for it. Think this same way about programming.

For example, here is how they explained list reversal:

To reverse a list, just append the first element to the end of the reversed sublist
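
That one-liner translates almost directly into code. Here is a quick JavaScript sketch of the idea (my own, with arrays standing in for Scheme lists):

function reverse(list) {
    if (list.length === 0) {
        return [];
    }
    //reverse the rest of the list, then append the first element at the end
    return reverse(list.slice(1)).concat([list[0]]);
}

reverse([1, 2, 3]);
//[3, 2, 1]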

Be warned though, this is probably a year-long journey but one that you should undertake. It’s freely available and you can check out my repo of solutions too.

3. Code Complete 2

I had always wanted to read this book but I couldn’t bring myself to do it because it was over 1000 pages long. I finally found a strategy that worked – committing to a 30-minute reading window daily, mostly on the bus commute home.

The points from the book:

  • Avoid complexity at all costs; software is already difficult so make it as dumb as possible. Keep it plain and simple.
  • Always care about quality, regardless of the stage of software development you are in
  • Create programming models in your language i.e. try to create domain abstractions in the language; this makes it easier to maintain and express ideas
  • Programs are meant to be read by humans first and computers next
  • Be open to other ideas and influences – no need to get stuck on particular tools
  • Testing styles to ensure code coverage

It is a great book – a bit verbose at times but one that you should read too.

4. Programming Pearls and More Programming Pearls

A bit old but still readable – the books make it obvious how far programming has advanced and how many intricate machine details developers can now avoid knowing.

The two books are small and contain a lot of ‘pearls’ that make you think deeply; most revolve around events that happened during the author’s time at Bell Labs. The coding examples might be outdated but the solid principles in the books aren’t.

For example, a book teaching the basics of motion from the 60s would probably still be applicable today; motion fundamentals haven’t changed much. Same way the author’s approach of problem definition, isolation and iterative improvements can’t be faulted.

Highlights

  • Back of the envelope calculations – made me realize how these quick estimates could influence the way you program. Now you know why these questions are popular at interviews.
  • Over-engineering a solution might help it cope with unforeseen load
  • Iterative improvement – first solve the problem, then improve the algorithm if needed.
  • Confirm algorithmic correctness – it’s very easy to assume your algorithms work when they in fact contain bugs.
  • Programming for resource-constrained environments – how do you sort 4 million numbers on a limited-memory computer?

5. The Little Schemer + The Reasoned Schemer

The style is unlike any of the other books on this list and, surprisingly, I found it enjoyable. It flows as a series of questions and answers, although readers are encouraged to try before taking a peek at the answers.

The first few chapters were easy to breeze through (coming from a SICP background probably helped too); however, things get more interesting from about chapter 9 or thereabouts.

  • Provides a ‘formula’ for nailing recursion: fix the base case and then handle the recursive case

Learning recursion is easy: master the first part and then you can learn the remaining parts (pun intended).

  • Good explanation of the Y-combinator – a slow build-up to a still-challenging explanation of a hard-to-grasp concept.
  • Covers CS concepts like generators, memoization, closures, Church encoding, currying, continuation passing and more.

Worthy of mention

1. The Algorithm Design Manual (TADM)

Algorithms books are normally full of mathematical proofs (which are really important to understand). However, I like TADM because it focuses more on applying those algorithms in practical contexts. The various stories by the author and his humorous quips make it all the more fun too.

2. Eloquent JavaScript

The book that exposed me to functional programming and started the long JS adventure. I always recommend it since it is an excellent beginner book, free (despite its high quality) and is just awesome.

Conclusion

Looking back, it is interesting to see the subliminal influence these books have had on me. I only realized some of these while writing this review.

What are your favorite books and how have they influenced the way you write programs? Share in the comments.


Understanding and using Streams in JavaScript


Introduction

What do you think of the following code snippet?

NaturalNumbers()
.filter(function (n) { return n % 2 === 0; })
.pick(100)
.sum();

Isn’t it beautifully succinct and neat? It reads just like English! That’s the power of streams.

Streams are just like lists but offer more capabilities because they simultaneously abstract data and computation.

Streams vs Lists/Arrays?

Let’s take a scenario from mathematics: how would you model the infinite set of natural numbers? A list? An Array? Or a Stream?

Even with infinite storage and time, lists and arrays do not work well enough for this scenario. Why? If the largest integer an array can hold is x, then you’ve obviously missed out on x + 1. Lists, although not constrained at initialization, need to have every value defined before insertion.

Don’t get me wrong, lists and arrays are valid for a whole slew of scenarios. However, in this situation, their abstraction model comes up short. And when abstractions do not perfectly match problem models, flaws emerge.

Once again, the constraints of this problem:

  • The size of the problem set might be infinite and is not defined at initialization time  (eliminates arrays).
  • Elements of the set might not be defined at insertion time (eliminates lists).

Streams, which combine data and computation, provide a better abstraction for such infinite problem sets. Their ability to model infinite lists stems from lazy evaluation – values are only evaluated when they are needed. This can lead to significant memory and performance boosts.

The set of natural numbers starts from 1 and every subsequent number adds 1 to its predecessor (sounds recursive, eh?). So a stream that stores the current value and keeps adding one to it can model this set.

Note: as might be obvious by now, extra data structures might be needed to store previously generated stream values; streams typically hold only a current value and a generator for calculating the next value.

What is a Stream?

I published stream-js, a very small (4.1kb minified) library that provides stream processing capabilities. Grab it or read the source as the post builds on it.

Oh, do contribute to the repo too!

How do I create a stream?

The Stream constructor expects an initial value and a generator function; these two values form the stream head and tail respectively.

An empty stream has null head and tail values. In infinite streams, the tail generator will endlessly generate successive values.

var emptyStream = new Stream(null, null);

var streamOf1 = new Stream(1, null);

var streamOf2 = new Stream(1, function () {
    return new Stream(2, null);
});

var streamOf3 = Stream.create(1,2,3);

var streamFromArray = Stream.fromArray([1,2,3]);

Note: The fromArray method uses the apply pattern to spread the input array over the arguments of Stream.create above.
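
In other words, something like this minimal sketch (my guess at the internals, not the actual library source):

//a possible shape for fromArray: spread the array over the varargs create
Stream.fromArray = function (arr) {
    return Stream.create.apply(null, arr);
};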

Show me the code!

Now that you know how to create Streams, how about a very basic example showing operations on Streams vs Arrays in JS?

With Arrays

var arr = [1,2,3];
var sum = arr.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

With Streams

var s = Stream.create(1,2,3);
var sum = s.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

The power of streams

The power of streams lies in their ability to model infinite sequences with well-defined repetition patterns.

The tail generator will always return a new stream with a head value set to the next value in the sequence and a tail generator that calculates the next value in the progression.

Finite Streams

Stream.create offers an easy way to create streams, but what if this were to be done manually? It would look like this:

var streamOf3 = new Stream(1, function() {
    return new Stream(2, function() {
        return new Stream(3, function () {
            return new Stream(null, null);
        });
    });
});

Infinite Streams

Infinite Ones

Let’s take a dummy scenario again – generating an infinite series of ones (can be 2s too or even 2352s). How can Streams help? First, the head should definitely be 1, so we have:

var ones = new Stream(1, ...);

Next, what should tail do? Since it’s a never-ending sequence of ones, we know that the tail generator should return streams that look like the one below:

var ones = new Stream(1, function() {
    return new Stream(1, function() {
        ...
    });
});

Have you noticed that the inner Stream definition looks like the ones definition itself? How about having Ones use itself as the tail generator? After all, the head would always be one and the tail would continue the scheme.

var Ones = function () {
    return new Stream(1, /* HEAD */
        Ones /* REST GENERATOR */);
};

Natural Numbers

Let’s take this one step further. If we can generate infinite ones, can’t we generate the set of Natural numbers too? The recurring pattern for natural numbers is that elements are larger than their preceding siblings by just 1.

Let’s define the problem constraints and add checkboxes whenever a stream can be used.

  • Set is infinite ☑
  • Set has a well-defined recurring pattern ☑
  • Definition needs an infinite set of ones ☑

 

So can streams be used to represent natural numbers? Yes, stream capabilities match the problem requirements. How do we go about it?

The set of natural numbers can be described as the union of the set {1} and the set of all numbers obtained by adding ones to elements of the set of natural numbers. Yeah, that sounds absurd but let’s walk through it.

Starting from {1}, 1 + 1 = 2 and {1} ∪ {2} = {1,2}. Now, repeating the recursion gives rise to {1, 2} ∪ {2, 3}  = {1,2,3}. Can you see that this repeats indefinitely? Converting to code:

function NaturalNumbers() {
    return new Stream(1, function () {
        return Stream.add(
            NaturalNumbers(),
            Ones()
        );
    });
}

Execution walkthrough

The first call to NaturalNumbers().head() returns 1. The tail function is given below:

function () {
    return Stream.add(
        NaturalNumbers(),
        Ones()
    );
}
  • NaturalNumbers() is a stream that has a head of 1 and a tail generator that points to itself. Think of the sets {1} and Natural numbers.
  • Ones() is a stream with a head of one and a tail generator of ones.

Once invoked, this will give a new stream with a head of 1 + 1 and a new tail function that will generate the next number.

Building upon natural numbers

Generating the sets of even and odd numbers is a cinch – just filter the set of natural numbers!

var evenNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 0;
});

var oddNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 1;
});

Pretty simple right?

Who needs infinite sets?

Computers are constrained by storage and time limits so it’s not possible to ‘have’ infinite lists in memory. Typically only sections are needed at any time.

stream-js allows you to do just that:

  • Stream.pick: allows you to pick the first n elements of a stream.
  • toArray: converts a stream to an array

A typical workflow with stream-js would involve converting an input array to a stream, processing and then converting back to an array.

For example, here is the array of the first 100 odd numbers; need 1,000? Just pick them (pun intended).

var first100odds = oddNaturals.pick(100).toArray();

Note: Stream operations can be chained since most stream operations return new streams (i.e. are closed operations). Here is odo, v0.5.0 of stream-js.  Odo means river in Yoruba, the language of my tribe.

And that’s about it! I hope you enjoyed this, now read how to write a promise/A+ compatible library next.

How to implement the Y-combinator in JavaScript


This post provides a very simple step-by-step implementation of the Y-combinator in JavaScript. You should be able to implement the Y-combinator in your language of choice after reading this post; as you’ll see – it’s that easy.

What is a combinator?

According to Wikipedia:

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

The emphasized text highlights the most interesting part of the definition – combinators allow functions to be defined without variables. Imperative programming relies heavily on variables and trying to eschew variables can be a mind-stretching exercise.

Show me the code!

The following code snippet is a Y-combinator example of the factorial function in JavaScript.

var yCombFact = function (number) {
    return (function (fn) {
        return fn(fn, number);
    })(function(f, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * f(f, (n - 1));
        }
    });
};

yCombFact(5);
//120

Looks abstruse, right? No worries – let’s break it down.

Two major things

There are two major concepts that help drive home the understanding of the Y-combinator:

  1. No variables allowed – this implies we have to find a way to write the factorial function without using variables.
  2. Invoking the no-variable function defined in step 1, again without using variables.

Part 1: Rewriting the factorial function without variables

Here is the factorial function definition. Can you spot the variable usage in it?

var factorial = function(n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

factorial(5);
//120

The expression n * factorial(n - 1) only succeeds because it can find the factorial variable in scope; without it, the factorial function would not be recursive. But remember, we are trying to do away with all variables.

The workaround is to pass in the variable reference as a function parameter. In the factorial example, recursion can then be achieved by using the placeholder parameter as the reference. The no-variable-factorial function looks like the following:

var noVarFactorial = function(fn, n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * fn(fn, (n - 1));
    }
}

noVarFactorial(noVarFactorial, 5);
//120

The new definition works exactly like the old one but without the internal dependency on the factorial variable. Rather, recursion succeeds by relying on the ‘injected’  parameter and the computer is none the wiser.

Part 2: Invoking the no-variable function without variables

We have rewritten the factorial function to avoid variables; however, we still need to store the factorial function in a variable before invoking it:


var factorial = ...;

factorial(factorial, 5);

The trick? Functions to the rescue again! Let’s create a factorialCalculator function that uses the noVarFactorial definition above.

function factorialCalculator(n) {
    //as defined earlier
    var noVarFactorial = ...;
    return noVarFactorial(noVarFactorial, n);
}

factorialCalculator(5);
//120

The noVarFactorial name has to go since we want to avoid variables. And how do we achieve this? Yes, functions once more. So let’s create a wrapper function inside factorialCalculator that invokes noVarFactorial.

function factorialCalculator(n) {
    var wrapper = function (noVarFact) {
        return noVarFact(noVarFact, n);
    }
    return wrapper(noVarFactorial);
}

factorialCalculator(5);
//120

Unfortunately, the wrapper function has created another variable and this has to be eliminated too. For a complete implementation, the two variables (wrapper and noVarFact) have to go.

It’s now time to leverage language-specific idioms to achieve this. JavaScript has the IIFE idiom, which allows you to immediately invoke a function (read about it here). Using it, we can eliminate the need for the wrapper variable like this:


function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(noVarFactorial);
}

factorialCalculator(5);
//120

Combining all the pieces

The last thing is to substitute in the noVarFactorial definition so that it no longer exists as a variable in scope. Just as we do in mathematics, we can ‘substitute’ the value in. The final piece is then:

function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}

factorialCalculator(5);
//120

And that, my friends, is the Y-combinator in all its glory. I have decided to leave the variable names as they are to make it all clear, but here is the standard format so you know it when you see it:


function factorialCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}
factorialCalculator(5);

Conclusion

The Y-combinator is quite easy to understand – it requires understanding function invocation patterns, variable substitution by parameters and higher-order functions. As an exercise, can you try implementing the Fibonacci function using the Y-combinator approach? Better still, why not create a Y-combinator function that accepts any function matching the fn(fn, n) signature?
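
Here is one possible take on both exercises (my sketch, not a definitive implementation; the makeRecursive name is mine):

//Fibonacci via the same self-application trick
function fibCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function (f, n) {
        if (n <= 1) {
            return n;
        }
        return f(f, n - 1) + f(f, n - 2);
    });
}

fibCalculator(10);
//55

//a generalized helper for any function matching the fn(fn, n) signature
function makeRecursive(fn) {
    return function (n) {
        return fn(fn, n);
    };
}

var fact = makeRecursive(function (f, n) {
    return n <= 1 ? 1 : n * f(f, n - 1);
});

fact(5);
//120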

Related Posts

SICP Section 3.3 – 3.5 : Found a bug in memq


1. Is memq broken?

memq is a built-in list search function; it finds the first occurrence of a key in a list and returns a new list starting from that key.


(memq 3 (list 1 2 3 4))
; '(3 4)

(memq 5 (list 1 2 3 4))
; #f

Now that you know what memq does, let’s look at some weird behaviour:

(define x (list 1 2))
(define a (list 3 4))

; append x to a
(set! a (append a x))
(memq x a)
; #f -> x is not in a

Building on that foundation leads to the following conundrum:

(define x '(1 2 3))

; create a cycle: the tail of x is x itself
(set-cdr! x x)

; is x in x?
(memq x x)

; never-ending loop

memq tests whether the key exists in the list and if it does, then it returns the list starting from the first occurrence of the key. But what is the first occurrence in a cyclic list? Could this be the bug?

2. Algorithms, algorithms

  • Horner’s algorithm

This is useful for calculating polynomial values at discrete points; for example, given a polynomial function, f(x) = 7x³ + 4x² + 4; what is f(3)? A potential application (and possible interview question too) is to convert string values into numbers – 1234 is the value of x³ + 2x² + 3x + 4 when x is 10.


//assuming polynomial is represented from lowest power to highest

//i.e. 1234 -> [4, 3, 2, 1]

function horner(poly, base) {
    //Horner's rule: start from the highest power and repeatedly
    //multiply the running total by the base before adding each coefficient;
    //this avoids recomputing powers with Math.pow on every term
    var val = 0;
    for(var i = poly.length - 1; i >= 0; i--) {
        val = (val * base) + poly[i];
    }
    return val;
}

horner([4,3,2,1], 10);
//1234

  • Fast exponentiation

Because going twice as fast is more fun than going fast. First, the straightforward linear-time version:


function exponent (base, power) {
    var val = 1;
    while(power > 0) {
        val = val * base;
        power = power - 1;
    }
    return val;
}

Now, let’s look at fast exponentiation.

function fastExponent(base, power) {
    //base case: anything raised to the power 0 is 1
    if(power === 0) {
        return 1;
    }

    //handle odd powers
    if((power % 2) === 1) {
       return base * fastExponent(base, (power - 1));
    }

    //even powers: compute the half power once and square it
    var part = fastExponent(base, (power / 2));
    return part * part;
}

fastExponent(10,3)
//1000

Fast exponentiation grows logarithmically, Ο(log N), while the normal one is Ο(N). This same concept can be reapplied to similar scenarios, as shown below.
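
For instance, here is the same halving trick applied to multiplication-by-repeated-addition (a sketch of mine; SICP poses a similar exercise), turning O(N) additions into O(log N):

function fastMultiply(a, b) {
    if (b === 0) {
        return 0;
    }

    //handle odd multipliers
    if ((b % 2) === 1) {
        return a + fastMultiply(a, b - 1);
    }

    //even multipliers: compute the half product once and double it
    var part = fastMultiply(a, b / 2);
    return part + part;
}

fastMultiply(7, 8);
//56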

3. Streams

Functional programming offers many advantages but one potential downside is performance and needless computation. For example, while imperative programming offers quick-exit constructs (e.g. break, continue), functional programming constructs like filter, map and reduce have no such counterpart – the entire list has to be processed even if only the first few items are needed.

Streams offer an elegant solution to this issue by performing computations just in time. Data is lazily evaluated, which makes it possible to easily (and beautifully) represent infinite lists. Inshaaha Allaah I should explain this concept in an upcoming post. It’s beautiful, elegant and powerful.

Related Posts on my SICP adventures

  1. SICP Review: Sections 3.1 & 3.2
  2. SICP Section 2.5

SICP Review: Sections 3.1 & 3.2


Here are my thoughts on Sections 3.1 and 3.2 of my SICP journey; the deeper I go in the book, the more I appreciate the effort, style and work the authors put into it. Each section builds on earlier sections and it is amazing how it forces you to see software development from a new angle. Enjoy…

1. Perception and Design

Our perceptions of the real world influence the way we model objects in software development. The example in the book uses two objects – a rational number and a bank account. Modifications of a rational number always generate a new rational number that is different from the original (e.g. adding 1/2 to 3/8).  We do not ‘view’ rational numbers as being capable of changing – i.e. the value of 3/8 is always 3/8.

Conversely a bank account object has a balance that can ‘change’; withdrawals and deposits change this value even though we still say it’s the same bank account object! There appears to be a double standard, right? If internal details (i.e. the numerator or denominator) of a rational number change, we end up with a new rational number; however, if a bank account balance changes we still have the same bank account object.

Strange, isn’t it? The way we think does influence the way we write code.

2. Environment

The environment (context) is very important in programming languages even though we just sometimes forget about it. Every language has some global environment that determines what expressions, values and parameters mean. If you change the environment, then it becomes highly possible that the expression has a different meaning.

Let’s take an example from human languages: the word ‘wa’ means ‘come’ in Yoruba but can mean ‘and’ in Arabic. The same applies to programming languages; have you ever wondered why and where the + (a free variable) is defined? It has to be in the global environment so the programming language knows what it means. Now if you overrode the ‘+’ (why you’d do this beats me), then it could mean something else.
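
JavaScript won’t let you redefine +, but shadowing a name in a local environment shows the same effect (a toy sketch):

var plus = function (a, b) { return a + b; };

function weirdEnvironment() {
    //the local definition shadows the outer one; same expression, new meaning
    var plus = function (a, b) { return a - b; };
    return plus(3, 2);
}

plus(3, 2);         //5
weirdEnvironment(); //1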

3. Execution Order 

Exercise 3.08 was pretty fun; it was a simple ask – write a procedure f such that evaluating (+ (f 0) (f 1)) will return 0 if the arguments to + are evaluated from left to right but will return 1 if the arguments are evaluated from right to left.

While the exercise is trivial and probably just an academic exercise, it shows how assignments can influence program execution. My solution uses a closure to track the number of calls and returns a unary lambda. The lambda returns the value of its argument on its first call and a zero on subsequent calls.

Thus if the execution order is left-to-right, then the first call will have a parameter of 0 and further calls will return 0 leading to (+ 0 0) while if it was right-to-left, then the first call will have a value of 1 and further calls will evaluate to 0.
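
A rough JavaScript translation of the idea (my sketch, not the Scheme original):

function makeF() {
    var called = false;          //closure state: has f been called before?
    return function (x) {
        if (called) {
            return 0;            //subsequent calls always return 0
        }
        called = true;
        return x;                //the first call returns its argument
    };
}

var f = makeF();
//left to right: f(0) runs first, so f(0) + f(1) === 0 + 0 === 0
//right to left: f(1) runs first, so f(0) + f(1) === 0 + 1 === 1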

SICP Section 2.5


So, four months after I started and with more than 90 exercises solved, I can say Alhamdulillaah, chapter 2 is done! Section 2.5 was one of the most challenging so far; the exercises revolved around building large, easily extensible software systems. And here are the thoughts again :)

1. Coercion

The section revealed the importance of coercion in software development by creating a maths system for various numeric types (e.g. integers, rational, real and complex numbers). Since Scheme is not an OOP language, procedures were used to encapsulate each numeric type’s functionality and ‘install’ it into the system. A generic apply function multiplexes calls, routing them to the matching number type’s interface.

The challenge was in handling mixed operations (e.g. adding an integer to a complex number or vice versa). Since numeric types are progressively subsets of one another (i.e. an integer is also a complex number but not vice versa), it is possible to build the following hierarchy of types.

integer -> rational -> real -> complex

By leveraging this hierarchy, it becomes possible to do arithmetic operations on a mixed variety of number types. The system just needs to ‘raise’ all the numbers to the topmost type in the hierarchy and then invoke the arithmetic operation.

Let’s take it one bit further; if ‘raise’ is possible, then surely ‘demote’ can be done too! A good application of this is ensuring that the results of ‘raised’ operations are in their simplest terms. For example, the result of adding the mixed number types 1, 1 - i, 2.0 and -1 + i is 3 + 0i. The result can be progressively reduced to the integer 3 as shown below:

3 + 0i -> 3.0 -> 3 / 1 -> 3

And the icing on the cake? There is no need to write a demote function! The earlier raise procedure should demote number types if its hierarchy of types is reversed.

This assumes the system already supports two-way converters at each boundary of the hierarchy (e.g. integer->rational and rational->integer). There is no need for an integer->real converter since integers can be promoted to rational numbers and then to real numbers.

Neat huh?

2. Extending a system

Assume the system above has to support polynomial arithmetic. It turns out it is quite simple to do – why not view polynomials as another layer in the hierarchy of types? This new hierarchy is shown below:

integer -> rational -> real -> complex -> polynomial

An interesting abstraction isn’t it? It works too once all the interfaces are implemented. Designing software is indeed art…

Some extra learning points were the representation of polynomials: the sparse and dense approaches both had their pros and cons. A better fix was to enable the system to support both representation formats (simple trick: convert one form to the other and everything should be all good :) ).

Another interesting trick was polynomial subtraction; there is really no need to implement subtract if you already have addition implemented. All you have to do is add ‘negated’ versions of each polynomial term (see the sketch below). Simple but powerful.
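
A small JavaScript sketch of the trick (hypothetical representation: coefficient arrays, lowest power first):

function addPolys(p1, p2) {
    var result = [];
    var len = Math.max(p1.length, p2.length);
    for (var i = 0; i < len; i++) {
        result.push((p1[i] || 0) + (p2[i] || 0));
    }
    return result;
}

//subtraction for free: negate every term, then reuse addition
function subtractPolys(p1, p2) {
    return addPolys(p1, p2.map(function (coeff) { return -coeff; }));
}

subtractPolys([1, 2, 3], [1, 1, 1]);
//[0, 1, 2] i.e. (3x² + 2x + 1) - (x² + x + 1) = 2x² + x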

On to chapter 3 insha Allaah!

SICP Sections 2.3 & 2.4: Thoughts and Ideas


1. Top-Down Design

Most of the problems in the SICP book are solved in a top-down way with lower-level details deferred until needed. The focus on high-level details makes for expressive, flexible code since consumers depend on well-defined interfaces rather than implementations. Consequently, swapping and improving interfaces is a cinch – the dependency on high-level interfaces shields consumers from tight coupling to details.

Picture a calculus system for integer operations (e.g. differentiation); as expected, these higher order functions need support for addition/subtraction/division operators. A top-down design will implicitly assume the add/subtract operators will work appropriately and focus on the higher level operators first. These low-level details can even be mocked and the system unit tested to prove logic accuracy.

The benefit of such a delayed implementation comes when the system needs to be extended to support complex numbers – the higher level operations are still the same, only the low level add/subtract operators need to be changed. Once done, the entire system should work as before.

Design is subjective and this is one approach; there might be better ways for other scenarios but again this can be tried and leveraged.

2. Generics: Data-directed vs Message-Passing Styles

Generics are useful for maintaining and extending existing systems.

The data-directed style maintains a huge table of known procedures and looks up desired operations based on keys. Extending such systems involves adding the new function signatures to the table; this allows the entire system to start using the newly added functions (sounds like the adapter pattern).

The downsides include the extra overhead of a procedures’ table, the need for consumers to know about available operations and the enforcement of contracts in the system. However, benefits such as flexibility, encapsulation and ease of accretion of new features can’t be ignored.

Message passing involves sending messages to objects to trigger the right operations. Sounds familiar? Yes, it does sound like OOP and is one of its major tenets. An object keeps a list of possible operations and invokes the right method based on the message it receives (think of invoking an instance method in an OOP language as sending a ‘message’ to the object e.g. a.b() ).

Languages like Python and JavaScript allow for a clearer example of the message passing metaphor since they allow creating hashtables of functions. With such a hashtable, you can invoke a function by sending a ‘message’ to such functions e.g. a["b"]().
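
For instance (a toy example of mine, not from the book):

//an object as a table of operations; 'sending a message' is just a key lookup
var account = {
    balance: 100,
    deposit: function (amount) { this.balance += amount; return this.balance; },
    withdraw: function (amount) { this.balance -= amount; return this.balance; }
};

account.withdraw(30);   //70, the a.b() form
account["deposit"](50); //120, the a["b"]() form: message passing laid bare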

The question thus comes to mind, how do programming languages handle their primitive operations at a lower level? Do they do message passing? Keep tables?

3. Context

The importance of context and how it affects semantics applies to all languages (both human and programming).

Using the example from the book; three is equal to one plus two however ‘three’ is not equal to “one plus two”. The context determines what is meant and humans have long used this to create various literary forms such as puns, sarcasm etc.

The same applies to programming, context does determine if your program will run perfectly or if it will have subtle but difficult to find errors. One obvious example is the JavaScript that = this pattern.
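
For the unfamiliar, here is a brief sketch of that pattern (hypothetical counter object):

var counter = {
    count: 0,
    start: function () {
        var that = this;  //capture the right context...
        setTimeout(function () {
            that.count++; //...because 'this' inside the callback is not counter
        }, 1000);
    }
};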

4. Huffman Encoding

Some computer science and maths, eh? The Huffman encoding scheme allows you to use much less space, provided its conditions are met.

5. Understanding the problem deeply before writing any code

Coding should be simple. Unfortunately, a shallow understanding of the problem being solved makes it a difficult task.

Spending a good chunk of time to really understand the programming task at hand has great benefits: it leads to well-thought-out designs, simple models (simple is difficult!) and fewer bugs. It also reduces the chances of providing the wrong solution. It makes for a more fulfilling experience too – what beats the thrill of having zero compile errors on the first attempt?

If you liked this post then do check out a couple more exciting discoveries from my SICP journey:

1. SICP Section 2.2: New ideas and thoughts about programming
2. SICP Section 2.1 : Thoughts
3. SICP Section 1.3 : Thoughts and Concepts

SICP Section 2.2: New ideas and thoughts about programming


Here are 5 points to think about, hopefully they’ll trigger an ‘aha’ moment.

1. Leveraging the ‘Closure’ concept in programming

I am not talking about the ‘closure’ concept from programming (the one that involves free variables). This refers to the ‘closure’ concept from mathematics and the power it brings to programming languages.

Mathematically, a set is closed under an operation if carrying out that operation on elements of the set always gives a result that is also in the set. For example, the set of positive integers is closed under addition but not under subtraction (1 - 3 = -2, which is not positive). Thus you can add positive numbers indefinitely without worrying about ending up with a non-positive number.

Let’s take this to programming: closed operations allow chaining because the result will always be a valid value for further operations. Does this ring a bell? Think fluent programming, think jQuery chains. This simple concept allows very complex actions by leveraging simple data and procedures.

2. Simple it might seem, difficult it might be

I initially thought Ex 2.18 would be dead easy; the task was simple: reverse a Scheme list. I realized my folly after spending the better part of an hour battling it. Unlike pointer-based lists in C-like languages, the Scheme list is a chain of pairs, each pair pointing to another pair and so on. This ruled out walking down the list and reversing pointer directions.

Recursion was tricky since getting the second half of a list pair (i.e. (cdr list)) brought back the entire sublist instead of just one element! The reversal task involved the following restrictions:

  • No variables allowed
  • Only the top element of a sublist can be retrieved while walking down the list
  • The procedure should recursively construct a new list simultaneously
  • Retrieval of elements at arbitrary positions is difficult; although possible, the absence of variables makes this useless.

My first approach worked but created a wrong data structure; Alhamdulillaah I eventually got the right solution. An elegant solution I found online solved the reversal problem simply – reversing a list is equivalent to appending the first element of the list to the reversal of the remaining elements.

3. Languages influence thinking and problem modelling

Languages play a big role in the way we think and write code, most of us see programming tasks through an ‘imperative’ pair of glasses. Alhamdulillah the SICP book is giving me a new pair of glasses. Don’t we all need a new pair of glasses every now and then?

Imagine a simple task: calculate the sum of the squares of the odd numbers in an array. The snippets below show both the imperative and functional styles of solving the problem.

function isOdd(num) {
    return (num % 2) === 1;
}

function square(num) {
    return num * num;
}

//imperative style
function sumOddSq(arr) {
  var sum = 0;
  for(var i = 0, l = arr.length; i < l; i++) {
      if(isOdd(arr[i])) {
          sum += square(arr[i]);
      }
  }
  return sum;
}

//functional style
function sumOddSq2(arr) {
  return arr.filter(isOdd)
            .map(square)
            .reduce(function (acc, val) {
                return acc + val;
            }, 0);
}

sumOddSq([1,2,3]);   //10
sumOddSq2([1,2,3]);  //10

The imperative approach involves walking through the array, checking for odd numbers, squaring those odd numbers and updating the sum as you go along. Using a functional programming approach, the task involves filtering the sequence for odd numbers, mapping the filtered numbers to their squares and then reducing these squares to a sum.

Functional programming relies heavily on the map, reduce and filter concepts and most problems can be solved based on these building blocks. For tasks involving non-list-like structures, converters can be used to create sequences and also generate structures from list results.

Did you notice? sumOddSq2 chains array operations; the map, reduce and filter operations can be said to be ‘closed’ over the set of arrays. Thus the result of any of these operations is always an array and can be reused immediately.

4. Software engineering is still engineering

Even though software engineers manipulate bits and not ‘real’ hardware; general engineering concepts still apply. There are a couple of interesting similarities; for example in electronics engineering, the diode can be viewed as an if statement while filters can be viewed as ‘filter‘ logic. The electronics engineer uses a filter, the software engineer uses a filter function. Both combine them to build even more complex abstractions.

Another example is the computer system and the various dependent layers; a simplified complexity order is shown below:

Transistors -> Logic gates -> Circuits -> Processors -> Computers -> Distributed Systems

The same applies to software engineering, an application can be seen as:

Values -> Methods -> Classes -> Models -> Subsystems -> Modules -> Applications

We all start from something simple and progressively add complexity, each layer being shielded from the layer below it. A computer engineer can swap processors without breaking a computer; so should a software engineer be able to swap out layers in his code without bringing down the application.

5. Recursion

The 8 queens problem is one that a lot of people are familiar with and as you guessed, there was an exercise on that. The solution was quite simple – recursively build the board starting out with one queen. As the board grows, filter out positions that have queens attacking each other, finally return the list of all positions that have safe queens.

The functional-programming styled solution involved enumerating all positions, creating a safe? function to determine a valid board, passing this function to the filter and then outputting the results. It is great to see how functions can be composed and how the basic functional programming support for map, reduce and filter enable even higher levels of complexity.

I must admit I struggled a lot with this problem as it was difficult to understand and debug – walking up the call stack did expose some of its intricacies, but it’s amazing how simple and yet how powerful the results of combining these simple concepts are.

Conclusion

And that’s about it. Next is section 2.3 with about 20 questions; I skipped about 2 or 3 questions in 2.2 since I didn’t really feel they were worth solving. I hope I can get 2.3 done by mid January insha Allaah and create another post then!

Related Posts

Here are a couple more of my SICP-themed posts

1. SICP Section 2.1 : Thoughts

2. 5 things I have gained from SICP: Section 1.2

3. SICP Section 1.3 : Thoughts and Concepts

SICP Section 2.1 : Thoughts


Alhamdulillaah I wrapped up this section a few days ago – 13 of the solutions are here (if you are curious enough); the other three need proofs and I wasn’t in the ‘mood’ for too much interval arithmetic. So what did I learn?

1. More Scheme

This chapter introduced the concept of cons, car and cdr for manipulating pairs, which in turn can be used to create just about any data structure.

  • cons – means construct, allows you to build a pair from two arguments
  • car – contents of address part of register, returns the first element of a cons pair
  • cdr – contents of decrement part of register, returns the second element of a cons pair
  • cadr – contents of address part of contents of decrement part of register; this is equal to (car (cdr value)); you can do things like caadr too – every extra a represents a car while a d represents cdr. Thus (caadr x) is equal to (car (car (cdr x)))
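
As an aside, pairs are such a minimal abstraction that they can even be modelled with closures; here is a purely illustrative JavaScript sketch:

//cons returns a closure that remembers both values; car/cdr pick one out
function cons(a, b) { return function (pick) { return pick(a, b); }; }
function car(pair) { return pair(function (a, b) { return a; }); }
function cdr(pair) { return pair(function (a, b) { return b; }); }

var list = cons(1, cons(2, null));
car(cdr(list));
//2, i.e. what (cadr list) gives in Scheme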

2. Abstraction

Abstractions provide elegant ways of solving problems. They allow the programmer to think at a ‘higher’ level without worrying too much about implementation details. This leads to a cleaner design and more flexible code.

Let’s take the ‘xor’ logic operator as an example; assuming you need to add negative integer support to a positive-integer-only multiplication system, there are two ways to do this:

  • Evaluate every possible combination, have four if/else blocks and then assign the right sign afterwards
  • Leverage the xor operation.

Here is the xor table:

x    y    x xor y
+    +    +
+    -    -
-    +    -
-    -    +

This matches the four scenarios for multiplying two numbers. Thus, you can create a wrapper around the existing system for multiplication and then use the xor operator to find the sign of the output. Worth a thought isn’t it?
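
A quick JavaScript sketch of the wrapper idea (positiveMultiply is a hypothetical stand-in for the existing positive-only system):

//the existing system only knows how to multiply non-negative numbers
function positiveMultiply(a, b) {
    return a * b;
}

function multiply(a, b) {
    var negative = (a < 0) !== (b < 0); //xor of the signs
    var product = positiveMultiply(Math.abs(a), Math.abs(b));
    return negative ? -product : product;
}

multiply(-3, 4);  //-12
multiply(-3, -4); //12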

3. Architecture

There was a brief mention of the onion layer style of architecture which allows you to build software in layers – its main advantage is the ability to swap out any layer without ‘breaking’ the system.

Each level relies on the layer below it through a well-defined interface; you can swap layers as you see fit, as long as you follow the interface specifications. Another step in separating data abstraction from implementation.

4. Church Numbers – a new numbering system?

The Church encoding system is a way of representing numbers and their operators in lambda calculus. The book introduces the Church zero and add-1 procedures first; I thought the zero procedure would evaluate to the value zero but was shocked to realize it was a procedure. Alhamdulillaah I finally got to understand it after reading Bill’s excellent post.

The main thing to know is this: It is possible to represent numbers even without using numbers!

Assume a theoretical language that has only functions and no inherent support for numbers whatsoever; how do you represent or add numbers? Surprisingly, this is easy to do – you could model numbers by counting the number of times a function is applied. This is what the Church numbers do – they apply their input function the number of times that corresponds to the expected integer value.

The zero procedure takes in a procedure (which itself accepts another procedure) and calls it ‘zero’ times (i.e. never); similarly, the one procedure evaluates its input once (that’s one, right?). Generalizing these allows you to do addition and subtraction. While it was good to learn about lambda calculus, I do wonder how real number arithmetic would be represented…
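
Here is a sketch of the first few Church numbers in JavaScript (my translation, not the book’s Scheme):

//zero applies f no times; addOne wraps one extra application of f around n
var zero = function (f) {
    return function (x) { return x; };
};

var addOne = function (n) {
    return function (f) {
        return function (x) { return f(n(f)(x)); };
    };
};

//read a Church number back as a plain integer by counting applications
var toInt = function (n) {
    return n(function (i) { return i + 1; })(0);
};

toInt(zero);                 //0
toInt(addOne(addOne(zero))); //2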

5. The woes of Interval Arithmetic

There were a couple of exercises on interval arithmetic; what struck me about this was the way small changes in calculation could lead to huge differences in results. Two algebraically equivalent formulas could lead to varying results due to overlooked ‘assumptions’.

For example, if for some reason x / x is not equal to one (which can happen in interval arithmetic), then every operation that includes this division (implicitly or explicitly) can deviate from accurate results. The more ‘precision-losing’ operations are carried out, the more significant the deviation.
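
A tiny sketch of mine makes the x / x surprise concrete (intervals modelled as [lo, hi] arrays):

function mulInterval(x, y) {
    var products = [x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1]];
    return [Math.min.apply(null, products), Math.max.apply(null, products)];
}

//divide by multiplying with the reciprocal interval
function divInterval(x, y) {
    return mulInterval(x, [1 / y[1], 1 / y[0]]);
}

divInterval([1, 2], [1, 2]);
//[0.5, 2], not [1, 1]; anything built on x / x inherits the spread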

And that’s about it again. Section 2.2 has about 36 exercises and I hope to complete it by December insha Allaah. And yes, insha Allaah I’ll write about my thoughts again…

SICP Section 1.3 : Thoughts and Concepts


This section exposed me to the full power and beauty of functional programming. The expressiveness of using functions as values is awesome. For example, a sum of cubes function can use a sum function which itself relies on a reduce function. Think of higher order differentiation in mathematics and you do get the idea.

An interesting concept involves using lambda functions as operators: if the value of an expression is a function, then it can be used as an operator! This pattern allows using functions the same way numbers and strings are used. A derived usage can be found in JavaScript, which allows invoking function expressions using the function invocation operator ().

A lot of programmers are familiar with callbacks (which pass functions into functions). However callbacks might mask the full untapped potential of functional programming. A callback can be passed a function as a value – it doesn’t necessarily have to be a number / string all the time. The other issue with callbacks is that they do not return values and thus limit function composition.

There are no limits to using functions as values (provided there is language support). Alhamdulillaah I got my eye-opening experience after solving exercise 1.45. The main points are:

  • The repeated procedure accepts a procedure and returns a new procedure.
  • Repeated’s return value (a procedure) also accepts a procedure as input (a lambda) and returns another new procedure.
  • The final return value is then passed into the fixed-point procedure which does return a non-procedure value (although it can also be written to return a procedure).
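
For context, here is a simple JavaScript sketch of what repeated does (the Scheme original composes procedures; this loop version behaves the same):

//repeated(f, n) returns a new function that applies f n times in a row
function repeated(f, n) {
    return function (x) {
        var result = x;
        for (var i = 0; i < n; i++) {
            result = f(result);
        }
        return result;
    };
}

var square = function (x) { return x * x; };
repeated(square, 2)(3);
//81, i.e. (3²)²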

The subtle trick was to realize that functions could be substituted with values and used in just about every way a value could be (provided you handle it right).

The amazing expressive power and flexibility comes at a cost though – first-class procedures are difficult to support in languages and this explains why few languages provide this. The main challenge is finding a way to safely store and retrieve the procedure’s free variables when it is not currently executing.

A free variable is defined on c2.com as any variable that meets the following conditions:

  • It is not defined in the body of the function
  • It is not passed in as an input parameter
  • It is instead resolved from the environment enclosing the function

Alhamdulillaah I completed the chapter in two weeks (I initially thought it would take four). Chapter 2 is next with about 92 exercises – about 14 weeks insha Allaah? We’ll see!