C#, Languages, Programming Languages

C# Const vs Readonly


The const keyword declares constants – values that won’t change throughout the lifetime of a program. Constants must be initialized on declaration and are implicitly static. At compile time, the compiler replaces every usage of a constant with its literal value.

const int INCHES_IN_A_FOOT = 12;

const int CM_IN_A_METRE;
//Fails because it's
// not initialized on declaration

The constant-to-literal substitution has a downside – every external consumer of a constant bakes in its value and has to be rebuilt if it changes. For example, should pi be a const or readonly? A const should be fine; however, if its precision requirements change often, readonly removes the need for excessive downstream re-compilation.

It is semantically correct to use the const qualifier with reference types, but this serves no purpose: constants have to be initialized at declaration, and only null values are allowed for reference types (string is the only exception). Moreover, after such a null initialization, the constant can never be changed. What’s the use of a null const?

const List<int> intList = new List<int> { 1, 2, 3 };
//fails, only null values are
//allowed for const reference types

const List<int> intList2 = null;
//compiles but useless
//as it cannot be changed anymore


Unlike constants, which MUST be initialized at declaration, readonly values can be set at declaration or in a constructor. Once set, a readonly value cannot be changed anymore; such attempts result in compile-time errors. Since constructors can be overloaded, a readonly value is determined by its construction path.

using System;

public class Test
{
    public readonly int rdOnly = 3;

    public Test(int rdVal)
    {
        rdOnly = rdVal;
    }

    public Test()
    {
        rdOnly = 5;
    }

    public Test(string s) { }

    public static void Main()
    {
        Test test1 = new Test(100);     //rdOnly is 100
        Test test2 = new Test();        //rdOnly is 5
        Test test3 = new Test("Hello"); //rdOnly keeps its initializer value of 3

        test3.rdOnly = 2000;
        //compile error: a readonly field can only be
        //assigned in a constructor or field initializer
    }
}

If there is any chance that a ‘constant’ value might change and a lot of assemblies consume it, it’s better to use readonly – this removes the need to re-compile a lot of modules downstream.


  1. Const values are always implicitly static while readonly values are not.
  2. Const values have to be initialized on declaration while readonly values can be set on declaration or in the constructor – the only rule is that the value must be set before the constructor exits.
  3. On compilation, all const values are replaced with their literal values globally while readonly usages contain a reference to the sole declaration.

Caution! Usage Gotcha

Marking a reference type field as readonly ‘freezes’ the reference, not the referenced value. The readonly reference cannot be replaced with another, but you are free to change the contents of the referenced object!

public class ReadonlyCheck
{
    //assumes Point is a mutable class with a public xcoord field
    readonly Point rdOnlyP = new Point(0, 0);

    public void Modify()
    {
        rdOnlyP.xcoord = 5;        //fine: mutates the referenced object
        rdOnlyP = new Point(1, 1); //compile error: the reference is frozen
    }
}

Note that assigning to the readonly field inside a constructor would be perfectly legal – the error only occurs outside construction.


This post explained the difference between the readonly and const keywords in C# and explained when and how to use them. Here are a couple more posts you might enjoy:

1. Understanding Tail Recursion

2. Programming Language Type Systems I

3. Programming Language Type Systems II

Learning, Scheme

Reading the SICP BOOK

I have been longing to read the Structure and Interpretation of Computer Programs (SICP) for a long time – a lot of great programmers tout it as one of the ‘books‘ to read. In May I completed JS Allonge (a somewhat challenging read, but I got to understand the Y-combinator Alhamdulillaah) and I felt it was a good time to take on the SICP book; after all, how many 30-year-old computing books still remain relevant?

The general consensus is that SICP is a classic every programmer should read and that it’ll make you a better programmer by changing the way you think. Reading such glowing appraisals awakened my interest (who doesn’t want to become a rockstar programmer?) and I decided to do it. However, there is a catch – reading the book is not enough, you have to do all the exercises! So I resolved to do all the exercises and read for at least 25 minutes daily (1 pomodoro).

The exercises appear deceptively simple until you try to solve them! Then you realize they are just as deceptively difficult!! For example, one exercise asks you to find the two largest numbers out of a set of three – I bet a lot of people could write this in their sleep. However, since the book expects programs to be written in Scheme, and Scheme variables haven’t been discussed at that point, you have to make do without variables – now try that!

The good thing though is that the exercises are designed to fit the current context and buttress learning. Mathematical concepts and computer science principles are expertly interwoven into surprisingly simple explanations and real-life scenarios. Scheme is a nice language too – provided you can get past the scourge of parentheses.

Solving the exercises has been another eye-opener; I typically have to force myself to persevere. Alhamdulillaah I have been able to solve questions I typically would think were beyond me (I am probably too lazy). If I can’t solve a question in 25 minutes, I look for a solution online (see resources below); at times, I just want to compare my approach to others.

Here are some programming-perception changing concepts I picked up in the past few months:

1. Scope

A scope is just a group of expressions in which a binding has a name; it really does not have to be a function or class.

2. Free and bound variables

Free variables are those not defined in a procedure while bound variables are defined in it. Imagine the procedure below

function variable() {
   var a = 1;
   var b = 2;
   return a + b;
}

a and b are bound variables; however, + isn’t – it’s a free variable. Now think of what would happen if the ‘+’ operator could be bound to another value (quite possible in some languages). The original procedure works properly because its free variables are what they are expected to be; if they change, you end up in no man’s land. Powerful, right?

3. Recursive/Iterative processes and procedures:

Most recursive procedures can be re-written to spawn iterative processes. While this involves a couple of tricks, the performance and memory gains might justify the effort. Also, not every recursive procedure spawns a recursive process – some are actually iterative in execution (see tail recursion).
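A quick JavaScript sketch of the distinction (illustrative functions, not from the book): both are written recursively, but the first spawns a recursive process while the second spawns an iterative one.

```javascript
//recursive process: each call defers an addition,
//so pending work piles up on the stack
function sumTo(n) {
    if (n === 0) { return 0; }
    return n + sumTo(n - 1);
}

//iterative process: the running total travels with each call,
//so no work is left pending between calls
function sumToIter(n, total) {
    total = total || 0;
    if (n === 0) { return total; }
    return sumToIter(n - 1, total + n);
}

sumTo(5);     //15
sumToIter(5); //15
```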


If you are interested in reading the SICP book, here are a couple of helpful resources:

1. My SICP exercise solutions

2. Unofficial ebook version

3. Solutions to SICP Exercises

4. Bill the Lizard; he undertook the SICP journey before me

5. Eli Bendersky, he also undertook the SICP journey some time ago

6. List of solutions

Quotes from the SICP book

Programming is like chess; you might know the rules but not know the strategy, tactics and methods

The ability to visualize the consequences of the actions under consideration is crucial to becoming an expert programmer, just as it is in any synthetic, creative activity…

To become experts, we must learn to visualize the processes generated by various types of procedures. Only after we have developed such a skill can we learn to reliably construct programs that exhibit the desired behavior.


Finally, believe! You can do it,  just persevere when it gets tough…


Reading: September 2014

18 Minutes: Find Your Focus, Master Distraction, and Get the Right Things Done

Quite a good read; I especially like the tips on slowing down before reacting, doing less and improving focus. A tip I want to take away is planning my day ahead and fixing times for actions (something I do not currently do). I also like the way the author prunes his to-do list and the end-of-day reviews. Overall, I think this is a good book and I learnt some new tricks Alhamdulillaah.

A Microsoft Life

Doesn’t really teach anything new or technical; some stories about Microsoft, some humour and a lot of strong language. I found some parts off-putting.

Computer Science, JavaScript, Languages, Learning, Programming Languages, Python

Understanding Partial Application and Currying

Partial application and currying are two popular concepts from functional programming that help cut code duplication in certain scenarios. This article assumes an understanding of the JavaScript bind function; binding is related to partial application in JavaScript.

Partial Application

In pure functional languages, functions are not ‘invoked’; rather, a set of arguments is ‘applied’ to them. If not all the arguments are passed in, then the function is being ‘partially‘ applied.

Partial application converts functions of multiple parameters into functions of lesser arity by pre-specifying certain argument values.

Let’s take the exponential function (x^y) from the mathematics domain as an example: fixing the exponent to 2 gives the square function, while fixing it to 3 gives the cube function. In each case the base exponential function has one of its inputs fixed to a certain value – partial application creates new functions.

function add(a, b) {
    return a + b;
}

//partial application
function increment(n) {
    return add.bind(null, n);
}

var add2 = increment(2);
//add2(1) === 3

var add4 = increment(4);
//add4(1) === 5
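The exponential analogy above can be sketched the same way; exponent here is a hypothetical helper with the power parameter placed first, so it can be pre-filled via bind:

```javascript
function exponent(power, base) {
    return Math.pow(base, power);
}

//fix the power argument to derive new functions
var square = exponent.bind(null, 2);
var cube = exponent.bind(null, 3);

square(4); //16
cube(2);   //8
```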


Currying

No, not Indian/Pakistani curry (they taste awesome by the way… :) ). The name comes from Haskell Curry, a renowned mathematician who has a lot of things named after him (the Haskell language is another one).

Currying is somewhat similar – it turns a function of multiple parameters into a sequence of chained monadic (i.e. single-argument) functions. For example, a curried triadic function will be invoked as a chain of 3 monadic functions.

function add(a, b) {
    return a + b;
}

function curriedAdd(a) {
    return function (b) {
        return add(a, b);
    };
}

var add3 = curriedAdd(3);

That was a simple example, let’s look at a more generic curry function.

function curry(fn) {
    var arity = fn.length,
        exec = function (args) {
            args = args || [];
            return function helper() {
                var helperArgs = Array.prototype.slice.call(arguments);
                var argsTally = args.concat(helperArgs);
                if (argsTally.length >= arity) {
                    return fn.apply(null, argsTally);
                } else {
                    return exec(argsTally);
                }
            };
        };

    return exec();
}

function product(x, y, z) {
    return x * y * z;
}

var curriedProduct = curry(product);

curriedProduct(1, 2)(3);
curriedProduct(1, 2, 3);
curriedProduct(1)(2, 3);
//All return 6

How does this work?!

Let’s dive in and take it apart! Fun!!

1. The length property of the function being curried is its arity (number of expected parameters).

2. The exec function allows currying until all parameters are matched, its args array grows during repeated invocations.

3. The helper function is returned on curry calls. Its helperArgs array allows capturing new parameters until the original function’s arguments are all matched.

If the number of parameters in helperArgs equals the original function’s arity, then all parameters have been passed in and we can execute the function successfully. Otherwise, returning the exec function with the updated arguments allows currying to continue.

This process continues batching up arguments until the curried function’s arity is matched or exceeded.

4. Note that extra parameters are dropped when functions are called with excessive arguments. The curry tastes good with extra input ;)

Let’s trace it out!

A curried function’s execution is traced out below:

curriedProduct = curry(product);
-> exec()
-> helper()
//helper gets returned to curriedProduct

curriedProduct(1, 2)
-> helper(1, 2)
[argsTally: [1,2]; argsTally.length(2) < arity(3)]
-> exec([1,2])
-> helper()
//a new helper; args [1,2] available through its closure

curriedProduct(1, 2)(3)
-> helper(3)
[argsTally: [1,2,3]; argsTally.length(3) >= arity(3)]
-> product.apply(null, [1,2,3])
-> 6

Note: Technically, currying allows only monadic functions; the generic curry above is lenient since it also accepts argument batches.


It does seem that currying and partial application are the same thing, but for two subtle differences:

1. Currying only invokes the original function once all its parameters have been passed in, while partial application invokes the base function regardless of this.

2. Curried functions keep returning new functions until all arguments have been applied, whereas a partially applied function is the same base function with some pre-determined argument values. This becomes quite obvious with functions of arity greater than 2.
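Both differences can be sketched with a hypothetical triadic volume function: the partially applied version executes as soon as it is called (even with an argument missing), while the hand-curried version keeps returning functions until all three arguments arrive.

```javascript
function volume(l, w, h) {
    return l * w * h;
}

//partial application: invoking it runs volume immediately,
//even though h was never supplied
var partialVolume = volume.bind(null, 2, 3);
partialVolume(); //NaN - volume ran with h undefined

//currying: a chain of monadic functions; nothing runs
//until every argument has been passed in
function curriedVolume(l) {
    return function (w) {
        return function (h) {
            return l * w * h;
        };
    };
}

typeof curriedVolume(2)(3); //'function' - still waiting for h
curriedVolume(2)(3)(4);     //24
```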

Why should I care about this?

1. Allows you to think in a new way about programming.

2. Both concepts allow you to create new helper functions and utilities that are useful in a lot of situations.

3. Functional programming is cool.

4. At least you now (hopefully) know the difference between partial application and currying!


What do you think about both functional programming concepts? Add your comments and do check out other interesting posts.

1. Understanding Tail Recursion

2. Programming Language Type Systems II

3. Programming Language Type Systems I

4. Quirky Quirky JavaScript: Episode One

Computer Science, JavaScript, Languages, Programming Languages, Python

Understanding Tail Recursion

A tail call is the last call executed in a function; if a tail call leads to the parent function being invoked again, then the function is tail recursive.

function bar() {}
function baz() {}
function biz() {}

function foo() {
    return baz();
}

function foo2(x) {
    if (x) {
        return baz();
    } else {
        return biz();
    }
}
In the first function, foo, the baz() call is a tail call; in foo2, both baz and biz are tail calls because each is the last call executed before the function returns. The second example shows that a tail call doesn’t have to be on the last line of the function – it just has to be the last call the function makes before it returns.
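To make the boundary concrete, here is a sketch (with made-up functions) contrasting a call in tail position with one that is not; the pending addition forces foo4's frame to stay alive until baz returns:

```javascript
function baz() { return 1; }

function foo3() {
    return baz(); //tail call: nothing left to do after baz returns
}

function foo4() {
    return 1 + baz(); //NOT a tail call: the addition still has to run
}
```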

Deep Dive

Let’s analyze the factorial function.

function fact(n) {
    if (n <= 1) {
        return 1;
    } else {
        return n * fact(n - 1);
    }
}

fact(100000); //a large enough input
//RangeError: Maximum call stack
//size exceeded

A fact(3) call is traced out below:

fact(3) -> 3 * fact(2)
-> 3 * (2 * fact(1))
-> 3 * (2 * 1)
-> 3 * 2
-> 6

Every recursive call has to be totally evaluated before the final value can be calculated.

Now let’s make a tail-recursive version of the same function.

function fact2(n) {
    function tailFact(n, acc) {
        if (n <= 1) { return acc; }
        return tailFact(n - 1, n * acc);
    }

    return tailFact(n, 1);
}

Evaluation of this function now looks like this:

fact2(3) -> tailFact(3, 1)
-> tailFact((3-1), (3*1))
-> tailFact(2, 3)
-> tailFact((2-1), (2*3))
-> tailFact(1, 6)
-> 6

The tail-recursive function does not need to keep pending work on the stack; it calculates the factorial as it goes along! Each recursive call performs a simple calculation on its input arguments and passes the running result forward.

Thus, there is no need to push new frames onto the stack because each call already has everything it needs. As such, stack overflows would not occur with tail-call-optimized functions.

One neat little thing to note is that tail-recursive calls do not change the original values; in the factorial example above, the value of n is never mutated – rather, new n values are passed in to new function calls.

Optimizing tail calls
A few languages (e.g. Scheme) optimize tail calls so that they run as loops; however, other languages, e.g. JavaScript and Python, do not – even if a function is in tail-call form, it will still run out of stack space.

The trick to get a non-optimizing engine to simulate tail recursion involves the use of a trampoline; let’s see some examples below.

fact2(100000); //a large enough input
//tail-call form, but still runs out of space in JavaScript
//RangeError: Maximum call stack
//size exceeded

function trampoline(fn) {
    var result = fn;
    while (result instanceof Function) {
        result = result();
    }
    return result;
}

//Has a subtle error
function fact3(n) {
    function tailFact(n, acc) {
        if (n <= 1) { return acc; }
        return tailFact(n - 1, n * acc);
    }

    return trampoline(tailFact(n, 1));
}

fact3(100000);
//RangeError: Maximum call stack
//size exceeded

The fact3 trampoline call fails because we pass in the result of invoking tailFact, not a function reference; the recursion runs to completion (or blows the stack) before trampoline ever executes. The steps for a small input are traced out below:

fact3(2)
-> trampoline(tailFact(2, 1))
-> trampoline(tailFact(1, 2)) //recursion completes first
-> trampoline(2)

The trampoline is only called after the tailFact completes its recursive operations. We need to make two changes:

1. Return a function reference from the tailFact function.

2. Pass in a function reference to the trampoline function.

The Function.prototype.bind function is perfectly suited for this purpose; the transformed function is shown below.

//Tail recursive
function fact4(n) {
    function tailFact(n, acc) {
        if (n <= 1) { return acc; }
        //return a function reference
        return tailFact.bind(null, n - 1, n * acc);
    }

    //pass a function reference to the trampoline
    return trampoline(tailFact.bind(null, n, 1));
}


The biggest advantage of tail calls is that they allow you to do extensive operations without exceeding the call stack. A lot of work can be done in constant stack space without overflowing, because the frame of the currently executing function is re-used by the next function call.

A deeply recursive function rewritten in tail-call form can therefore handle arbitrarily large inputs. The space requirement drops from O(n) to O(1), since each new call reuses the existing frame rather than pushing a new one onto the stack.

Another advantage is that tail calls are essential for continuation-passing style; a related pattern is long fluent chains of calls (a popular example is jQuery’s function chaining). Imagine chaining a huge set of calls together without needing to worry about running out of stack space – after all, the executing functions have to be stored somewhere.
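A minimal sketch of such chaining (a toy calculator object, not jQuery): each method returns its host object so calls can be strung together.

```javascript
var calculator = {
    total: 0,
    //each method mutates state and returns 'this' to allow chaining
    add: function (n) { this.total += n; return this; },
    times: function (n) { this.total *= n; return this; }
};

//(0 + 2) * 5 + 1
calculator.add(2).times(5).add(1);
calculator.total; //11
```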

The disadvantage of tail-call-optimized code, however, is that it is difficult to debug: how would you know which call led to which if the stack frames are being reused?


Do share your thoughts about tail recursion in the comments or check out these other exciting posts!

1. JavaScript’s Function.prototype.bind

2. Quirky Quirky JavaScript: Episode One

3. The Immediately Invoked Function Expression

4. Functional JavaScript – Tail Call Optimization and Trampolines


I spent about a week trying to figure out why my original trampoline didn’t work until I ran into Don Taylor’s excellent article; it sure did help! Can you figure out why it’s wrong too?

function trampoline(fn) {
    var res = fn();
    while (res && res instanceof Function) {
        res = res();
    }
    return res;
}

JavaScript, Languages, Programming Languages

JavaScript’s Function.prototype.bind

The bind function allows you to set the context with which a function will execute. It takes in a ‘context’ object and returns a new function whose context is set to the passed-in object. This variadic method, which exists on all function objects, provides a workaround for the annoying and elusive ‘this’ context issue in JavaScript.

As web developers, we nearly all need to create references to object methods at some time or the other. The annoying thing is that the original context is usually lost when we need it most! Let’s see how bind can help in these scenarios.

var obj = {
    val: 10,
    getVal: function () {
        return this.val;
    }
};

var valGetter = obj.getVal;
valGetter(); //undefined - the context is lost

var boundValGetter = valGetter.bind(obj);
boundValGetter(); //10

Function Signature

fn.bind(thisObj, arg1, arg2, …)

thisObj refers to the new context for the function. However, if the newly returned function is used as a constructor (i.e. called with the new operator), the bound thisObj is ignored and this will be a freshly constructed object, as expected.

arg1, arg2, … are extra parameters that are prepended to the argument list the newly bound function is called with. If you bind two arguments in the original bind call and then pass three new arguments to the returned function, the two bound arguments appear before the three newly passed-in ones.

var fn = function () {};
var bound = fn.bind(window, 1, 2);

//equivalent forms
bound(3);
fn.call(window, 1, 2, 3);

If the bound function is invoked as a constructor, then the bound arguments are still passed on.
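A sketch of that constructor behaviour with a hypothetical Point constructor: under new, the bound this is discarded, but the bound argument is still prepended.

```javascript
function Point(x, y) {
    this.x = x;
    this.y = y;
}

//bind a throwaway context and pre-supply x
var BoundPoint = Point.bind({ ignored: true }, 1);
var p = new BoundPoint(2);

p.x;                //1 - the bound argument came through
p.y;                //2
p instanceof Point; //true - the bound 'this' was ignored
```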


The most common application is avoiding the ‘that = this’ pattern in JavaScript, especially when you need to pass context into event handlers, anonymous functions or similar callbacks.

var ordExample = {
    log: function () {
        console.log('logger called');
    },
    //asyncCall body assumed: it invokes the callback asynchronously
    asyncCall: function (callback) {
        setTimeout(callback, 0);
    },
    useAsync: function () {
        var that = this;
        this.asyncCall(function () {
            that.log();
        });
    }
};

//Bind form
var bindExample = {
    log: function () {
        console.log('logger called');
    },
    asyncCall: function (callback) {
        setTimeout(callback, 0);
    },
    useAsync: function () {
        this.asyncCall(function () {
            this.log();
        }.bind(this));
    }
};

ordExample.useAsync();
bindExample.useAsync();
//logger called
//logger called

As seen from the example above, we can use this in all sort of scenarios – from timers, Ajax callback handlers, event handlers, custom objects etc.


You cannot unbind a function after binding it; every call to Function.prototype.bind returns a new function whose ‘this’ context is permanently fixed to the supplied value.

var fn = function () { return this.a; };
var bound = fn.bind({ a: 1 });
bound.bind({ a: 2 })(); //still 1 - the first bind wins

Ben “Cowboy” Alman has written a Function.prototype.unbind polyfill here.

The other issue is overhead: bind adds an extra function call and is not as fast as the ‘that = this’ approach (a variable lookup versus a function invocation). Here is a jsperf run for four various bind styles.

So that’s it; now go bind some functions.

Software Development, Version Control

I Git! Stashing Explained

Imagine these scenarios when working in Git:

1. You do not want to commit your unfinished work but need to switch to a new branch to start some new work immediately.

2. A git pull results in conflicts however you do not want to commit local changes yet.

3. You realize you have worked in the wrong branch and want to move your uncommitted work to the right branch.

The Git stash command would help in these situations – it takes all the changes in your working directory and puts them in a stack you can always access later.

The command does two things:

1. It saves the working directory and index to a safe temporary place (the latest stash is usually at .git/refs/stash).

2. Then, it restores the working directory and index to the most recent commit (i.e. the commit pointed to by HEAD).

Thus, you can go ahead and switch to a new branch or complete the pull after a stash.

git stash

It might fail if you do not have any commit in the repository. You need to have at least a single revision as shown below:

git init
git stash
#fatal: Bad revision 'HEAD'
#fatal: Needed a single revision
#You do not have the initial commit yet

Now, let’s try again after making some commits to the repository

echo "Temp" > temp
git add temp
git commit -m "stash first commit"
git stash
#No local changes to save

Next, let’s stash some temporary changes; note that stashing does not save untracked files by default (git stash -u includes them; more on tracking vs staging areas in an upcoming post insha Allaah).

echo "staging" >> temp

#Add changes to staging area
git add temp
git status
#Should show staged changes to temp

echo "tracking" > tracking
git status
#shows the staged change and a new untracked file

git stash
#Saved working directory and index state
#WIP on master: 2038ddd stash

git status
#clean staging area
#the untracked file remains in the working directory

Using stashed work

The stash is a stack and can contain multiple changes. The following commands show how to interact with it.

#list all stashed changes
git stash list

#apply the topmost stashed changes
git stash apply

#Show applied changes
git status

#Stash still has applied stash
git stash list

#Pop the stashed change at ref 2
git stash pop stash@{2}

#Show applied changes
git status

#Verify stash@{2} was removed
git stash list

#delete stash at ref 3
git stash drop stash@{3}

#Delete all stash entries
git stash clear

The difference between apply and pop is simple: apply is non-destructive, it preserves the stashed entry on the stack while pop will remove it. The pop command can be seen as an apply + drop combo. And yes, it is possible to create a new branch from a stashed entry.

# create a new branch from stash
git stash branch newBranchForStash

It automatically checks out the branch too; but make sure the branch name is unique!

Done! Happy Stashing!!