Why JavaScript has two zeros: -0 and +0


Did you know there are two valid zero representations in JavaScript?

posZero = +0;
negZero = -0;

In pure mathematics, zero means nothing and its sign doesn’t matter: +0 = -0 = 0. Computers, however, cannot represent values with perfect fidelity and mostly rely on the IEEE 754 floating-point standard.

Most languages have two zeros!

The IEEE 754 standard for floating point numbers allows for signed zeros, thus it is possible to have both -0 and +0.  Correspondingly, 1 / +0 = +∞ while 1 / -0 = -∞ and these are values at opposite ends of the number line.

  • They can be viewed as vectors with zero magnitude pointing in opposite directions.
  • In the mathematical field of limits, negative and positive zeros show how zero was reached.

These two zeros can lead to issues as shown with the disparate ∞ results.

Why two zeros occur in IEEE 754

The standard dedicates a bit to the sign of each numeric value, independent of its magnitude. Consequently, if the magnitude of a negative number goes to zero without its sign changing, the result is a -0.
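In JavaScript terms, multiplying a negative number by zero or letting a tiny negative magnitude underflow preserves the sign bit:

-1 * 0;                // -0: the sign bit survives the multiplication
-Number.MIN_VALUE / 2; // -0: the magnitude underflows but the sign remains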

So why does this matter? Well, JavaScript implements the IEEE 754 standard and this post goes into some of the details.

Keep in mind that the default zero value in JavaScript (and most languages) is the positive zero (+0).

The zeros in JavaScript

1. Representation

let a = -0;
a; // -0

let b = +0;
b; // 0

2. Creation

All mathematical operations give a signed zero result (+0 or -0) that depends on the operand values.

The only subtle cases involve addition and subtraction of +0 and -0 themselves.

  • Adding two -0 values always gives -0
  • Subtracting 0 from -0 also gives -0

Any other combination of zero values gives a +0. Another thing to note: negative zeros cannot be created by adding or subtracting non-zero operands. Thus -3 + 3 = 3 - 3 = +0.

The code below shows some more examples.

// Addition and Subtraction
 3 - 3  // 0
-3 + 3  // 0

// Addition of zero values
-0 + -0; // -0
-0 -  0; // -0
 0 -  0; //  0
 0 + -0; //  0

// Multiplication
3 *  0  //  0
3 * -0  // -0

// Division
 3  / Infinity  //  0
-3  / Infinity  // -0

// Modulus
 6 % 2  //  0
-6 % 2  // -0

3. The issue with zero strings

There is a minor niggle with stringifying -0. Calling toString on -0 always gives the result “0”. On the flip side, parseInt and parseFloat do parse negative zero values.

Consequently, there is a loss of information in the stringify -> parseInt transformation. For example, if you convert values to strings (say, via JSON.stringify), POST them to some server and retrieve those strings later, the sign of any -0 is silently lost.

let a = -0;
a.toString(); // '0'

parseInt('-0', 10); // -0
parseFloat('-0');   // -0
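JSON round trips exhibit the same loss since JSON.stringify serializes -0 as plain '0':

JSON.stringify(-0); // '0' – the sign is gone
JSON.parse('-0');   // -0 – recoverable only if the source text kept the sign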

4. Differentiating between +0 and -0

How would you tell one zero value apart from the other? Let’s try comparison.

-0 === 0;        // true
(-0).toString(); // '0'
0..toString();   // '0'

-0 <  0; // false
 0 < -0; // false

0..toString() is valid JavaScript – the first dot terminates the numeric literal 0. and the second dot begins the method call.

ES2015’s Object.is method tells them apart:

Object.is(0, -0); //false

ES2015’s Math.sign method for checking the sign of a number is not much help either, since it returns 0 and -0 for +0 and -0 respectively.

Since ES5 has no such helper, we can use the difference in behaviour of +0 and -0 to write our own.

function isNegativeZero(value) {
    value = +value; // cast to a number
    if (value) {
        // any non-zero value is definitely not -0
        return false;
    }
    let infValue = 1 / value; // +Infinity for +0, -Infinity for -0
    return infValue < 0;
}

isNegativeZero(0);    // false
isNegativeZero(-0);   // true
isNegativeZero('-0'); // true

5. Applications

What is the use of knowing all this?

1. One example: suppose you are doing some machine learning and need to differentiate between positive and negative values for branching. If a -0 result gets coerced into a positive zero, this could lead to a tricky branching bug.

2. Another scenario involves compiler writers optimizing code. Expressions that evaluate to zero, e.g. x * 0, cannot be optimized away since the sign of the result depends on the value of x. Blindly replacing such expressions with a 0 would introduce a bug.

3. And know too that there are lots of languages that implement IEEE 754. Let’s take Java and C# for example:

// Java
System.out.print(1.0 / 0.0);  // Infinity
System.out.print(1.0 / -0.0); // -Infinity
// C#
Console.WriteLine(1.0 / 0.0);  // Infinity
Console.WriteLine(1.0 / -0.0); // -Infinity;

Try it in your language too!

6. IEEE specifications

The IEEE specifications lead to the following results:

Math.round(-0.4); // -0
Math.round(0.4);  //  0

Math.sqrt(-0);  // -0
Math.sqrt(0);   //  0

1 / -Infinity;  // -0
1 /  Infinity;  //  0

Rounding -0.4 leads to -0 because it is viewed as the limit of a value as it approaches 0 from the negative direction.

The square root rule is one I find strange; the specification says: “Except that squareRoot(–0) shall be –0, every valid squareRoot shall have a positive sign.” If you are wondering, IEEE 754 is also the reason why 0.1 + 0.2 != 0.3 in most languages; but that’s another story.

Thoughts? Do share them in the comments.


Book Review: Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen‘s ‘Build your own AngularJS‘ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building bricks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike the regular transclude which will include some DOM structure in some new location; element transclusion provides control over the element itself.

So why is this important? Imagine you could mark some element to only show up under certain conditions; element transclusion ensures that its DOM structure is only created and linked when you need it. Need some DOM content repeated n times? Just use element transclusion to clone and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The array of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too, thanks Karma!); the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box but, as expected, those who want a bit more power can get their desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }}
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file has two functions that pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: the other insightful trick was using function objects to ensure reference value integrity – create a function and use it as the reference (see the sketch after the snippet below).
  2. As dictionaries: functions are objects after all and while it is unusual to use them as objects, there is nothing saying you can’t.

function a() {}

a.extraInfo = "extra";
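And here is a minimal sketch of the first trick; the book uses a fresh function as a sentinel initial value (e.g. for watches) so that any real value – even undefined – registers as a change:

function initialValue() {} // a unique reference, equal only to itself

var lastValue = initialValue;
var newValue; // undefined

if (newValue !== lastValue) {
    // runs even though newValue is undefined: no real value
    // can ever equal the unique function reference
}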

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only issues had to do with dependencies; there were a few problems with outdated ones but nothing too difficult. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to version 2 but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

Influential Books for programmers


I try to read a lot of books. Over the years, my ‘taste’ for books has been refined and some of my criteria are listed below.

Book Impact Scale

  1. Length: The 200 – 300 page range is just about right for a technical book. Longer books contain a lot of fluff and repetition. Sometimes I get the feeling that the author had to meet some minimum page count.
  2. Thought-provoking: I love technical books that challenge the way I think, enable me to see things in alternative light and exercise my ‘thinking’ muscles. Isn’t that the goal for reading after all? To broaden horizons and learn alternative viewpoints.
  3. Inclusion of challenges: I like books that provide practice exercises. This cements the new ideas – it is easy to ‘think’ you already know something when you don’t. Solving a problem is the litmus test for understanding.
  4. Enduring: Every field has its ‘classics’ and they exist for a good reason too. If a 30-year old book is still in print, has lots of good reviews and teaches fundamentals; then chances are high it is a GOOD book. Add bonus points for language-agnostic themes.
  5. Simple: Most books fall short in knowledge delivery by employing obscure terms. This might be due to a wrong assumption about readers’ skill levels. Clear, straightforward explanations make for easy reading.

Now, here come the books (listed in no particular order).

1. The Pragmatic Programmer

Excellent read. Medium sized at 352 pages but bursting at the seams with sagely advice. The authors used lots of interesting stories to drive home the points. Some of the lessons that have stuck with me include:

  • Coincidental programming – you don’t know why or how the code ‘works’. If adding that statement makes it ‘work’, then it might also cause bugs you can’t explain.
  • Learning:
    • Learn a new language every year
    • Master a single editor (got me to learn vim)
    • Become a command line expert
  • The broken windows theory – quickly leads to software rot
  • Programming close to the problem domain (SICP book did this too)
  • Being proud of your work – craftsmen are supposed to be proud of their work.
  • Bringing about the change you want
  • Test your software else your users will

Definitely influenced me in a big way and probably due for a reread now.

2. Structure and Interpretation of Computer Programs (SICP)

A master classic – one for anyone who wants to be a master programmer and has the patience to complete all the exercises. Reading the book without solving the questions would make it difficult to fully absorb the goodness.

The authors painstakingly organized the book – the pages flow so smoothly that the increase in complexity is almost imperceptible. You get exposed to a huge slew of programming paradigms (logic, functional, OOP), build cool stuff (an 8-queens solver, a calculus solver, an interpreter and a compiler) and also learn some CS concepts (streams, Ackermann functions, Huffman encoding etc).

The biggest change for me is getting to understand that programming is all about solving the right problems using the right abstractions and in a logical and well-ordered manner. Languages don’t matter that much – they are just tools for shaping processes and conveying our thoughts. Java, JavaScript, Python, C, C++ are all beautiful languages. If you don’t know how to break down problems, evaluate alternatives and create solution pipelines, then it doesn’t matter what tool you use – the end result may work but wouldn’t be a masterpiece.

Give a master carpenter the minimum tools he needs and he can create something beautiful. Give me a fully stocked wood workshop and while I can craft a table it probably would not come close to what the master did with his limited tools. My table might work but I probably won’t get any appreciation for it. Think this same way about programming.

For example, here is how they explained list reversal:

To reverse a list, just append the first element to the end of the reversed sublist
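In JavaScript, that definition translates almost directly into a recursive sketch:

function reverse(list) {
    if (list.length === 0) {
        return [];
    }
    // reverse the sublist, then append the first element to its end
    return reverse(list.slice(1)).concat([list[0]]);
}

reverse([1, 2, 3]); // [3, 2, 1]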

Be warned though, this is probably a year-long journey but one that you should undertake. It’s freely available and you can check out my repo of solutions too.

3. Code Complete 2

I had always wanted to read this book but could never bring myself to do it because it was over 1000 pages long. Finally, I found a strategy that worked – committing to a 30-minute read window daily, mostly on the bus commute home.

The points from the book:

  • Avoid complexity at all costs; software is already difficult so make it as dumb as possible. Again, make it plain simple.
  • Always care about quality regardless of your state in software development
  • Create programming models in your language i.e. try to create domain abstractions in the language; this makes it easier to maintain and express ideas
  • Programs are meant to be read by humans first and computers next
  • Be open to other ideas and influences – no need to get stuck on particular tools
  • Testing styles to ensure code coverage

It is a great book – a bit verbose at times but one that you should read too.

4. Programming Pearls and More Programming Pearls

A bit old but still readable – it made obvious how much programming has advanced; developers can now avoid knowing intricate computer details.

The two books are small and contain a lot of ‘pearls’ that make you think deeply; most revolve around events from the author’s time at Bell Labs. The coding examples might be outdated but the solid principles in them aren’t.

For example, a book teaching the basics of motion from the 60s would probably still be applicable today; the fundamentals of motion haven’t changed much. In the same way, the author’s approach of problem definition, isolation and iterative improvement can’t be faulted.

Highlights

  • Back of the envelope calculations – made me realize how these quick estimates could influence the way you program. Now you know why these questions are popular at interviews.
  • Over-engineering a solution might help it cope with unforeseen load
  • Iterative improvement – first solve the problem, then improve the algorithm if needed.
  • Confirm algorithmic correctness – it’s very easy to assume your algorithms work when they in fact contain bugs.
  • Programming for resource-constrained environments – how do you sort 4 million nodes on a limited memory computer?

5. The Little Schemer + The Reasoned Schemer

The style was nothing like any of the others and I surprisingly found it enjoyable. It flows as a series of questions and answers although readers are encouraged to try before taking a peek at the answers.

The first few chapters were easy to breeze through (coming from a SICP background probably helped too) however things get more interesting from about chapter 9 or thereabout.

  • Provides a ‘formula’ for nailing recursion: fix the base case and then handle the recursive case

Learning recursion is easy, master the first part and then you can learn the remaining parts (pun intended).

  • Good explanation of the Y-combinator – a slow build-up but still challenging explanation of a hard-to-grasp concept.
  • Covers CS concepts like generators, memoizations, closures, Church encoding, currying, continuation passing and more.

Worthy of mention

1. The Algorithm Design Manual (TADM)

Normally, algorithms books are full of mathematical proofs (really important to understand). However, I like the TADM because it focuses more on applying those algorithms in practical contexts. The various stories by the author and his humorous quips make it all the more fun too.

2. Eloquent JavaScript

The book that exposed me to functional programming and started the long JS adventure. I always recommend it since it is an excellent beginner book, free (despite its high quality) and is just awesome.

Conclusion

Looking back, it is interesting to see the subliminal influence these books have had on me. I only realized some of these while writing this review.

What are your favorite books and how have they influenced the way you write programs? Share in the comments.

Advice for aspiring programmers


I have made a couple of mistakes over the years and wanted to share those pitfalls so upcoming programmers know what to avoid and what works.

1. Focus

You can’t do everything – I don’t think it is possible for one person to simultaneously be a pro at writing high quality code, network engineering, computer hardware and machine learning.

Some years ago, a much younger and naive me wanted to do everything in software engineering, use several operating systems and know enough about network engineering. Did I succeed at that? No, I just diluted my efforts and wasted loads of time (was fun though…).

Laser-like focus is essential as the field of computing is extremely wide. Decide the field you are interested in, narrow down a learning path and get your hands dirty. For example, if you choose software engineering, you’ll have to select one of three tiers: web/mobile/desktop. Once decided, just work on completing a sample project. The more projects you have under your belt, the better you become and the more learning you do.

2. Dedication

You probably won’t become good enough in one or two years. That ‘learn X in 24 hours’ book? It gives you a feel for the language syntax but there is more to mastery than that. You need to learn language idioms, problem analysis and clean architectural design.

This takes time – a lot of time: Norvig puts it at 10 years, just like mastery in any other field. It is a long haul, so set your mind to it and you should hopefully start seeing results in about 5 years.

3. Consistency

Little drops of water consistently dripping on the same spot can wear down rocks.

The rush of excitement you feel at the beginning is eventually going to wane. Once it becomes a dreary old routine, what will keep you going at it? Consistency and habits can compound in surprising ways.

Assuming you get 1% better each day, after a year you’ll be about 37 times better. Conversely, if you lose 1% every day, then after a year you’ll be at about 0.03 of what you were. This is an extreme example but…
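The arithmetic, for the doubters:

Math.pow(1.01, 365); // ≈ 37.78
Math.pow(0.99, 365); // ≈ 0.026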

Small incremental progress over a long period of time leads to big increases. For example, a developer that dedicates 25 minutes daily to improving his skills would have spent about 152 hours honing his skills after a year. That’s nearly a month of full-time work. Can you study intensely for one month without burning out?

Can’t afford the 25 minutes a day? How many minutes do you spend daily on Facebook? Multiply that by 365 to see where your time is going… Oh, and add in some more hours for games, TV and Twitter too. You see, little things add up quickly over long periods of time.

Now once you are consistent, what’s the next step?

4. Seek mastery before moving on

Now that you have focus, the next obvious step is to stick to a path. Frameworks pop up every second in software development but hopping around from one framework to another is ill-advised: you would likely end up as a jack of all frameworks and master of none. I agree and encourage exploratory learning but jumping on every fad bandwagon is going to get you nowhere. Rather, make a strategic choice, stick to it and commit to learning it deeply.

Does this make you less marketable? After all, you want to be that guy who knows how to write code in 10,000 languages. No – mastering a few increases your worth; you can now charge people more for expertise in your domain. Think of it this way: if you wanted a dining table masterpiece, what kind of carpenter would you prefer – one that specializes in dining tables or a generalist?

5. Learn the fundamentals

I can’t emphasize this enough. Let’s take a story: EmberJS and AngularJS brought about widespread adoption of 2-way binding across UI interfaces and this in turn triggered the proposal for Object.observe as a JavaScript language method. More recently, the new path has been reactive programming and the virtual DOM, eliminating the need for observables; you guessed it, Object.observe got retired from the language. Now, a counter example: Dijkstra’s algorithm has been around since 1956 and it is still in use after 60 years! Can you beat that?

You have to know algorithms, data structures, operating system fundamentals, caching, Mathematics, etc. These concepts are language-agnostic and provide a solid basis for solving new problems. That cool functional reactive somersaulting framework you are harping about today? It may be gone in 10 years. Fundamental CS knowledge? Evergreen, safe investment.

6. Be Humble

There are two things you should keep in mind:

  • There would always be a programmer better than you
  • You probably overestimate your skills

Do you disagree with me? Check out your ranking on the programmer skills matrix. Next, read Michael Church’s excellent adders analogy. Unless you are already at 2.5+, there is no need to brag about your skills. Instead, quietly keep pushing the limits and some day you will hopefully earn bragging rights.

Humility is a great characteristic to have – there are a ton of things you do not know and need to know. And it takes time to reach the levels others have reached. Even if you think you are a rockstar, people generally appreciate a humble confident programmer who doesn’t rub his ‘magnificence’ into their faces all the time.

Need a yardstick to measure your progress? Periodically check back on both lists to track your progress.

7. Learn how to communicate

Most of software engineering is not about writing code – a large amount of time is spent discussing projects, evaluating designs and interacting with customers. The ability to clearly convey thoughts is one most programmers lack and it is a clear differentiator.

Being able to express yourself means you can get help with problems because you can explain them to others; it also means you can convince people to join your open source project or even adopt your architectural design. All of this comes from being able to sell yourself and your thoughts.

Communication is not limited to speech only though, writing well is another part. Want to improve? Write more, speak more, volunteer to be a spokesperson – practice makes perfect.

8. Get a mentor & be visible

Stand on the shoulders of giants to see farther. You want to avoid making avoidable mistakes if you can, accelerate your growth and ensure you are on the right path. Great mentors help you grow, introduce you to their networks and provide insightful feedback. They have a lot of experience which you don’t.

Don’t hide under a rock! People should know you for what you do!! Would you buy Microsoft Word if you never heard of it? Seek out peers in the community, interact with them and share ideas. Help one another grow and help influence the community. Being visible makes it easier to find mentors, jobs or even new freelance opportunities. Don’t want to go out? Then create a blog, comment on blogs, release software on Github or participate in fora.

9. Be open

No, your favorite language/IDE/tool is not the best thing since bread and butter. It has its own flaws and strengths.

You can have pretty strong opinions on anything, for example, favorite languages. However what you shouldn’t do is get so tied to some language that you refuse to try out any other language. There would always be situations where your favorite tool doesn’t cut it; in such scenarios, find the best tool and deploy it.

A Toyota Camry can probably carry logs of wood but you are better off moving such with a heavy freight truck.

I used to think I understood recursion until I read SICP; I was forced to write programs without resorting to variables and suddenly, simple toy programs became quite challenging. Exposure to various tools will empower you with various problem solving approaches. This is always a good thing since software engineering is all about solving the right problems.

10. Startups can be overglorified

I think most times, the start-up concept is overglorified – it involves a lot of work, huge risk and great rewards on success. However, the idea that once you make that awesome piece of software, it’ll automatically sell itself is mostly false. That rarely happens.

The saddening part, however, is that most of us do not know how to deliver good software, let alone sell it. Yet we want to run the business. Imagine handing over a truck on the highway to an unqualified driver who barely knows the ropes. What’s that? A recipe for a crash, right? Yup, so why do we do the same with startups?

Software is typically not the critical part of most businesses; meeting the user’s need is. You need to learn how to write software first, then learn about business and entrepreneurship – creating an MVP, build-measure-learn cycles, customer interactions, acquisition funnels, maintenance and support, delivery pipelines, execution and meeting competitors. Once you know this, you can see the bigger picture – the software is not the goal, the business is. The best piece of software that no one uses is a failure.

There is nothing wrong with working in an established shop for some time to learn the ropes, create connections and see how a business is run. You can even start testing the waters, and at least you have a safe cushion if things go south.

And that’s it! Did I miss out on anything? Do share your thoughts in the comments or follow me on Twitter.

Learning ES2015 : let, const and var


Lions at the zoo

Zoos allow for safely viewing dangerous wild animals like lions. Lions are caged in their enclosures and can’t escape their boundaries (if they did, it’d be chaos eh?). Handlers, however, can get into cages and interact with them.

Like cages, you can think of variable scoping rules as establishing the boundaries and walls in which certain entities can live. In the zoo scenario, the zoo can be the global or function scope, the cage is the block scope and the lions are the variables. Handlers declare lions in their cages and lions at the zoo typically don’t exist outside their cages. This is how most block-scoped languages behave (e.g. C#, Java etc).

The JavaScript zoo is different in many ways. One striking issue is that lions handled by its var handlers are free to roam around in the entire zoo. Tales of mauled unsuspecting developers are a dime a dozen; walk too close to a ‘cage’ and you can fall into a trap.

Be careful of var lions – they might not be limited to their cages.

Fortunately, the JS zoo authorities (aka TC39) have introduced two new lion handlers: let and const. Now lions stay in their cages and we can all go visit the JS zoo without nasty surprises.

Technically this is not totally correct but who doesn’t like a good story? In block-scoped languages, variables are limited to their enclosing scopes (mostly {}); JavaScript, however, had function-scoped variables which were limited to the function body. let and const fix that for us.

How can function scope be dangerous?

Let’s go back to the zoo again…

function varHandler(){
    var visitor = 'going to the zoo!';
    var hasCages = true;

    if(hasCages){
        //create lion in cage scope with var
        var lion = 'in cage';
    }
    //check if lion exists outside its 'if cage'
    if(lion != null){
        visitor = 'Aaarrgh, the lion attacks!';
    }
    console.log(visitor);
    console.log(lion);
}
varHandler();
//Aaarrgh, the lion attacks!
//in cage

See! Visitors are not protected from encaged lions. Now, let’s see what the new let handler does:

function letHandler(){
    let visitor = 'going to the zoo!';
    let hasCages= true;

    if(hasCages){
        //create lion in cage scope with let
        let lion = 'in cage';
    }

    //check if lion exists outside its 'if cage'
    if(lion != null){
       visitor = 'Aaarrgh, the lion attacks! :(';
    }

    console.log(visitor);
    console.log(lion);
}
letHandler();
//ReferenceError: lion is not defined

This code fails because we attempt to access the lion outside its ‘if cage’ – outside that block, the lion binding simply does not exist. To put it plainly, the lion can’t hurt the visitor.

Let – the solution to lion vars?

Let is mostly like var. The main advantage is the introduction of block (lexical) scoping. You can expect let variables to only exist in their containing blocks (could be a function, statement, expression or block).

The const

Const is a somewhat restricted let. Everything that applies to let applies to consts (e.g. scoping, hoisting etc) but consts have two more rules:

  • Consts must be initialized with a value
  • They cannot have another value assigned to them after initialization

const lion;
//SyntaxError: missing initializer...

const lion = 'hiya';
lion = 'hello';
//TypeError: Assignment to const...

Const objects?

While const objects can’t be reassigned to different values, you can still modify the contents of the object (just like final references in Java or readonly fields in C#).

const zoo = {};
zoo.hasLions = true;
console.log(zoo.hasLions);
//true

zoo = { a: 3};
//TypeError: Identifier 'zoo'...

Why? Think of a house address – homeowners can change the contents of their house (e.g. add more computers) but can’t just go slap their address on a totally different building.

If you want further restriction, take a look at Object.freeze; note that it is also shallow.
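For illustration, freezing stops direct mutation but, being shallow, leaves nested objects writable:

const zoo = Object.freeze({ hasLions: true, cage: { locked: true } });

zoo.hasLions = false;    // silently ignored (throws in strict mode)
zoo.cage.locked = false; // works – the freeze is shallow
zoo.cage.locked;         // false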

Let + Const vs Var

1. Hoisting

Surprise, surprise. Yes, they are hoisted. According to the ES2015 spec:

The variables are created when their containing Lexical Environment is instantiated but may not be accessed in any way until the variable’s LexicalBinding is evaluated.

The access restriction is explained in the temporal dead zone section below. This can have performance implications for JavaScript engines since they have to check for variable initialization – yes, let can be slower than var.

2. Binding in loops

We already know that let/const are block-scoped while var is not. But how does binding in loops work?

for(var i = 0; i < 5; i++){
    setTimeout(function(){
        console.log(i);
    });
}
//5 5 5 5 5

Why is the output all 5s? Well, the setTimeout callbacks are evaluated after the loop exits (remember JavaScript has one execution thread). Thus they are all bound to the same i, whose value is 5 by then.

The fix? Well, tricks like using IIFEs to bind values exist, as shown below. But let provides a better way out.
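The IIFE passes i into a new function scope so each callback closes over its own copy:

for(var i = 0; i < 5; i++){
    (function(capturedI){
        setTimeout(function(){
            console.log(capturedI);
        });
    })(i);
}
//0 1 2 3 4

And the cleaner fix with let: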

for(let i = 0; i < 5; i++){
    setTimeout(function(){
        console.log(i);
    });
}
//0 1 2 3 4

This works because the let binding is created afresh on each loop iteration, rather than all callbacks sharing the final value of i. The same mechanism allows usage of let/const in for..of loops since the evaluation context is new on every pass.

Should loop counters be consts? In for..of loops, yes – loop counters are usually sentinel values that shouldn’t change, and changing them can cause difficult bugs, especially in nested multi-loop scenarios. (A classic for loop’s counter must remain mutable though, since the update expression reassigns it.)
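A small sketch of both cases:

for (const animal of ['lion', 'tiger', 'bear']) {
    console.log(animal); // a fresh const binding on every pass
}
//lion tiger bear

for (const i = 0; i < 3; i++) {} // TypeError: Assignment to constant variable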

3. Overwriting Global values via bindings

let/const cannot overwrite global values, unlike var. This prevents unintentional clobbering of language constructs.

var alert = 'alert'; //clobbers global alert function
window.alert(2);
//error - alert is not a function

let alert = 'alert'; //shadows the global alert function
window.alert(2);     //still works – shows the alert dialog

The downside of this is that some legitimate usages, e.g. deliberately exposing module namespaces on the global object, still require var.

4. Variable Redeclaration

Redeclaring a variable that already exists using let/const throws a SyntaxError; however, redeclaring a var with another var works.

var x = 1;
let x = 2;
//SyntaxError

const y = 0;
let y = 3;
//SyntaxError

let z = 3;
var z = 4;
//SyntaxError

5. Shadowing

let/const allow for variable shadowing.

function test() {
    let x = 1;
    if(true) {
        let x = 3;
        console.log(x);
    }
    console.log(x);
}
test();
//3
//1

You can think of blocks as anything enclosed between {}. Do remember that a switch statement’s body is one big block.
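For example, both case clauses below live in the same switch block, so the second let is a redeclaration:

switch (animal) {
    case 'lion':
        let sound = 'roar';
        break;
    case 'cat':
        let sound = 'meow'; // SyntaxError: 'sound' has already been declared
        break;
}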

The Temporal Dead Zone

What happens if you try to access a hoisted variable before it is initialized? With var, the value is undefined; with let (or const), an error is thrown – even typeof throws for a variable in its temporal dead zone.

The engine assumes you shouldn’t know about or attempt to use a variable that has been declared but not yet initialized. This state is called the temporal dead zone.

console.log(tdz);
//ReferenceError: tdz is not defined

let tdz = 'Temporal dead zone';
console.log(tdz);
//Temporal dead zone

Should I switch to let?

Yay for safe zoos! Well, converting all the var handlers into let/const handlers comes with a hitch: there is a high probability that the zoo breaks down. A better approach involves slow, gradual conversion over a long period.

The issue is that some var usages help with tricky areas (e.g. globals for modular namespacing); converting those to lets will break the application. Others might be workarounds for particular scenarios, so go at it slowly. For example, when working in a file, change the vars in that function/file to let and run your tests to ensure nothing is broken.

Let support across browsers

Safari, iOS and Opera provide no let support so you might have to use a transpiler or polyfill. Here is the matrix on caniuse for let and const.

Is var ever going to be deprecated?

The var behaviour is probably here to stay. Once quirks are out in the wild, they become very difficult to fix since innumerable applications are built on them, and future improvements have to support them for backward compatibility. The other alternative would be an entirely new language version (e.g. Python 3 vs Python 2) but that approach led to the schism discussed in the last post.

Conclusion

let and const take away most of the issues associated with JavaScript’s function scoping and make the language more familiar to programmers coming from other languages. You should probably make the switch and start using them more often.

Learning ES2015 : Getting Started


ES6 (also ES2015) is the rave of the moment. Finally JavaScript is getting a makeover after nearly 6 years. The enhancements allow for more powerful and expressive JavaScript, ease the building of complex applications and iron out some quirky behaviour.

The var hoisting problem? let resolves that. IIFEs? Block scoping removes much of the need for them. Losing function context? The arrow function syntax eliminates the need for the that = this workaround.

There’s lots more though and hopefully the upcoming series of posts will provide in-depth walkthroughs of de-structuring, iterators and generators, new data structures (Map + WeakMap, Set + WeakSet), Proxies, Promises, rest parameters, new standard library functions and much more.

But first, the history lesson.

How did we get here?

JavaScript was standardized in 1997 (nearly 2 decades ago) and its evolution timeline follows:

ES1 (June 1997) – First release of JavaScript on the ECMAScript standard.

ES2 (June 1998) – Contained only minor standardization changes.

ES3 (December 1999) – Introduced try/catch exception handling and improved string handling. Saw strong adoption across all browsers before being overtaken by ES5. Did you know you could overwrite undefined with some other value in this JS version? Don’t do evil stuff like that…

ES4 (dumped July 2008) – An ambitious project to overhaul JavaScript. Proposed changes included classes, iterators, modules (now in ES6) and static typing (dropped). It also included some core changes to the language itself e.g. classical inheritance.

Two parties emerged – one party wanted the ES4 upgrade while the opposing party believed it would ‘break’ the Internet. Since they couldn’t agree, the opposing party designed a minor language upgrade tagged ES3.1 (read SemVer for more on version numbers).

Ultimately the two parties met again in Oslo and agreed to release ES3.1 as ES5, shelve ES4 and collaborate on a ‘harmonizing’ version. And that’s how ES6 got the Harmony name.

ES5 (2009) – This was the first major standardized release in the 10 years after ES3. It’s also the most commonly supported JavaScript version nowadays.

Some of the features released in this version include ‘strict mode’, native JSON support, non-writable NaN, undefined and Infinity values, and some standard library improvements, e.g. forEach, map, keys etc.

ES5.1 (June 2011) contained a few minor corrections.

ES6 (2015)

ES6 became feature complete in 2015 – hence the ES2015 name since the plan is to have yearly releases going forward. Yes, work has started already on the ES2016/ES7 version and you can contribute if you want.

There is improving support across host environments (both browsers and node); you can check kangax’s table and the Microsoft platform reference. If you can’t wait to try the new awesomeness, why not try out Babel, Traceur or even TypeScript. Full coverage of all features across browsers will likely take time; for example, ES5 was released in 2009 and there are still browsers that don’t offer full feature support.

Why do we need another JavaScript Version?

Complex Applications

JavaScript is a great language and works great; however cracks start appearing when you build complex applications in it. Try refactoring a 100k line project and you’ll see what I mean. The introduction of native namespacing and module support would help improve the language in this area.

No Surprises

JavaScript is quite ‘surprising’ as there are few parallels to its behaviour in other languages. For example, prototypical inheritance can be confusing for developers with a classical inheritance background.

The absence of lexical scoping, funny behaviour of dictionaries and loss of context have puzzled many programmers and led to many bugs. Yes, a couple of workarounds exist for all these but the principle of least surprises is a powerful one.

The class syntax should help hide prototypical inheritance by providing familiar syntax; let provides lexical scoping, arrow functions remove the need for that=this and Maps allow objects as keys.
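For instance, here is the old that = this workaround next to its arrow-function replacement (an illustrative sketch):

// Pre-ES2015: capture the context manually
function Zoo() {
    var that = this;
    that.lions = 0;
    setInterval(function () {
        that.lions++; // a plain function would get the wrong 'this'
    }, 1000);
}

// ES2015: arrow functions keep the lexical this
function ES6Zoo() {
    this.lions = 0;
    setInterval(() => this.lions++, 1000);
}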

Easier for all

The standard library has been enhanced – there is native promise support, improved string handling methods (templates are finally here!) and other methods too.

Conclusion

ES6 preserves the good parts of JavaScript while fixing a lot of the bad parts. And that’s the introduction; upcoming posts will cover more in-depth ES6 features that you should know about or start using.

Understanding and using Streams in JavaScript


Introduction

What do you think of the following code snippet?

NaturalNumbers()
.filter(function (n) { return n % 2 === 0; })
.pick(100)
.sum();

Isn’t it beautifully succinct and neat? It reads just like English! That’s the power of streams.

Streams are just like lists but offer more capabilities because they simultaneously abstract data and computation.

Streams vs Lists/Arrays?

Let’s take a scenario from Mathematics, how would you model the infinite set of natural numbers? A list? An Array? Or a Stream?

Even with infinite storage and time, lists and arrays do not work well enough for this scenario. Why? Assuming the largest possible integer an array can hold is x, then you’ve obviously missed out on x + 1. Lists, although not constrained by initialization, need to have every value defined before insertion.

Don’t get me wrong, lists and arrays are valid for a whole slew of scenarios. However, in this situation, their abstraction model comes up short. And when abstractions do not perfectly match problem models, flaws emerge.

Once again, the constraints of this problem:

  • The size of the problem set might be infinite and is not defined at initialization time  (eliminates arrays).
  • Elements of the set might not be defined at insertion time (eliminates lists).

Streams, which combine data and computation, provide a better abstraction for such infinite problem sets. Their ability to model infinite lists stems from lazy evaluation – values are only evaluated when they are needed. This can lead to significant memory and performance boosts.

The set of natural numbers starts from 1 and every subsequent number adds 1 to its predecessor (sounds recursive eh? ). So a stream that stores the current value and keeps adding one to it can model this set.

Note: As might have become obvious: extra data structures might be needed to store previously generated stream values. Streams typically only hold a current value and a generator for calculating the next value.

What is a Stream?

I published stream-js, a very small (4.1kb minified) library that provides stream processing capabilities. Grab it or read the source as the post builds on it.

Oh, do contribute to the repo too!

How do I create a stream?

The Stream constructor expects an initial value and a generator function; these two form the stream head and tail respectively.

An empty stream has null head and tail values. In infinite streams, the tail generator will endlessly generate successive values.

var emptyStream = new Stream(null, null);

var streamOf1 = new Stream(1, null);

var streamOf2 = new Stream(1, function () {
    return new Stream(2, null);
});

var streamOf3 = Stream.create(1,2,3);

var streamFromArray = Stream.fromArray([1,2,3]);

Note: The fromArray method uses the apply pattern to spread the input array into Stream.create above.

Show me the code!

Now that you know how to create Streams, how about a very basic example showing operations on Streams vs Arrays in JS?

With Arrays

var arr = [1,2,3];
var sum = arr.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

With Streams

var s = Stream.create(1,2,3);
var sum = s.reduce(function(a, b) {
    return a + b;
});
console.log(sum);
//6

The power of streams

The power of streams lies in their ability to model infinite sequences with well-defined repetition patterns.

The tail generator will always return a new stream with a head value set to the next value in the sequence and a tail generator that calculates the next value in the progression.

Finite Streams

Stream.create offers an easy way to create streams, but what if this were to be done manually? It would look like this:

var streamOf3 = new Stream (1, function() {
    return new Stream(2, function() {
        return new Stream(3, function () {
            return new Stream(null, null);
        });
    });
});

Infinite Streams

Infinite Ones

Let’s take a dummy scenario again – generating an infinite series of ones (could be 2s too, or even 2352s). How can streams help? First, the head should definitely be 1, so we have:

var ones = new Stream(1, ...);

Next, what should tail do? Since it’s a never-ending sequence of ones, we know the tail generator should return streams that look like the one below:

var ones = new Stream(1, function() {
    return new Stream(1, function() {
        ...
    });
});

Have you noticed that the inner Stream definition looks like the ones definition itself? How about having Ones use itself as the tail generator? After all, the head would always be 1 and the tail would continue the scheme.

var Ones = function () {
    return new Stream(1, /* HEAD */
        Ones /* REST GENERATOR */);
};

Natural Numbers

Let’s take this one step further. If we can generate infinite ones, can’t we generate the set of Natural numbers too? The recurring pattern for natural numbers is that elements are larger than their preceding siblings by just 1.

Let’s define the problem constraints and add checkboxes whenever a stream can be used.

  • Set is infinite ☑
  • Set has a well-defined recurring pattern ☑
  • Definition needs an infinite set of ones ☑

 

So can streams be used to represent natural numbers? Yes, stream capabilities match the problem requirements. How do we go about it?

The set of natural numbers can be described as the union of the set {1} and the set of all numbers obtained by adding ones to elements of the set of natural numbers. Yeah, that sounds absurd but let’s walk through it.

Starting from {1}, 1 + 1 = 2 and {1} ∪ {2} = {1,2}. Now, repeating the recursion gives rise to {1, 2} ∪ {2, 3}  = {1,2,3}. Can you see that this repeats indefinitely? Converting to code:

function NaturalNumbers() {
    return new Stream(1, function () {
        return Stream.add(
            NaturalNumbers(),
            Ones()
        );
    });
}

Execution walkthrough

The first call to NaturalNumbers().head() returns 1. The tail function is given below:

function () {
    return Stream.add(
        NaturalNumbers(),
        Ones()
    );
}

  • NaturalNumbers() evaluates to a stream that has a head of 1 and a tail generator pointing back at this same definition. Think of the sets {1} and the natural numbers.
  • Ones() evaluates to a stream with a head of 1 and a tail generator of ones.

Once invoked, this will give a new stream with a head of 1 + 1 = 2 and a new tail function that will generate the next number.
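For intuition, element-wise addition of two streams could look roughly like this – a sketch only, assuming head()/tail() accessors; the actual stream-js internals may differ:

Stream.add = function (s1, s2) {
    //an empty stream has null head and tail values
    if (s1.head() === null || s2.head() === null) {
        return new Stream(null, null);
    }
    return new Stream(s1.head() + s2.head(), function () {
        return Stream.add(s1.tail(), s2.tail());
    });
};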

Building upon natural numbers

Generating the sets of even and odd numbers is a cinch – just filter the set of natural numbers!

var evenNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 0;
});

var oddNaturals = NaturalNumbers().filter(function(val) {
    return val % 2 === 1;
});

Pretty simple right?

Who needs infinite sets?

Computers are constrained by storage and time limits so it’s not possible to ‘have’ infinite lists in memory. Typically only sections are needed at any time.

stream-js allows you to do just that:

  • Stream.pick: allows you to pick elements of a stream.
  • toArray: converts a stream to an array

A typical workflow with stream-js would involve converting an input array to a stream, processing and then converting back to an array.
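Using only the methods mentioned in this post, such a round trip might look like this:

var evens = Stream.fromArray([1, 2, 3, 4, 5, 6])
    .filter(function (n) { return n % 2 === 0; })
    .toArray();
// [2, 4, 6]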

For example, here is the array of the first 100 odd numbers; need 1000? Just pick them (pun intended).

var first100odds = oddNaturals.pick(100).toArray();

Note: Stream operations can be chained since most stream operations return new streams (i.e. are closed operations). Here is odo, v0.5.0 of stream-js.  Odo means river in Yoruba, the language of my tribe.

And that’s about it! I hope you enjoyed this, now read how to write a promise/A+ compatible library next.

How to implement the Y-combinator in JavaScript


This post provides a very simple step-by-step implementation of the Y-combinator in JavaScript. You should be able to implement the Y-combinator in your language of choice after reading this post; as you’ll see – it’s that easy.

What is a combinator?

According to Wikipedia,

A combinator is a particular type of higher-order function that may be used in defining functions without using variables. The combinators may be combined to direct values to their correct places in the expression without ever naming them as variables.

The emphasized text highlights the most interesting part of the definition – combinators allow functions to be defined without variables. Imperative programming relies heavily on variables and trying to eschew variables can be a mind-stretching exercise.

Show me the code!

The following code snippet is a Y-combinator example of the factorial function in JavaScript.

var yCombFact = function (number) {
    return (function (fn) {
        return fn(fn, number);
    })(function(f, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * f(f, (n - 1));
        }
    });
};

yCombFact(5);
//120

Looks abstruse right? No worries – let’s break it down.

Two major things

There are two major concepts that help drive home the understanding of the Y-combinator:

  1. No variables allowed – this implies we have to find a way to write the factorial function without using variables.
  2. Invoking the no-variable-function defined in 1 without using variables again.

Part 1: Rewriting the factorial function without variables

Here is the factorial function definition. Can you spot the variable usage in it?

var factorial = function(n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

factorial(5);
//120

The expression n * factorial(n - 1) only succeeds because it can find the factorial variable in scope; without it, the function would not be recursive. But remember, we are trying to do away with all variables.

The workaround is to pass in the variable reference as a function parameter. In the factorial example, recursion can then be achieved by using the placeholder parameter as the reference. The no-variable-factorial function looks like the following:

var noVarFactorial = function(fn, n) {
    if(n <= 1) {
        return 1;
    } else {
        return n * fn(fn, (n - 1));
    }
}

noVarFactorial(noVarFactorial, 5);
//120

The new definition works exactly like the old one but without the internal dependency on the factorial variable. Rather, recursion succeeds by relying on the ‘injected’  parameter and the computer is none the wiser.

Part 2: Invoking the no-variable function without variables

We have rewritten the factorial function to avoid variables; however, we still need to store the function in a variable before invoking it:


var factorial = ...;

factorial(factorial, 5);

The trick? Functions to the rescue again! Let’s create a factorialCalculator function that uses the noVarFactorial definition above.

function factorialCalculator(n) {
    //as defined earlier
    var noVarFactorial = ...;
    return noVarFactorial(noVarFactorial, n);
}

factorialCalculator(5);
//120

The noVarFactorial name has to go since we want to avoid variables. And how do we achieve this? Yes, functions once more. So let’s create a wrapper function inside factorialCalculator that invokes noVarFactorial.

function factorialCalculator(n) {
    var wrapper = function (noVarFact) {
        return noVarFact(noVarFact, n);
    }
    return wrapper(noVarFactorial);
}

factorialCalculator(5);
//120

Unfortunately, the wrapper function has created another variable and this has to be eliminated too. For a complete implementation, the two variables (wrapper and noVarFactorial) have to go.

It’s now time to leverage language-specific idioms to achieve this. JavaScript has the IIFE idiom which allows you to immediately invoke a function (read about it here). Using it, we can eliminate the need for the wrapper variable thus:


function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(noVarFactorial);
}

factorialCalculator(5);
//120

Combining all the pieces

The last thing is to substitute in the noVarFactorial definition so that it is no longer a variable in scope. Just as we do in Mathematics, we can ‘substitute’ the value in. The final piece is then:

function factorialCalculator(n) {
    return (function (noVarFact) {
        return noVarFact(noVarFact, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}

factorialCalculator(5);
//120

And that, my friends, is the Y-combinator in all its glory. I have decided to leave the variable names as they are for clarity, but here is the standard format so you know it when you see it:


function factorialCalculator(n) {
    return (function (fn) {
        return fn(fn, n);
    })(function(fn, n) {
        if(n <= 1) {
            return 1;
        } else {
            return n * fn(fn, (n - 1));
        }
    });
}
factorialCalculator(5);

Conclusion

The Y-combinator is quite easy to understand – it requires understanding function invocation patterns, variable substitution by parameters and higher-order functions. As an exercise, can you try implementing fibonacci using the Y-combinator approach? Better still, why not create a Y-combinator helper that accepts any function matching the fn(fn, n) signature?
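One possible shape for that helper – a sketch with the recursive body passed in as a parameter:

function y(fn) {
    return function (n) {
        return fn(fn, n);
    };
}

var fact = y(function (self, n) {
    return n <= 1 ? 1 : n * self(self, n - 1);
});

fact(5); // 120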


SICP Section 3.3 – 3.5 : Found a bug in memq


1. Is memq broken?

memq is a built-in list search function; it finds the first occurrence of a key in a list and returns a new list starting from that key.


(memq 3 (list 1 2 3 4))
; '(3 4)

(memq 5 (list 1 2 3 4))
; #f

Now that you know what memq does, let’s look at some weird behaviour:

(define x (list 1 2))
(define a (list 3 4))

; append x to the list
(set! a (append a x))
(memq x a)
; #f -> x is not in a

Building on that foundation leads to the following conundrum

(define x '(1 2 3))

; Create a cycle: the cdr of x is x itself
(set-cdr! x x)

; is x in x?
(memq x x)

; never-ending loop

memq tests whether the key exists in the list and if it does, then it returns the list starting from the first occurrence of the key. But what is the first occurrence in a cyclic list? Could this be the bug?

2. Algorithms, algorithms

  • Horner’s algorithm

This is useful for calculating polynomial values at discrete points; for example, given the polynomial f(x) = 7x³ + 4x² + 4, what is f(3)? A potential application (and possible interview question too) is converting string values into numbers – 1234 is the value of x³ + 2x² + 3x + 4 when x is 10.


//assuming the polynomial is represented from lowest power to highest

//i.e. 1234 -> [4, 3, 2, 1]

function horner(poly, base) {
    var val = 0;
    //work from the highest power down, nesting the multiplications:
    //1234 = ((1 * 10 + 2) * 10 + 3) * 10 + 4
    for(var i = poly.length - 1; i >= 0; i--) {
        val = (val * base) + poly[i];
    }
    return val;
}

horner([4,3,2,1], 10);
//1234

  • Fast exponentiation

Because going twice as fast is more fun than going fast.


//the linear approach: multiply the base together, power times
function exponent(base, power) {
    var val = 1;
    while(power > 0) {
        val = val * base;
        power = power - 1;
    }
    return val;
}

Now, let’s look at fast exponentiation.

function fastExponent(base, power) {
    if(power === 0) {
        return 1;
    }

    //handle odd powers
    if((power % 2) === 1) {
        return base * fastExponent(base, (power - 1));
    }

    //even powers: compute the half and square it
    var part = fastExponent(base, (power / 2));
    return part * part;
}

fastExponent(10,3)
//1000

Fast exponentiation grows logarithmically Ο(log N) while the normal one is Ο(N). This same concept can be reapplied to similar scenarios.

3. Streams

Functional programming offers many advantages but one potential downside is performance and needless calculation. For example, while imperative programming offers quick-exit constructs (e.g. break, continue), functional constructs like filter, map and reduce have no such analogue – the entire list has to be processed even if only the first few items are needed.

Streams offer an elegant solution to this issue by performing only just-in-time computations. Data is lazily evaluated and this makes it possible to easily (and beautifully) represent infinite lists. Inshaaha Allaah I should explain this concept in an upcoming post. It’s very beautiful, elegant and powerful.

Related Posts on my SICP adventures

  1. SICP Review: Sections 3.1 & 3.2
  2. SICP Section 2.5

SICP Review: Sections 3.1 & 3.2


Here are my thoughts on Sections 3.1 and 3.2 of my SICP journey; the deeper I go in the book, the more I appreciate the effort, style and work the authors put into it. Each section builds on earlier sections and it is amazing how it forces you to see software development from a new angle. Enjoy…

1. Perception and Design

Our perceptions of the real world influence the way we model objects in software development. The example in the book uses two objects – a rational number and a bank account. Modifications of a rational number always generate a new rational number that is different from the original (e.g. adding 1/2 to 3/8).  We do not ‘view’ rational numbers as being capable of changing – i.e. the value of 3/8 is always 3/8.

Conversely, a bank account object has a balance that can ‘change’; withdrawals and deposits change this value even though we still say it’s the same bank account object! There appears to be a double standard, right? If the internal details (i.e. the numerator or denominator) of a rational number change, we end up with a new rational number; however, if a bank account balance changes, we still have the same bank account object.

Strange isn’t it, the way we think does influence the way we write code.

2. Environment

The environment (context) is very important in programming languages even though we just sometimes forget about it. Every language has some global environment that determines what expressions, values and parameters mean. If you change the environment, then it becomes highly possible that the expression has a different meaning.

Let’s take an example from human languages: the word ‘wa’ means ‘come’ in Yoruba but can mean ‘and’ in Arabic. The same applies to programming languages; have you ever wondered why and where + (a free variable) is defined? It has to be in the global environment so the programming language knows what it means. Now if you overrode + (why you’d do this beats me), it could mean something else.

3. Execution Order 

Exercise 3.08 was pretty fun; it was a simple ask – write a procedure f such that evaluating (+ (f 0) (f 1)) will return 0 if the arguments to + are evaluated from left to right but return 1 if they are evaluated from right to left.

While the exercise is trivial and probably just an academic exercise, it shows how assignments can influence program execution. My solution uses a closure to track the number of calls and returns a unary lambda. The lambda returns the value of its argument on its first call and a zero on subsequent calls.

Thus if the execution order is left-to-right, then the first call will have a parameter of 0 and further calls will return 0 leading to (+ 0 0) while if it was right-to-left, then the first call will have a value of 1 and further calls will evaluate to 0.