A framework for shipping high-quality software


Software engineers, technical leads and managers all share one goal – shipping high-quality software on time. Ambiguous requirements, strict deadlines and technical debt exert conflicting tugs on a software team’s priorities. Quality has to stay high; otherwise bugs inundate the team, slowing delivery down even further.

This post proposes a model for consistently shipping high-quality software. It also provides a common vocabulary for communication across teams and people.

Origins

This framework is the culmination of lessons learnt delivering the most challenging project I have ever worked on. The task was to make a web application globally available to meet scaling and compliance requirements.

The one-line goal quickly ballooned into a multi-month effort requiring:

  • Moving from a single-compute-resource architecture to multiple compute resources.
  • Fundamental changes to platform-level components across all microservices.
  • Constant collaboration with diverse teams to gain more insight.

The icing on the cake? All critical deployments had to be seamless and not cause a service outage.

What’s Donald Rumsfeld gotta do with software?

He’s not a software engineer, but his quote below provides the basis for this framework.

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.

– Donald Rumsfeld

His quote is a simplified version of the Johari window from Psychology. Applying this to software, the window would look thus:

|  | What the developer knows | What the developer doesn’t know |
| --- | --- | --- |
| What other developers know | Known | Unknown Known |
| What other developers don’t know | Known Unknown | Unknown Unknown |

1. The known

Feature requirements, bugs, customer requests etc. These are the concepts that are well known and expected. However, writing code to implement a feature does not guarantee full ‘known’ status. For example, untested code is still a known unknown until you verify how it actually behaves.

It is one thing to believe code works as you think it should; it is another to prove it. Unit tests, functional tests and even manually stepping through every line help to increase the known.
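
Here is a minimal sketch of that idea – a trivial unit test written with Node’s built-in assert module (the function and values are illustrative):

const assert = require('assert');

// the code under test – an illustrative example
function applyDiscount(price, percent) {
  return price - (price * percent) / 100;
}

// each passing assertion moves this code from "I think it works" to "I proved it"
assert.strictEqual(applyDiscount(100, 10), 90); // the happy path
assert.strictEqual(applyDiscount(0, 50), 0);    // a boundary value
console.log('all tests passed');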

2. The known unknown and the unknown known

I am collapsing both halves into one group because they are related.

  1. Known Unknown

    These are aspects that the developer knows about but other engineers in partner teams don’t know. A good example would be creating a replacement API and making an existing one obsolete. Another example would be changing the behaviour of some shared component.

  2. Unknown Known

    These are the aspects that the developer doesn’t know but engineers in other teams know about. For example, a seemingly minor update of a core component by a developer can trigger expensive rewrite cascades in partner teams. Another example could be quirks that are known to only a few engineers.

Clear communication is the single best fix for challenges in this category. Over-communicate! Send out emails, hold design reviews and continuously engage stakeholders.

This is extra important for changes with far-reaching impact. As the developer/lead/manager, you need to spend time with the key folks and understand their scenarios deeply. This would lead to better models as well as help forecast issues that might arise.

Finally, this applies to customers too – you may know what the customer doesn’t know, and vice versa.

3. The unknown unknowns

This is the most challenging category. There is no way to model or prepare for something unpredictable – an event that has never happened before. Unknown Unknowns (UUs) include hacks, data loss / corruption, theft, sabotage, release bugs and so on.

Don’t fret yet – the impact of UUs can be mitigated. Let’s define two metrics:

  1. Mean time to repair (MTTR)

    The average amount of time it takes to repair an issue with the software.

  2. Mean time to detect (MTTD)

    The average amount of time it takes to detect a flaw.

The most reliable way of limiting the impact of UUs is to keep the MTTR and MTTD low. Compare the damage that a data-corrupting deployment can cause in 5 minutes versus 1 hour.

MTTD

A rich monitoring and telemetry system is essential for lowering MTTD. Log system health metrics (RAM, CPU, disk reads etc.), HTTP response statuses (500s, 400s etc.) and more.

Ideally, a bad release will trigger alarms and notify administrators as soon as it goes out, enabling the service owner to react and recover quickly.
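
To make that concrete, here is a minimal sketch of a 5xx-rate alarm; the window size, threshold and notify() channel are illustrative placeholders, not a prescription:

// call recordStatus(...) once per HTTP response
const WINDOW_MS = 60 * 1000; // look at the last minute of traffic
const THRESHOLD = 0.05;      // alarm when more than 5% of responses are 5xx
let responses = [];

function recordStatus(statusCode) {
  const now = Date.now();
  responses.push({ statusCode, at: now });
  responses = responses.filter(r => now - r.at < WINDOW_MS); // keep the window fresh
  const errors = responses.filter(r => r.statusCode >= 500).length;
  if (responses.length >= 20 && errors / responses.length > THRESHOLD) {
    notify(`5xx rate is ${((100 * errors) / responses.length).toFixed(1)}%`);
  }
}

function notify(message) {
  console.error('[ALERT]', message); // wire this to email/pager/chat in a real system
}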

MTTR

Having a feature toggle or flighting system can help with MTTR metrics. Again using the bad release example, a flight/feature toggle will enable you to ‘turn off’ that feature before it causes irreparable damage.
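
A minimal sketch of the idea, with made-up feature and function names:

// the flag would be flipped remotely (config service, database) – no redeploy needed
const flags = { newCheckoutFlow: false };

function renderOldCheckout(user) { return `old checkout for ${user}`; } // battle-tested path
function renderNewCheckout(user) { return `new checkout for ${user}`; } // risky new path

function renderCheckout(user) {
  // the kill switch: flip flags.newCheckoutFlow off to retreat instantly
  return flags.newCheckoutFlow ? renderNewCheckout(user) : renderOldCheckout(user);
}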

Also critical is a quick release pipeline: if it takes two days to get a fix out, then your MTTR is two days plus the time to write the fix. That’s a red flag – invest in a CI/CD pipeline.

tldr?

A software engineer is rolling out a critical core update; a few questions to ask:

  • Does he have enough logging to be able to debug and track issues if they arise?
  • Is the risky feature behind a flight or feature toggle? How soon can it be turned off if something goes wrong?
  • Are there metrics that can be used to find out if something goes wrong after the feature is deployed in production?

A good release strategy is to roll out the feature in a turned-off state, then turn it on for a few users and check that things are stable. If it fails, turn off the feature switch and fix the issue; otherwise, progressively roll out to more users.
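
Here is a minimal sketch of the bucketing logic behind such a progressive rollout; it assumes stable numeric user ids so each user stays in the same bucket across sessions:

// start dark (0%), then ramp: 1% -> 5% -> 25% -> 100%
const ROLLOUT_PERCENT = 5;

function isFeatureOn(userId) {
  const bucket = userId % 100; // stable bucket in [0, 99]
  return bucket < ROLLOUT_PERCENT;
}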

What steps do you take to ensure software quality?


Creating Great User Experiences


Have you ever wondered why some applications always look and feel similar? Why for example does Apple have a unified experience across devices? Why are Google products starting to adopt the material experience?

This post explains some of the underlying themes influencing design choices. Developers can use these concepts to craft better user interfaces and experiences.

1. Consistency

A well-designed user experience is consistent throughout its constituent parts. Imagine using an app with varying colour shades, item sizes and page styles – wouldn’t that be confusing and difficult to understand?

A couple of examples of design consistency include:

  1. Desktop applications usually have the same set of three buttons on the title bar (close, minimize and maximize).
  2. Right-click nearly always opens up a context menu
  3. The menu bar for applications starts with drop downs titled File, Edit and so on

Such consistency lowers the learning barrier because these conventions become implicit assumptions in the minds of users. When was the last time you had to figure out the ‘close’ button? You don’t need to think; you subconsciously understand and use them.

UI frameworks like semantic-ui, bootstrap or Fabric provide a good foundation for quickly ramping up web applications. A colour palette helps ensure visual colour consistency. Do be careful of contrast though; having too much contrast is not visually appealing while having too little makes it difficult to see components.

And this leads us to the next point: ‘learnability’.

2. Learnability

One of the enduring messages I picked up from Steve Krug’s great book: ‘Don’t make me think’ is to avoid making users think. Why? When people have to ‘think’ deeply to use your application, that interrupts their concentration and flow; such interrupts might be fatal as some users might just leave your site.

A low learning curve helps users quickly become familiar with your product and deeply understand how to use it. Think of any software you are proficient in; now think about the first time you used it. Was it difficult? Did you grow to learn it?

A couple of tips here:

  1. Component Behaviour – One common issue I run into is variable behaviour for the same action in different scenarios. For example, one set of actions might trigger a dialog to pop up while another doesn’t. It’s much better to have the dialog pop up consistently; otherwise your users have to build multiple mental models of expected interaction responses.
  2. Grouping – Related collections can be grouped and arranged in hierarchies that can be drilled into. I prefer using cards for such collections and allowing users to drill up/down. Also ensure that the drill ladders are consistent; there is no point having users drill to a level they can’t drill out of.

3. Clickability

My principle of ‘clickability’

The number of clicks required to carry out a critical operation is inversely proportional to the usability scores for the web application.

The larger the number of clicks needed to achieve a major action, the more annoying the app gets over time. Minimize clicks as much as possible!

But why is this important? Let’s take a hypothetical example of two applications – app 1 and app 2 – which require 2 and 4 clicks to register a new user.

Assuming you need to add 5 new users, app 1 requires 10 clicks while app 2 requires 20 – a difference of 10 clicks! It’s safe to assume that users who have to use the application for hours on end will prefer app 1 to app 2.

The heuristic I use is to minimize the number of clicks in core user paths and provide shortcuts for the flows that can’t be shortened further.

Yes, ‘clickability’ does matter.

4. Navigability

Building upon the principle of clickability, this implies designing your flow so that the major content sections are well interlinked. It also includes thinking of the various links between independent modules so that navigation is seamless.

For example, suppose you have a navigation hierarchy that is 3 layers deep and want users to perform some higher-level action while at the bottom of the navigation tree. You can add links to the leaf nodes (and even to intermediate tree levels) so users can jump out quickly, rather than exposing a top-level option only. Why? See the principle of clickability above.

A good exercise is to build out the interaction tree which exposes the links between sections of your app. If you have a single leaf node somewhere, then something is really wrong. Once done, find ways to link leaf nodes if such interactions are vital and enhance the user experience.

5. Accessible + Responsive + nit-proof

One of my favorite tests is to see how a web application works on smaller devices. That quickly exposes a lot of responsive-design issues.

Another one is to add far too many text characters to see if text overflow is properly handled. The fix is very simple: just add these CSS rules

.noOverflow {
   text-overflow: ellipsis;  /* show an ellipsis where the text is cut off */
   overflow: hidden;
   white-space: nowrap;      /* required: ellipsis only kicks in on a single line */
}

Another important concept is adding titles to elements and following accessibility guidelines.

All these are important and show that some thought was put into the process.

Conclusion

Do a quick test of your application on some random user. The way users interact with your application and navigate around might surprise you!

While running such tests, ask whether users feel they have an intuitive understanding of how the app works. If yes, bravo! Pat yourself on the back. Otherwise, your assumptions might be way off and you need to go back to the drawing board.

The best experiences always look deceptively simple but making the complex simple is a very difficult challenge. What are your experiences creating great user experiences?


Things to check before releasing your web application


This post originally started out as a list of tips on how to break web applications but quickly morphed into a pre-release checklist.

So here are a couple of things to validate before you press the ‘go-live’ button on that wonderful web application of yours.

General

  1. Does the application handle extremely large input? Try copying a Wikipedia page into an input field; overly long strings can overflow database column limits.
  2. Does it handle boundary values properly? Try extremely large or small values; Infinity is a good one.
  3. Do you have validation? Try submitting forms with no entry.
  4. Do you validate mismatched value types? Try submitting strings where numbers are expected.
  5. Has all web copy been proofread and spell-checked? Typos are bad for reputation.

Localization (L10n) and Internationalization (I18n)

  1. Do you support Unicode? The Turkish i and German ß are two quick tests.
  2. Do you support right-to-left languages? CssJanus is a great tool for flipping pages.
  3. Time zones and daylight saving time changes.
  4. Time formats: 12 and 24 hour clocks
  5. Date formats: mm/dd/yyyy vs dd/mm/yyyy
  6. Currencies in different locales.

Connections

  1. Does your web app work well on slow connections? You can use Chrome DevTools or Fiddler to simulate this.
  2. What happens when abrupt network disconnections occur while using your web application?
  3. Do you cut off expensive operations when the user navigates away or the page is idle?

Usability + UX

  1. Does the application work well across the major browsers you support (including mobile)?
  2. Does the application look good at various resolution levels? Try resizing the window and see what happens.
  3. Is your application learnable? Are actions and flows consistent through the application? For example, modal dialogs should have the same layout regardless of the action triggering them.
  4. Do you have your own custom 404 page?
  5. Do you support print?
  6. Do error messages provide enough guidance to users?
  7. Does your application degrade gracefully when JavaScript is disabled?
  8. Are all links valid?

Security

  1. Do you validate all input?
  2. Are all assets secured and locked down?
  3. Do you grant least permissions for actions?
  4. Ensure error messages do not reveal sensitive server information.
  5. Have you stripped response headers of infrastructure-revealing information? E.g. server type, version etc.
  6. Do you have the latest patches installed on your servers and have a plan for regular updates?
  7. Do you have a Business Continuity / Disaster Response (BCDR) plan in place?
  8. Are you protected against the Owasp Top Ten?
  9. Do you have throttling and rate limiting mechanisms?
  10. Do you have a way to quickly rotate secrets?
  11. Have you scanned your code to ensure no valuable information is being released?

Code

  1. Did you lint your CSS and JS (see JSLint, JSHint, TSLint)?
  2. Have all assets (JavaScript, CSS etc) been minified, obfuscated and bundled?
  3. Do you have unit, integration and functional tests?

Performance

  1. Have you run Google’s Page Speed and Yahoo’s YSlow to identify issues?
  2. Are images optimized? Are you using sprites?
  3. Do you use a CDN for your static assets?
  4. Do you have a favicon? It helps prevent unwanted 404s since browsers auto-request it.
  5. Are you gzipping content?
  6. Do you have stylesheets at the top and JavaScript at the bottom?
  7. Have you considered moving to HTTP2?

Release Pipeline

  1. Do you have test and staging environments?
  2. Do you have automated release pipelines?
  3. Can you roll back changes?

Others

  1. Do you have a way to track errors and monitor this with logging?
  2. Do you have a plan to handle customer reported issues?
  3. Have you met all legal and compliance requirements for your domain?
  4. Have you handled SEO requirements?

Conclusion

These are just a few off the top of my head – feel free to suggest things I missed. I should probably transfer these to a GitHub repo for easier use.

Book Review: Build your own AngularJS


As part of my continuous learning, I started reading Tero Parviainen‘s ‘Build your own AngularJS‘ about 6 months ago. After 6 months and 127 commits, I am grateful I completed the book.

While I didn’t take notes while reading, some ideas stood out. Thus, this post describes some of the concepts I have picked up from the book.

The Good

1. Get the foundational concepts right

This appears to be a recurring theme as I learn more about software engineering. Just as I discovered while reading the SICP classic, nailing the right abstractions for the building bricks makes software easy to build and extend.

Angular has support for transclusion which allows directives to do whatever they want with some piece of DOM structure. A tricky concept but very powerful since it allows you to clone and manage the scope in transcluded content.

There is also support for element transclusion. Unlike the regular transclude which will include some DOM structure in some new location; element transclusion provides control over the element itself.

So why is this important? Imagine you want some element to show up only under certain conditions. You can use element transclusion to ensure that the DOM structure is only created and linked when you need it. Need some DOM content repeated n times? Just use element transclusion, clone the content and append it n times. These two examples are over-simplifications of ng-if and ng-repeat respectively.
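
To make element transclusion concrete, here is a hedged sketch of a drastically simplified ng-if; ‘myIf’ is a made-up name and the real directive handles many more edge cases:

var app = angular.module('demo', []);

app.directive('myIf', function() {
  return {
    transclude: 'element', // take control of the element itself
    priority: 600,
    terminal: true,
    link: function(scope, element, attrs, ctrl, transclude) {
      var child = null;
      scope.$watch(attrs.myIf, function(shouldShow) {
        if (shouldShow && !child) {
          transclude(function(clone) { // create + link the DOM only when needed
            child = clone;
            element.after(clone);
          });
        } else if (!shouldShow && child) {
          child.remove(); // tear the DOM down again
          child = null;
        }
      });
    }
  };
});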

Such great fundamentals allow engineers to build complex things from simple pieces – the whole is greater than the sum of parts.

2. Test Driven Development (TDD) works great

This was my first project built from scratch using TDD and it was a pleasant experience.

The array of about 863 tests helped identify critical regressions very early. It gave me the freedom to rewrite sections whenever I disagreed with the style. And since the tests were always running (and very fast too – thanks, Karma!), the feedback was immediate. Broken tests meant my ‘refactoring’ was actually a bug injection. I don’t even want to imagine what would have happened if those tests didn’t exist.

Guided by the book – a testament to Tero’s excellent work and commitment to detail – it was possible to build up the various components independently. The full integration only happened in the last chapter (for me, about 6 months later). And it ran beautifully on the first attempt! Well, all the tests were passing…

3. Easy to configure, easy to extend

This is a big lesson for me and something I’d like to replicate in more of my projects: software should be easy to configure and extend.

The Angular team put a lot of thought into making the framework easy to configure and extend. There are reasonable defaults for people who just want to use it out of the box and, as expected, people who want a bit more power can get their desires met too.

  • The default digest cycle’s repeat count of 10 can be changed
  • The interpolation service allows you to change the expression symbols from their default {{ and }} (see the sketch after this list)
  • Interceptors and transform hooks exist in the http module
  • Lots of hooks for directives and components
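
As an example of that configurability, here is a sketch of swapping the interpolation symbols via $interpolateProvider – handy when a server-side template engine also claims {{ }}:

var app = angular.module('demo', []);

app.config(function($interpolateProvider) {
  $interpolateProvider.startSymbol('[[').endSymbol(']]');
});

// templates would now read: <span>[[ user.name ]]</span>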

4. Simplified tooling

I have used grunt and gulp extensively in the past; however, the book used npm scripts in conjunction with browserify. The delivery pipeline was ultimately simpler and easier to manage.

If tools are complex, then when things go wrong (bound to happen on any reasonably large project), you’d have to spend a lot of time debugging or trying to figure out what went wrong.

And yes, npm is powerful enough.

5. Engineering tricks, styles and a deeper knowledge of Angular

Recursion

The compile file has two functions that pass references to each other – an elegant way to handle state handovers while also allowing for recursive loops.

Functions to the extreme

  1. As reference values: the other insightful trick was using function objects to ensure reference integrity – create a function and use it as the unique reference value.
  2. As dictionaries: functions are objects after all, and while it is unusual to use them as property bags, there is nothing saying you can’t:

// functions are objects too – you can hang extra properties off them
function a() {}
a.extraInfo = "extra";

Angular

Most of the component hooks will work for directives as well – in reality, components are just a special class of directives. So you can use the $onInit, $onDestroy and so on hooks. And that might even lead to better performance.

Issues

Tero did an awesome job writing the book – it is over 1,000 pages long! He really is a pro and knows Angular deeply; by the way, you should check out his blog for awesome deep dives.

My only gripes were a few issues with outdated dependencies, but nothing too difficult to resolve. If he writes an Angular 2 book, I’d like to take a peek too.

Conclusion

I took a peek at the official AngularJS repository and was quite surprised by how familiar the structure was and how easy it was to follow along based on the concepts explained in the book.

I’ll rate the book about 3.9 / 5.0. A good read if you have the time, patience and curiosity to dive deep into the Angular 1 framework. Alas, Angular has moved on to version 2, but Angular 1 is still around. Moreover, learning how software is built is always a great exercise.

Code complete != ship ready


I used to work for a team where, whenever an engineer said he was done, the next question would invariably be: are you ‘done done‘?

Do you find yourself always underestimating the amount of time needed to complete some software project? Does it take more time than you planned? Or do you say you are done but still need to change just one small feature?

Accurate estimation is difficult, and an unclear definition of what done means contributes to the problem. So what does done mean?

Checklists and Processes

Boeing introduced preflight checklists in 1935 after a prototype crash killed two pilots. Investigations revealed that the crew had forgotten to carry out a critical step before takeoff; the fix was a preflight checklist to prevent a recurrence (they learnt from the mistake and became better for it).

A catastrophic software failure can have disastrous outcomes – it is expensive, can harm (imagine if embedded car software were to fail), and is bad PR for the company (how many security breaches have you heard of in recent times?).

So, do we need such heavy formal checklists for software development? Some companies require standardized processes while others don’t. However, it is always good to have a mental model of the various stages of the software shipping pipeline.

Here is a checklist to guide you through the various phases of done, so you know which stage you are at, at any given point in time.

1. Code complete

You have written the code (and hopefully added unit tests) but have not done any deep testing. This is what most developers erroneously refer to as done. No, code complete does not mean done. It is the first milestone in a long journey.

Always have tests for code completion

2. Test complete

Thorough exploratory testing of the feature has been carried out. This covers extreme conditions, edge cases and load/performance scenarios.

It’s an exercise in futility to attempt to find all bugs and exercise all possible code paths. Rather, the goal of testing is to reduce the chances of a user finding a glaring simple-to-discover error that should have been found earlier. It also surfaces other issues (e.g usability, security holes etc.) that might have been latent.

Every discovered bug should ideally be fixed and a unit test added to prevent future regressions. However, not all bugs are fixable (and those turn into features…). Early knowledge of these issues enables developers to proactively prepare mitigation plans and workarounds, and to notify users.

If you don’t have dedicated testers (they are worth their weight in gold), you can use James Whittaker’s testing-tours method to have some fun while ensuring good coverage.

3. Integration complete

You have completed the various isolated features, written unit tests and also done exploratory testing. You have high confidence in each of the individual features; however, until the pieces are put together, you have no proof that they work as a whole.

So get to it: integrate all the disparate components and ensure that things still work as they should. Expect more testing, more discussions with collaborating software engineers and maybe some more glue code.

Why do I keep emphasizing testing? Have you ever had a bad experience with some software? It’s nearly always a quality issue.

4. Ship ready

This is the cross-your-t’s-and-dot-your-i’s moment: verify that the code solves the problem, that issues discovered during testing have been fixed and that the components have been integrated. Code complete, test complete and integration complete? Great, time to ship.

Some organizations enforce strict limits on some parameters like performance, security, reliability and compliance. If so, you should do another pass to ensure that your software meets these criteria.

The product owner / customer likes the software and voilà! Your software rolls out like a new car off the production line.

5. Shipped (maintenance ready)

Software is never done. When it is shipped, customers will start using it in ways you never imagined and will find issues. Some can be petty – for example, can you move this button by 0.2 pixels? – while others will be pretty serious.

In maintenance mode, the software already exists but there are new bugs to fix, feature extensions to build and behavioural modifications to make.

And when you think it’s all done, then maybe it’s time to release a new version. Start at number 1 again.



What is Semantic Versioning (SemVer)?


Software Versioning

Software versioning has always been a problem for software developers, release managers and consumers since time immemorial. For developers, the challenge lies in releasing new breaking changes while simultaneously minimizing consumer upgrade pains. On the flip side; consumers, when they finally decide to upgrade to new-shiny-release-10000, want to be sure they are not buying a one-way-ticket to dejection-land. Add release managers who have to manage both parties to the mix and you’ll get a sense of the chaos.

Versioning is even more of an issue in current times with the rapid viral proliferation of tools, frameworks and libraries. This post explains what Semantic Versioning is, why it is needed and how to use it.

Semantic Versioning (SemVer)

Semantic Versioning was proposed by Tom Preston-Werner (he also co-founded GitHub). The goal was to bring some sanity to the management of rapidly moving software release targets.

SemVer provides a standardized format for conveying versioning information as well as guidelines for usage. Thus, software release information becomes meaningful (cue ‘semantic’) – a glance at the number describes what to expect when consuming a new update or a new piece of software. No more groping in the dark or taxing deductions…

But why SemVer? Can’t software developers just use any versioning style they like? Sure, you can version your local releases however you like, e.g. blue orange, green banana or orange coconut. These styles work fine if you live under the sea or don’t care about others using your code. In the real world, software developers collaborate, and consumers need an easy way to know whether the blue orange → green banana upgrade is going to break their entire stack. A standardized format that everyone agrees on and understands fixes this, and that is what SemVer provides.

So SemVer…, a.b.c what?

At least three numbers are required to identify a software version; these are explained below:

major . minor . patch

For example, version 3.10.1 of lodash has a major version of 3, a minor version of 10 and a patch version of 1.

How do I use these numbers?

Each of these three numbers signifies the kind of change in a released version of the software; i.e. before a new set of changes goes out, you bump one of these numbers to correctly convey what the release contains. The heuristic is thus (a sketch follows the list):

  • New changes that break backwards compatibility require a bump of the major version number. The upcoming Angular release, which is not backwards compatible, is thus tagged 2.0 and not 1.5 or 1.6.
  • New features that do not break backwards compatibility but are significant enough (e.g. adding new API methods that didn’t exist before) bump up the minor version number.
  • Small changes (e.g. bug fixes) bump up the patch number; these have to be backwards compatible too.
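
As a sketch, the npm semver package can compute the next version for each kind of change (the starting version 1.2.3 is arbitrary):

const semver = require('semver');

semver.inc('1.2.3', 'major'); // '2.0.0' – breaking change
semver.inc('1.2.3', 'minor'); // '1.3.0' – new, backward compatible feature
semver.inc('1.2.3', 'patch'); // '1.2.4' – backward compatible bug fix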

How do I interpret the numbers?

Consuming new software

So you run into awesome software on GitHub that uses semantic versioning and is at version 2.1.6. The software meets your needs and you want to integrate it right away. Its version number provides a heuristic for asking questions.

 2 – There have been two major releases.

  • What does the community think about the project’s continuity?
  • Is the first version deprecated or still being maintained?
  • How was the backward breaking major release handled? Did the authors get community support and feedback?

 1 – There has only been 1 minor update since the v2 major release.

  • Not too many features have been added, is the project active?
  • Are there plans to add more features or is the software mature?

16 –  16 patches!

  • Are the 16 patches bug fixes or just minor upgrades?
  • Does the project have a quality concern?
  • Is the software stable or does it break?

Upgrading existing software

There has been a new release of library foobaz – the core engine powering your software stack. Assuming the current consumption version is v1.2.3, the below explains likely actions based on new release version numbers:

v2.0.0 – This is a major bump and pre-existing contracts might be broken. A major update nullifies any pre-existing contractual obligations (that’s lawyerese for ye nerdy folks), so don’t jump on the bandwagon until you know the full impact of the changes.

v1.∞.∞ – This should be pretty safe; minor and patch bumps typically do not break backwards compatibility. So go in, update and start using the new features.

NPM and SemVer – the upgrade link

Open up a package.json or bower.json file; you’ll typically see a devDependencies or dependencies section with key-value pairs that look like this:

"dependencies": {
"lodash": "~3.8.0",
"angular": "^1.3.0",
"bootstrap": "^3.2.0"
}

What do the ~ and ^ stand for? They determine which versions you can upgrade to when you run npm update. Thus, for the sample above, an update will allow newer versions based on the table below:

| Special character | Version | Allowed upgrades | Notes |
| --- | --- | --- | --- |
| ^ | ^1.3.0 | 1.*.* | All backward compatible minor/patch updates |
| ~ | ~3.8.0 | 3.8.* | All backward compatible patch updates |
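
A sketch of what those ranges accept, using the same npm semver package’s satisfies() helper:

const semver = require('semver');

semver.satisfies('1.4.2', '^1.3.0'); // true  – minor/patch bumps allowed
semver.satisfies('2.0.0', '^1.3.0'); // false – major bumps are not
semver.satisfies('3.8.9', '~3.8.0'); // true  – patch bumps allowed
semver.satisfies('3.9.0', '~3.8.0'); // false – minor bumps are not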

Note: This post only explains ~ and ^; there are more characters (e.g. >=, *, etc) and these should be explained in an upcoming post inshaaha Allaah.

3 Extra things to know about SemVer

Beware of 0.*.* 

The SemVer spec makes no stability guarantees for releases below 1.0.0. So if you plan to use some software tagged 0.9.9, just be ready to redo your work at any time (read my EmberJS story). These are libraries in development and terrifying things can happen; for example, the author(s) might never publish a production-ready version or might simply abandon the project.

In short, SemVer does not cover libraries tagged 0.*.*; the first stable version is v1.0.0.

Start at 0.1.0

And this makes sense, doesn’t it? Projects typically start with a feature, not a bug fix. And since you will probably tweak a couple of things before becoming stable, it doesn’t make sense to start with a major version bump. Unless, of course, you are only publishing closed-source code.

Identifiers for PreReleases And Builds

Valid identifiers are in the set [A-Za-z0-9] and cannot be empty.

Pre-release metadata is identified by appending a hyphen to the end of the SemVer sequence. Thus a pre-release version of some-awesome-software could be tagged v1.0.0-alpha. Note that hyphens are allowed in pre-release identifiers (in addition to the set specified above) but numeric identifiers cannot contain leading zeros.

Build metadata is identified by appending a + sign at the end of the patch version number or at the end of the prerelease identifier; leading zeros are allowed. Examples include v1.0.0+0017, v1.0.0-alpha+0018.

Criticism

Jeremy Ashkenas (of Underscore and Backbone fame) has written a critique of SemVer, arguing that it is no silver bullet and that most big software projects do not use it (e.g. Node, jQuery etc.). There is also ferver too…

Nevertheless, it is worth knowing that you absolutely SHOULDN’T rely on version numbers as implying SemVer compliance (developers are human after all…); read the project docs and ask the maintainers if you have questions.


The Effective Programmer – 3 tips to maximize impact


Effectiveness, (noun) : the degree to which something is successful in producing a desired result; success.

Over the years, I have tried experiments, read books and watched several talks in a bid to improve my effectiveness. After a series of burnout and recovery cycles, I finally have a 3-pronged approach that seems to serve me well.

1. Learn to estimate well

2. Use the big picture to seek opportunities

3. Continuous Improvement

Let’s discuss these three.

1. Estimation – the bane of software development

Reliable estimates accurately forecast when feature work will be done. But when is a feature done? Is it when it is code complete? Test complete? Or deployed? Most developers wrongly equate code complete with test complete or deployment ready. This explains arbitrary estimates like “Oh… I’ll be done in 2 hours”; such estimates typically miss the mark by wide margins due to error compounding. Let’s take a simple bug-fix scenario at a fictitious software engineering company.

  • Bug is assigned to developer John SuperSmartz
  • John SuperSmartz reads the bug description, sets up his environment and reproduces it
  • He identifies the cause but does some light investigation to find any related bugs (he’s a good engineer)
  • John designs, implements and verifies the fix
  • Gets code review feedback and then checks in

Any of the intermediate steps can take longer than estimated (e.g. code reviews might expose design flaws, check-ins might be blocked by a bad merge, newer bugs might be discovered during investigation etc.). Without such an explicit breakdown, it becomes difficult to give proper estimates. Don’t you now think the 2-hour estimate is too optimistic?

Personally, I use kanbanFlow (I love their Kanban + pomodoro integration) to decompose work into small achievable 25-minute chunks. For example, I might break down some feature work into 8 pomodoros as follows:

  • Requirements clarification – 1 pomodoro
  • Software design and test scenario planning – 2 pomodoros
  • Coding (+ unit tests) – 3 pomodoros
  • Testing and code reviews – 1 pomodoro
  • Check-in + estimation review – 1 pomodoro

Some of the things I have learnt from using this approach:

  • I grossly underestimate feature work – the good side though is that this planning enables me to improve over time
  • I know when to start looking for help – as soon as a task exceeds its planned estimate, I start considering alternative approaches or seeking the help of a senior technical lead
  • Finally, it enables me to make more accurate forecasts – e.g. I can fix x bugs per week…

2. See the big picture

A man running around in circles covers a lot of distance but has little displacement. In optimal scenarios, distance covered equals displacement while in the worst scenario, it is possible to cover an infinite distance and have a displacement of zero.

Imagine working for several days on a feature and then discovering major design flaws that necessitates a system rewrite; a lot of distance has been covered but there has been little displacement. Working on non-essential low-impact tasks that no one cares about is neither efficient nor effective. Sure they might scratch an itch but always remember that the opportunity cost is quite high; the lost time could have been invested in higher priority tasks with a larger ROI.

Whales periodically surface for air before diving back down to their business; engineers should likewise periodically verify that their priorities align with the company’s goals. It’s easy to get carried away by the deluge of never-ending feature requests and bug fixes; an occasional step back is needed to grasp the whole picture. Here are sample questions to ask:

  • What are the team’s goals?
  • Does your current work align with company goals?
  • Are your skills current, or at risk of obsolescence?
  • Are there opportunities for improvement?

Personally, I try to define 3 to 4 high-impact deliverables at the beginning of each week and then focus on achieving them. Of course, such forecasts rely heavily on productivity estimates.

3. Continuous Improvement

Athletes consistently hold practice sessions even if they don’t want to because it’s essential to staying on top of their game. The same applies to pretty much any human endeavor – a dip in momentum typically leads to some loss in competitive edge. The software engineering field, with its rapidly evolving landscape, is even more demanding – developers have to continuously and relentlessly learn to stay relevant.

Staying relevant requires monitoring industry trends via blogs, conferences and newsletters. There are a ton of resources out there and it’s impossible to follow every single one; rather, it is essential to separate the wheat from the chaff and follow a select high-quality few.

Learning and experimentation with new technologies naturally follows from keeping abreast of developments. A developer might decide to learn more about the software stack his company uses, logic programming or even computer science theory. Even if his interests are totally unrelated to his day-to-day job, independent learning would expose him to new (possibly better) ways of solving problems, broaden his capabilities and might even open up new opportunities. I prefer learning established CS concepts to diving into every new db-data-to-user-moving framework.

Opportunities abound such as learning IDE shortcut keys, terminal commands, automating mundane tasks and so on. Ideally you want to start simple by selecting the area with the highest impact-to-effort ratio and then dedicating a few minutes to it daily. Over time, the benefits start to pay off.

And that’s about it! Do you have other suggestions?


Code is Poetry: 5 steps to bulletproof code


I recently read a Quora post about a meticulous cleaner who would not let the smallest blemish evade him – he’d stop, fix the flaw and then step back to admire his work. The narrator admitted that the cleaner’s platform was one of the neatest he had ever seen anywhere in the world.

Moral of the story? The same principles apply to software development – programmers have to love their craft and put their best into making it stand out. Going the extra mile to simplify complex code, remove dead code or write unit tests helps to extend the life of a codebase. These actions might appear to impede development speed, but then, what is the definition of slow? As a former colleague used to say: “we sometimes have to slow down to go faster”.

Poorly written software always comes back to haunt developers in unimaginable ways; such code will effortlessly conjure monster bugs of all types (e.g. heisenbugs, mutatorbugs, linkerbugs and more). Removing bugs from bad software is also a nightmare, as fixes might spawn even more abominable bug monstrosities (without unit tests, how do you verify?). Which would you prefer: disappointed customers, working weekends and eventually rewriting the same code, or doing it well upfront? The latter does sound faster to me…

Enough ranting, how about a couple of suggestions on how to write bulletproof code?

1. Know what you’re doing

Have you ever had to work on impossibly cryptic code? To all programmers who disparage others for not getting their ‘smart’ code, here is a beautiful quote from E. F. Schumacher:

“Any intelligent fool can make things bigger and more complex… It takes a touch of genius – and a lot of courage – to move in the opposite direction.”

Most ‘smart’ code pieces are indicative of shallow problem-domain knowledge. Most times, these pieces can be simplified by taking things away until the problem the author was trying to solve becomes clear.

The first thing you want to do when approaching a problem is to stop and think deeply. Stop! Don’t even start that IDE!! Do you understand the problem thoroughly? Do you know what issues might arise? How about testing and completion criteria? Once brainstorming is done, find the simplest, fastest and most effective way to clearly convey your solution. Always remember that programming languages are vehicles for expressing your thoughts.

Without fully understanding the task, it is possible to end up with over-complicated code or even worse: ‘smart’ code. Please do not create breeding grounds for evil bugs – developers have better things to do with productive hours. Simplicity is the true smartness…

2. State modification is the root of most evil

Functional programming purists tout FP as less susceptible to bugs because it avoids modifying state. The best functions are like mathematical functions, e.g. y = x². Why are they good? They always give the same results for the same input and have no hidden internal state lying around.

I like using functions a lot; composing them into larger pieces and building up systems based on them. Such systems rely heavily on piping – function outputs are passed to other functions as inputs. Clearly designed functions are great as they communicate intent (e.g. square vs x²), make it easy to follow the author’s thought process and provide simplifying abstractions.

Watch out for the double-edged sword of modifying state in functions; it is the root of most evil bugs. Wrong assumptions about state changes, obscure manipulations (e.g. inadvertently modifying loop invariants) and improper naming inevitably lead to trouble.

Whenever possible, use a functional programming style that relies on pure, stateless functions. Hopefully this saves you a lot of debugging pain – you can thank me later…
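
A small sketch of the difference, using a made-up shopping-cart example:

const cart = { items: [10, 20], total: 30 };

// impure – mutates shared state; every caller must know about the side effect
function addItemInPlace(cart, price) {
  cart.items.push(price);
  cart.total += price;
}

// pure – same input, same output; nothing else in the program changes
function addItem(cart, price) {
  return {
    items: [...cart.items, price],
    total: cart.total + price
  };
}

const newCart = addItem(cart, 5); // cart is untouched; newCart.total === 35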

3. Clean up as you go

What is your typical reaction when you come across horrible code? Do you sidestep the mess (I didn’t write it…) or do you attempt to fix it?

Just as people are more likely to drop trash on a dirty sidewalk than on a clean one, developers are more likely to write horrible code if the original source code is bad. This attitude is one to avoid at all costs; if you can’t invest in cleaning up the dirt, at least don’t add to it! Remember the boy scouts’ rule – always leave the park better than you found it.

Ideally you should clean up the code as you work in it – it shows that you care and helps prolong the life of the code base. If you need to strike a balance between adding new features and fixing technical debt (which might not bring new business value), then carve out a small block of time (e.g. 30 minutes).

If you want to know why this is important, then you should read about the broken windows theory (not yet, finish the post).

4. Love your craft

The best craftsmen are immensely proud of their work and achievements; they show it off to everyone and won’t stop talking about it (even if we do not want to hear about it).

You should be proud of your code. Seek feedback; what looks beautiful to you might be terrible code – perspective, as they say, is subjective. Don’t get defensive or hurt over review feedback either – it’s the code, dude! Not you!! So swallow your pride, learn and improve the code. Growth involves some pain.

Spend the time to think about elegance, simplicity and brilliance; the extra time and effort pays off. Which would you prefer of the two outcomes?

  • Creating a masterpiece in 6 hours
  • Creating a ‘hacker-piece’ in 2 hours and then spending 10 hours in maintenance (bug fixes, extensions and changes)

It’s a trade-off; I don’t like hunting down bugs – I do it if I have to but I’d rather not…

5. Break your software yourself

There was a dramatic moment during the début of the Microsoft Surface – the presenter dropped it (intentionally) and everyone gasped! Why would anyone drop a tablet? There are countless woeful tales of smashed screens, internal damage and data loss. However, the presenter knew the product’s limits and knew it could take the fall. Can you break your own software? Do you know its limits?

Go ahead – break it!! How else would you know its limits? Nearly every piece of code can be broken – a function that calculates 1 + 1 will give a wrong result on a computer with only 1 bit of memory (yeah, I wonder how you’d even get such a computer).

It’s a shame if customers find obvious bugs that should never have gotten into production. Customers who run into show-stopper bugs are typically not happy and unhappy customers can consider alternatives. Broken software can cause huge losses (goodwill, money, credibility, tarnished brand images etc) so you do not want to take this lightly – unless you fancy going out of business.

In these days of web applications, your customers will do a lot of testing for you – the more users you have, the more weird scenarios your software will be put through. Don’t obsess over the most extreme cases, but the normal, expected paths have to work well.

So try to break your code yourself before you call it done. This makes it more robust and you get to know the corner cases. Even if you do not fix them all, such knowledge enables you to handle customer complaints.

Inshaaha Allaah I hope to write about how to break software some time soon.

Now go and write some better code…

5 things I learnt from Code Complete 2


My yearning to read Code Complete started early in 2012 when I came across the book in the MASDAR library. Most of it didn’t make sense to me then, but I made a mental note to return to it. Alhamdulilah, I eventually got a copy of the book around August 2012 but couldn’t bring myself to read it because of the page count. Instead I read a couple of smaller books. I finally stumbled upon Goodreads this year and it helped revive my long-lost book-reading habits.

I made a simple rule – I would read for about 30 minutes every day: no more, no less. I wasn’t bothered about how long it would take or my reading speed; all I cared about was consistency and understanding the concepts. I finally finished the book about five months later (sounds like an eternity, huh?). The book is a little on the verbose side; however, I learnt a lot – I mean a REAL LOT – and here are my top five lessons from it.

1. Comments might actually imply bad code

It was surprising to learn that code with LOTS of comments might actually be bad code – strange, huh? Heavily commented code can imply that the underlying codebase is abstruse and not self-documenting. Ideally, code should be simple, straightforward and not in need of heavy commenting.

Moreover, if comments are not updated when the original code changes, they become misleading – which is even worse than having no comments at all. If you have excessive comments, maybe you should review and rework your code. Trust me, most times you’ll end up with cleaner code… OK, well hopefully :).

Oh, and for those times you write ‘rocket science’ code (aka impossible to simplify), please save future maintainers the stress and do add comments.

2. Project Management

I think one of the most difficult aspects of the software development profession is getting requirements right. Customer perceptions always change, and most times people do not even know WHAT they WANT; when they finally do, they want it delivered last week :).

I picked up a couple of ways to make this an acceptable (even pleasant) experience for everyone involved – from developers to project managers to clients: learn how to say no, how to estimate project timelines and which methodologies to use.

3. Leveling up as a programmer

I especially love the ‘Program into your language, not in it’ quote. Programmers should NOT be limited by the languages or tools they use. If a language lacks a highly desired feature (e.g. asserts, exceptions etc.), there is nothing stopping you from implementing your own version of it.

Another important point I picked up was the continuous need to consciously improve the quality of my development process. I always get the urge to write quick dirty fixes to problems and the instant gratification of ‘working’ code fuels this bad habit. No, don’t give in! Resist the urge, fight it and crush it! Try to go the extra mile every time you code, that’s the way to grow.

I admit that writing ‘beautiful code’ takes more time and effort however the future savings far outweigh the immediate gains of bad fixes. What is the use of writing quick code in 2 hours and then spending 12 hours to maintain it? It would have been better to spend 6 hours to get it all right.

I once spent about two days getting a really critical function to work. I could have written it very quickly (< 2 hours) in sub-optimal ways; however, it would not have been reusable, flexible or neat.

4. Code is written to be maintained!

I have known for some time that duplicated code is bad, and I finally got to really understand why: the issue is maintenance. Assume there are three replicated instances of the same code block; if a new programmer updates only one and forgets the others, then ‘miracles’ will happen at execution time, and the issue will be extremely difficult to find.

It is also good to create config files that allow you to make platform-wide changes easily ( I use this a lot now); it makes code much more flexible and you do not really need to dive into some ‘evil’ forest to make changes.

Finally deleting code is a good idea – strange? The smaller the codebase, the fewer the chances of mistakes happening and the smaller the number of things you have to worry about. 🙂

5.  The importance of planning and design

Do not just rush into code, think carefully about the design, how to create re-usable components, how to properly name variables and ensure that code is clean and can be easily extended in the future.

I also adopted the Pseudocode Programming Process (PPP), which has saved me a lot of trouble with coding and design. This is how it works: you write pseudocode of what you want to do and how you are going to go about it – I normally just create a list of numbered steps – then you think through your plan and validate all assumptions.

Afterwards, coding should be a breeze and, even better, the pseudocode (assuming you leave it in the code) turns into useful comments. This approach also prevents second-guessing in tricky situations, because you already walked through the scenario and wrote it out.
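
A small sketch of PPP in action on a made-up user-import task; the numbered pseudocode steps survive as comments:

function importUsers(rows) {
  // 1. validate each row and collect the failures
  const valid = [], errors = [];
  for (const row of rows) {
    if (row.email && row.name) valid.push(row);
    else errors.push(row);
  }
  // 2. de-duplicate by email
  const seen = new Set();
  const unique = valid.filter(r => !seen.has(r.email) && seen.add(r.email));
  // 3. report what was skipped so nothing fails silently
  return { imported: unique, skipped: errors.length };
}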

I noticed remarkable changes when I stopped diving into code without thinking. For example, I was able to reuse some of my existing code to create an admin dashboard in just about 120 minutes; I bet I would have had to start from scratch otherwise.

Final Thoughts

Despite its size and verbosity, Code Complete 2 is a great book and all programmers should read it – you will learn a lot and know which habits to drop.

If you also find it too difficult to understand, put it away for some time and make sure you return to it in about six months. The following quote explains this best:

If you’re a beginning programmer you won’t understand a lot of the material, and if you are experienced, the book will only confirm what you already know.

– Robert Harvey

The Art of Debugging


I recently wasted a lot of time while attempting to use the amazing Dropzone.js image uploader. I left out a function call in an overridden function and this broke every subsequent call. More annoyingly, I could not pinpoint the cause of the error as Dropzone was failing silently.

Close to giving up, I decided to make one more attempt and went to my sandbox – I keep a safe area in all my projects for doing ‘risky’ stuff. Starting afresh there, I was amazed to see that everything worked fine. A close comparison of the two code samples revealed the bug’s origin: the left-out function call. I double-checked to confirm it, and that was that. Sadly, the documentation does not mention that failing to call done() in the accept function will break the code silently.

Programmers usually spend a lot of time debugging, and it can be a painful experience; some feel like tearing out their hair in exasperation, smashing their poor computers or even believing that their computers really ‘hate’ them! 🙂

Actually computers do not hate people and the operating system is not conjuring bugs – the most likely reason is buggy code. Here are a couple of tips on debugging; hopefully these will help to reduce time spent, frustration and annoyance levels.

The Inefficient Way of Debugging: Trial-by-Error

Random guesses? Print statements littered around? Fast fixes that ‘appear’ to work? Do these ring a bell? Welcome to the bad bad way of debugging.

This approach is fraught with a gazillion issues, the worst being the potential introduction of new bugs into the codebase. Lucky bug discoveries offer few learning opportunities, and it might be difficult to reuse knowledge gained from such fixes in the future. Also, such run-of-the-mill fixes tend to be inelegant and stand out like a sore thumb against the earlier design and architecture.

The Efficient Ways of Debugging

1. The very first thing is to create a list of the likely causes of the bug; this usually involves thinking deeply about loopholes and design flaws. Studying program error dumps and buggy behaviour might also help.

2. Go through the list while eliminating disproved hypotheses. Once a likely cause is found, a verification needs to be carried out; this can be as simple as proving that the bug appears only when that cause is activated.

3. Don’t just rush to write a fix – I bet you would not want your ‘super-fix’ to mutate into a monster ‘bug’. Think deeply about the bug and what potential ripple effects a fix could have. Also think about how the proposed fix would work, how it blends into the system’s architecture and possible design impacts.

4. Yes, you got it!! Go ahead and squash that bug! Yuck – I dislike bugs.

5. Not finished yet, you need to go through your code armed with your newly acquired bug-terminating powers. Find and ruthlessly sanitize code likely to suffer from such bugs. Rinse and repeat – always leave code in a cleaner state than before.

Extreme Approaches: When the Efficient Ways Fail

You might apply all the above and still be stuck in a rut; this might occur if you have extra-terrestrial bugs (quite popular in multi-threaded environments).

6. Total examination

The brute force approach is bound to work insha Allaah although it involves painstaking work. You go through the code thoroughly (a debugger should come in handy); this might involve examining all execution paths if possible and trying to pinpoint the issue. Hard work…

7. Start afresh

When everything fails, why not throw out the old code and write it again?

Some other tips

8. Very important: set a time limit

Do not spend three days trying to fix a bug if you can re-implement the same feature in two hours. If you are sure you can rewrite a bug-free version of the buggy code, go ahead with the rewrite and spare yourself the trouble. Time counts.

9. Learn to use tools

Debuggers, memory examination tools and profilers help point out what might be causing an issue. I once had a really, really nasty bug while using jQuery UI; it only showed up on a particular browser and only when the page was accessed over the local network – otherwise all was fine.

I eventually used a profiler after several failed debugging attempts and discovered that a function was being called several hundred times. Bug case solved!
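
A poor man’s version of that diagnosis is to wrap a suspect function and count its calls – a hedged sketch:

function countCalls(fn, name) {
  let calls = 0;
  return function(...args) {
    calls += 1;
    if (calls % 100 === 0) console.warn(`${name} called ${calls} times`);
    return fn.apply(this, args);
  };
}

// usage: resizeHandler = countCalls(resizeHandler, 'resizeHandler');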

10. Discuss with a buddy

Find a programmer buddy to talk the problem through with; discussing your ideas with someone helps you find design gaps and flawed assumptions. You might be surprised to find the error spot-on without any contribution from your buddy.

Now go ahead and eliminate some bugs :).