Sorting means ordering the elements of a collection. For example, dictionary entries are sorted alphabetically, and lists of numbers can be arranged in increasing or decreasing order. Sorting is important and applies in various contexts, especially those that involve massive amounts of data.

The efficiency of a sorting algorithm depends on the number of elements it works on. A complex sorting algorithm may not be worth the trouble on small collections; however, its performance on huge datasets can exceed that of simpler algorithms.

Here is a brief explanation of the bubble, selection, insertion and merge sort algorithms.

**Bubble Sort**

This algorithm makes multiple passes through a list, comparing adjacent items and swapping those that are out of order. The very first pass through the list places the largest element in its final position, and each subsequent pass puts the next largest value in place too. That is, elements ‘bubble’ up to their appropriate locations.

This is a very inefficient algorithm due to the large number of exchanges and comparisons it carries out. Consider sorting a list of n elements: on the very first pass, bubble sort makes n-1 comparisons because there are n-1 adjacent pairs of elements; the second pass makes n-2 comparisons, and a complete sort requires (n-1) + (n-2) + … + 1 comparisons. The sum of this series is n(n-1)/2, which makes bubble sort *O*(n²).
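A minimal Python sketch of the idea (the function name `bubble_sort` and the early-exit flag are my additions, not from the post):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final places.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # a pass with no swaps means the list is sorted
            break
    return items
```

The `swapped` flag is an optional optimisation: it lets the sort stop early on already-sorted input, though the worst case remains *O*(n²).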

**Selection Sort**

The selection sort is also *O*(n²), but it usually performs better than the bubble sort because it makes far fewer swaps. On every pass, it selects the smallest (could be the largest too) element in the unsorted portion of the list and swaps it with the element at the smallest (or greatest) unsorted index. Next it finds the next smallest element and swaps it with the element at the next index. It continues this process until it has walked through the entire list.

A flaw of selection sort is that it cannot detect an already-sorted collection; it performs the full set of comparisons regardless of the input's order.
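A sketch in Python (the name `selection_sort` is mine): note that every pass scans the whole unsorted tail, which is why sorted input gets no benefit.

```python
def selection_sort(items):
    """Sort a list in place by repeatedly selecting the smallest remaining element."""
    n = len(items)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted tail items[i:].
        smallest = i
        for j in range(i + 1, n):
            if items[j] < items[smallest]:
                smallest = j
        # One swap per pass puts it at the smallest unsorted index.
        items[i], items[smallest] = items[smallest], items[i]
    return items
```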

**Insertion Sort**

The insertion sort works in a different way and typically outperforms the selection sort. On each pass, it maintains a sorted sub-list in the lower positions of the list; new items are inserted into their proper place in this sorted sub-list.

Sorting the entire list takes n-1 passes; on each pass, a linear search decides where to insert the new element. This leads to the *O*(n²) running time.
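A Python sketch (the name `insertion_sort` is mine); the inner loop is the linear search, which shifts larger elements right as it goes:

```python
def insertion_sort(items):
    """Sort a list in place by growing a sorted sub-list at the low end."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Walk left through the sorted sub-list, shifting larger elements right.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current  # insert into its proper place
    return items
```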

Interestingly, the number of comparisons the insertion sort makes can be reduced to *O*(n lg n). Since the sub-list in the lower positions is always sorted, a binary search can be used instead of a linear search to find the insertion point, cutting the comparisons per pass to about lg n. Note, however, that inserting still requires shifting elements, so the overall running time remains *O*(n²); only the comparison count improves.
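This binary-search variant can be sketched with Python's standard-library `bisect` module (the function name `binary_insertion_sort` is mine):

```python
from bisect import bisect_right

def binary_insertion_sort(items):
    """Insertion sort using binary search to locate each insertion point."""
    for i in range(1, len(items)):
        current = items.pop(i)
        # bisect_right finds the insertion point in the sorted prefix items[0:i]
        # using ~lg i comparisons; the insert itself still shifts elements,
        # so the total running time is still O(n^2).
        items.insert(bisect_right(items, current, 0, i), current)
    return items
```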

**Merge Sort**

This uses a divide-and-conquer strategy to improve performance. The merge sort is a recursive algorithm that continuously splits a list of items in two. The base case is reached when the list has only one item (a one-item list is already sorted); once this happens, the halves are combined back together as the recursion unwinds, using the **merge** operation.

The merge sort runs in *O*(n lg n) time and outperforms the insertion sort when n is sufficiently large; the exact crossover point depends on the implementation, but is often cited around n > 40.
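A sketch of the split-and-merge structure in Python (the name `merge_sort` is mine; this version returns a new list rather than sorting in place):

```python
def merge_sort(items):
    """Recursively split the list, then merge the sorted halves."""
    if len(items) <= 1:  # base case: a one-item (or empty) list is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these is empty;
    merged.extend(right[j:])  # the other holds the remaining sorted tail
    return merged
```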

Can you suggest other sorting algorithms?

Nice insight into sorting.

I struggled with sorting for a while because of my poor maths skills :(


Thanks Wale,

I had the same problem too and still run into it at times :).

Brushing up my discrete maths really helped; you can try these videos: http://video.google.com/videoplay?docid=-2965569821331370765

Prof Shai Simonson is awesome.


Wikipedia has a few:

http://en.wikipedia.org/wiki/Sorting_algorithm


Yes; you are right Bernard. :) Thanks, have you implemented any before?
