Sorting Algorithms


Sorting is the other fundamental operation at the heart of any computational process. Let us look at some of the considerations in sorting algorithms.

In Place Sorting


In essence, any sorting algorithm has to compare elements and rearrange their positions. Rearranging could mean physically moving the elements in memory, or the same effect can be achieved by manipulating their indices or links.

If we physically swap their positions in memory, we need a third place in memory where we store one of the two elements before overwriting it with the second. A memory swap of elements A and B means creating a copy of A, overwriting A with B, and then overwriting B with the copy of A. Thus, we have three memory copies. This may seem small when we sort a small array of numbers. But if we have to sort a massive array of huge data structures, such copies would be very costly.
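Here is a minimal sketch of that three-copy swap, in Python. (Python variables hold references, so the copies here are cheap; in a value-semantics language like C, each of these steps would copy the whole element.)

```python
def swap(arr, i, j):
    # The classic three-step swap: one temporary copy plus two overwrites.
    tmp = arr[i]       # copy A into a temporary slot
    arr[i] = arr[j]    # overwrite A with B
    arr[j] = tmp       # overwrite B with the saved copy of A
```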

The other approach is to play with the indices and links. For example, if we have to swap two elements A and B in a singly linked list, we need to change just 4 next pointers - in the elements before A and B, and in the elements A and B themselves. Even if A and B are huge data structures, this is all we need to do; there is no memory copy involved. Sorting that rearranges the existing data like this is called in place sorting. But there are some algorithms that leave the current data untouched and create a new sorted copy of that data. This is not in place sorting. Here, we have a smaller number of copies (one per element), but it requires more memory. There are times when we need to continue working with the existing data in the existing order - the latter approach is useful in such a case.
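A sketch of the pointer version, assuming a singly linked list where A and B are not adjacent and each has a predecessor node; the Node class and swap_after helper are illustrative names, not from any library:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def swap_after(prev_a, prev_b):
    # Swap the nodes that follow prev_a and prev_b (assumed non-adjacent).
    # Exactly four next pointers change; no element is copied.
    a, b = prev_a.next, prev_b.next
    prev_a.next, prev_b.next = b, a    # pointers into A and B
    a.next, b.next = b.next, a.next    # pointers out of A and B
```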

Stable Sorting


When we sort elements in a list, we change their positions. If one element is less than another - as per the given criteria - then we are sure that it will come before the other. But how about elements that are equal according to the given criteria? Which one will go first? Is that defined?

Some sort algorithms ensure that the relative positions of such elements are not altered. Others leave this unpredictable. The first kind is called stable sorting. The implications of stable or unstable sorting are not so obvious, but there are scenarios where we need stable sorting, and others where it does not matter.
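A quick illustration using Python's built-in sorted(), which is stable. Suppose we have some made-up (name, grade) records, already in name order, and we sort them by grade alone:

```python
records = [("Asha", "A"), ("John", "B"), ("Meera", "A"), ("Ravi", "B")]

# sorted() in Python is stable: records with equal grades keep their
# original relative order (here, alphabetical by name).
by_grade = sorted(records, key=lambda r: r[1])
print(by_grade)
# [('Asha', 'A'), ('Meera', 'A'), ('John', 'B'), ('Ravi', 'B')]
```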

Adaptive Sorting


Some algorithms assume by default that the elements are not sorted at all, and go through the whole process irrespective of the current state. Others assume that they are "almost sorted" and work towards improving this state. The latter is called adaptive sorting.

Again, there is no good or bad here. Sometimes, we know that the data is really in chaos, and it makes sense to just discard the current order and start afresh. But at other times, the data is not that chaotic, and we would like to cash in on any existing order in there.

We can choose an algorithm based on our requirements.

Insertion Sort


This works great with linked lists and dynamic data. We start with a list of a single element. Of course, that is sorted. Then, we add another element - in the appropriate position to keep the list sorted. Thus, we continue to add elements, inserting each in the appropriate position to keep the list sorted.

Now, if we remove or add elements in this list, it is a very simple job. Removal, of course, does not impact the sort order. And if we add a new element, we just follow the same process of pushing it into the appropriate position.

Insertion sort is stable as well as adaptive - equal elements retain their relative order, and an almost sorted list needs very little work. It works great with linked lists, where an insertion is just a couple of pointer changes. Within an array it is also in place, but there any insertion means shifting the elements after the insertion point - and that can be really costly.
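A sketch of this process on a singly linked list (the Node class is the same illustrative one as before, redefined so the snippet stands alone):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insert_sorted(head, value):
    # Insert value into an already sorted list, keeping it sorted.
    # Only pointers change; nothing is shifted or copied.
    node = Node(value)
    if head is None or value < head.value:
        node.next = head
        return node
    cur = head
    # Using <= keeps the sort stable: a new element goes after
    # any equal elements already present.
    while cur.next is not None and cur.next.value <= value:
        cur = cur.next
    node.next = cur.next
    cur.next = node
    return head

def insertion_sort(values):
    # Build the sorted list by inserting one element at a time.
    head = None
    for v in values:
        head = insert_sorted(head, v)
    return head
```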

Selection Sort


This is an in place sorting algorithm. Here, we start by identifying the smallest element in the entire set. To do this, we compare the first two elements and remember the smaller of the two; that is compared with the third, the smaller of those with the fourth, and so on.

Once we have the smallest element, we swap it with the first element in the list. Then, we identify the smallest element in the remaining data. This is swapped with the second, and so on. Thus, we build a sorted list on one side that continues to grow, while the remaining unsorted part of the list shrinks.
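A minimal sketch of selection sort on a Python list:

```python
def selection_sort(arr):
    # In place: repeatedly find the smallest of the unsorted tail
    # and swap it to the front. The long-distance swap is what makes
    # this unstable in general, and it is not adaptive.
    n = len(arr)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):
            if arr[j] < arr[smallest]:
                smallest = j
        arr[i], arr[smallest] = arr[smallest], arr[i]
    return arr
```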

Bubble Sort


Have you seen bubbles of air pushing their way up through a glass of water? How does it happen? The small bubble of air is lighter than the water around it, so buoyancy pushes it up. But it does not have to find its final position right at the beginning. It compares itself with the water just above it. Moving upward, it continues to compare with the water just above it, and continues to move upwards till it has air above it - that is when it stops moving.

The bubble sort algorithm is inspired by this process. Here, we do not compare all the elements in the data structure at once. We start at one end, comparing each element with its neighbor. If required, we swap them; otherwise we just move forward. We repeat such passes till we notice that a pass makes no swap anymore - the data is sorted.

This algorithm is in place, adaptive and also stable. It works well when we have all the data with us and do not expect any more changes. If we append an element to the end, we will have to go through the passes again, requiring several swaps till the element slides to its correct position.
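A sketch of bubble sort with the early-exit check that makes it adaptive:

```python
def bubble_sort(arr):
    # In place. Stable, because only adjacent elements are ever
    # swapped; adaptive, because an already sorted list costs one pass.
    n = len(arr)
    while n > 1:
        swapped = False
        for i in range(n - 1):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        n -= 1  # the largest element of this pass has bubbled into place
        if not swapped:
            break
    return arr
```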

Shell Sort


This is an extension of the bubble sort algorithm, and it works out to be quite efficient. It is in place as well as adaptive, but not stable.

We start by defining an "interval" (gap). We split the list into sub lists based on this interval, by picking up (in place) the elements at the given interval. Here, we compare and swap (if required) the neighboring elements of each sub list - bubbling through them. We then reduce the interval and repeat the process - continuing until the interval comes down to 1, where we bubble through the entire list. But by this time, the list is almost sorted, and so we have very little processing left.
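A sketch of Shell sort, assuming the simple gap sequence n/2, n/4, ..., 1 (many better sequences exist). It is written here as a gapped insertion within each pass, which is the usual formulation of the same idea:

```python
def shell_sort(arr):
    # In place and adaptive, but not stable: elements jump across
    # long distances, so equal elements can change relative order.
    n = len(arr)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            key = arr[i]
            j = i
            # Slide key back through its gapped sub list.
            while j >= gap and arr[j - gap] > key:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = key
        gap //= 2  # by gap == 1, the list is almost sorted
    return arr
```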

Merge Sort


Often, the entire data is too huge to sort at a time. Merge sort helps us simplify this problem. It is based on the concept of divide and conquer. We start by splitting the data into two parts. We sort each part using one of the sort algorithms. Then, we merge these two sorted lists.

To do this, we compare the tips of the two sorted lists. The smaller of the two is pushed into the merged output, and then we compare the tips again. Doing this iteratively, we end up with a single sorted list. But this is not in place. It could be adaptive or stable - based on the actual algorithm used for sorting the individual subsets, and on whether the merge prefers the first list when the tips are equal.
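A sketch of the merge step and the recursive sort built on it. Here the merge takes from the first list on ties, which keeps it stable:

```python
def merge(left, right):
    # Merge two already sorted lists into a new sorted list.
    # Not in place: the result is a fresh list.
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= prefers the left tip: stable
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])       # one side may still have a tail
    result.extend(right[j:])
    return result

def merge_sort(items):
    # Divide and conquer: split, sort each half, merge.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    return merge(merge_sort(items[:mid]), merge_sort(items[mid:]))
```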

Quick Sort


This improves on the merge sort, and is good for huge data sets. As described here - building new sub lists - it is not in place, but it is stable. It is not really adaptive, though: existing order in the data does not help it. It works by choosing a pivot value. The entire data is compared with this pivot and two sub lists are created - one with the elements lower than the pivot, and one with the rest.

Now, if we sort these two sub lists, we can be sure that we get a sorted list simply by appending them - with the pivot in between. So we do the same with the two sub lists - generating four sub lists. This is repeated recursively till the entire data is sorted - or the sub lists are small enough to use another sorting algorithm.

Then we simply append the sorted sub lists in the right order to get the final sorted list.
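A sketch of quick sort in the out-of-place form described above; picking the first element as the pivot is a naive choice for illustration, and real implementations choose it more carefully:

```python
def quick_sort(items):
    # Partition around a pivot into two new lists, sort each
    # recursively, and append them with the pivot in between.
    # Stable in this form, but not in place.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    lower = [x for x in rest if x < pivot]
    higher = [x for x in rest if x >= pivot]
    return quick_sort(lower) + [pivot] + quick_sort(higher)
```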