Understanding the sort behind the sorting algorithms

Let's say we give you the below list of numbers and ask you to sort them in ascending order.

How would you do it?

10, 7, 21, 4

And you would have already sorted the above set, saying okay, the numbers sorted in ascending order would be,

4, 7, 10, 21

How did you do it? Umm.. yes, a little tricky to answer that, isn't it? Our brain is a technically very complex structure, and we would have to dive deep to understand how exactly it did this in less than milliseconds or nanoseconds. To summarize the working of the brain, it used all the learnings it has gathered so far and its ultra-fast processing to come up with the result.

With computers, or machines in general, however, we have to provide or teach them a way to do the same. Also, there is not just one way; there are many ways to get to a solution. Some might use less processing and more space, whereas others might use more processing and less space, and so on.
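
To make this concrete, here is a minimal Python sketch of one such way (not the only one, just an illustrative approach): repeatedly pick the smallest remaining number, a selection-sort style idea that trades a copy of the input (extra space) for simplicity.

```python
def sort_ascending(numbers):
    # Work on a copy of the input, so this approach uses extra space.
    remaining = list(numbers)
    result = []
    while remaining:
        smallest = min(remaining)   # rescan the remaining numbers each time: extra processing
        remaining.remove(smallest)
        result.append(smallest)
    return result

print(sort_ascending([10, 7, 21, 4]))   # [4, 7, 10, 21]
```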

Now why do we sort the data? 

Sorting makes the data more readable and easier to fetch or find. It also helps searching algorithms, since many of them work much faster on sorted data, and there are many more applications in databases as well.
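
As a small illustration of the searching point, here is a Python sketch showing binary search, which needs only about log2(n) comparisons instead of scanning every element, but only works because the data is sorted.

```python
from bisect import bisect_left

def contains(sorted_data, target):
    # Binary search: correct only because sorted_data is already in ascending order.
    i = bisect_left(sorted_data, target)
    return i < len(sorted_data) and sorted_data[i] == target

data = [4, 7, 10, 21]
print(contains(data, 10))   # True
print(contains(data, 5))    # False
```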

So what are the different types of sorting algorithms, you ask?

Well, there are many, say Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, Quick Sort and more.

And as we mentioned above, each solution, or each way of sorting, involves a different set of complexities in the way it approaches the sorting.

To put more clarity on this, there are a few different pointers on which we can differentiate sorting algorithms.

Number of iterations or swaps

This explains how many swaps the algorithm makes, or how many times it has to move back and forth across the dataset, in order to get the dataset sorted.
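
For instance, here is a rough Python sketch of Bubble Sort that also counts how many passes and swaps it needed for our earlier list.

```python
def bubble_sort_with_counts(numbers):
    data = list(numbers)
    n = len(data)
    passes = swaps = 0
    for i in range(n - 1):
        passes += 1
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                # Neighbouring elements are out of order, so swap them.
                data[j], data[j + 1] = data[j + 1], data[j]
                swaps += 1
    return data, passes, swaps

print(bubble_sort_with_counts([10, 7, 21, 4]))
# ([4, 7, 10, 21], 3, 4)  -> 3 passes and 4 swaps for this input
```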

Stable / Unstable?

When we say an algorithm is stable, it will always maintain the relative order of equal elements. For example, if we have a dataset [a, c, b, b], a stable sort would not disturb the relative order of the two b characters.

Let's dive into this a bit more. Instead of just b, b, let's name them b1 and b2. So say we have the dataset [a, c, b1, b2], where b1 and b2 both represent the character b. An algorithm is said to be stable when the result is

[a, b1, b2, c], which shows that it has maintained the relative order during the sort operation.
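
Python's built-in sort happens to be stable, so we can see this behaviour directly. In the sketch below each item carries a label so we can tell the two b's apart, mirroring b1 and b2 above.

```python
# ("b", 3) plays the role of b1 and ("b", 4) the role of b2.
data = [("a", 1), ("c", 2), ("b", 3), ("b", 4)]

# Sort only by the character; a stable sort keeps equal keys in their original order.
result = sorted(data, key=lambda item: item[0])
print(result)   # [('a', 1), ('b', 3), ('b', 4), ('c', 2)] -> b1 is still before b2
```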

Space Complexity 

Whether the algorithm needs some extra data structures or extra space apart from the input data to perform the sort operation defines the space complexity of an algorithm. Algorithms like Bubble Sort, Insertion Sort, and Selection Sort have a space complexity of O(1), whereas Merge Sort has O(n) and Quick Sort has O(log n).
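
To get a feel for the difference, here is a rough Python sketch contrasting an in-place Selection Sort, which only swaps elements within the input list (O(1) extra space), with a Merge Sort that builds new sublists as it goes (O(n) extra space).

```python
def selection_sort_in_place(data):
    # O(1) extra space: only swaps elements inside the input list itself.
    n = len(data)
    for i in range(n):
        smallest = min(range(i, n), key=lambda k: data[k])
        data[i], data[smallest] = data[smallest], data[i]
    return data

def merge_sort(data):
    # O(n) extra space: slicing and merging build new lists at every level of recursion.
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left, right = merge_sort(data[:mid]), merge_sort(data[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(selection_sort_in_place([10, 7, 21, 4]))   # [4, 7, 10, 21]
print(merge_sort([10, 7, 21, 4]))                # [4, 7, 10, 21]
```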

Time complexity 

This is more in line with how much time an algorithm takes to perform the sorting for a given input size, say, how much time it will take to sort 10 or 100 or 1000 numbers in a dataset. It is indicated by Big O (worst case), Omega (best case), and Theta (average case).

For example, Bubble Sort can be indicated by

Big O (worst case) - O(n²)

Omega (best case) - Ω(n)

Theta (average case) - Θ(n²)
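
To tie those numbers to actual code, here is a sketch of Bubble Sort with the common "did we swap anything?" check, which is what gives it the best case of n on already-sorted input.

```python
def bubble_sort(numbers):
    data = list(numbers)
    n = len(data)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:
            # No swaps in a full pass means the list is already sorted: this early
            # exit is what gives Bubble Sort its best case of roughly n comparisons.
            break
    return data

print(bubble_sort([10, 7, 21, 4]))    # unsorted input: multiple passes, ~n² work
print(bubble_sort([4, 7, 10, 21]))    # already sorted: a single pass, the best case
```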

Well, that's kind of a summary of what sorting algorithms are. If you want to know more, just dive into each one of them and ask yourself which sorting algorithm your brain used earlier!
