the code talks: Understanding the language of code as it speaks

Complexity of a code - Time

Say you have been asked to reach a destination that has many routes available. Now obviously we are always going to look for the fastest route available: the one with less traffic, fewer speed bumps, and the shortest travel time, right?

So time is one of the deciding factors here. Similarly, for most of the problems we have at hand, we have more than one solution available. How efficient and fast the solution is depends on our choice.

We have the same choices when we write code: which data structures we use, or which platform we use for developing it. Time complexity is based on the amount of time it takes for a particular algorithm / program / lines of code to execute. However, rather than emphasizing the overall time, it takes into account the impact of the size of the dataset on the given program's execution.

This plays a crucial role when choosing algorithms to solve a problem. For example, in a program for searching a name in an address book, would you prefer linear search or binary search (choose this!)? Binary search would be chosen because the worst-case complexity of this search is O(log n).

Now, what does O, or the Big-Oh, stand for?

So when looking at the time complexity of any algorithm, there is a best-case scenario (say, an input size of 1), a worst-case scenario (say, an input size of 1 million), and an average-case scenario. To indicate each scenario we have certain notations.

  • Big-Oh - O - Worst case scenario
  • Big-Theta - θ - Average case scenario
  • Big-Omega - Ω - Best case scenario

Which scenario we consider depends on the perspective; however, it is generally the worst-case scenario that is used to decide on the efficiency of an algorithm. (Strictly speaking, these notations describe bounds on growth rather than scenarios, but this case-based view is a common and convenient shorthand.)

Going ahead, below are some common complexities that we usually play around with, along with some fun ratings for each!

  1. O(1) - Constant time - independent of input size - BEST
  2. O(n) - Linear complexity - directly proportional to the input size - SURE, WHY NOT!
  3. O(n²) - Quadratic complexity - proportional to the square of the input size - ARE YOU REALLY SURE?
  4. O(nᵏ) - Polynomial complexity - proportional to a higher power of the input size, greater than 2 (for example, cube) - ARE YOU REALLY REALLY SURE??
  5. O(log n) - Logarithmic complexity - reduces the input size by 50% every operation (e.g., binary search) - GOOD ENOUGH
  6. O(n log n) - Linearithmic complexity - a combination of linear + logarithmic - NOT GOOD, BUT SURE!
  7. O(2ⁿ) - Exponential complexity - I MEAN, ARE YOU REALLY SURE??
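To get a feel for why the ratings above differ so wildly, here is a small sketch (not part of the original write-up) that prints the approximate number of operations each complexity class implies for one input size. The value of n is an arbitrary choice, kept small so that 2ⁿ stays printable.

```java
public class GrowthDemo {

    public static void main(String[] args) {
        int n = 20; // arbitrary input size; small so 2^n stays readable

        // Approximate operation counts for each complexity class at this n.
        System.out.println("O(1)       : " + 1);
        System.out.println("O(log n)   : " + (int) (Math.log(n) / Math.log(2)));
        System.out.println("O(n)       : " + n);
        System.out.println("O(n log n) : " + (int) (n * Math.log(n) / Math.log(2)));
        System.out.println("O(n^2)     : " + n * n);
        System.out.println("O(2^n)     : " + (long) Math.pow(2, n));
    }
}
```

Even at n = 20, the jump from n² (400) to 2ⁿ (over a million) shows why the last rating is so alarmed.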

And you might have gotten the hang of it by now: the expression inside the Big-Oh describes how the number of operations grows with the input size n.

Let's dive in to understand more on the complexities.

Constant Time - O(1)

public static void main(String[] args) {

System.out.println("Program Executed"); 

}

The above code will always have a constant time complexity of O(1) because, irrespective of the size of the dataset args[], it completes in the same amount of time. Simply put, the size of the dataset has no impact on the execution time of the program.
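Another everyday O(1) operation, sketched below for illustration (the helper name is hypothetical), is reading an array element by index: no matter how large the array is, the lookup is a single step.

```java
public class ConstantTimeDemo {

    // Reading an element at a known index is one operation,
    // independent of the array's length - O(1).
    static int firstElement(int[] data) {
        return data[0];
    }

    public static void main(String[] args) {
        int[] small = {42};
        int[] large = new int[1_000_000];
        large[0] = 7;

        // Both calls take the same constant time regardless of array size.
        System.out.println(firstElement(small)); // 42
        System.out.println(firstElement(large)); // 7
    }
}
```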

Linear Complexity - O(n)

public static void main(String[] args){

   for(int index = 0; index < args.length; index++){

       System.out.println(" Program Executed ");

   }

}

This code depends on the size of the dataset provided. Say there are 10 items in the args array: the loop will execute a total of 10 times, printing the line "Program Executed" 10 times. For 100 items, the line will be printed 100 times. This gives us the understanding that for n items of data, the total execution takes on the order of n steps. The time complexity can hence be said to be O(n).
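To tie this back to the address-book example from earlier, here is a sketch of a linear search (the names and helper are made up for illustration): in the worst case, when the name is absent or sits at the end, it inspects all n entries, giving O(n).

```java
public class LinearSearchDemo {

    // Scans entries one by one; the worst case touches all n elements - O(n).
    static int linearSearch(String[] names, String target) {
        for (int index = 0; index < names.length; index++) {
            if (names[index].equals(target)) {
                return index; // found at this position
            }
        }
        return -1; // not present anywhere in the array
    }

    public static void main(String[] args) {
        String[] addressBook = {"Asha", "Bruno", "Carlos", "Divya"};
        System.out.println(linearSearch(addressBook, "Carlos")); // 2
        System.out.println(linearSearch(addressBook, "Zoe"));    // -1
    }
}
```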

Quadratic / Polynomial Complexity - O(n²) / O(nᵏ)

public static void main(String[] args){

   for(int loopOneIndex = 0; loopOneIndex < args.length; loopOneIndex++){

       for(int loopTwoIndex = 0; loopTwoIndex < args.length; loopTwoIndex++){

              System.out.println(" Program Executed ");

        }

    }

}

The above code will execute a total of n² times. So if we have nested loops in our logic that depend on the input size, the time complexity gets affected accordingly. In this case we have O(n²) complexity. Similarly, if the loops are nested more than 2 levels deep, we get polynomial complexity.
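A common real-world shape of O(n²) is comparing every element with every other element, for example this naive duplicate check (a sketch; the helper name is hypothetical):

```java
public class QuadraticDemo {

    // Compares every pair (i, j); roughly n * n comparisons - O(n^2).
    static boolean hasDuplicate(int[] data) {
        for (int i = 0; i < data.length; i++) {
            for (int j = i + 1; j < data.length; j++) {
                if (data[i] == data[j]) {
                    return true; // a repeated value was found
                }
            }
        }
        return false; // all values are distinct
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{1, 2, 3, 2})); // true
        System.out.println(hasDuplicate(new int[]{1, 2, 3}));    // false
    }
}
```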

Logarithmic Complexity - O(log n)

public static void main(String[] args){

  int[] arr = new int[] {1,2,3,4,5,6,7,8,9,10};

  int numToSearch = 8;

  int startIndex = 0, endIndex = arr.length - 1;

  boolean isNumberPresent = false;

    while(startIndex <= endIndex){

       //Check middle index value

        int middleIndex = startIndex + (endIndex - startIndex) / 2;

       //If middleIndex has the number present

       if(arr[middleIndex] == numToSearch){

          isNumberPresent = true;
          break;

        }

       //If value is less than the middle value then change the end index, or else change the start index

       if(arr[middleIndex] > numToSearch)

          endIndex = middleIndex - 1;

       else

         startIndex = middleIndex + 1;

   }

}

This kind of code exhibits behavior where the search space is roughly halved for the next iteration. Looking at the above code, the first iteration starts with an input size of 10 (indices 0 to 9). The middle index is 4 and arr[4] is 5, which is less than 8, so the start index moves to 5, and the second iteration searches only indices 5 to 9, thereby reducing the input size by 50%.
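A quick way to see the log₂(n) behaviour in actual numbers is this small sketch (added for illustration): count how many times n can be halved before it reaches 1. That count is the number of iterations binary search needs in the worst case.

```java
public class HalvingDemo {

    // Counts halvings until the size reaches 1; the count is floor(log2(n)).
    static int halvingSteps(int n) {
        int steps = 0;
        while (n > 1) {
            n = n / 2; // the search space shrinks by 50% each step
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(halvingSteps(10));        // 3
        System.out.println(halvingSteps(1000));      // 9
        System.out.println(halvingSteps(1_000_000)); // 19
    }
}
```

Notice how a million elements still need only about 20 steps: that is the whole appeal of O(log n).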

Exponential Complexity - O(2ⁿ)

Well, let's ensure we avoid using algorithms with such time complexities, because they can have a big impact on your code's efficiency. Here the number of operations grows exponentially with the input size (it is literally in the name). The popular example of this is computing the Fibonacci series for a number n.

The Fibonacci series, for the unversed, is 0, 1, 1, 2, 3, 5, 8, 13, 21 and so on. Basically, adding the previous two numbers gives the next number in the sequence.

This may sound or look simple, however, just imagine the stack of recursive calls when we write this out.

public static void main(String[] args){

   int num = 4;

   for(int index = 0; index < num; index++){

      int value = getFibonacci(index);

      System.out.println(value);

   }

}

private static int getFibonacci(int num){

    if(num <= 1)

       return num;

    else

       return getFibonacci(num-1) + getFibonacci(num-2); 

}

The above code will print 0, 1, 1, 2, each number on its own line. Now this may seem trivial, and one might wonder why so much stress on avoiding this. Let's understand this program a little more.

For index = 0,

getFibonacci is called, the condition num <= 1 is true, and we return 0.

For index = 1,

getFibonacci is called, the condition num <= 1 is true, and we return 1.

For index = 2,

getFibonacci is called, the condition num <= 1 is false,

so getFibonacci(2-1) + getFibonacci(2-2) is called.

For the first call, 2-1 = 1, so num <= 1 is true; for the second call, 2-2 = 0, so num <= 1 is true.

Within each call the condition is evaluated and a value is returned.

For index = 3,

getFibonacci is called, the condition num <= 1 is false,

so getFibonacci(3-1) + getFibonacci(3-2) is called.

For the first call, 3-1 = 2, so num <= 1 is false, and getFibonacci(2-1) + getFibonacci(2-2) is called all over again.

We can see how nested this is getting, right??

Within each call the condition is evaluated and a value is returned.

Now the limit for this run is just up to 3. Just imagine the whole stack of nested calls when the limit is 10, 100, or 1000. This is the reason why we avoid algorithms with exponential complexity.
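For contrast, the same series can be produced without any of that nesting. This sketch (not part of the original program) builds the sequence bottom-up, keeping just the previous two values, so each number costs one loop iteration: O(n) instead of O(2ⁿ).

```java
public class IterativeFibonacci {

    // Builds the series bottom-up; a single pass, so O(n) instead of O(2^n).
    static int fibonacci(int num) {
        if (num <= 1) {
            return num; // base cases: fib(0) = 0, fib(1) = 1
        }
        int previous = 0, current = 1;
        for (int index = 2; index <= num; index++) {
            int next = previous + current; // add the previous two numbers
            previous = current;
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        for (int index = 0; index < 8; index++) {
            System.out.print(fibonacci(index) + " "); // 0 1 1 2 3 5 8 13
        }
    }
}
```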

Well, that was a lot of complexities within a short amount of time, wasn't it!! That's it for now. We will probably cover many different algorithms and their complexities ahead. Stay tuned for some more code talks.

