BIG-O NOTATION & ALGORITHM ANALYSIS

Big O Notation is a mathematical notation used in computer science to abstractly describe the behavior of an algorithm. It expresses the efficiency or complexity of an algorithm by describing how a program scales with growing input size. A program that searches for a number in a data set of 100 elements will return the result in a timely manner, but to understand the behavior of the same program as the input grows to, say, 1,000,000 elements, we use Big O Notation.

The efficiency of an algorithm relates to both time and space: how much time the algorithm takes to solve a problem as the input size grows, and how much space or memory it consumes during execution. In practice there is a tradeoff between time and space. For example, in one of the recent projects that I was part of, we had to search for a part in an asset model, and in order to reduce the lookup time we cached the frequently accessed assets. This reduced the response time in retrieving the parts, but we paid the additional cost of the memory required to maintain the cache. In this article we will focus on time complexity and how it can be expressed using Big O Notation. Big O doesn't tell you the exact execution time of a program with growing input size, but it helps to determine the order of growth. It helps answer the question: as the input size grows, will the amount of time the algorithm takes stay constant, or will it increase linearly, quadratically, or exponentially? It expresses the growth of a program's runtime relative to the input, as the input gets arbitrarily large.

The purpose of Big O Notation is to capture the upper bound, or the worst-case scenario. It is usually written as:

f(n) = O(g(n))

where n is the input size and g(n) describes the order of growth of the running time f(n). For example, if f(n) = 3n + 2, we write f(n) = O(n).

One of the obvious questions that comes to mind is: do we even need to analyze how an algorithm will scale with growing input size, given that computers are becoming more powerful and faster? While that is true, the use cases we deal with today also involve much larger data sets. One of the projects that I worked on last year had a use case to analyze more than a million data points received from the sensors of an oil rig. So, with data sizes becoming massive and time becoming crucial in real-world problems, it's very important to understand how an algorithm will behave as the input grows. A time-sensitive program that detects anomalies from data received by sensors every 5 seconds may return the result in the desired time when run against an hour of input, but how will the same program behave when it has to analyze a full day's worth of data? Big O Notation helps to predict this and to understand how certain algorithm choices relate to the cost associated with executing the program.

Once again, the objective here is not to come up with the exact running time of an algorithm for a given input size but to understand the order of growth, i.e., if I double the size of the input, does that double the amount of time required to solve the problem, or does it increase it by some other factor?

Before discussing how we analyze an algorithm and determine its time complexity, let us look at two other approaches for doing this and their limitations.

  1. Timer: In this approach, we use a timer function to record the start time and the end time of the program's execution. The difference between the two gives the time required to execute the function. We can then run the program with different input sizes, record the execution times, and get a sense of how much time the program will take as the input size grows.

The first problem with this approach is that the running time of a program depends on the underlying hardware, so the measurement will vary from computer to computer. Secondly, although we can test the program with small inputs, the results don't necessarily predict what happens with large inputs because of the way the underlying OS moves things in and out of memory at runtime. This approach is therefore highly dependent on the underlying hardware and does not help to establish a relationship between input size and time.
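
For illustration, here is a minimal sketch of the timer approach; the sum function here is just a stand-in for whatever code is being measured:

import java.util.Collections;
import java.util.List;

public class TimerDemo {

    static int sum(List<Integer> list) {      // function under measurement
        int total = 0;
        for (int i : list) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> list = Collections.nCopies(1_000_000, 1);
        long start = System.nanoTime();             // record the start time
        int result = sum(list);
        long elapsed = System.nanoTime() - start;   // elapsed = end - start
        System.out.println("size=" + list.size() + ", sum=" + result
                + ", took " + elapsed + " ns");
    }
}

Running this with different list sizes gives a rough sense of how the execution time grows, but as described above, the numbers depend on the machine the program runs on.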

  2. Operations Count: In this approach we count the number of operations executed inside an algorithm and then use that count as a function of the input size to understand how the algorithm scales as the input grows.

For example, the below program sums the integers in a list.

public static int sum(List<Integer> list) {
    int sum = 0;                 // 1 operation outside the loop
    for (int i : list) {         // runs once per element in the list
        sum += i;                // addition + assignment on each iteration
    }
    return sum;                  // 1 operation to return the result
}

The operations in this program are:

  1. Initialize variable to 0
  2. Initialize the loop and set range as size of the list
  3. Loop through the size of the list, if list contains 10 integers it will loop 10 times and if it contains 100 integers, it will loop 100 times
  4. Perform addition
  5. Assign the value of addition to the variable sum
  6. Return the result

To summarize, we have one operation outside the loop, 3 operations inside the loop that are executed for every element in the list, and finally 1 operation to return the result. That gives us the expression:

[1 + 3x + 1] operations, or [3x + 2] operations, where x is the size of the list

With this approach we get an expression that tells us how the amount of work changes as we change the size of the input. If the size of the list is 20, the program performs 62 operations; if the size is 100, it performs 302 operations. To derive the time, we can multiply this count by the constant amount of time each operation takes.

Though this approach is independent of computer speed, it has two main limitations:

  1. The operation count depends on the implementation of the algorithm: if we use a while loop instead of a for loop, the count changes. Since the objective is to measure the efficiency of the algorithm and not of the underlying implementation, this is a problem.
  2. There is no clear definition of which operations to count. One of the assumptions made was that all operations take the same amount of time, e.g., that assigning a variable and performing a mathematical operation cost the same, and that may not be accurate.

Counting operations is closer to what we want and gives us a relationship between the input size and the count, but it is still not the right way to understand the efficiency of an algorithm.

Therefore, neither using a timer nor counting operations is effective for understanding the efficiency of a program with growing input size. Big O Notation addresses these shortcomings: it predicts the order of growth of an algorithm by focusing on what happens when the size of the problem gets large, expressing the asymptotic behavior of the algorithm.

We will now discuss what we should measure, which operations we should count without worrying about implementation details, and how we can express efficiency in terms of the size of the input.

Big O Notation focuses on the worst-case scenario and gives us an upper bound on the growth of an algorithm. Therefore, during the analysis of an algorithm, we should focus on the parts of the program that take the most time, are a bottleneck, or grow rapidly. The objective is not to predict the exact time required by a program but to understand whether the time required to solve a problem will stay constant as the input grows, grow linearly, or grow quadratically or exponentially.

Big O notation uses a capital letter “O”, followed by a function in parentheses. It is the function that describes the scalability. The function almost always includes the letter “n”, which represents the input size. 

These are the different classes through which we express the order of growth of an algorithm:

  • Constant: O(1)
  • Linear: O(n)
  • Logarithmic: O(log n)
  • Loglinear: O(n log n)
  • Quadratic: O(n^2)
  • Polynomial: O(n^k)
  • Exponential: O(2^n)

Constant time complexity: O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set. As the input increases, the time to run the algorithm stays constant. For example, consider the below program to calculate the sum of the first n numbers.

int sumOfNumbers(int n) {
    return n * (n + 1) / 2;    // closed-form sum: a single operation regardless of n
}

In the above program, whether n is 10 or 1000, there is only one operation, so the time required to solve the problem stays constant as the input grows; the complexity of the algorithm is therefore expressed as order 1. In this scenario the execution time is independent of the size of the input. O(1) is the fastest complexity class, since the time it takes to execute the program is always the same: no matter the size of the input, the execution time and the space required do not change.
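
For contrast, computing the same sum with a loop instead of the closed-form formula performs work for every value from 1 to n, which makes it O(n) rather than O(1); a quick sketch:

int sumOfNumbersLoop(int n) {
    int sum = 0;
    for (int i = 1; i <= n; i++) {    // one pass per value: work grows linearly with n
        sum += i;
    }
    return sum;
}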

Linear time complexity: O(n) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set. As the input increases, the time to solve the problem grows proportionally. For example, given an array of n distinct integers, to search for an element x in it we have the below code snippet.

int findNumber(int[] list, int x) {
    for (int i = 0; i < list.length; i++) {
        if (list[i] == x) {
            return i;              // found: return the index
        }
    }
    return -1;                     // not found
}

In this example, as the input grows, the program's running time increases linearly in relation to the size of the list. If the array size is 100 the loop runs 100 times, and if the array size is 100,000 the loop runs that many times to find the element. As Big O Notation focuses on the worst-case scenario and on finding the upper bound of growth, our assumption is that the program will iterate through all the elements in the array to find the element. Therefore, an algorithm is said to run in linear time if its execution time is directly proportional to the input size, i.e., time grows linearly as input size increases.

Logarithmic: the O(log n) complexity class covers algorithms that reduce the amount of work they need to do as they progress through the data set. Basically, given an input of size n, the number of steps it takes to accomplish the task decreases by some factor with each step. The below program, for an input size of 16, runs the loop 4 times, and for an input size of 256 it runs 8 times: log(16) to the base 2 is 4, and log(256) to the base 2 is 8.

for (int i = 1; i < n; i = i * 2) {    // i doubles each iteration: about log2(n) steps
    System.out.println("Allocated Position: " + i);
}

Logarithmic algorithms have excellent performance on large data sets.
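
The classic example of this class is binary search on a sorted array; a minimal sketch (not part of the program above): each comparison halves the portion of the array that can still contain the target, so at most about log2(n) comparisons are needed.

int binarySearch(int[] sorted, int x) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // midpoint, written to avoid integer overflow
        if (sorted[mid] == x) {
            return mid;                 // found: return its index
        } else if (sorted[mid] < x) {
            lo = mid + 1;               // discard the left half
        } else {
            hi = mid - 1;               // discard the right half
        }
    }
    return -1;                          // x is not in the array
}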

Loglinear: algorithms with loglinear O(n log n) complexity combine the linear and logarithmic classes: they perform an O(log n) operation for each of the n items in the input data. In the below example, for each of the n iterations of the outer loop, the inner loop runs log2(8) = 3 times.

for (int i = 1; i <= n; i++) {             // outer loop: n iterations
    for (int j = 1; j < 8; j = j * 2) {    // inner loop: log2(8) = 3 iterations
        System.out.println("Generating Token Number --> " + i * j);
    }
}

Several sorting algorithms, such as heap sort and merge sort, are O(n log n); quick sort is O(n log n) on average, though its worst case is quadratic.
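
To see where the O(n log n) comes from, here is a rough sketch of merge sort: the array is halved about log2(n) times, and each level of halving does O(n) work to merge the sorted halves back together.

void mergeSort(int[] a, int lo, int hi) {    // sorts the range [lo, hi)
    if (hi - lo < 2) {
        return;                              // zero or one element: already sorted
    }
    int mid = (lo + hi) / 2;
    mergeSort(a, lo, mid);                   // sort the left half
    mergeSort(a, mid, hi);                   // sort the right half
    merge(a, lo, mid, hi);                   // merge the two sorted halves: O(n) work
}

void merge(int[] a, int lo, int mid, int hi) {
    int[] tmp = new int[hi - lo];
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi) {
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    }
    while (i < mid) tmp[k++] = a[i++];       // copy any leftovers from the left half
    while (j < hi) tmp[k++] = a[j++];        // copy any leftovers from the right half
    System.arraycopy(tmp, 0, a, lo, tmp.length);
}

Calling mergeSort(a, 0, a.length) sorts the whole array.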

Quadratic time complexity: an algorithm with quadratic time complexity O(n^2) has a growth rate of n^2, which means that if the input size is 5 it will perform 25 operations, if the input size is 10 it will perform 100 operations, and so on. In such algorithms, when the input size grows, the running time grows quadratically. Bubble sort is a typical example of O(n^2) time complexity: it works by swapping adjacent elements if they are not in the desired order, repeating this process from the beginning of the array until all elements are in order.

In the below code snippet, if n = 10 the program will run the operation 100 times; if n = 20, the operation will run 400 times.

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {         // n inner iterations for each of the n outer iterations
        System.out.println("Compare : " + i + " and " + j);
    }
}

Quadratic complexity says that the algorithm’s performance is proportional to the square of the data set size. This happens when the algorithm processes each element of a set, and that processing requires another pass through the set. 
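
Since bubble sort was mentioned above as the typical O(n^2) example, here is a minimal sketch of it; the nested passes over the array are exactly the pattern described in the previous paragraph.

void bubbleSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {          // up to n-1 passes over the array
        for (int j = 0; j < a.length - 1 - i; j++) {  // compare adjacent pairs
            if (a[j] > a[j + 1]) {                    // out of order: swap them
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}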

Polynomial time complexity: the quadratic time algorithms explained above are a particular type of polynomial time O(n^k) algorithm, with k = 2. In general, a polynomial-time algorithm has a running time on the order of n^k. The below example finds values of x, y and z that satisfy the equation 3x + 9y + 8z = 79.

void calculateEquation() {
    int[][] solutions = new int[1][3];
    int n = 10;

    for (int x = 1; x < n; x++) {
        for (int y = 1; y < n; y++) {
            for (int z = 1; z < n; z++) {
                if (3*x + 9*y + 8*z == 79) {     // record a solution
                    solutions[0][0] = x;
                    solutions[0][1] = y;
                    solutions[0][2] = z;
                }
            }
        }
    }
    System.out.println(Arrays.deepToString(solutions));   // requires java.util.Arrays
}

For an input size of 9 (the loops above run from 1 to 9 when n = 10), the innermost operation executes 9^3 = 729 times, and for an input size of 18 it executes 18^3 = 5832 times. Hence the time complexity of such algorithms is expressed as polynomial.

Exponential time complexity: with algorithms of exponential time complexity O(2^n), the work doubles with each addition to the input data set, so the growth curve is exponential. An example of an O(2^n) function is the recursive calculation of Fibonacci numbers:

int fib(int n) {
    if (n <= 1) {
        return n;                          // base cases: fib(0) = 0, fib(1) = 1
    } else {
        return fib(n - 2) + fib(n - 1);    // two recursive calls per invocation
    }
}

In the above recursive program, each call spawns two further calls, so the number of operations roughly doubles with each increase in n; the complexity is therefore O(2^n).

Functions with exponential complexity are always going to be expensive to compute.
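
One way to see this blow-up is to count the calls the recursive program makes; the calls counter and main method below are illustrative additions, not part of the original function.

static long calls = 0;

static int fib(int n) {
    calls++;                               // count this invocation
    if (n <= 1) {
        return n;
    }
    return fib(n - 2) + fib(n - 1);
}

public static void main(String[] args) {
    for (int n = 10; n <= 30; n += 5) {
        calls = 0;
        fib(n);
        System.out.println("n=" + n + " -> " + calls + " calls");
    }
}

The printed counts grow by a roughly constant multiple each time n increases, which is the signature of exponential growth.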

Analyzing an algorithm to understand its complexity class helps to find out what the consequences of the design will be and what cost will be associated with running it.

The below table also shows how the different complexity classes grow as the input size increases.

Class         n=10     n=100                              n=1000
O(1)          1        1                                  1
O(log n)      1        2                                  3
O(n)          10       100                                1000
O(n log n)    10       200                                3000
O(n^2)        100      10000                              1000000
O(2^n)        1024     1267650600228229401496703205376    1.0715086071862673e+301

Algorithms with quadratic and exponential complexity grow very quickly, and exponential in particular is the worst. Therefore, the objective while designing an algorithm should be to stay as high up in this list as possible.
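
For reference, a short program can regenerate the rows of this table; this sketch is illustrative (the class name GrowthTable is arbitrary), uses base-10 logarithms since that is what the O(log n) row above appears to use, and uses BigInteger so that 2^n does not overflow.

import java.math.BigInteger;

public class GrowthTable {
    public static void main(String[] args) {
        int[] sizes = {10, 100, 1000};
        for (int n : sizes) {
            double logN = Math.log10(n);                        // base-10 logarithm
            System.out.println("n=" + n
                    + "  O(1)=1"
                    + "  O(log n)=" + (long) logN
                    + "  O(n)=" + n
                    + "  O(n log n)=" + (long) (n * logN)
                    + "  O(n^2)=" + ((long) n * n)
                    + "  O(2^n)=" + BigInteger.valueOf(2).pow(n));  // exact value
        }
    }
}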

Summary

  • Big O Notation is used to describe the complexity of an algorithm.
  • It doesn't predict or evaluate the exact time that an algorithm takes to solve a problem; instead it tells us the order of growth with respect to input size
  • It checks for the worst-case scenario and provides an upper bound for the function
  • It abstractly describes how an algorithm will scale with growing input size
  • Finding the complexity of an algorithm helps to understand the consequences of design choices and the cost associated with running it.
  • The constants are ignored during algorithm analysis, and the operations that grow rapidly are the ones taken into consideration

The skills we should focus on are analyzing algorithms for their complexity and then making the right choices about whether an optimization is worth it. It's very important to strike a balance between time, space, code maintainability and scalability during the design of a solution.
