Common time complexities of functions

In the last two posts, Time complexity of algorithms and Master theorem for complexity analysis, we learned the fundamentals of complexity analysis. Today, we will discuss some of the common time complexities, with examples of each, to consolidate our understanding.

There are eight types of time complexities which occur commonly in algorithms: O(1), also called constant time; O(n), called linear; O(n^2), called quadratic; O(n^k), called polynomial; O(log n), known as logarithmic; O(n log n), known as linearithmic; O(2^n), known as exponential; and O(n!), known as factorial complexity.

Growth of complexity functions

All these complexity functions grow at different rates, and it is very important to know the difference. That will help you evaluate your algorithms in an objective manner. Below is a graph of the growth of each complexity function as n grows larger.

[Figure: growth of the common complexity functions as n increases]

O(1) – Constant time complexity

Constant time complexity does not mean that no work or operations are done. It just means that the number of operations does not depend on the size of the input. Irrespective of the size of the input, the algorithm will run a constant number of operations.
For example, accessing a specific index in an array has constant time complexity. Similarly, accessing a value for a given key from a dictionary is an O(1) operation; it does not matter whether there is only one word in the dictionary or a million words.
Another misconception about constant time is that it should be small. No, that is not required; the constant can be as large as you like. As long as it does not depend on the input, your algorithm is a constant time or O(1) algorithm.
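For instance (a minimal sketch; the array and map contents here are made up), both lookups below do the same amount of work whether the collection holds ten entries or ten million:

import java.util.HashMap;
import java.util.Map;

int[] numbers = {4, 8, 15, 16, 23, 42};
int third = numbers[2];  // O(1): direct index access, independent of array length

Map<String, Integer> dictionary = new HashMap<>();
dictionary.put("hello", 3);
Integer count = dictionary.get("hello");  // O(1) on average: hash the key, jump to its bucket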

A perfect hash function can give a worst-case complexity of O(1). Such an ideal hash function is not practical, so there will be some collisions, which can lead to a worst-case runtime of O(n). On average, though, the lookup time is O(1).

Quick check: What is the complexity of the code below?

for(int i=0; i<100000; i++){
     System.out.println(i);
}

By the look of it, we tend to think that it is O(n). However, notice that the for loop does not run a variable number of times; it runs exactly 100000 times no matter what the input to the function is. This is constant time code.

It becomes more confusing when this kind of loop is nested inside multiple loops. For example,

for(int i=0; i<n; i++){
   for(int j=0; j<n; j++){
       for(int k=0; k<100000; k++){
           System.out.println("Hello");
       }
   }
}

Again, by the look of it, we may count three nested loops and report O(n^3). However, notice that the innermost loop runs exactly 100000 times regardless of the input, so it contributes only a constant factor. The two outer loops each run n times, so this code is O(n^2).

O(n) – Linear time complexity

When the number of operations is directly proportional to the size of the input, the time complexity of the algorithm is linear. For example,

for(int i=0; i<n; i++){
   System.out.println( i + " ");
}

this function prints all the numbers from 0 to n-1. As n grows, the number of executions of the print statement grows as well: for n=0, it does not execute at all; for n=1, it executes once; for n=2, it executes twice; and in general, it executes n times. So, the loop is directly proportional to the value of n; hence, the complexity of the loop is O(n).

Can you identify the complexity of the following function? It prints all the odd numbers less than n.

for(int i=0; i<n; i++){
   if(i%2 == 1)
     System.out.println( i + " ");
}

We can see that the print statement is executed only half the time, but still, the complexity of the loop is O(n) because we still depend on the size of the input. The complexity of the code below is the same:

for(int i=0; i<n/2; i++){
     System.out.println( i + " ");
}

O(n^2) – Quadratic time complexity

A typical quadratic complexity function has this format: a loop nested within a loop, with both loops depending on the size of the input.

for(int i=0; i<n; i++){
   for(int j=0; j<n; j++){
      System.out.println(i+j);
   }
}

But there are some cases where it is not quite evident and can lead to misconceptions. One of the common quadratic complexity codes is below, and students and professionals alike make mistakes identifying it.

for(int i=0; i<n; i++){
   for(int j=i+1; j<n; j++){
      System.out.println(i+j);
   }
}

The cause for concern is that the inner loop does not run exactly n times. Let’s see why this piece of code is O(n^2).

For i = 0: inner loop executes n-1 times
For i = 1: inner loop executes n-2 times
For i = 2: inner loop executes n-3 times
…
For i = n-1: inner loop executes 0 times

So the total number of times the print statement is executed is (n-1) + (n-2) + (n-3) + … + 1 + 0 = n(n-1)/2, which is O(n^2).

Another common piece of code where candidates miscalculate the time complexity is below.

for(int i=0; i<a; i++){
   for(int j=0; j<b; j++){
      System.out.println(i+j);
   }
}

It is a common tendency to report the complexity of this code as O(n^2) just by looking at the two loops, more so when there is an n anywhere in the function. Be careful: there are two different variables, a and b, controlling the loops, and there is no prior knowledge of any relationship between the two. So, the correct way to report the complexity is O(a * b), not O(n^2).

Examples of quadratic complexity algorithms are: finding duplicates in an array, insertion sort, bubble sort, finding all the pairs of an array, and many more. We will learn more when we solve problems.
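As an illustration, here is a sketch of the brute-force duplicate check mentioned above (the method name is my own); the nested loops compare every pair, so it runs in O(n^2) time:

boolean hasDuplicate(int[] a){
   // Compare every pair (i, j) with i < j: n(n-1)/2 comparisons, which is O(n^2)
   for(int i=0; i<a.length; i++){
      for(int j=i+1; j<a.length; j++){
         if(a[i] == a[j]){
            return true;  // found two equal elements
         }
      }
   }
   return false;  // no pair matched
}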

O(log n) – Logarithmic time complexity

Logarithmic time complexity usually applies to algorithms that divide the input into multiple parts in each step. For instance, in the binary search algorithm, we divide the array into two halves and discard one half each time. We keep doing that until there are no elements left to look at. So, for the first iteration we have size n, then n/2, then n/4, and so on till we reach 1. How many iterations does it take to get from n to 1, dividing n by 2 each time? It is log n iterations.
Hence, the complexity of the binary search algorithm is O(log n); you can deduce the same using the Master Theorem.
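Here is a minimal iterative sketch of binary search (assuming an array sorted in ascending order) that shows the halving in action:

int binarySearch(int[] sorted, int key){
   int low = 0, high = sorted.length - 1;
   // Each iteration halves the search range: n, n/2, n/4, ... 1, i.e., O(log n) iterations
   while(low <= high){
      int mid = low + (high - low) / 2;  // written this way to avoid integer overflow
      if(sorted[mid] == key){
         return mid;
      } else if(sorted[mid] < key){
         low = mid + 1;   // key is in the right half, discard the left
      } else {
         high = mid - 1;  // key is in the left half, discard the right
      }
   }
   return -1;  // key not present
}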

Adding a new element to a balanced binary search tree is also a logarithmic algorithm.
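As a sketch (the node class and method are my own, and rebalancing logic is omitted), insertion does one comparison per level as it walks down the tree, so its cost is proportional to the tree’s height, which a balanced tree keeps at O(log n):

class Node {
   int value;
   Node left, right;
   Node(int value){ this.value = value; }
}

Node insert(Node root, int value){
   if(root == null){
      return new Node(value);  // reached an empty spot, attach the new node here
   }
   if(value < root.value){
      root.left = insert(root.left, value);    // descend into the left subtree
   } else {
      root.right = insert(root.right, value);  // descend into the right subtree
   }
   return root;  // one comparison per level: O(height), i.e., O(log n) if balanced
}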

In the next post, we will discuss the remaining four types of complexity. Please comment if anything is missing or wrong.
If you are preparing for an interview and looking for personalized coaching, please reach out to us or book a free session with us.

Master theorem for complexity analysis

What is the Master Theorem?

The Master theorem gives us the complexity of a recursive function by simple substitution of values for variables. It applies to recurrences of the following form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
These kinds of recurrence relations occur in the analysis of many “divide and conquer” algorithms like binary search or merge sort.

But what do these variables a, b and n mean?
n is the size of the input or the problem.
a is the number of subproblems in the recursion: are we dividing the problem into two subproblems, or three, or five? For example, for the binary search algorithm a = 1, and for the merge sort algorithm a = 2.
n/b is the size of each subproblem: at what rate is the input reduced? E.g., binary search and merge sort both cut the input in half, so b = 2.
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions to the subproblems.

Once we have a, b and f(n), it is easy to find the complexity of the algorithm by substituting the values into this expression:

O(n^(log_b a))

However, that’s not all: we still have f(n) in the equation, and the final runtime of your algorithm depends on the relationship between n^(log_b a) and f(n).

There are three possible scenarios:
Scenario 1: f(n) = O(n^c) with c < log_b a; then the complexity is O(n^(log_b a)). This means the recursion takes more time than the combine and merge steps.

Example:
T(n) = 8T(n/2) + 1000n^2
Here, log_b a = log_2 8 = 3 > c (= 2); hence the complexity is O(n^(log_b a)) = O(n^3).

Scenario 2: f(n) = O(n^c log^k n) for some k ≥ 0 and c = log_b a; then the complexity is O(n^(log_b a) log^(k+1) n). It essentially means the runtime is the same inside and outside the recursion.

Example:
T(n) = 2T(n/2) + 10n
10n can be written as O(n log^0 n), with k = 0.
Here, log_b a = log_2 2 = 1 = c; hence the complexity is O(n^(log_b a) log^(k+1) n) = O(n log n).

Scenario 3: f(n) = O(n^c) with c > log_b a; then the complexity is O(n^c). It essentially means the runtime outside the recursion is more than the split and recurse. (Strictly, this case also requires the regularity condition a·f(n/b) ≤ k·f(n) for some constant k < 1, which holds for the polynomial costs used here.)

Example:
T(n) = 2T(n/2) + n^2
Here, log_b a = log_2 2 = 1 < c (= 2); hence the complexity is O(n^c) = O(n^2).
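To make the substitution mechanical, here is a small helper of my own (not part of the theorem itself) that classifies a recurrence T(n) = aT(n/b) + O(n^c log^k n) into the three scenarios above:

// Classifies T(n) = a*T(n/b) + O(n^c * log^k n) per the three scenarios above.
// Assumes a >= 1, b > 1, and that one of the three scenarios applies.
String masterTheorem(double a, double b, double c, int k){
   double critical = Math.log(a) / Math.log(b);  // log_b a
   double eps = 1e-9;                            // tolerance for floating-point comparison
   if(c < critical - eps){
      return "O(n^" + critical + ")";                 // Scenario 1: recursion dominates
   } else if(Math.abs(c - critical) <= eps){
      return "O(n^" + c + " log^" + (k + 1) + " n)";  // Scenario 2: balanced
   } else {
      return "O(n^" + c + " log^" + k + " n)";        // Scenario 3: f(n) dominates
   }
}

For example, masterTheorem(8, 2, 2, 0) falls into Scenario 1 (the exponent is log_2 8 = 3, up to floating-point rounding, giving O(n^3)), and masterTheorem(2, 2, 1, 0) falls into Scenario 2, giving the merge sort answer O(n log n).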

Exceptions to Master Theorem

1. T(n) = 2^n T(n/2) + n^n

This is not admissible because a = 2^n is not constant. For the Master theorem to be applicable, a and b must be constants.

2. T(n) = 0.5T(n/2) + n^2

This is not admissible because a < 1, which is to say we are reducing the problem to less than one subproblem. For the Master theorem to be applicable, a must be at least 1.

3. T(n) = 64T(n/2) - n^2 log n

This is not admissible because f(n) is negative.

4. T(n) = 64T(n/2) - 2^n

This is not admissible because f(n) is not polynomial.

Master Theorem Examples

Let’s apply the Master theorem to some well-known algorithms and see if it works.
Binary Search Algorithm
In the binary search algorithm, depending on the relationship between the middle element and the key, we discard one part of the array and look into the other. Also, from above, we know the Master theorem:

T(n) = aT(n/b) + f(n)

In this case, a is 1, as we reduce to only one subproblem. b is 2, as we divide the input in half each time. Outside the recursion, only constant work is done (comparing the key with the middle element); hence f(n) is O(1).

log_b a is log_2 1 = 0, and since f(n) = O(1) = O(n^0), c = 0. So log_b a is equal to c, and Scenario 2 applies: the complexity is O(n^(log_b a) log^(k+1) n) with k = 0. Substituting values, we get the complexity of the binary search algorithm as O(log n).
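The recursive form below (a sketch, with my own parameter layout) makes the recurrence visible: each call does O(1) work and makes a single recursive call on half of the range, i.e., T(n) = T(n/2) + O(1):

int binarySearch(int[] sorted, int key, int low, int high){
   if(low > high){
      return -1;  // empty range: key not present
   }
   int mid = low + (high - low) / 2;  // O(1) work outside the recursive call
   if(sorted[mid] == key){
      return mid;
   } else if(sorted[mid] < key){
      return binarySearch(sorted, key, mid + 1, high);  // a = 1 subproblem of size n/2
   } else {
      return binarySearch(sorted, key, low, mid - 1);
   }
}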

Merge Sort Algorithm
In the merge sort algorithm, we split the array into two equal parts and sort them individually. Apart from the split, we do a merge of elements, which takes O(n) time. Let’s find a, b and f(n) in the Master Theorem equation.

T(n) = aT(n/b) + f(n)

In this case, a is 2, as we reduce to two subproblems. b is 2, as we divide the input in half. Outside the recursion, the merge step does O(n) work; hence f(n) is O(n).

log_b a is log_2 2 = 1, and since f(n) = O(n) = O(n^1 log^0 n), c = 1 and k = 0. So log_b a is equal to c, and Scenario 2 applies: the complexity is O(n^(log_b a) log^(k+1) n) = O(n log n), which is the complexity of the merge sort algorithm.
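A compact merge sort sketch (the helper names are my own) shows both parts of the recurrence: two recursive calls on halves (a = 2, b = 2) plus an O(n) merge (f(n) = O(n)):

void mergeSort(int[] a, int low, int high){
   if(low >= high){
      return;  // zero or one element: already sorted
   }
   int mid = low + (high - low) / 2;
   mergeSort(a, low, mid);       // first subproblem of size n/2
   mergeSort(a, mid + 1, high);  // second subproblem of size n/2
   merge(a, low, mid, high);     // f(n) = O(n) work to combine the halves
}

void merge(int[] a, int low, int mid, int high){
   int[] merged = new int[high - low + 1];
   int i = low, j = mid + 1, k = 0;
   while(i <= mid && j <= high){
      merged[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];  // take the smaller head element
   }
   while(i <= mid){ merged[k++] = a[i++]; }   // copy leftovers from the left half
   while(j <= high){ merged[k++] = a[j++]; }  // copy leftovers from the right half
   System.arraycopy(merged, 0, a, low, merged.length);  // write the merged run back in place
}

The merge copies into a temporary array and then back, which is exactly where the O(n) work outside the recursion comes from.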

Please book a free session if you are looking for coaching to prepare for your next technical interview. At Algorithms and Me, we provide personalized coaching and mock interviews to prepare you for interviews at Amazon, Google, Facebook, etc.

References

  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
  • Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. ISBN 0-471-38365-1. The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270.

Time complexity of algorithms

An algorithm is a collection of steps that process a given input to produce an output. For example, to find the minimum element in an unsorted integer array, we have to do the following steps: declare a variable min = Integer.MAX_VALUE; go through each element in the array and, if the current element is less than min, update min; when the array scan is finished, min will contain the minimum element of the given array.

The time complexity of an algorithm is an estimate of the time the algorithm takes to process the input and generate the desired output.
We could implement an algorithm, run it on a computer, and measure the time it takes. What’s wrong with this method? There are many variables that can affect the run time of the algorithm, like the kind of machine it is running on, the accuracy of the measurement, the implementation language used, the proficiency of the programmer, etc.

Time complexity is not about timing how long the algorithm takes; instead, it is about how many operations are executed. The number of instructions executed by a program is affected by the size of the input and how its elements are arranged.

The complexity or run time of an algorithm is calculated without implementing or running the algorithm on a computer.

How do I calculate the time complexity of an algorithm? It’s easy: just count the number of operations that the algorithm executes. Let’s take an example and see.

public int findMinimum(int[] a){
     // Step 1: Initialize min
     int min = Integer.MAX_VALUE;

     // Loop through all elements
     for(int i=0; i<a.length; i++){
          // Step 2: Compare the current element with min
          if(min > a[i]){
               // Step 3: Update min if the current element is smaller
               min = a[i];
          }
     }
     // Step 4: Return the minimum value
     return min;
}
    

Now, count the number of operations: the initialization happens once, and the return happens once. The comparison happens n times, and in the worst case the assignment also happens n times. So, the total number of operations done by the algorithm to find the minimum element in an array is at most 2n + 2.

Asymptotic analysis

In mathematical analysis, asymptotic analysis is a method of describing limiting behavior. If f(n) = n^2 + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n^2. The function f(n) is said to be “asymptotically equivalent to n^2, as n tends to infinity”.

We can generalize our function f(n) = 2n + 2 as f(n) = kn + c, where k is a multiplicative factor and c is a constant. As n increases, the effect of k and c diminishes. For example, if k = 2 and c = 3, then for n = 1000, k and c hardly matter. So, 2n + 2 is asymptotically equivalent to n.

Worst case complexity and Big O notation

The worst-case complexity of an algorithm is the maximum number of operations the algorithm would do for a given input size n.

Big-O notation is a way to represent the growth of a function. In simple words, if an algorithm runs in O(g(n)), then for sufficiently large n, the running time of the algorithm is at most g(n) multiplied by some constant. Even less formally, if f(n) = O(n^2), then f(n) grows no faster than n^2, which links it back to the worst-case complexity.
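Formally (the standard textbook definition, added here for completeness): f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c * g(n) for all n ≥ n0. For example, 2n + 2 ≤ 3n for all n ≥ 2, so taking c = 3 and n0 = 2 shows that 2n + 2 = O(n).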

For example, if we want to search for an element in an array using linear search, the worst case is when the element we are looking for is at the last index of the array, or not present in the array at all. We would end up comparing every element of the array with the given value: n operations, hence O(n) complexity.

Average case behavior

In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs.
(Wikipedia)

For example, many sorting algorithms which utilize randomness, such as Quicksort, have a worst-case running time of n^2, but an average-case running time of n log n, where n is the length of the input to be sorted.

Best case behavior
The least used is the best-case behavior, i.e., what is the least number of comparisons one has to make in linear search? The answer is one, when the element we are looking for is the first element of the array.

TIP

In an interview, we are mostly concerned with the worst-case complexity, or Big-O notation. Most of the time, the worst case and the expected case are the same; however, if they are different, it is worth mentioning in the interview.

Some examples of time complexity analysis

Question 1: What is the asymptotic notation for the expression n^2 + 3n + 2?
O(n^2), because as n tends to infinity, n^2 dominates the expression.

Question 2: What is the asymptotic notation for the expression n^2 + m log m?
O(n^2 + m log m), because we do not know any relationship between n and m; as n tends to infinity, we do not know whether n^2 dominates m log m, so we keep both terms.

Question 3: Find the complexity of the function below.

for(int i=1; i<=n; i++){
   for(int j=i+1; j<=n; j++){
      System.out.println(i + "," + j);
   }
}
    

The best way to calculate the complexity of any iterative function or code block is to count the number of operations being done. In this example, for i=1, the print statement executes n-1 times. For i=2, it executes n-2 times. As we go on, for i=n-1, the print statement executes only once. The overall number of operations is the sum of all these:

(n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2 = (n^2 - n)/2 = O(n^2)

So the overall complexity of the implementation in Big-O notation is O(n^2).

In the next post, we will understand the common types of time complexities, like constant, linear, polynomial, and exponential, and see some examples which exhibit those complexity functions. Please share if there is anything wrong or missing.

If you are preparing for an interview and would like to have personalized coaching, please book a free demo session with us.