## Complexity analysis of functions Part 2

In the last post, complexity analysis of functions, we learned four types of complexity functions: constant time O(1), linear O(n), quadratic O(n<sup>2</sup>) and logarithmic O(log n). Today, we will discuss three more of the common time complexities, with examples of each to consolidate our understanding.

These time complexities are: O(n log n), known as linearithmic; O(2<sup>n</sup>), known as exponential; and O(n!), also known as factorial complexity.

### O(n log n) – Linearithmic complexity

Whenever you are asked in an interview to optimize a quadratic O(n<sup>2</sup>) function, the first thing you should attempt is to make it O(n log n), as that sits between linear and quadratic complexity. This is self-evident for sorting algorithms: Bubble Sort and Insertion Sort are quadratic O(n<sup>2</sup>) algorithms, whereas Merge Sort, Quick Sort, and Heap Sort are linearithmic O(n log n) and hence considered better than the former ones. If you ever have to sort your input to solve a problem, pick one of the latter three.

When we sort an input to reduce the complexity of a later step from O(n<sup>2</sup>) to linear, we should not forget that the sorting step now dominates the overall complexity of the solution. It is best to use a linearithmic sort like Quick Sort, which gives us O(n log n), rather than Bubble Sort, which is O(n<sup>2</sup>).
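
For example, checking whether an array contains duplicates: comparing every pair is O(n<sup>2</sup>), but sorting first makes duplicates adjacent, so the whole check costs O(n log n) + O(n), dominated by the sort. A minimal sketch (class and method names are my own):

```java
import java.util.Arrays;

class DuplicateCheck {
    // O(n log n): sort first, then any duplicates must sit next to each other.
    static boolean hasDuplicate(int[] a) {
        int[] copy = a.clone();
        Arrays.sort(copy);                                // O(n log n) dominates
        for (int i = 1; i < copy.length; i++) {
            if (copy[i] == copy[i - 1]) return true;      // O(n) adjacent scan
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{3, 1, 4, 1, 5})); // true
        System.out.println(hasDuplicate(new int[]{3, 1, 4}));       // false
    }
}
```

Java's `Arrays.sort` on an `int[]` is an O(n log n) average-case quicksort variant, so the sort, not the linear scan, dominates.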

If you are looking at a recursive function with a Master theorem equation of the form T(n) = aT(n/b) + O(n<sup>c</sup>), and log<sub>b</sub>a is equal to c, then that recursive function has O(n<sup>c</sup>log n) complexity, which for c = 1 gives O(n log n).

```
void merge(int a[], int start, int mid, int end){
    int i, j, k;
    int temp[end - start + 1];

    i = start; j = mid + 1; k = 0;
    /* Compare the elements of the two sorted halves and store
       them in sorted order in the temporary array. */
    while (i <= mid && j <= end){
        if (a[i] < a[j]){
            temp[k++] = a[i++];
        } else {
            temp[k++] = a[j++];
        }
    }
    // Copy whatever is left in either half
    while (i <= mid){
        temp[k++] = a[i++];
    }
    while (j <= end){
        temp[k++] = a[j++];
    }
    // Copy the temporary array back into the original array
    for (i = 0; i < k; i++){
        a[i + start] = temp[i];
    }
}

void mergeSort(int a[], int start, int end){
    if (start < end){
        int mid = (start + end) / 2;
        // sort the left part
        mergeSort(a, start, mid);
        // sort the right part
        mergeSort(a, mid + 1, end);
        // merge them together
        merge(a, start, mid, end);
    }
}
```

Let’s find the values in T(n) = aT(n/b) + f(n) for merge sort:
a: the number of subproblems. So, a = 2.
b: each subproblem divides n in half. So, b = 2.
f(n): the work done outside the recursion is the function merge, which has a runtime of O(n) since it visits all the elements of the given arrays.

So log<sub>b</sub>a = 1 is equal to c = 1, and hence the complexity of the merge sort algorithm is O(n log n). Refer to the Master Theorem post for more understanding of how this equation is solved.

### O(2<sup>n</sup>) – Exponential complexity

This is the complexity you never want your solution to have in an interview, but sometimes the first solution we come up with is exponential. That gives us a platform to start optimizing. But remember, not all exponential complexity algorithms can be optimized; some are genuinely exponential.

The easiest way to find out whether an algorithm is exponential is to try it with a few small inputs like 1, 2, 3 and see if the number of steps the algorithm takes roughly doubles with every increase in the input.

```
int function(int n){
    if (n == 0) return 0;

    return function(n-1) + function(n-1);
}
```

Let’s use our trick:

For n = 1, there is a call to function(1) and two calls to function(0): 3 calls in total.
For n = 2, there is a call to function(2) and two calls to function(1), each of which makes 3 calls, so the overall calls for n = 2 are 7.
For n = 3, there is a call to function(3) and two calls to function(2), each of which makes 7 calls, so the overall calls for n = 3 are 15.

The number of steps more than doubles with each increase in the input, a typical sign of an exponential function.
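
The call counting above can be checked mechanically with an instrumented copy of the same function (the counter is my own addition):

```java
class CallCounter {
    static int calls = 0;

    // Same recursive shape as above, instrumented to count invocations.
    static int function(int n) {
        calls++;
        if (n == 0) return 0;
        return function(n - 1) + function(n - 1);
    }

    static int countCalls(int n) {
        calls = 0;
        function(n);
        return calls;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 3; n++) {
            System.out.println("n = " + n + ": " + countCalls(n) + " calls");
        }
        // prints 3, 7, 15 calls: the total is 2^(n+1) - 1
    }
}
```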

A rule of thumb for the complexity of recursive functions in which the input of each successive call is not divided by some factor is O(branches<sup>depth</sup>). In the above example, the number of branches is 2 and the depth of each branch is n; hence the complexity is O(2<sup>n</sup>).

Can you guess the complexity of the below code? This function finds the sum of all the nodes in a balanced binary tree.

```
int sum(Node node){
    if (node == null) return 0;
    return sum(node.left) + node.val + sum(node.right);
}
```

If we apply the formula O(branches<sup>depth</sup>), it becomes 2<sup>depth</sup> and the algorithm looks exponential. We know that this function is not exponential: to sum all the nodes of a binary tree, we visit each node once, so the complexity is O(n). Is our formula wrong? No, it is not. Look at the power of 2: it is the depth of the binary tree, and the depth of a balanced binary tree is log<sub>2</sub>n, so the actual complexity is O(2<sup>log<sub>2</sub>n</sup>), which, when simplified, is equal to O(n).

An example of a function with true exponential complexity is finding all subsets of a given set or array. For a set of n elements, there are 2<sup>n</sup> subsets, and if you have to gather all of them, it is exponential work. This is one of those problems you cannot optimize.
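
A minimal sketch of gathering all subsets (names are my own): every element is either excluded or included, giving 2 branches at each of the n levels of recursion, hence 2<sup>n</sup> subsets.

```java
import java.util.ArrayList;
import java.util.List;

class Subsets {
    // Each element is either included or excluded: 2 branches, depth n => 2^n subsets.
    static List<List<Integer>> subsets(int[] a) {
        List<List<Integer>> result = new ArrayList<>();
        collect(a, 0, new ArrayList<>(), result);
        return result;
    }

    static void collect(int[] a, int i, List<Integer> current, List<List<Integer>> result) {
        if (i == a.length) {
            result.add(new ArrayList<>(current));   // one complete subset per leaf
            return;
        }
        collect(a, i + 1, current, result);         // branch 1: exclude a[i]
        current.add(a[i]);
        collect(a, i + 1, current, result);         // branch 2: include a[i]
        current.remove(current.size() - 1);         // backtrack
    }

    public static void main(String[] args) {
        System.out.println(subsets(new int[]{1, 2, 3}).size()); // 8 subsets for n = 3
    }
}
```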

### O(n!) – Factorial complexity

What is a factorial? It is the product of all positive integers less than or equal to n. For example,

the factorial of 5 = 5 x 4 x 3 x 2 x 1 = 120
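
The computation itself is a simple loop; a quick sketch (note that computing n! takes only n multiplications, i.e., linear work — it is algorithms whose number of *steps* grows like n! that have factorial complexity):

```java
class Factorial {
    // n! = n * (n-1) * ... * 1; computing it takes only n multiplications.
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```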

This complexity grows super fast, and you do not want your algorithms to have it.

A classic example of factorial complexity is finding all the permutations of a string: a string of n distinct characters has n! permutations, so producing all of them is O(n!) work. (Computing the factorial of a number itself is not factorial work; it takes only n multiplications.)

```
function getPermutations(string, prefix = '') {
    if (string.length <= 1) {
        return [prefix + string];
    }

    return Array.from(string).reduce((result, char, index) => {
        const remainder = string.slice(0, index) + string.slice(index + 1);
        return result.concat(getPermutations(remainder, prefix + char));
    }, []);
}
```

Today we learned three more types of complexity, with examples and tips to identify them in an interview. If you are preparing for an interview and are looking for personalized coaching, please reach out to us on communications@algorithmsandme.com or book a free session with us.

## Common time complexities of functions

In the last two posts, time complexity of algorithms and the Master theorem, we learned the fundamentals of complexity analysis. Today, we will discuss some of the common time complexities and examples of each to consolidate our understanding.

There are eight types of time complexities which occur commonly in algorithms: O(1), also called constant time; O(n), called linear; O(n<sup>2</sup>), called quadratic; O(n<sup>k</sup>), called polynomial; O(log n), known as logarithmic; O(n log n), known as linearithmic; O(2<sup>n</sup>), known as exponential; and O(n!), also known as factorial complexity.

## Growth of complexity functions

All these complexity functions grow at different rates, and it is very important to know the difference; that will help you evaluate your algorithms in an objective manner. Below is the graph of the growth of each complexity function as n grows larger.

### O(1) – Constant time complexity

Constant time complexity does not mean that no work is done or no operations are performed. It just means that the number of operations does not depend on the size of the input: irrespective of the size of the input, it will run a constant number of times.
For example, accessing a specific index in an array has constant time complexity. Similarly, accessing the value for a given key in a dictionary is an O(1) operation; it does not matter if there is only one word in the dictionary or a million words.
Another misconception about constant time is that it should be small. No, that is not required; it can be as large as you like. As long as it does not depend on the input, your algorithm is a constant time, or O(1), algorithm.

A perfect hash function can give a worst-case lookup complexity of O(1). Such an ideal hash function is not practical, so there will be some collisions, which can lead to a worst-case runtime of O(n); on average, though, the lookup time is O(1).
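
For instance, with Java's `HashMap` the average lookup cost does not grow with the number of entries (the word list below is a made-up example):

```java
import java.util.HashMap;
import java.util.Map;

class ConstantLookup {
    // Builds a dictionary of n synthetic words mapped to their index.
    static Map<String, Integer> buildDictionary(int n) {
        Map<String, Integer> dictionary = new HashMap<>();
        for (int i = 0; i < n; i++) {
            dictionary.put("word" + i, i);
        }
        return dictionary;
    }

    public static void main(String[] args) {
        Map<String, Integer> dictionary = buildDictionary(1_000_000);
        // Average O(1): the lookup hashes the key and jumps to its bucket,
        // regardless of whether the map holds one entry or a million.
        System.out.println(dictionary.get("word424242")); // 424242
    }
}
```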

Quick check: What is the complexity of the code below?

```
for(int i = 0; i < 100000; i++){
    System.out.println(i);
}
```

By the look of it, we tend to think that it is O(n). However, notice that the for loop does not run a variable number of times; it runs exactly 100000 times no matter what the input to the function is. This is constant time code.

It becomes more confusing when this kind of loop is nested inside multiple loops. For example,

```
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        for(int k = 0; k < 100000; k++){
            System.out.println("Hello");
        }
    }
}
```

By the look of it, we tend to think that it is O(n<sup>3</sup>) because of the three nested loops. However, the innermost loop runs exactly 100000 times regardless of the input, so it only adds a constant factor; the two outer loops each run n times, and hence the complexity is O(n<sup>2</sup>).

### O(n) – Linear time complexity

When the number of operations is directly proportional to the size of the input, the time complexity of the algorithm is linear. For example,

```
for(int i = 0; i < n; i++){
    System.out.println(i + " ");
}
```

This loop prints all the numbers from 0 to n-1. As n grows, the number of executions of the print statement grows as well: for n = 0 it does not execute at all, for n = 1 it executes once, for n = 2 it executes two times, and in general it executes n times. So the loop is directly proportional to the value of n; hence the complexity of the loop is O(n).

Can you identify the complexity of the following function? It prints all the odd numbers less than n.

```
for(int i = 0; i < n; i++){
    if(i % 2 == 1)
        System.out.println(i + " ");
}
```

We can see that the print statement is executed only half the time, but still the complexity of the loop is O(n), because the number of iterations still depends on the size of the input. The same is the complexity of the code below:

```
for(int i = 0; i < n/2; i++){
    System.out.println(i + " ");
}
```

### O(n<sup>2</sup>) – Quadratic time complexity

A typical quadratic complexity function has a format of this kind: a loop nested inside another loop, with both loops depending on the size of the input.

```
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        System.out.println(i + j);
    }
}
```

But there are some cases where it is not quite so evident and can lead to misconceptions. One of the common quadratic complexity codes is below, and students and professionals alike make mistakes identifying it.

```
for(int i = 0; i < n; i++){
    for(int j = i + 1; j < n; j++){
        System.out.println(i + j);
    }
}
```

The cause of concern is that the inner loop does not run exactly n times. Let’s see why this piece of code is still O(n<sup>2</sup>).

For i = 0: the inner loop executes n-1 times
For i = 1: the inner loop executes n-2 times
For i = 2: the inner loop executes n-3 times

For i = n-1: the inner loop executes 0 times

So the total number of times the print statement is executed is (n-1) + (n-2) + (n-3) + … + 1 + 0 = n(n-1)/2, which is O(n<sup>2</sup>)
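
This count is easy to verify by instrumenting the nested loop (a throwaway helper of my own):

```java
class PairCount {
    // Counts how many times the inner body runs for the j = i + 1 loop pattern.
    static int innerIterations(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(innerIterations(n));  // 45
        System.out.println(n * (n - 1) / 2);     // 45, matches the formula
    }
}
```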

Another common piece of code where candidates miscalculate the time complexity is below:

```
for(int i = 0; i < a; i++){
    for(int j = 0; j < b; j++){
        System.out.println(i + j);
    }
}
```

It is a common tendency to report the complexity of this code as O(n<sup>2</sup>) by looking at the two loops, more so when there is an n anywhere in the function. Be careful: there are two different variables, a and b, controlling the loops, and there is no prior knowledge of any relationship between the two. So the best way to report the complexity is O(a * b), not O(n<sup>2</sup>).

Examples of quadratic complexity algorithms are: finding duplicates in an array, insertion sort, bubble sort, finding all the pairs of an array and many more. We will learn more when we solve problems.
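
As a quick illustration of one of the listed examples, here is a minimal insertion sort sketch; in the worst case (reverse-sorted input), each element shifts past all the elements before it, giving the n(n-1)/2 shifts characteristic of O(n<sup>2</sup>):

```java
import java.util.Arrays;

class InsertionSort {
    // Worst case O(n^2): each new element may shift past every element before it.
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Shift larger elements one slot to the right
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 9]
    }
}
```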

### O(log n) – Logarithmic time complexity

Logarithmic time complexities usually apply to algorithms that divide the input into multiple parts every time. For instance, in the binary search algorithm, we divide the array into two halves and discard one of them each time. We keep doing that until there is no element left to look at. So, for the first iteration we have size n, then n/2, then n/4, and so on till we reach 1. How many iterations does it take to reach 1 from n by dividing n by 2 each time? It is log<sub>2</sub>n times.
Hence the complexity of the binary search algorithm is O(log n); you can deduce the same using the Master Theorem.
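
A minimal iterative binary search sketch (names are my own), halving the range [low, high] on every iteration:

```java
class BinarySearch {
    // Each iteration halves the remaining range, so there are O(log n) iterations.
    static int search(int[] sorted, int key) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) low = mid + 1;   // discard the left half
            else high = mid - 1;                    // discard the right half
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 7, 9, 13, 21};
        System.out.println(search(a, 13)); // 4
        System.out.println(search(a, 5));  // -1
    }
}
```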

Adding a new element in a balanced binary search tree is a logarithmic algorithm.

In the next post, we will discuss the remaining four types of complexity. Please comment if anything is missing or wrong.
If you are preparing for an interview and looking for personalized coaching for preparation, please reach out to us or book a free session with us.

# What is Master Theorem?

The Master theorem gives us the solution of the complexity analysis of recursive functions by substituting values for the variables. The Master Theorem applies to recurrences of the following form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
These kinds of recurrence relations occur in the analysis of many “divide and conquer” algorithms like binary search or merge sort.

But what do these variables a, b and n mean?
n is the size of the input or the problem.
a is the number of subproblems in the recursion; that is, are we dividing the problem into two halves, or three, or five? For example, for the binary search algorithm a = 1, and for the merge sort algorithm it is 2.
n/b is the relative subproblem size, i.e., the rate at which the input is reduced. E.g., binary search and merge sort cut the input in half.
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and
the cost of merging the solutions to the subproblems.

Once we have a, b, and f(n), it is easy to find the complexity of the algorithm by substituting the values in this expression:

O(n<sup>log<sub>b</sub>a</sup>)

However, that’s not all; we also have f(n) in the equation, and the final runtime of your algorithm depends on the relationship between n<sup>log<sub>b</sub>a</sup> and f(n).

There are three possible scenarios:
Scenario 1: f(n) = O(n<sup>c</sup>) and c < log<sub>b</sub>a; then the complexity is O(n<sup>log<sub>b</sub>a</sup>). This means the recursion takes more time than the combine and merge steps.

Example:
T(n) = 8T(n/2) + 1000n<sup>2</sup>
Here, log<sub>b</sub>a = log<sub>2</sub>8 = 3 > c = 2; hence the complexity is O(n<sup>log<sub>b</sub>a</sup>) = O(n<sup>3</sup>)

Scenario 2: f(n) = O(n<sup>c</sup>log<sup>k</sup>n) for some k >= 0 and c = log<sub>b</sub>a; then the complexity is O(n<sup>log<sub>b</sub>a</sup>log<sup>k+1</sup>n). It essentially means the runtime is the same inside and outside the recursion.

Example:
T(n) = 2T(n/2) + 10n
10n can be written as O(n log<sup>0</sup>n) with k = 0.
Here, log<sub>b</sub>a = log<sub>2</sub>2 = 1 = c; hence the complexity is O(n<sup>log<sub>b</sub>a</sup>log<sup>k+1</sup>n) = O(n log n)

Scenario 3: f(n) = O(n<sup>c</sup>) and c > log<sub>b</sub>a; then the complexity is O(n<sup>c</sup>). It essentially means the work done outside the recursion is more than the work of splitting and recursing.

Example:
T(n) = 2T(n/2) + n<sup>2</sup>
Here, log<sub>b</sub>a = log<sub>2</sub>2 = 1 < c = 2; hence the complexity is O(n<sup>c</sup>) = O(n<sup>2</sup>)
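
The three scenarios can be folded into a small helper for the common case f(n) = O(n<sup>c</sup>log<sup>k</sup>n); this is just a sketch of my own, not a standard API:

```java
import java.util.Locale;

class MasterTheorem {
    // Classifies T(n) = a*T(n/b) + O(n^c log^k n) into one of the three scenarios
    // above and returns the resulting complexity as a string.
    static String solve(int a, int b, double c, int k) {
        double critical = Math.log(a) / Math.log(b);   // log_b a
        double eps = 1e-9;                              // tolerance for the equality case
        if (Math.abs(c - critical) < eps) {
            // Scenario 2: work is balanced inside and outside the recursion.
            return String.format(Locale.ROOT, "O(n^%.1f log^%d n)", c, k + 1);
        } else if (c < critical) {
            // Scenario 1: the recursion dominates.
            return String.format(Locale.ROOT, "O(n^%.1f)", critical);
        } else {
            // Scenario 3: the work outside the recursion dominates.
            return k == 0 ? String.format(Locale.ROOT, "O(n^%.1f)", c)
                          : String.format(Locale.ROOT, "O(n^%.1f log^%d n)", c, k);
        }
    }

    public static void main(String[] args) {
        System.out.println(solve(8, 2, 2, 0)); // scenario 1 -> O(n^3.0)
        System.out.println(solve(2, 2, 1, 0)); // scenario 2 -> O(n^1.0 log^1 n), i.e., O(n log n)
        System.out.println(solve(2, 2, 2, 0)); // scenario 3 -> O(n^2.0)
    }
}
```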

## Exceptions to Master Theorem

1. T(n) = 2<sup>n</sup>T(n/2) + n<sup>n</sup>.

This is not admissible because a is not constant here. For the Master theorem to be applicable, a and b must be constants.

2. T(n) = 0.5T(n/2) + n<sup>2</sup>.

This is not admissible because a < 1, which is to say we are reducing the problem to fewer than one subproblem. For the Master theorem to be applicable, a must be at least 1.

3. T(n) = 64T(n/2) – n<sup>2</sup>log n.

This is not admissible because f(n) is negative.

4. T(n) = 64T(n/2) – 2<sup>n</sup>

This is not admissible because f(n) is not polynomial.

## Master Theorem Examples

Let’s apply the Master theorem to some well-known algorithms and see if it works.
### Binary Search Algorithm

In the binary search algorithm, depending on the relationship between the middle element and the key, we discard one part of the array and look into the other. Also, from above, we know the Master theorem:

T(n) = aT(n/b) + f(n)

In this case, a is 1 as we reduce to only one subproblem, and b is 2 as we divide the input in half. No work is done outside the recursion, hence f(n) is O(1).

log<sub>b</sub>a is log<sub>2</sub>1 = 0, so log<sub>b</sub>a is equal to c, which is 0. In that case, the complexity of the algorithm is given by O(n<sup>log<sub>b</sub>a</sup>log<sup>k+1</sup>n), where k = 0. Substituting the values, we get the complexity of the binary search algorithm as O(log n).

### Merge Sort Algorithm

In the merge sort algorithm, we split the array into two equal parts and sort them individually. Apart from the split, we do a merge of the elements, which takes O(n) time. Let’s find a and b in the Master Theorem equation.

T(n) = aT(n/b) + f(n)

In this case, a is 2 as we reduce to two subproblems, and b is 2 as we divide the input in half. The work done outside the recursion is the merge step, hence f(n) is O(n).

log<sub>b</sub>a is log<sub>2</sub>2 = 1, so log<sub>b</sub>a is equal to c, which is 1. In that case, the complexity of the algorithm is given by O(n<sup>log<sub>b</sub>a</sup>log<sup>k+1</sup>n), where k = 0. Substituting the values, we get the complexity of the merge sort algorithm as O(n log n).

Please book a free session if you are looking for coaching to prepare for your next technical interview. At Algorithms and Me, we provide personalized coaching and mock interviews to prepare you for Amazon, Google, Facebook, etc. interviews.

### References

• Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
• Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. ISBN 0-471-38365-1. The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270.