In the last post, complexity analysis of functions, we learned four common types of complexity functions: constant time `O(1)`, linear `O(n)`, quadratic `O(n^2)`, and logarithmic `O(log n)`. Today, we will discuss three more of the common time complexities, with examples of each to consolidate our understanding.

These time complexities are: linearithmic `O(n log n)`, exponential `O(2^n)`, and factorial `O(n!)`.

### O(n log n) – Linearithmic complexity

Whenever you are asked in an interview to optimize your quadratic `O(n^2)` function, the first thing you should attempt is to make it `O(n log n)`, since that sits between linear and quadratic complexity. This is self-evident for sorting algorithms: Bubble Sort and Insertion Sort are quadratic `O(n^2)` algorithms, whereas Merge Sort, Quick Sort, and Heap Sort are linearithmic `O(n log n)` and therefore considered better than the former ones. If you ever have to sort your input to solve a problem, pick one of the latter three.

If we sort the input to reduce a solution's complexity from `O(n^2)` to linear, we must not forget that the sorting step now dominates the overall complexity of the solution. It is best to use a sort like Quick Sort, which gives us `O(n log n)` on average, compared to Bubble Sort, which is `O(n^2)`.

If you are looking at a recursive function with a Master Theorem recurrence like `T(n) = aT(n/b) + O(n^c)`, and `log_b(a)` is equal to `c`, then that recursive function has `O(n^c log n)` complexity; for `c = 1`, that is `O(n log n)`.

```c
void merge(int a[], int start, int mid, int end) {
    int i, j, k;
    int temp[end - start + 1];
    i = start;
    j = mid + 1;
    k = 0;

    /* Compare the elements of the two sorted halves and
       store the smaller one in the temporary array. */
    while (i <= mid && j <= end) {
        if (a[i] < a[j]) {
            temp[k++] = a[i++];
        } else {
            temp[k++] = a[j++];
        }
    }
    // Copy any remaining elements of the left half
    while (i <= mid) {
        temp[k++] = a[i++];
    }
    // Copy any remaining elements of the right half
    while (j <= end) {
        temp[k++] = a[j++];
    }
    // Copy the temporary array back into the original array
    for (i = 0; i < k; i++) {
        a[i + start] = temp[i];
    }
}

void mergeSort(int a[], int start, int end) {
    if (start < end) {
        int mid = start + (end - start) / 2;
        // sort the left part
        mergeSort(a, start, mid);
        // sort the right part
        mergeSort(a, mid + 1, end);
        // merge them together
        merge(a, start, mid, end);
    }
}
```

Let’s find the values of `a`, `b`, and `f(n)` in `T(n) = aT(n/b) + f(n)` for merge sort:

`a`: the number of subproblems. Here, a = 2.

`b`: each subproblem divides n in half. So, b = 2.

`f(n)`: the work done outside the recursion is the `merge` function, which has a runtime of `O(n)` since it visits all the elements of the given arrays.

So `log_b(a)` = 1 is equal to `c` = 1, and hence the complexity of the merge sort algorithm is `O(n log n)`. Refer to the Master Theorem for a deeper understanding of how this recurrence is solved.

### O(2^n) – Exponential complexity

This is the complexity you never want your solution to have in an interview, but sometimes the first solution we come up with is exponential. That gives us a starting point for optimization. Remember, though, that not all exponential-complexity algorithms can be optimized; some are genuinely exponential.

The easiest way to find out whether an algorithm is exponential is to try it with a few small inputs like 1, 2, 3 and see if the number of steps the algorithm takes doubles (or more) with every increase in the input.

```c
int function(int n) {
    if (n == 0) return 0;
    return function(n - 1) + function(n - 1);
}
```

Let’s use our trick:

For n = 2, there is one call to `function(2)` and two calls to `function(1)`, each of which makes 3 calls in total, so the overall number of calls for n = 2 is 1 + 2 × 3 = 7.

For n = 3, there is one call to `function(3)` and two calls to `function(2)`, each of which makes 7 calls in total, so the overall number of calls for n = 3 is 1 + 2 × 7 = 15.

The number of steps more than doubles with each increase in the input: the typical sign of an exponential function.

A rule of thumb for the complexity of recursive functions where the input of each successive call is not divided by some factor is `O(branches^depth)`. In the example above, the number of branches is 2 and the depth of each branch is n. Hence the complexity is `O(2^n)`.

Can you guess the complexity of the code below? This function finds the sum of all the nodes in a balanced binary tree.

```java
int sum(Node node) {
    if (node == null) return 0;
    return sum(node.left) + node.val + sum(node.right);
}
```

If we apply the formula `O(branches^depth)`, it becomes `O(2^depth)` and the algorithm looks exponential. But we know that this function is not exponential: to sum all the nodes of a binary tree we visit each node once, so the complexity is `O(n)`. Is our formula wrong? No, it is not. Look at the exponent of 2: it is the depth of the tree, and the depth of a balanced binary tree is `log2(n)`. So the actual complexity is `O(2^(log2 n))`, which, when solved, is equal to `O(n)`, since 2^(log2 n) = n.

An example of a truly exponential problem is to **find all subsets of a given set or array**. For a set of n elements, there are 2^n subsets, and if you have to gather all of them, it is exponential work. This is one of those problems you cannot optimize further.

### O(n!) – Factorial complexity

What is a factorial? It is the product of all positive integers less than or equal to n. For example,

the factorial of 5 = 5 x 4 x 3 x 2 x 1 = 120

This complexity grows super fast, and you do not want your algorithms to have it.

The classic example of factorial complexity is **finding all the permutations of a string**: a string of n distinct characters has n! permutations, so generating every one of them is inherently factorial work. (Merely computing the factorial of a number, by contrast, is only `O(n)`.)

```javascript
function getPermutations(string, prefix = '') {
  if (string.length <= 1) {
    return [prefix + string];
  }
  return Array.from(string).reduce((result, char, index) => {
    // Remove the current character and permute the remaining ones
    const remainder = string.slice(0, index) + string.slice(index + 1);
    return result.concat(getPermutations(remainder, prefix + char));
  }, []);
}
```

For example, `getPermutations('abc')` returns all 3! = 6 orderings: `['abc', 'acb', 'bac', 'bca', 'cab', 'cba']`.

Today we learned three more common types of complexity, with examples and tips to identify them in an interview. If you are preparing for an interview and are looking for personalized coaching, please reach out to us at communications@algorithmsandme.com or book a free session with us.