Complexity analysis of functions Part 2

In the last post, complexity analysis of functions, we learned four common complexity functions: constant time O(1), linear O(n), quadratic O(n²), and logarithmic O(log n). Today, we will discuss three more of the common time complexities, with examples of each to consolidate our understanding.

These time complexities are: O(n log n), known as linearithmic; O(2ⁿ), known as exponential; and O(n!), known as factorial complexity.

O(n log n) – Linearithmic complexity

Whenever an interviewer asks you to optimize a quadratic O(n²) function, the first thing you should attempt is to make it O(n log n), which sits between linear and quadratic complexity. Sorting algorithms illustrate this well: Bubble Sort and Insertion Sort are quadratic O(n²), whereas Merge Sort, Quick Sort, and Heap Sort are linearithmic O(n log n) and considered better than the former ones. If you ever have to sort your input to solve a problem, pick one of the latter three.

If we sort an input to reduce the complexity of a solution from O(n²), we must not forget that the sorting step then dominates the overall complexity: the solution becomes O(n log n), not linear. It is best to use a sort like Quick Sort, which gives us O(n log n), rather than Bubble Sort, which is O(n²).
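As an illustration, here is a hypothetical pair-sum sketch (the problem and names are my own, not from the post): deciding whether any two elements sum to a target takes O(n²) with two nested loops, but sorting first and scanning with two pointers brings it down to O(n log n), with the sort dominating.

```java
import java.util.Arrays;

public class PairSum {
    // Returns true if any two distinct elements of a sum to target.
    // Sorting costs O(n log n); the two-pointer scan costs O(n),
    // so sorting dominates and the total is O(n log n).
    static boolean hasPairWithSum(int[] a, int target) {
        int[] sorted = a.clone();
        Arrays.sort(sorted);
        int i = 0, j = sorted.length - 1;
        while (i < j) {
            int sum = sorted[i] + sorted[j];
            if (sum == target) return true;
            if (sum < target) i++;   // need a bigger sum
            else j--;                // need a smaller sum
        }
        return false;
    }

    public static void main(String[] args) {
        int[] a = {8, 3, 5, 1};
        System.out.println(hasPairWithSum(a, 9)); // 8 + 1 = 9 -> true
        System.out.println(hasPairWithSum(a, 2)); // false
    }
}
```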

If you are looking at a recursive function with a Master theorem recurrence of the form T(n) = aT(n/b) + O(n^c), and log_b a equals c, then that recursive function has O(n^c log n) complexity; for c = 1, that is O(n log n).

```
/* Merge two sorted halves a[start..mid] and a[mid+1..end]
   into one sorted range. Visits every element once: O(n). */
void merge(int a[], int start, int mid, int end){
    int i, j, k;
    int temp[end - start + 1];

    i = start; j = mid + 1; k = 0;
    /* Compare elements of the two sorted halves and store
       them in sorted order. */
    while(i <= mid && j <= end){
        if(a[i] < a[j]){
            temp[k++] = a[i++];
        }
        else {
            temp[k++] = a[j++];
        }
    }
    /* Copy any leftover elements from either half. */
    while(i <= mid){
        temp[k++] = a[i++];
    }
    while(j <= end){
        temp[k++] = a[j++];
    }
    /* Copy the temporary array back into the original array. */
    for(i = 0; i < k; i++){
        a[i + start] = temp[i];
    }
}

void mergeSort(int a[], int start, int end){
    if(start < end){
        int mid = (start + end) / 2;
        /* Sort the left part */
        mergeSort(a, start, mid);
        /* Sort the right part */
        mergeSort(a, mid + 1, end);
        /* Merge them together */
        merge(a, start, mid, end);
    }
}
```

Let’s find the values of: T(n) = a T(n/b) + f(n)
a: the number of subproblems. So, a = 2.
b: each subproblem divides n in half. So, b = 2.
f(n): the work done outside the recursion is the merge function, which has a runtime of O(n) = O(n¹) since it visits all the elements of the given arrays. So, c = 1.

So log_b a = log₂ 2 = 1, which equals c = 1, and hence the complexity of the merge sort algorithm is O(n log n). Refer to the Master Theorem post for more on how the equation is solved.
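The substitution can be written out as a quick check (standard Master Theorem algebra, nothing beyond what the post states):

```latex
T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + O(n^1)
\quad\Rightarrow\quad a = 2,\; b = 2,\; c = 1

\log_b a = \log_2 2 = 1 = c
\quad\Rightarrow\quad T(n) = O(n^c \log n) = O(n \log n)
```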

O(2ⁿ) – Exponential complexity

This is the complexity you never want to have for your solution in an interview, but sometimes the first solution we come up with is exponential. That gives us a platform to start optimizing. But remember, not all exponential complexity algorithms can be optimized; some problems are genuinely exponential.

The easiest way to find out if an algorithm is exponential is to try it with a few small inputs like 1, 2, 3 and see if the number of steps the algorithm takes roughly doubles with every increase in the input.

```
int function(int n){
    if(n == 0) return 0;

    return function(n - 1) + function(n - 1);
}
```

Let’s use our trick:

For n = 1, there is one call to function(1) and two calls to function(0): 3 calls in total.
For n = 2, there is one call to function(2) and two calls to function(1), each of which makes 3 calls, so the total for n = 2 is 7.
For n = 3, there is one call to function(3) and two calls to function(2), each of which makes 7 calls, so the total for n = 3 is 15.

The number of steps roughly doubles with each increase in the input, the typical signature of an exponential function.

A rule of thumb for the complexity of recursive functions in which the input for each successive call is not divided by some factor is O(branches^depth). In the above example, the number of branches is 2 and the depth of each branch is n, hence a complexity of O(2ⁿ).

Can you guess the complexity of the below code? This function finds the sum of all the nodes in a balanced binary tree.

```
int sum(Node node){
    if(node == null) return 0;
    return sum(node.left) + node.val + sum(node.right);
}
```

If we apply the formula O(branches^depth), it becomes 2^depth and the algorithm looks exponential. We know that this function is not exponential: to sum all the nodes of a binary tree, we visit each node once, so the complexity is O(n). Is our formula wrong? No, it is not. Look at the power of 2: it is the depth of the binary tree. The depth of a balanced binary tree is log₂ n, so the actual complexity is O(2^(log₂ n)), which, when solved, is equal to O(n).

An example of a truly exponential function is finding all subsets of a given set or array. For a set of n elements, there are 2ⁿ subsets, and if you have to gather all of them, it is exponential work. This is one of those problems you cannot optimize.
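A minimal sketch of that subset enumeration (my own illustration, assuming integer elements): each element doubles the number of subsets collected so far, which is exactly why the total work is O(2ⁿ).

```java
import java.util.ArrayList;
import java.util.List;

public class Subsets {
    // Each element is either in or out of a subset, so an n-element
    // set has 2^n subsets; collecting all of them is O(2^n) work.
    static List<List<Integer>> allSubsets(int[] a) {
        List<List<Integer>> result = new ArrayList<>();
        result.add(new ArrayList<>());          // start with the empty set
        for (int x : a) {
            int size = result.size();
            for (int i = 0; i < size; i++) {    // double the list:
                List<Integer> copy = new ArrayList<>(result.get(i));
                copy.add(x);                    // each old subset, plus x
                result.add(copy);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(allSubsets(new int[]{1, 2, 3}).size()); // 2^3 = 8
    }
}
```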

O(n!) – Factorial complexity

What is a factorial? It is the product of all the positive integers less than or equal to n. For example,

the factorial of 5 = 5 x 4 x 3 x 2 x 1 = 120
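The arithmetic above can be sketched as a small loop (note, as a side remark of my own, that computing the value n! is only O(n) work; it is algorithms that enumerate n! results, like permutation generation, that are factorial time):

```java
public class Factorial {
    // Multiply 2, 3, ..., n together: O(n) multiplications.
    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 5 x 4 x 3 x 2 x 1 = 120
    }
}
```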

This complexity grows extremely fast, and you do not want your algorithms to have it.

The classic example of a factorial complexity algorithm is finding all the permutations of a string: a string of n distinct characters has n! permutations.

```
function getPermutations(string, prefix = '') {
    if(string.length <= 1) {
        return [prefix + string];
    }

    // For each character, fix it as the next character of the prefix and
    // recurse on the remaining characters: n * (n-1) * ... * 1 = n! results.
    return Array.from(string).reduce((result, char, index) => {
        const remainder = string.slice(0, index) + string.slice(index + 1);
        return result.concat(getPermutations(remainder, prefix + char));
    }, []);
}
```

Today we learned three more types of complexity, examples of each, and tips to identify them in an interview. If you are preparing for an interview and are looking for personalized coaching, please reach out to us on [email protected] or book a free session with us.

Common time complexities of functions

In the last two posts, time complexity of algorithms and the Master theorem, we learned the fundamentals of complexity analysis. Today, we will discuss some of the common time complexities and examples of each to consolidate our understanding.

There are eight types of time complexities which occur commonly in algorithms: O(1), also called constant time; O(n), called linear; O(n²), called quadratic; O(nᵏ), called polynomial; O(log n), known as logarithmic; O(n log n), known as linearithmic; O(2ⁿ), known as exponential; and O(n!), also known as factorial complexity.

Growth of complexity functions

All these complexity functions grow at different rates, and it is very important to know the difference; that will help you evaluate your algorithms in an objective manner. Below is the graph of the growth of each complexity function as n grows larger.

O(1) – Constant time complexity

Constant time complexity does not mean that no work or operations are done. It just means that the number of operations does not depend on the size of the input: irrespective of the input size, the algorithm runs a constant number of operations.
For example, accessing a specific index in an array has constant time complexity. Similarly, accessing the value for a given key in a dictionary* is an O(1) operation; it does not matter if there is only one word in the dictionary or a million words.
Another misconception about constant time is that the constant should be small. No, that is not required; it can be as large as you like. As long as it does not depend on the input, your algorithm is a constant time, or O(1), algorithm.

A perfect hash function can give a worst-case complexity of O(1). Such an ideal hash function is not practical, so there will be some collisions, which can lead to a worst-case runtime of O(n); on average, though, the lookup time is O(1).
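A quick sketch of that dictionary lookup, using Java's HashMap (the keys and values here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class Lookup {
    public static void main(String[] args) {
        // Average-case O(1): the cost of get() does not grow with map size.
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);
        System.out.println(ages.get("alice")); // prints 30
    }
}
```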

Quick check: What is the complexity of the code below?

```
for(int i = 0; i < 100000; i++){
    System.out.println(i);
}
```

By the look of it, we tend to think that it is O(n). However, notice that the for loop does not run a variable number of times; it runs exactly 100000 times no matter what the input to the function is. This is constant time code.

It becomes more confusing when this kind of loop is nested inside multiple loops. For example,

```
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        for(int k = 0; k < 100000; k++){
            System.out.println("Hello");
        }
    }
}
```

By the look of it, we might count three nested loops and call it O(n³). However, the innermost loop runs exactly 100000 times regardless of the input, so it contributes only a constant factor; the two outer loops each run n times, making the overall complexity O(n²).

O(n) – Linear time complexity

When the number of operations is directly proportional to the size of the input, the time complexity of the algorithm is linear. For example,

```
for(int i = 0; i < n; i++){
    System.out.println(i + " ");
}
```

This function prints all the numbers from 0 to n-1. As n grows, the number of executions of the print statement increases as well. For n = 0, it does not execute at all; for n = 1, it executes once; for n = 2, it executes twice; and in general, it executes n times. So, the loop is directly proportional to the value of n, and hence the complexity of the loop is O(n).

Can you identify the complexity of the following function? It prints all the odd numbers less than n.

```
for(int i = 0; i < n; i++){
    if(i % 2 == 1)
        System.out.println(i + " ");
}
```

We can see that the print statement is executed only half the time, but still the complexity of the loop is O(n), because the number of iterations still depends on the size of the input. The same is the complexity of the code below:

```
for(int i = 0; i < n/2; i++){
    System.out.println(i + " ");
}
```

O(n²) – Quadratic time complexity

A typical quadratic complexity function has this format: a loop within a loop, where both loops depend on the size of the input.

```
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        System.out.println(i + j);
    }
}
```

But there are some cases where it is not quite so evident and can lead to misconceptions. One of the common quadratic complexity patterns is below, and students and professionals alike make mistakes identifying it.

```
for(int i = 0; i < n; i++){
    for(int j = i + 1; j < n; j++){
        System.out.println(i + j);
    }
}
```

The cause of concern is that the inner loop does not run exactly n times. Let’s see why this piece of code is still O(n²).

For i = 0: the inner loop executes n-1 times
For i = 1: the inner loop executes n-2 times
For i = 2: the inner loop executes n-3 times
…
For i = n-2: the inner loop executes 1 time

So the total number of times the print statement is executed is (n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2, which is O(n²).

Another common piece of code where candidates miscalculate the time complexity is below:

```
for(int i = 0; i < a; i++){
    for(int j = 0; j < b; j++){
        System.out.println(i + j);
    }
}
```

It is a common tendency to report the complexity of this code as O(n²) just by looking at the two loops, even more so when there is an n somewhere in the function. Be careful: there are two different variables, a and b, controlling the loops, and there is no prior knowledge of any relationship between the two. So, the correct way to report the complexity is O(a * b), not O(n²).

Examples of quadratic complexity algorithms are: finding duplicates in an array, insertion sort, bubble sort, finding all the pairs of an array and many more. We will learn more when we solve problems.

O(log n) – Logarithmic time complexity

Logarithmic time complexities usually apply to algorithms that divide the input into multiple parts every time. For instance, in the binary search algorithm, we divide the array or input into two halves and discard one half each time. We keep doing that until we do not have any elements left to look at. So, in the first iteration we have size n, then n/2, then n/4, and so on until we reach 1. How many iterations does it take to get from n to 1, dividing n by 2 each time? It takes log₂ n iterations.
Hence the complexity of the binary search algorithm is O(log n); you can deduce the same using the Master Theorem.
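The halving described above can be sketched as a standard iterative binary search (a minimal version of my own, assuming a sorted int array):

```java
public class BinarySearch {
    // Each iteration halves the search range, so at most about
    // log2(n) + 1 iterations are needed: O(log n).
    static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi) / 2
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1; // discard left half
            else hi = mid - 1;                      // discard right half
        }
        return -1;                                  // not found
    }

    public static void main(String[] args) {
        int[] a = {2, 5, 8, 12, 23, 38, 56};
        System.out.println(search(a, 23)); // index 4
        System.out.println(search(a, 7));  // -1
    }
}
```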

Adding a new element to a balanced binary search tree is also a logarithmic algorithm.

In the next post, we will discuss the last four types of complexity. Please comment on anything missing or wrong.
If you are preparing for an interview and looking for personalized coaching for preparation, please reach out to us or book a free session with us.

Time complexity of algorithms

An algorithm is a collection of steps that process a given input to produce an output. For example, to find a minimum element in an unsorted integer array, we have to do the following steps: Declare a variable min = Integer.MAX_VALUE. Go through each element in the array and check if the current element is less than min, then update min. When the array scan is finished, min will contain the minimum element in the given array.

The time complexity of an algorithm is the estimated time the algorithm takes to process the input and generate the desired output.
We could implement an algorithm, run it on a computer, and measure the time it takes. What’s wrong with this method? There are many variables that can affect the run time of the algorithm: the kind of machine it is running on, the accuracy of the measurement, the implementation language used, the proficiency of the programmer, etc.

The time complexity is not about timing how long the algorithm takes. Instead, how many operations are executed. The number of instructions executed by a program is affected by the size of the input and how their elements are arranged.

The complexity or run time of an algorithm is calculated without implementing or running the algorithm on a computer.

How do I calculate the time complexity of an algorithm? It’s easy: just count the number of operations that the algorithm executes. Let’s take an example and see.

```
public int findMinimum(int[] a){
    //Step 1: Initialize min
    int min = Integer.MAX_VALUE;

    //Step 2: Loop through all elements
    for(int i = 0; i < a.length; i++){
        //Step 3: Compare with each element
        if(min > a[i]){
            min = a[i];
        }
    }
    //Step 4: Return the value
    return min;
}
```

Now, count the number of operations: initialization happens once, and the return happens once. The comparison happens n times, and in the worst case the assignment also happens n times. So, the total number of operations done by the algorithm to find the minimum element in an array is 2n + 2.

Asymptotic analysis

In mathematical analysis, asymptotic analysis is a method of describing limiting behavior. If f(n) = n² + 3n, then as n becomes very large, the term 3n becomes insignificant compared to n². The function f(n) is said to be “asymptotically equivalent to n², as n tends to infinity”.

We can generalize the function f(n) = 2n + 2 to f(n) = kn + c, where k is a multiplicative factor and c is a constant. As n increases, the effect of k and c diminishes. For example, if k = 2 and c = 3, then for n = 1000, k and c hardly matter. So, 2n + 2 is asymptotically equivalent to n.
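The same argument can be made precise by exhibiting the constant that Big-O notation hides:

```latex
f(n) = 2n + 2 \le 2n + 2n = 4n \quad \text{for all } n \ge 1,
\qquad\text{hence}\qquad f(n) = O(n).
```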

Worst case complexity and Big O notation

The worst-case complexity of an algorithm is the maximum number of operations the algorithm would do for a given input size n.

Big-O notation is a way to represent the growth of a function. In simple words, if I say an algorithm runs in O(g(n)), then for sufficiently big n, the running time of the algorithm is less than g(n) multiplied by some constant. Even less formally, if f(n) = O(n²), then we can say that f(n) grows no faster than n², which links it back to the worst-case complexity.

For example, if we search for an element in an array using linear search, the worst case is when the element we are looking for is at the last index of the array or not present in the array at all. We end up comparing every element of the array with the given value: n operations, and O(n) complexity.

Average case behavior

In computational complexity theory, the average-case complexity of an algorithm is the amount of some computational resource (typically time) used by the algorithm, averaged over all possible inputs. It is frequently contrasted with worst-case complexity which considers the maximal complexity of the algorithm over all possible inputs.
Wikipedia

For example, many sorting algorithms which utilize randomness, such as Quicksort, have a worst-case running time of O(n²), but an average-case running time of O(n log n), where n is the length of the input to be sorted.

Best case behavior
The least used is the best-case behavior, i.e., what would be the least number of comparisons one has to make in linear search? The answer is one, if the element we are looking for is the first element of the array.

TIP

In an interview, we are mostly concerned with worst-case complexity, or Big-O notation. Most of the time, the worst case and the expected case are the same; however, if they are different, it is worth mentioning in the interview.

Some examples of time complexity analysis

Question 1: What will be the asymptotic notation for the expression n² + 3n + 2?
O(n²), because as n tends to infinity, n² dominates the expression.

Question 2: What will be the asymptotic notation for the expression n² + m log m?
O(n² + m log m), because we do not know any relationship between n and m; as n tends to infinity, we do not know whether n² dominates m log m, so we keep both terms.

Question 3: Find the complexity of the function below.

```
for(int i = 1; i <= n; i++){
    for(int j = i + 1; j <= n; j++){
        System.out.println(i + "," + j);
    }
}
```

The best way to calculate the complexity of any iterative function or code block is to count the number of operations being done. In this example, for i = 1, the print statement executes n-1 times. For i = 2, it executes n-2 times. As we go on, for i = n-1, the print statement executes only 1 time. The overall number of operations is the sum of all these operations.

(n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2 = (n² − n)/2 = O(n²)

So the overall complexity of the implementation in Big-O notation is O(n²).

In the next post, we will look at the common types of time complexities, like constant, linear, polynomial, and exponential, and see some examples which exhibit those complexity functions. Please share if there is anything wrong or missing.

If you are preparing for an interview and would like to have personalized coaching, please book a free demo session with us.