Complexity analysis of functions Part 2

In the last post, complexity analysis of functions, we learned four types of complexity functions: constant time O(1), linear O(n), quadratic O(n^2) and logarithmic O(log n). Today, we will discuss three more of the common time complexities and examples of each to consolidate our understanding.

These time complexities are: O(n log n) known as linearithmic, O(2^n) known as exponential, and O(n!) known as factorial complexity.

O(n log n) – Linearithmic complexity

Whenever you are asked in an interview to optimize a quadratic O(n^2) function, the first thing to attempt is to make it O(n log n), as it sits between linear and quadratic complexity. This is apparent in sorting algorithms: Bubble Sort and Insertion Sort are quadratic O(n^2) algorithms, whereas Merge Sort, Quick Sort, and Heap Sort are linearithmic O(n log n) and considered better than the former ones. If you ever have to sort your input to solve a problem, pick one of the latter three.

When we sort an input to reduce the complexity of the rest of the solution to linear, we often forget that the sorting step now dominates the overall complexity; so it is best to use a sort like Quick Sort, which gives us O(n log n), rather than Bubble Sort, which is O(n^2).
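As a quick illustration of this pattern, consider checking whether an array contains a duplicate. The sketch below (hasDuplicate is a hypothetical helper, not code from the post) relies on sorting: comparing every pair would be O(n^2), while sorting first and scanning neighbours is O(n log n), dominated by the sort.

import java.util.Arrays;

public class DuplicateCheck {
    // Sort first (O(n log n)); equal values become neighbours,
    // so a single O(n) scan finds any duplicate.
    static boolean hasDuplicate(int[] a) {
        int[] copy = a.clone();  // keep the caller's array intact
        Arrays.sort(copy);       // O(n log n) dominates the total cost
        for (int i = 1; i < copy.length; i++) {
            if (copy[i] == copy[i - 1]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasDuplicate(new int[]{3, 1, 4, 1, 5})); // true
    }
}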

If you are looking at a recursive function with a Master theorem equation like T(n) = aT(n/b) + O(n^c), and log_b(a) is equal to c, then that recursive function has O(n^c log n) complexity; for c = 1, that is the familiar O(n log n).

void merge(int a[], int start, int mid, int end){
  
    int i,j,k;
    int temp[end-start+1];
  
    i = start; j = mid+1; k =0;
    /* Compare all elements of two sorted arrays and store
      them in sorted array. */
    while(i <= mid && j <= end){
        if(a[i]< a[j]){
            temp[k++]= a[i++];
        }
        else {
            temp[k++]  = a[j++];
        }
    }
    while(i <= mid){
        temp[k++] = a[i++]; 
    }
    while(j <= end){
        temp[k++] = a[j++]; 
    }
    //Copy the temporary array into original array
    for(i=0; i<k; i++){
        a[i+start] = temp[i];
    }
}
void mergeSort(int a[], int start, int end){
  
    int mid = start + (end - start)/2; // avoids integer overflow
    if(start<end){
        //sort the left part
        mergeSort(a, start, mid);
        //sort right part
        mergeSort(a, mid+1, end);
  
        //merge them together
        merge(a, start, mid, end);
    }
}

Let’s find the values of the variables in T(n) = aT(n/b) + f(n):
a: The number of subproblems. So, a = 2.
b: Each subproblem divides n in half. So, b = 2.
f(n): The work done outside the recursion is the merge function, which has a runtime of O(n), i.e., f(n) = O(n^c) with c = 1, since it visits all the elements of the given arrays.

So log_b(a) = log_2(2) = 1 is equal to c = 1, and hence the complexity of the merge sort algorithm is O(n log n). Refer to the Master Theorem for more understanding of how this equation is solved.

O(2^n) – Exponential complexity

This is the complexity you never want your solution to have in an interview, but sometimes the first solution we come up with is exponential. That gives us a platform to start optimizing. But remember, not all exponential complexity algorithms can be optimized; some problems are genuinely exponential.

The easiest way to find out if an algorithm is exponential is to try it with a few small inputs like 1, 2, 3 and see if the number of steps the algorithm takes doubles with every increase in the input.

int function(int n){
   if(n == 0) return 0;
   
   return function(n-1) + function(n-1);
}

Let’s use our trick:

for n = 1, there is a call to function(1) and two calls to function(0): 3 calls in total.
for n = 2, there is a call to function(2) and two calls to function(1), each taking 3 calls, so overall calls for n = 2 are 7.
for n = 3, there is a call to function(3) and two calls to function(2), each taking 7 calls, so overall calls for n = 3 are 15.

The number of steps more than doubles with each increase in the input, a typical case of an exponential function.

A rule of thumb for the complexity of recursive functions in which the input of each successive call is not divided by some factor is O(branches^depth), where branches is the number of recursive calls made at each step. In the above example, the number of branches is 2 and the depth of each branch is n, hence the complexity of O(2^n).

Can you guess the complexity of the below code? This function finds the sum of all the nodes in a balanced binary tree.

int sum(Node node){
    if(node == null) return 0;
    return sum(node.left) + node.val + sum(node.right);
}

If we apply the formula O(branches^depth), it becomes 2^depth and the algorithm looks exponential. But we know that this function is not exponential: to sum all the nodes of a binary tree we visit each node once, so the complexity is O(n). Is our formula wrong? No, it is not. Look at the power of 2: it is the depth of the binary tree. The depth of a balanced binary tree is log_2(n), so the actual complexity is O(2^(log_2 n)), which, since 2^(log_2 n) = n, is equal to O(n).

An example of a true exponential complexity function is finding all subsets of a given set or an array. For a set of n elements, there are 2^n subsets, and if you have to gather all of them, it is exponential work. This is one of those problems you cannot optimize.
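To see where the 2^n comes from, here is a minimal sketch (not from the original post) that gathers all subsets; the result list doubles for every element processed:

import java.util.ArrayList;
import java.util.List;

public class Subsets {
    // Builds all subsets: every element doubles the number of subsets,
    // so the result has 2^n entries and the work is exponential.
    static List<List<Integer>> subsets(int[] set) {
        List<List<Integer>> result = new ArrayList<>();
        result.add(new ArrayList<>()); // start with the empty subset
        for (int element : set) {
            int size = result.size();
            for (int i = 0; i < size; i++) {
                List<Integer> withElement = new ArrayList<>(result.get(i));
                withElement.add(element);
                result.add(withElement);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // 3 elements -> 8 subsets
        System.out.println(subsets(new int[]{1, 2, 3}));
    }
}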

O(n!) – Factorial complexity

What is a factorial? It is the product of all positive integers less than or equal to n. For example,

the factorial of 5 = 5 x 4 x 3 x 2 x 1 = 120

This complexity grows super fast and you do not want your algorithms to have this complexity.

The classic example of an algorithm with factorial complexity is finding all the permutations of a string: a string of n distinct characters has n! permutations.

function getPermutations(string, prefix = '') {
  if(string.length <= 1) {
    return [prefix + string];
  }

  // For each character, fix it as the next character of the prefix
  // and recursively permute the remaining characters.
  return Array.from(string).reduce((result, char, index) => {
    const remainder = string.slice(0, index) + string.slice(index + 1);
    return result.concat(getPermutations(remainder, prefix + char));
  }, []);
}
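For example, getPermutations('abc') returns ['abc', 'acb', 'bac', 'bca', 'cab', 'cba'], all 3! = 6 orderings: each of the n top-level branches fixes one character and permutes the remaining n-1, giving n x (n-1) x ... x 1 = n! results.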

Today we learned three more types of time complexity, with examples and tips to identify them in an interview. If you are preparing for an interview and are looking for personalized coaching, please reach out to us on communications@algorithmsandme.com or book a free session with us.

Common time complexities of functions

In the last two posts, time complexity for algorithms and Master theorem, we learned the fundamentals of complexity analysis. Today, we will discuss some of the common time complexities and examples of each to consolidate our understanding.

There are eight types of time complexities which occur commonly in algorithms: O(1) called constant time, O(n) called linear, O(n^2) called quadratic, O(n^k) called polynomial, O(log n) known as logarithmic, O(n log n) known as linearithmic, O(2^n) known as exponential and O(n!) known as factorial complexity.

Growth of complexity functions

All these complexity functions grow at different rates and it is very important to know the difference. That will help you evaluate your algorithms in an objective manner. Below is the graph of the growth of each complexity function as n grows larger.

[Graph: growth of common complexity functions as n increases]

O(1) – Constant time complexity

Constant time complexity does not mean that no work is done or no operations are performed. It just means that the number of operations does not depend on the size of the input. Irrespective of the size of the input, it will run a constant number of times.
For example, accessing a specific index in an array has constant time complexity. Similarly, accessing a value for a given key from a dictionary* is an O(1) operation; it does not matter if there is only one word in the dictionary or a million words.
Another misconception about constant time is that it should be small. No, that is not required; it can be arbitrarily large. As long as it does not depend on the input, your algorithm is a constant time or O(1) algorithm.

*A perfect hash function can give a worst-case complexity of O(1). Such an ideal hash function is not practical, so there will be some collisions that can lead to a worst-case runtime of O(n). On average, though, the lookup time is O(1).
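As a small illustration (a sketch, not code from the post), a hash table lookup in Java costs the same on average whether the map holds two entries or a million:

import java.util.HashMap;
import java.util.Map;

public class HashLookup {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);

        // Average-case O(1): the cost of get() does not grow with
        // the number of entries stored in the map.
        System.out.println(ages.get("alice")); // prints 30
    }
}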

Quick check: What is the complexity of the code below?

for(int i=0; i<100000; i++){
     System.out.println(i);
}

By the look of it, we tend to think that it is O(n). However, notice that the loop does not run a variable number of times; it runs exactly 100000 times no matter what the input to the function is. This is constant time code.

It becomes more confusing when this kind of loop is nested inside multiple loops. For example,

for(int i=0; i<n; i++){
   for(int j=0; j<n; j++){
       for(int k=0; k<100000; k++){
           System.out.println("Hello");
       }
   }
}

By the look of it, we might report O(n^3) because there are three nested loops. However, the innermost loop runs exactly 100000 times regardless of the input, so it only contributes a constant factor. The two outer loops do depend on n, so the complexity of this code is O(n^2).

O(n) – Linear time complexity

When the time taken is directly proportional to the size of the input, the time complexity of the algorithm is linear. For example,

for(int i=0; i<n; i++){
   System.out.println( i + " ");
}

this function prints all the numbers from 0 to n-1. As n grows, the number of executions of the print statement grows as well. For n=0, it will not execute at all; for n=1, it executes once; for n=2, it executes twice; and for input n, it executes n times. The loop is directly proportional to the value of n, hence the complexity of the loop is O(n).

Can you identify the complexity of the following function? It prints all the odd numbers less than n.

for(int i=0; i<n; i++){
   if(i%2 == 1)
     System.out.println( i + " ");
}

We can see that the print statement is executed only half the time, but the complexity of the loop is still O(n) because the number of iterations still depends on the size of the input. The same is the complexity of the code below:

for(int i=0; i<n/2; i++){
     System.out.println( i + " ");
}

O(n^2) – Quadratic time complexity

A typical quadratic complexity function has this format: a loop within a loop, where both loops depend on the size of the input.

for(int i=0; i<n; i++){
   for(int j=0; j<n; j++){
      System.out.println(i+j);
   }
}

But there are some cases where it is not quite as evident and can lead to misconceptions. One of the common quadratic complexity codes is below, and students and professionals alike make mistakes identifying it.

for(int i=0; i<n; i++){
   for(int j=i+1; j<n; j++){
      System.out.println(i+j);
   }
}

The cause for concern is that the inner loop does not run exactly n times. Let's see why this piece of code is still O(n^2).

For i = 0: the inner loop executes n-1 times
For i = 1: the inner loop executes n-2 times
For i = 2: the inner loop executes n-3 times
...
For i = n-1: the inner loop executes 0 times

So the total number of times the print statement executes is (n-1) + (n-2) + (n-3) + … + 1 + 0 = n(n-1)/2, which is O(n^2).

Another common piece of code where candidates miscalculate the time complexity is below:

for(int i=0; i<a; i++){
   for(int j=0; j<b; j++){
      System.out.println(i+j);
   }
}

It is a common tendency to report the complexity of this code as O(n^2) just by looking at the two loops, even more so when there is an n somewhere in the function. Be careful: there are two different variables, a and b, controlling the loops, and there is no prior knowledge of any relationship between the two. So the correct way to report the complexity is O(a * b), not O(n^2).

Examples of quadratic complexity algorithms are: finding duplicates in an array, insertion sort, bubble sort, finding all the pairs of an array and many more. We will learn more when we solve problems.

O(log n) – Logarithmic time complexity

Logarithmic time complexities usually apply to algorithms that divide the input into multiple parts every time. For instance, the binary search algorithm divides the array into two halves and discards one of them each time. We keep doing that until there is no element left to look at. So, in the first iteration we have size n, then n/2, then n/4 and so on till we reach 1. How many iterations does it take to reach 1 from n by dividing by 2 each time? It is log_2(n) times.
Hence the complexity of the binary search algorithm is O(log n); you can deduce the same using the Master Theorem.
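A minimal iterative sketch of binary search makes the halving visible; each pass discards half of the remaining range:

public class BinarySearch {
    // Returns the index of key in the sorted array a, or -1 if absent.
    // The search range halves every iteration, so at most about
    // log_2(n) iterations are needed: O(log n).
    static int binarySearch(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // written to avoid overflow
            if (a[mid] == key) return mid;
            if (a[mid] < key) low = mid + 1;  // discard the left half
            else high = mid - 1;              // discard the right half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 12, 16, 23, 38};
        System.out.println(binarySearch(sorted, 23)); // prints 5
    }
}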

Adding a new element to a balanced binary search tree is also a logarithmic algorithm.
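A sketch of such an insertion, with a minimal Node class assumed here for illustration (the rebalancing that keeps the height logarithmic is omitted):

class Node {
    int val;
    Node left, right;
    Node(int val) { this.val = val; }
}

class BST {
    // Each comparison descends one level; a balanced tree has about
    // log_2(n) levels, so insertion costs O(log n).
    static Node insert(Node root, int val) {
        if (root == null) return new Node(val);
        if (val < root.val) root.left = insert(root.left, val);
        else root.right = insert(root.right, val);
        return root;
    }
}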

In the next post, we discuss the remaining common types of complexity with examples. Please comment if anything is missing or wrong.
If you are preparing for an interview and looking for personalized coaching for preparation, please reach out to us or book a free session with us.

Master theorem for complexity analysis

What is Master Theorem?

The Master theorem provides a way to analyze the complexity of recursive functions by substituting values for variables. The Master Theorem applies to recurrences of the following form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
These kinds of recurrence relations occur in the analysis of many “divide and conquer” algorithms like binary search or merge sort.

But what do these variables a, b and n mean?
n is the size of the input or the problem.
a is the number of subproblems in the recursion; that is, are we dividing the problem into two halves, or 3, or 5 parts? For example, for the binary search algorithm a = 1, and for the merge sort algorithm a = 2.
n/b is the relative subproblem size, i.e., at what rate is the input reduced? E.g., binary search and merge sort cut the input in half.
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and the cost of merging the solutions to the subproblems.

Once we have a, b and f(n) = O(n^c), it is easy to find the complexity of the algorithm by comparing against this expression:

O(n^(log_b a))

However, that’s not all: we still have f(n) in the equation, and the final runtime of your algorithm depends on the relationship between n^(log_b a) and f(n).

There are three possible scenarios:
Scenario 1: f(n) = O(n^c) and c < log_b(a); then the complexity is O(n^(log_b a)). This means the recursion takes more time than the work to divide and combine.

Example:
T(n) = 8T(n/2) + 1000n^2
Here, log_b(a) = log_2(8) = 3 > c = 2; hence the complexity is O(n^(log_b a)) = O(n^3).

Scenario 2: f(n) = O(n^c log^k n) for some k >= 0 and c = log_b(a); then the complexity is O(n^(log_b a) log^(k+1) n). It essentially means the runtime is the same inside and outside the recursion.

Example:
T(n) = 2T(n/2) + 10n
10n can be written as O(n^1 log^0 n), so c = 1 and k = 0.
Here, log_b(a) = log_2(2) = 1 = c; hence the complexity is O(n^(log_b a) log^(k+1) n) = O(n log n).

Scenario 3: f(n) = O(n^c) and c > log_b(a); then the complexity is O(n^c). It essentially means the runtime outside the recursion is more than the split and recurse work.

Example:
T(n) = 2T(n/2) + n^2
Here, log_b(a) = log_2(2) = 1 < c = 2; hence the complexity is O(n^c) = O(n^2).

Exceptions to Master Theorem

1. T(n) = 2^n T(n/2) + n^n

This is not admissible because a is not constant here. For the Master theorem to be applicable, a and b must be constants.

2. T(n) = 0.5 T(n/2) + n^2

This is not admissible because a < 1, which is to say we are reducing the problem to less than one subproblem. For the Master theorem to be applicable, a must be at least 1.

3. T(n) = 64T(n/2) - n^2 log n

This is not admissible because f(n) is negative.

4. T(n) = 64T(n/2) + 2^n

This is not admissible because f(n) is not polynomial.

Master Theorem Examples

Let’s apply the Master theorem to some well-known algorithms and see if it works.
Binary Search Algorithm
In the binary search algorithm, depending on the relationship between the middle element and the key, we discard one part of the array and look into the other. Also, from above, we know the Master theorem:

T(n) = aT(n/b) + f(n)

In this case, a is 1 as we reduce to only one subproblem, and b is 2 as we divide the input in half. No work is done outside of the recursion apart from the comparison, hence f(n) is O(1), i.e., c = 0.

log_b(a) is log_2(1) = 0, so log_b(a) is equal to c = 0 and scenario 2 applies. The complexity of the algorithm is then O(n^(log_b a) log^(k+1) n), where k = 0. Substituting values, we get the complexity of the binary search algorithm as O(log n).

Merge Sort Algorithm
In the merge sort algorithm, we split the array into two equal parts and sort them individually. Apart from the split, we do a merge of elements, which takes O(n) time. Let's find a and b in the Master Theorem equation.

T(n) = aT(n/b) + f(n)

In this case, a is 2 as we reduce to two subproblems, and b is 2 as we divide the input in half. The work done outside of the recursion is the merge, hence f(n) is O(n), i.e., c = 1.

log_b(a) is log_2(2) = 1, so log_b(a) is equal to c = 1 and scenario 2 applies again. The complexity of the algorithm is O(n^(log_b a) log^(k+1) n), where k = 0. Substituting values, we get the complexity of the merge sort algorithm as O(n log n).

Please book a free session if you are looking for coaching to prepare for your next technical interview. At Algorithms and Me, we provide personalized coaching and mock interviews to prepare you for Amazon, Google, Facebook, etc. interviews.

References

  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
  • Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. ISBN 0-471-38365-1. The master theorem (including the version of Case 2 used here, which is stronger than the one from CLRS) is on pp. 268–270.

    Arrays With Elements In The Range 1 to N

    When an array has all its elements in the range of 1 to N (where N is the length of the array), we can use the indices to store the ordered state of the elements in the array. This ordered state can in turn be used to solve a variety of problems, which we'll explore soon. First, a very simple demonstration of this property.

    Here is an array which has unique elements in the range of 1 to N.

    Given array (A): 5, 3, 1, 4, 2
    Indices:         0, 1, 2, 3, 4

    Sort in Linear Time

    The first use-case of this unique property is being able to sort in O(N) time, i.e., a special case (all unique elements) of Counting Sort. The crux of this sort is to check whether an element is at its corresponding index and swap it to its correct index if it's not. Following is a demonstration of this logic:

    Given array (A): 5, 3, 1, 4, 2
    Indices:         0, 1, 2, 3, 4

    For each A[i], check whether A[A[i] - 1] equals A[i] or not. If they are not equal, swap the element at A[A[i] - 1] with A[i]. Basically, index i holds its correct value when A[i] contains i+1.

    In the above case, let’s start with i = 0.

    A[A[0] - 1] or A[5-1] or A[4] is 2, while A[0] = 5. This means that A[A[i] - 1] is not equal to A[i], so A[0] is not in its correct position. We need to swap in order to put A[0] -> 5 to its correct position, which is index 4; A[0] will hold 2 after the swap. Similarly, we need to repeat this check & swap for all the elements.

    What if we cancel out the common terms and modify the check from A[i] != A[A[i] - 1] to i != A[i] - 1?

    Find The Missing Integer

    A similar approach can help us find the smallest missing positive-integer in a given array. By smallest missing positive-integer, we just mean the smallest positive integer that does not exist in the given list of numbers. For example: 

    Given Array: -2, 3, 0, 1, 3
    In the above case, the smallest missing positive integer is 2.

    If we were to apply the usual sorting techniques and then scan the array for the smallest absent positive integer, it would imply a time complexity of O(N log N) + O(N). We can definitely do better than this! At first glance, it seems that this problem does not fit the property of elements being in the range 1 to N, since the numbers in the given array are well outside that range; but to solve it we still only need to figure out whether the elements 1 to N are present in the given array or not.

    How do we know whether the given array has the elements 1 to N? We can use the counting sort discussed earlier to put each element in its "correct position", i.e., index 0 should hold 1, index 1 should hold 2, and so on. The smallest index that does not hold its correct element gives the missing integer.

    If we sort the given array using the counting sort described above, we get: 1, 0, 3, -2, 3. The smallest index i that does not hold its correct value i+1 gives us the answer. In this case, that index is 1, since it does not hold 2; thus the smallest missing positive integer is 2.
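    A minimal sketch of this idea (firstMissingPositive is a hypothetical name, not code from the post): place every value v with 1 <= v <= N at index v-1 using the same check & swap, then scan for the first index that does not hold i+1.

    static int firstMissingPositive(int[] nums) {
        int n = nums.length;
        // Phase 1: cyclic placement, the counting sort described above.
        for (int i = 0; i < n; ) {
            int v = nums[i];
            if (v >= 1 && v <= n && nums[v - 1] != v) {
                nums[i] = nums[v - 1]; // swap v to its correct index v-1
                nums[v - 1] = v;
            } else {
                i++; // out of range, or already in place
            }
        }
        // Phase 2: the first index not holding i+1 reveals the answer.
        for (int i = 0; i < n; i++) {
            if (nums[i] != i + 1) return i + 1;
        }
        return n + 1; // all of 1..n are present
    }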

    Find The Duplicate Element

    The third use-case of this property is figuring out a duplicate element without using any extra space. We can iterate over the array A and mark the corresponding index of each encountered element as negative, unless it has already been marked negative! For example: if A[1] = 3 (or -3), then mark A[abs(3) - 1] as negative; this way, whenever we encounter 3 (or -3) again in any A[i], we will know that the value 3 has been visited before, since A[3-1] will be negative.

    Given array (A): 5, 3, 1, 4, 3
    Indices:         0, 1, 2, 3, 4

    When we encounter A[0] i.e. 5, we make A[5-1] i.e. A[4] negative, so the array becomes: 
    5,3,1,4,-3
    Next, we encounter A[1] i.e. 3, we make A[3-1] i.e. A[2] negative, so the array becomes: 
    5,3,-1,4,-3
    Next, we encounter A[2] i.e. -1, we make A[1-1] i.e. A[0] negative, so the array becomes: 
    -5,3,-1,4,-3
    Next, we encounter A[3] i.e. 4, we make A[4-1] i.e. A[3] negative, so the array becomes: 
    -5,3,-1,-4,-3
    Next, we encounter A[4] i.e. -3, we want to make A[3-1] i.e. A[2] negative, but in this case, A[2] is already negative thus we know that A[2] has been visited before! Which means Abs(A[4]) i.e 3 is the duplicate element.
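
    A compact sketch of this sign-marking walk (findDuplicate is a hypothetical name; it assumes the values lie in the range 1 to N and at least one duplicate exists):

    static int findDuplicate(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int v = Math.abs(a[i]);     // the value, ignoring earlier marks
            if (a[v - 1] < 0) return v; // index v-1 already visited: duplicate
            a[v - 1] = -a[v - 1];       // mark index v-1 as visited
        }
        return -1; // no duplicate found
    }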


    Here is a snippet demonstrating the code for sorting an array in linear time as per the above approach. The same approach can be used to solve the other two applications, i.e., finding the duplicate and finding the missing integer.

    // Sorts an array whose elements all lie in the range 1 to N (N = length).
    // Each swap places one value at its correct index, so the loop does
    // O(N) work overall.
    void sortInRange(int[] nums) {
        for (int i = 0; i < nums.length; ) {
            int v = nums[i];
            // Only values in 1..N have a correct index; skip everything else.
            if (v >= 1 && v <= nums.length && nums[v - 1] != v) {
                // v is not at its correct index yet: swap it there.
                nums[i] = nums[v - 1];
                nums[v - 1] = v;
            } else {
                i++; // out of range or already in place: move on
            }
        }
    }
    

     

    If you are preparing for a technical interview in companies like Amazon, Facebook, etc and want help with preparation, please register for a coaching session with us.

    Range sum query - Immutable array

    Write a service which, given an integer array, returns the sum of the elements between indices i and j (i ≤ j), inclusive. Example: nums = [-2, 0, 3, -5, 2, -1]
    sumRange(0, 2) -> 1
    sumRange(2, 5) -> -1
    sumRange(0, 5) -> -3

    Also, the input set does not change during the calls to the sumRange(i,j).

    The brute force solution is to calculate the sum of all the elements from A[i] to A[j] whenever sumRange(i,j) is called. This method has a time complexity of O(n). It is OK for small scale, but as the number of queries goes up, processing all the numbers from i to j for every query becomes inefficient. Also, imagine a case where the array itself is very large; then O(n) complexity for each query will choke your service.

    Range sum query - Immutable array: thoughts

    There are two hints for optimization in the question. First, the array is immutable; it does not change. Second, we have to build a service, which means we have a server with us. These two things allow us to pre-compute and store results even before any query is made.

    Now, the question is what do we pre-compute and how do we store it? We can precompute the sum of all the elements between every pair of indices i and j and store them in a two-dimensional array, where range[i][j] stores the sum of all the elements between index i and j. This uses O(n^2) additional memory; however, the response time for each sumRange query will be constant. The preprocessing step is also O(n^2).

    Can we optimize for space as well? If I know the sum of all the elements from index 0 to index i and sum of all the elements from 0 to j, can I find the sum of all the elements between i and j? Of course, we can do it.

     Sum(i,j) = Sum(0,j) - Sum(0,i) + A[i]. 

    The diagram below explains it.

    [Diagram: Sum(i,j) computed from the prefix sums Sum(0,j) and Sum(0,i)]

    However, the integer array is not passed in the query request, so we cannot use A[i] while calculating the sum. Instead, we use the equivalent formula: Sum(i,j) = Sum(0,j) - Sum(0,i-1). For example, with nums = [-2, 0, 3, -5, 2, -1], Sum(0,5) = -3 and Sum(0,1) = -2, so sumRange(2,5) = -3 - (-2) = -1.

    We will pre-calculate the sum of all the elements between index 0 and j, for every j from 0 to n-1.

    Implementation

    class NumArray {
    
        int[] rangeSum;
        public NumArray(int[] nums) {
            rangeSum = new int[nums.length];
            
            if(nums.length>0){
                rangeSum[0] = nums[0]; 
                for(int i=1; i<nums.length; i++){
                    rangeSum[i] = rangeSum[i-1] + nums[i];
                }
            }
        }
        
        public int sumRange(int i, int j) {
            if(i==0) return rangeSum[j];
            return rangeSum[j] - rangeSum[i-1];
        }
    }
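
    As a quick check against the example above, a main method added inside NumArray reproduces the expected sums:

    public static void main(String[] args) {
        NumArray service = new NumArray(new int[]{-2, 0, 3, -5, 2, -1});
        System.out.println(service.sumRange(0, 2)); // prints 1
        System.out.println(service.sumRange(2, 5)); // prints -1
        System.out.println(service.sumRange(0, 5)); // prints -3
    }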
    

    Now the preprocessing step is O(n) and uses O(n) additional space, while the query response time is O(1).

    Please share if there is anything wrong or missing in the post. If you are preparing for an interview and need coaching to prepare for it, please book a free demo session with us.

    Find combinations which add up to a number

    Combination sum problem

    Given an array of integers (candidates) without duplicates and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.

    Also, the same candidate can occur in a combination multiple times.

    For example: Input: candidates = [4,3,5,9], target = 9; a solution set is: [ [9], [3,3,3], [4,5] ]

    How do we go about it? What happens if I take the coin 4 in the current example? Then all I need to find is whether there is a combination in the candidates array that adds up to 9-4 = 5. This looks like recursion. For recursion, we need a termination condition: if I have no candidates left to add and the target is still greater than zero, whatever combination I have so far has no value, so I terminate the recursion.

    Second, what if I have already found a combination which adds up to the target? Then I put that combination in the list of combinations and return.

    What happens in the recursive implementation? We go through each coin, add it to the current combination, and see if it leads to the target. If it does, the combination is added to the result list. If not, we remove the current coin (backtrack) from the current combination and try the next coin.

    This approach follows the exhaustive search and backtracking paradigm of problem-solving, where you search the entire solution space to find the answer. However, in this case, we can prune a search path as soon as we know that the current set of candidates adds up to more than the target.

    Combination sum : implementation

    class Solution {
        public List<List<Integer>> combinationSum(int[] candidates,
                                                  int target) {
            /* The result list contains all the combination 
               which add up to target.
            */
            List<List<Integer>> result = new ArrayList<List<Integer>> ();
            
            //We start with the first coin and search exhaustively.
            combinationSumUtil(candidates,
                               target,
                               result,
                               new ArrayList<Integer>(),
                                0
            );
            
            return result;
            
        }
        
        public void combinationSumUtil(int[] candidates, 
                                      int target,
                                      List<List<Integer>> result,
                                      List<Integer> current, 
                                      int index){
            
            /* 
               First termination condition: if there are no coins left
               and required target is more than zero.
            */
            if(target > 0 && index == candidates.length){
                return;    
            }
    
            /* 
               Second termination condition: if target is zero,
               we can add the current combination to the result
            */
            if(target == 0 && index < candidates.length){
                result.add(new ArrayList<>(current));
                return;
            }
            
            /* 
               Start from the current index, and go through
               all the coins.
            */
            for(int i=index; i<candidates.length; i++){
                /* 
                   This is where we prune the branches 
                   of our exhaustive search
                */
                if(target - candidates[i] >=0){
                    current.add(candidates[i]); // add to the list
                    combinationSumUtil(candidates, 
                                       target-candidates[i],
                                       result, current, i);
                    
                    /* Remove the candidate from the list and 
                       check other combinations.
                    */  
                    if(current.size() > 0)
                        current.remove(current.size()-1);
                }
            }
            
        }
    }
    

    The time complexity is C(n,1) + C(n,2) + … + C(n,n) = 2^n - 1, which is O(2^n).
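
    As a sanity check, running the solution above on the example input reproduces the expected combinations:

    public static void main(String[] args) {
        Solution s = new Solution();
        // Expected output: [[4, 5], [3, 3, 3], [9]]
        System.out.println(s.combinationSum(new int[]{4, 3, 5, 9}, 9));
    }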

    The beauty of this solution is that it also works with negative candidates, where a dynamic programming solution may not.

    Maximum area rectangle in a histogram

    A histogram is a diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval. Below is an example of a histogram.

    [Figure: an example histogram with bar heights 2, 1, 5, 6, 2, 3]

    Given a histogram whose class interval is 1, find the maximum area rectangle in it. Let me explain the problem in more detail.

    In the histogram above, there are at least 6 rectangles, with areas 2, 1, 5, 6, 2, and 3. Are there more rectangles? Yes, we can make more by combining adjacent bars; a few are shown below.

    Apparently, the largest rectangle in the example histogram has area 2 x 5 = 10. The task is to find the rectangle with maximum area in a given histogram. The histogram will be given as an array of bar heights; in the example, the input is [2,1,5,6,2,3].

    Maximum area rectangle: thoughts

    The first insight from the rectangles above is: a bar can be part of any rectangle whose height is less than or equal to its own height. For each bar of height h[i], check which bars on the left can be part of a rectangle with this bar: all the bars on the left with a height greater than or equal to the current bar's height qualify.
    Similarly, all the bars on the right side with a height greater than or equal to the current bar's height can be part of such a rectangle.
    The idea is to calculate leftLimit and rightLimit, the indices of the nearest smaller bars on each side, and find the area (rightLimit - leftLimit - 1) * h[i].
    Check if this area is greater than the previously known maximum area; if so, update the maximum area, else continue to the next bar.

    class Solution {
        public int largestRectangleArea(int[] heights) {
            
            if(heights.length == 0) return 0;
            int maxArea = Integer.MIN_VALUE;
    
            for(int i=0; i<heights.length; i++){
                //Find the left limit for current block
                int leftLimit = findLeftLimit(heights, i);
    
                //Find the right limit for current block
                int rightLimit = findRightLimit(heights, i);
    
                int currentArea = (rightLimit - leftLimit-1) * heights[i];
                maxArea = Integer.max(maxArea, currentArea);
            }
    
            return maxArea;
        }
    
        private int findLeftLimit(int [] heights, int index){
            int j = index-1;
            while (j >= 0 && heights[j] >= heights[index]) j--;
    
            return j;
        }
    
        private int findRightLimit(int [] heights, int index){
            int j = index+1;
            while (j < heights.length && heights[j] >= heights[index])
                j++;
    
            return j;
        }
    }
    

    The time complexity of this implementation is O(n^2): we scan to the left and right of each bar, which takes O(n) operations, and we do it for n bars. Can we optimize the time complexity?

    If heights[j] >= heights[i] and the leftLimit of index j is already known, can we safely jump straight to leftLimit[j] while searching for the leftLimit of index i? Can we say the same thing for rightLimit as well? The answer to both questions is yes. If we store the left and right limits of all the indices already seen, we can avoid re-scanning them.

    class Solution {
        public int largestRectangleArea(int[] heights) {
            
            if(heights.length == 0) return 0;
    
            int maxArea = Integer.MIN_VALUE;
    
            //Finds left limit for each index, complexity O(n)
            int [] leftLimit = getLeftLimits(heights);
            //Find right limit for each index, complexity O(n)
            int [] rightLimit = getRightLimits(heights);
    
            for(int i=0; i<heights.length; i++){
                int currentArea = 
                    (rightLimit[i] - leftLimit[i] -1) * heights[i];
                maxArea = Integer.max(maxArea, currentArea);
            }
    
            return maxArea;
        }
    
        private int[] getLeftLimits(int [] heights){
    
            int [] leftLimit = new int[heights.length];
            leftLimit[heights.length-1] = -1;
    
            for(int i=0; i<heights.length; i++) {
                int j = i - 1;
                while (j >= 0 && heights[j] >= heights[i]) {
                    j = leftLimit[j];
                }
                leftLimit[i] = j;
            }
            return leftLimit;
        }
    
        private int[] getRightLimits (int [] heights){
    
            int [] rightLimit = new int[heights.length];
            rightLimit[heights.length-1] = heights.length;
    
            for(int i=heights.length-2; i>=0; i--){
                int j = i+1;
                while(j<heights.length 
                          && heights[j] > heights[i]){
                    j = rightLimit[j];
                }
                rightLimit[i] = j;
            }
            return rightLimit;
        }
    }
    

    The array leftLimit contains, at index i, the closest index j to the left of i such that heights[j] < heights[i]. You can think of each value of the array as a pointer (or an arrow) pointing to such a j for every i. How do we calculate leftLimit[i]? Just point the arrow one step to the left and, if necessary, follow the arrows from there until you reach the proper j. The key idea, and the reason this algorithm runs in O(n), is that each arrow is followed at most once.

    Largest area rectangle: stack-based solution

    There is a classic method to solve this problem using a stack as well. Let's see if we can build a stack-based solution using the information we already have. Suppose we do not calculate the area of the rectangle which includes a bar while we are processing that bar. When should we process it, then? If we want to create a rectangle with the height of this bar, we need to know the left and right boundaries of such a rectangle first, so we put the bar on a stack until we do.
    Now, when processing bar j, if height[j] is less than the height of the bar on top of the stack, we pop out the top bar. Why? Because bar j is the first bar on the right with a height less than the height of the bar at the top of the stack: if we want to make a rectangle with the height of the popped bar, index j is its right boundary. This also gives away that all the bars on the stack are in increasing order of height, as we never put a bar with a smaller height on top of a taller one. It means the next bar on the stack is the first bar to the left with a height lower than the popped bar, i.e., the left boundary. To calculate the area of the rectangle with height h[top], we take the width as j - stack.peek() - 1.

    So the idea is that:

    1. For each bar, take its height as the rectangle’s height. Then find the left and right boundaries of this rectangle.
    2. The second top bar in the stack is always the first bar lower than the top bar on the stack on the left.
    3. The bar that j points to is always the first bar lower than the top bar in the stack on the right.
    4. After steps 2 and 3, we know the left and right boundaries, and hence the width and the area.
    private int maxAreaUsingStack(int[] heights){
    
            Stack<Integer> s = new Stack<>();
    
            int maxArea = 0;
            for(int i=0; i<=heights.length; i++){
                //Handling the last case
                int h = i == heights.length ? 0 : heights[i];
                while(!s.empty() && h < heights[s.peek()]){
                    int top = s.pop();
                    int leftLimit = s.isEmpty() ? -1 : s.peek();
                    int width = i-leftLimit-1;
    
                    int area = width * heights[top];
                    maxArea = Integer.max(area, maxArea);
                }
                s.push(i);
            }
            return maxArea;
        }
    
    The time complexity of the code is O(n), with an additional space complexity of O(n). If you are preparing for a technical interview at companies like Amazon, Facebook, etc. and want help with preparation, please register for a coaching session with us.

    Minimizing maximum lateness

    Minimizing maximum lateness : Greedy algorithm

    Since we have chosen greed, let's continue with it for at least one more post. Today's problem is to minimize the maximum lateness of a set of tasks. Let me clarify the problem: we are given a processor which processes one process at a time and, as always, a list of processes to be scheduled on it, with the intention that the maximum lateness should be minimized. Contrary to previous problems, this time we are not given start and end times; instead, for each process we are given the length of time t_i it will run and the deadline d_i it has to meet; f_i is the actual finish time of the process.

    The lateness of a process is defined as
    l_i = max{0, f_i - d_i}, i.e., the length of time past its deadline by which it finishes.
    The goal is to schedule all tasks so as to minimize the maximum lateness L = max l_i. For example:

    [Figure: an example set of jobs and a schedule minimizing the maximum lateness]

    Minimizing maximum lateness : algorithm

    Let’s decide our optimization strategy. There are a few natural orders in which jobs could be scheduled: shortest job first, earliest deadline first, least slack time first.

    Let’s see if any of the above strategies gives the optimal solution. For shortest processing time first, consider the example P1 = (t=1, d=100) and P2 = (t=10, d=10). If we schedule the shortest job first, in order (P1, P2), P2 finishes at time 11 and the maximum lateness is 1; but if we take them as (P2, P1), the lateness is 0. So, clearly, taking the shortest process first does not give an optimal solution.

    Check the smallest slack time approach yourself; see if you can come up with a counterexample showing that it does not work.

    That leaves us with only one option: take the process with the most pressing deadline, i.e., the not-yet-scheduled process with the smallest deadline. If you noticed, the example given in the problem statement is solved using this method, so we know it works.

    1. Sort all jobs in ascending order of deadlines
    2. Start with time t = 0
    3. For each job in the list
      1. Schedule the job at time t
      2. Finish time = t + processing time of job
      3. t = finish time
    4. Return (start time, finish time) for each job

    Minimizing maximum lateness : implementation

    from operator import itemgetter
    
    # Each job is a tuple: (id, processing_time, deadline)
    jobs = [(1, 3, 6), (2, 2, 9), (3, 1, 8), (4, 4, 9),
            (5, 3, 14), (6, 2, 15)]
    
    def get_minimum_lateness():
        schedule = []
        max_lateness = 0
        t = 0
    
        # Earliest deadline first: sort the jobs by deadline
        sorted_jobs = sorted(jobs, key=itemgetter(2))
    
        for job in sorted_jobs:
            job_start_time = t
            job_finish_time = t + job[1]
    
            t = job_finish_time
            if job_finish_time > job[2]:
                max_lateness = max(max_lateness, job_finish_time - job[2])
            schedule.append((job_start_time, job_finish_time))
    
        return max_lateness, schedule
    
    max_lateness, sc = get_minimum_lateness()
    print("Maximum lateness will be: " + str(max_lateness))
    for t in sc:
        print(t[0], t[1])

    The complexity of the implementation is dominated by the sort, which is O(n log n); the rest of the processing takes O(n).

    Please share your suggestions or if you find something is wrong in comments. We would love to hear what you have to say. If you find this post interesting, please feel free to share or like.

    Coin change problem : Greedy algorithm

    Today, we will look at a very common problem which can be solved using the greedy algorithm. If you are not very familiar with greedy algorithms, here is the gist: at every step of the algorithm, you take the best available option and hope that everything turns out optimal at the end, which it usually does. The problem at hand is the coin change problem: given coins of denominations 1, 5, 10, 25 and 100, find a way to give a customer an amount with the fewest number of coins. For example, if I ask you to return change for 30, there is more than one way to do so:

     
    Amount: 30
    Solutions : 3 X 10  ( 3 coins ) 
                6 X 5   ( 6 coins ) 
                1 X 25 + 5 X 1 ( 6 coins )
                1 X 25 + 1 X 5 ( 2 coins )

    The last solution is the optimal one, as it gives the change in only 2 coins, whereas all the other solutions need more than two.

    The greedy solution for the coin change problem is very intuitive and is called the cashier's algorithm. The basic principle is: at every step, take the largest coin which fits into the amount we still need change for. At the end, you will have the optimal solution.

    Coin change problem : Algorithm

    1. Sort the n denominations in increasing order of value.
    2. Initialize the set of coins as empty: S = {}
    3. While the amount is not zero:
    3.1 Let C_k be the largest coin such that amount >= C_k
    3.1.1 If there is no such coin, return "no viable solution"
    3.1.2 Else include the coin in the solution S
    3.1.3 Decrease the remaining amount: amount = amount - C_k

    Coin change problem : implementation

    #include <stdio.h>
     
    int coins[] = { 1,5,10,25,100 };
     
    int findMaxCoin(int amount, int size){
    	for(int i=0; i<size; i++){
    	    if(amount < coins[i]) return i-1;
    	}
    	// The amount is at least as large as the biggest coin.
    	return size-1;
    }
    
    int findMinimumCoinsForAmount(int amount, int change[]){
     
    	int numOfCoins = sizeof(coins)/sizeof(coins[0]);
    	int count = 0;
    	while(amount){
    	    int k = findMaxCoin(amount, numOfCoins);
    	    if(k == -1){
    	        printf("No viable solution");
    	        return -1; // bail out instead of looping forever
    	    }
    	    amount -= coins[k];
    	    change[count++] = coins[k];
    	}
    	return count;
    }
     
    int main(void) {
    	int change[10]; // This needs to be dynamic
    	int amount = 34;
    	int count = findMinimumCoinsForAmount(amount, change);
     
    	printf("\n Number of coins for change of %d : %d", amount, count);
    	printf("\n Coins : ");
    	for(int i=0; i<count; i++){
    		printf("%d ", change[i]);
    	}
    	return 0;
    }
    

    What will be the time complexity of the implementation? First of all, sorting the array of n coin denominations costs O(n log n) (here the array is already sorted). The while loop runs O(amount) times in the worst case, when all we have is the 1-denomination coin, and each iteration scans the coins in O(n). Overall, the complexity of the coin change problem becomes O(n log n) + O(n * amount).

    Will this algorithm work for all sets of denominations? The answer is no. It will not give any solution at all if there is no coin with denomination 1. Moreover, even when a solution exists, the greedy choice may not be optimal for arbitrary denominations: for coins {1, 3, 4} and amount 6, the cashier's algorithm picks 4 + 1 + 1 (3 coins), while the optimal answer is 3 + 3 (2 coins). So be careful when applying this algorithm.

    Please share if you have any suggestion or if you want me to write on a specific topic. If you liked the post, share it!

    Disjoint set data structure

    A disjoint set data structure, also known as union-find, maintains a collection S = {S_1, S_2, ..., S_n} of disjoint dynamic sets. Sets are said to be disjoint if their intersection is empty. For example, {1,2,3} and {4,5,6} are disjoint sets, but {1,2,3} and {1,3,5} are not, as their intersection {1,3} is not empty. Another important thing about disjoint sets is that every set is represented by a member of that set, called its representative.

    Operations on this disjoint set data structure:
    1. Make Set: Creates a new set with one element x; since the sets are disjoint, we require that x not already be in any of the existing sets.
    2. Union: Merges the two sets containing x and y, say Sx and Sy, and destroys the original sets.
    3. Find: Returns the representative of the set which the element belongs to.

    Let’s take an example and see how disjoint sets can be used to find the connected components of an undirected graph.

    To start with, we will make a set for each vertex by using make-set operation.

    for each vertex v in G(V)
        do makeSet(v)
    

    Next, process all the edges (u,v) in the graph and connect set(u) and set(v) if the representatives of the set containing u and the set containing v are not the same.

    for each edge (u,v) in 𝐺(E)
        do if findSet(u) != findSet(v)
            then union(u, v)
    

    Once the above preprocessing steps have run, we can easily answer whether two vertices u and v are part of the same connected component or not:

    boolean isSameComponent(u, v)
     if findSet(u)==findSet(v)
         return True
     else 
         return False
    

    To find how many components there are, we can count how many disjoint sets exist; that count is the number of connected components in the graph. Let's take an example and see how it works, as sketched below.
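
    For instance, assuming the array representation described below, where an element r is a representative exactly when A[r] == r, a hypothetical component counter is a one-pass scan:

    // Counts connected components as the number of representatives.
    static int countComponents(int[] A) {
        int count = 0;
        for (int i = 0; i < A.length; i++) {
            if (A[i] == i) count++; // i is its own representative
        }
        return count;
    }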

    [Figure: an example undirected graph and its connected components]

    Below is a table showing the processing of each edge in the graph shown in the figure above.

    [Table: edges processed and the resulting disjoint sets]

    Now, how can we implement sets and quickly do union and find operations? There are two ways to do it.

    Disjoint set representation using an array

    A simple implementation of a disjoint set uses an array A which maintains the representative of element i in A[i]. For this implementation to work, all the elements of the sets must be in the range 0 to N-1, where N is the size of the array.

    Initially, the makeSet() operation sets A[i] = i for each i between 0 and N-1, creating the initial singleton sets.

    [Figure: initial array, where each element is its own representative]

    for (int i=0; i<N; i++) A[i] = i;
    

    For the union operation on the sets that contain integers u and v, we scan the array A and change all the elements that have the value A[u] to the value A[v]. For example, if we want to add an edge between 1 and 2 in the above set, the union operation replaces the entries equal to A[1] with the value A[2].

    [Figure: array after union(1, 2)]

    Now suppose we want to add an edge between 3 and 1. In this case, u = 3 and v = 1, so we replace all the entries of A equal to A[3] with the value A[1]. The final array looks like this:

    [Figure: array after union(3, 1)]

    Similarly, if we want to add an edge from 6 to 7:

    [Figure: array after union(6, 7)]

    //change all elements from A[u] to A[v].
    void union(int A[], int u, int v){
        int temp = A[u];
        for(int i=0; i<A.length; i++){
            if(A[i] == temp)
                A[i] = A[v]; 
        }
    }
    

    findSet(v) operation returns the value of A[v].

    int findSet(int A[], int v){
        return A[v];
    }
    

    The complexity of the makeSet() operation is O(n), as it initializes the entire array. Each union operation takes O(n); if we have to connect n nodes, the union operations alone cost O(n^2). The findSet() operation has constant time complexity.

    We can represent a disjoint set using a linked list too. In that case, each set is a linked list whose head is the representative element. Each node contains two pointers: one to the next element in the set and another to the representative of the set.

    To initialize, each element is put in its own linked list. For union(u, v), we append the linked list containing u to the end of the linked list containing v and change the representative pointer of each moved node to the representative of the list that contained v.

    The complexity of the union operation is again O(n), while the find operation is O(1), as each node stores its representative.

    Disjoint set forest

    The disjoint-set forest data structure is implemented by changing the interpretation of the elements of array A. Now each A[i] represents an element of a set and points to another element of that set. The root element points to itself. In short, A[i] now points to the parent of i.

    The makeSet operation does not change; to start with, each element is its own parent.
    The union operation changes: if we want to connect u and v with an edge, we update A[root of u] with the root of v. How do we find the root of an element? Since A[i] is the parent of i, we move up the chain until we find an i with A[i] == i; that i is the root.

    //finding root of an element
    int root(int A[],int i){
        while(A[i] != i){
            i = A[i];
        }
        return i;
    }
    
    /*Changed union function where we connect 
      the elements by changing the root of 
      one of the elements
    */
    
    void union(int A[], int u, int v){
        int rootU = root(A, u);       
        int rootV = root(A, v);  
        A[rootU] = rootV ; 
    }
    

    This implementation has a worst-case complexity of O(n) for the union function, and the worst-case complexity of the findSet operation also becomes O(n).

    However, we can do some ranking on the sizes of the trees being connected: we make sure that the root of the smaller tree always points to the root of the bigger tree.

    void union(int[] A, int[] sz, int u, int v){
    
        // Find the roots of u and v by following parent pointers.
        int i = u;
        while (i != A[i]) i = A[i];
        int j = v;
        while (j != A[j]) j = A[j];
    
        if (i == j) return;
        // Compare tree sizes to put the smaller tree's root under
        // the bigger tree's root.
        if (sz[i] < sz[j]){
            A[i] = j;
            sz[j] += sz[i];
        }
        else {
            A[j] = i;
            sz[i] += sz[j];
        }
    }
    

    In the next few posts, we will discuss applications of this method to solve different problems on graphs.
    Please share if there is something wrong or missing. If you are preparing for an interview and want coaching sessions to prepare for it, please sign up for a free demo session.