Master theorem for complexity analysis

What is Master Theorem?

The Master theorem provides us the solution for the complexity analysis of recursive functions by substituting values for variables. The Master theorem applies to recurrences of the following form:

T(n) = aT(n/b) + f(n)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function.
These kinds of recurrence relations occur in the analysis of many “divide and conquer” algorithms like binary search or merge sort.

But what do these variables a, b and n mean?
n is the size of the input or the problem.
a is the number of subproblems in the recursion: are we dividing the problem into two halves, three, or five? For example, for the binary search algorithm a = 1, and for the merge sort algorithm a = 2.
n/b is the relative subproblem size. At what rate is the input reduced? E.g., binary search and merge sort cut the input in half.
f(n) is the cost of the work done outside the recursive calls, which includes the cost of dividing the problem and
the cost of merging the solutions to the subproblems.

Once we have a, b and f(n), it is easy to find the complexity of the algorithm by substituting the values in this expression:

O(n^(log_b a))

However, that's not all: we still have f(n) in the equation, and the final runtime of your algorithm depends on the relationship between n^(log_b a) and f(n).

There are three possible scenarios:
Scenario 1: f(n) = O(n^c) and c < log_b a; then the complexity is O(n^(log_b a)). This means the recursion takes more time than the divide and combine work.

Example:
T(n) = 8T(n/2) + 1000n^2
Here, log_b a = log_2 8 = 3 > c (= 2); hence the complexity is O(n^(log_b a)) = O(n^3)

Scenario 2: f(n) = O(n^c log^k n) for some k >= 0 and c = log_b a; then the complexity is O(n^(log_b a) log^(k+1) n). It essentially means the runtime is the same inside and outside the recursion.

Example:
T(n) = 2T(n/2) + 10n
10n can be written as O(n^1 log^0 n) with k = 0.
Here, log_b a = log_2 2 = 1 = c; hence the complexity is O(n^(log_b a) log^(k+1) n) = O(n log n)

Scenario 3: f(n) = O(n^c) and c > log_b a; then the complexity is O(n^c). It essentially means the runtime outside the recursion is more than the split and recurse work. (Strictly speaking, this case also needs the regularity condition a·f(n/b) <= k·f(n) for some constant k < 1, which holds for polynomial f(n).)

Example:
T(n) = 2T(n/2) + n^2
Here, log_b a = log_2 2 = 1 < c (= 2); hence the complexity is O(n^c) = O(n^2)
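
To see the substitution mechanically, here is a small Java sketch (illustrative only, not part of the original post) that, given a, b and f(n) = n^c log^k n, prints which scenario applies:

    import static java.lang.Math.abs;
    import static java.lang.Math.log;

    public class MasterTheorem {
        // Classifies T(n) = aT(n/b) + O(n^c log^k n) using the three scenarios above.
        static String classify(int a, int b, double c, int k) {
            double critical = log(a) / log(b); // log_b(a)
            if (abs(c - critical) < 1e-9) {    // c == log_b(a): scenario 2
                return "Scenario 2: O(n^" + critical + " log^" + (k + 1) + " n)";
            }
            if (c < critical) {                // scenario 1: recursion dominates
                return "Scenario 1: O(n^" + critical + ")";
            }                                  // scenario 3: f(n) dominates
            return "Scenario 3: O(n^" + c + (k > 0 ? " log^" + k + " n)" : ")");
        }

        public static void main(String[] args) {
            System.out.println(classify(8, 2, 2, 0)); // T(n) = 8T(n/2) + n^2 -> O(n^3)
            System.out.println(classify(2, 2, 1, 0)); // T(n) = 2T(n/2) + n   -> O(n log n)
            System.out.println(classify(2, 2, 2, 0)); // T(n) = 2T(n/2) + n^2 -> O(n^2)
        }
    }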

Exceptions to Master Theorem

1. T(n) = 2^n T(n/2) + n^n

This is not admissible because a is not constant here. For the Master theorem to be applicable, a and b must be constants.

2. T(n) = 0.5T(n/2) + n^2

This is not admissible because a < 1, which is to say we are reducing the problem to less than one subproblem. For the Master theorem to be applicable, a must be at least 1.

3. T(n) = 64T(n/2) - n^2 log n

This is not admissible because f(n) is negative.

4. T(n) = 64T(n/2) + 2^n

This is not admissible because f(n) is not polynomial.

Master Theorem Examples

Let's apply the Master theorem to some well-known algorithms and see if it works.
Binary Search Algorithm
In the binary search algorithm, depending on the relationship between the middle element and the key, we discard one part of the array and look into the other. Also, from above, we know the Master theorem:

T(n) = aT(n/b) + f(n)

In this case, a is 1 as we reduce to only one subproblem. b is 2 as we divide the input in half, and outside the recursion only a constant amount of work (the comparison) is done, hence f(n) is O(1).

log_b a is log_2 1 = 0. So log_b a is equal to c, which is 0. In that case, the complexity of the algorithm is given by O(n^(log_b a) log^(k+1) n), where k = 0. Substituting values, we get the complexity of the binary search algorithm as O(log n).
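
For reference, here is a minimal recursive binary search sketch; it calls itself once on half of the input with constant work outside the call, matching T(n) = T(n/2) + O(1):

    // Binary search: a = 1, b = 2, f(n) = O(1)
    static int binarySearch(int[] a, int key, int low, int high) {
        if (low > high) return -1;          // key not present
        int mid = low + (high - low) / 2;   // constant work outside the recursion
        if (a[mid] == key) return mid;
        if (a[mid] < key)                   // recurse into exactly one half
            return binarySearch(a, key, mid + 1, high);
        return binarySearch(a, key, low, mid - 1);
    }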

Merge Sort Algorithm
In the merge sort algorithm, we split the array into two equal parts and sort them individually. Apart from the split, we merge the sorted halves, which takes O(n) time. Let's find a and b in the Master theorem equation.

T(n) = aT(n/b) + f(n)

In this case, a is 2 as we reduce to two subproblems. b is 2 as we divide the input in half, and outside the recursion O(n) merge work is done, hence f(n) is O(n).

log_b a is log_2 2 = 1. So log_b a is equal to c, which is 1. In that case, the complexity of the algorithm is given by O(n^(log_b a) log^(k+1) n), where k = 0. Substituting values, we get the complexity of the merge sort algorithm as O(n log n).
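
Again for reference, a minimal merge sort sketch; it makes two recursive calls on halves of the input, and the merge does O(n) work outside the calls, matching T(n) = 2T(n/2) + O(n):

    // Merge sort: a = 2, b = 2, f(n) = O(n) for the merge step.
    static void mergeSort(int[] a, int low, int high) {
        if (low >= high) return;          // base case: zero or one element
        int mid = low + (high - low) / 2;
        mergeSort(a, low, mid);           // first subproblem
        mergeSort(a, mid + 1, high);      // second subproblem
        merge(a, low, mid, high);         // O(n) work outside the recursive calls
    }

    static void merge(int[] a, int low, int mid, int high) {
        int[] tmp = new int[high - low + 1];
        int i = low, j = mid + 1, k = 0;
        while (i <= mid && j <= high)
            tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i <= mid)  tmp[k++] = a[i++];
        while (j <= high) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, low, tmp.length);
    }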

Please book a free session if you are looking for coaching to prepare for your next technical interview. At Algorithms and Me, we provide personalized coaching and mock interviews to prepare you for Amazon, Google, Facebook, etc. interviews.

References

  • Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.
  • Michael T. Goodrich and Roberto Tamassia. Algorithm Design: Foundation, Analysis, and Internet Examples. Wiley, 2002. ISBN 0-471-38365-1. The master theorem (including the version of Case 2 included here, which is stronger than the one from CLRS) is on pp. 268–270.
    Arrays With Elements In The Range 1 to N

    When an array has all its elements in the range of 1 to N (where N is the length of the array), we can use the indices to store the ordered state of the elements in the array. This ordered state can, in turn, be used to solve a variety of problems which we'll explore soon. First, a very simple demonstration of this property.

    Here is an array which has unique elements in the range of 1 to N.
    Given array (A) : 5,3,1,4,2
    Indices:                0,1,2,3,4

    Sort in Linear Time

    The first use case of this property is being able to sort in O(N) time, i.e. a special case (all unique elements) of counting sort. The crux of this sort is to check whether an element is at its corresponding index, and swap it to its correct index if it's not. Following is a demonstration of this logic:

    Given array (A) : 5,3,1,4,2
    Indices:                0,1,2,3,4

    For each A[i], check if A[A[i] - 1] equals A[i] or not. If they are not equal, then swap the element at A[A[i] - 1] with A[i]. Basically, the correct value for any index i is when A[i] contains i+1.

    In the above case, let’s start with i = 0.

    A[A[0] - 1] or A[5-1] or A[4] is 2, and A[0] = 5. This means that A[A[i] - 1] is not equal to A[i], so A[0] is not in its correct position. We need to swap in order to put A[0] -> 5 to its correct position, which is index 4, and A[0] will hold 2 after the swap. Similarly, we need to repeat this check & swap for all the elements.

    What if we cancel out the common terms and modify the check from A[i] != A[A[i] - 1] to i != A[i] - 1?

    Find The Missing Integer

    A similar approach can help us find the smallest missing positive integer in a given array. By smallest missing positive integer, we just mean the smallest positive integer that does not exist in the given list of numbers. For example:

    Given Array: -2, 3, 0, 1, 3
    In the above case, the smallest missing positive integer is 2.

    If we were to apply the usual sorting techniques and then scan the array for the smallest absent positive integer, it would imply a time complexity of O(N log N) + O(N). We can definitely do better than this! At first glance, it seems that this problem does not fit the property of elements being in the range of 1 to N, since the numbers in the given array are well outside the range; but to solve this problem we still only need to figure out whether the elements 1 to N are present in the given array or not.

    How do we know whether the given array has elements from 1 to N? We can use the counting sort discussed earlier to put each element in its “correct position”, i.e. index 0 should hold 1, index 1 should hold 2, and so on. The smallest index that does not hold its correct element gives the missing integer.

    If we sort the given array using counting sort described above, we will get: 1, 0, 3, -2, 3. And the smallest index i to not hold its correct value i.e. i+1 will give us the answer to the smallest missing positive integer. In this case, that index is 1 since it does not hold 2, thus the smallest positive missing integer is 2.
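
    Here is a sketch of this idea in Java (the method name is mine, for illustration): place each value between 1 and N at index value-1, then scan for the first index that does not hold its correct value.

    // Finds the smallest missing positive integer by first placing each value v
    // (1 <= v <= n) at index v-1, then scanning for the first index i with
    // nums[i] != i+1.
    static int smallestMissingPositive(int[] nums) {
        int n = nums.length;
        for (int i = 0; i < n; ) {
            int v = nums[i];
            if (v > 0 && v <= n && nums[v - 1] != v) {
                // swap nums[i] with nums[v-1] so that v reaches its correct slot
                nums[i] = nums[v - 1];
                nums[v - 1] = v;
            } else {
                i++;
            }
        }
        for (int i = 0; i < n; i++)
            if (nums[i] != i + 1) return i + 1;
        return n + 1; // all slots filled: 1..n are all present
    }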

    Find The Duplicate Element

    The third use case of this property is to figure out the duplicate elements without using any extra space. We can iterate over the array A and mark the corresponding index of the encountered element as negative, unless it has already been marked negative! For example: if A[1] = 3 (or -3), then mark A[abs(3) - 1] as negative; this way, whenever we encounter 3 (or -3) again in any of the A[i], we will know that the value 3 has been visited before, since A[3-1] will be negative.

    Given array (A) : 5,3,1,4,3
    Indices:                0,1,2,3,4

    When we encounter A[0] i.e. 5, we make A[5-1] i.e. A[4] negative, so the array becomes: 
    5,3,1,4,-3
    Next, we encounter A[1] i.e. 3, we make A[3-1] i.e. A[2] negative, so the array becomes: 
    5,3,-1,4,-3
    Next, we encounter A[2] i.e. -1, we make A[1-1] i.e. A[0] negative, so the array becomes: 
    -5,3,-1,4,-3
    Next, we encounter A[3] i.e. 4, we make A[4-1] i.e. A[3] negative, so the array becomes: 
    -5,3,-1,-4,-3
    Next, we encounter A[4] i.e. -3, we want to make A[3-1] i.e. A[2] negative, but in this case, A[2] is already negative thus we know that A[2] has been visited before! Which means Abs(A[4]) i.e 3 is the duplicate element.
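
    Here is a sketch of this marking approach (the method name is mine, for illustration); the sign of A[v-1] records whether the value v has been seen before.

    // Finds a duplicate by negating nums[|v|-1] when the value v is seen;
    // a slot that is already negative means |v| was seen before.
    static int findDuplicate(int[] nums) {
        for (int i = 0; i < nums.length; i++) {
            int v = Math.abs(nums[i]);
            if (nums[v - 1] < 0) {
                return v;               // second visit: v is the duplicate
            }
            nums[v - 1] = -nums[v - 1]; // mark v as visited
        }
        return -1; // no duplicate found
    }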


    Here is a snippet to demonstrate the code for sorting an array in linear time as per the above approach. The same index-as-state idea drives the other two applications, Finding the Duplicate and Finding The Missing Integer, sketched above.

        // Place each value v (1 <= v <= nums.length) at index v-1.
        int swap = 0;

        for (int i = 0; i < nums.length; ) {

            // Only values in the range 1..nums.length have a correct slot.
            if (nums[i] > 0 && nums[i] <= nums.length) {

                if (nums[nums[i] - 1] != nums[i]) {
                    // Swap nums[i] into its correct slot nums[nums[i]-1].
                    swap = nums[i];
                    nums[i] = nums[nums[i] - 1];
                    nums[swap - 1] = swap;
                } else {
                    i++; // already in place, move on
                }

            } else {
                i++; // out-of-range value, skip
            }
        }
    

     

    If you are preparing for a technical interview in companies like Amazon, Facebook, etc and want help with preparation, please register for a coaching session with us.

    Range sum query- Immutable array


    Write a service which, given an integer array, returns the sum of the elements between indices i and j (i ≤ j), inclusive. Example: nums = [-2, 0, 3, -5, 2, -1]
    sumRange(0, 2) -> 1
    sumRange(2, 5) -> -1
    sumRange(0, 5) -> -3

    Also, the input set does not change during the calls to the sumRange(i,j).

    The brute force solution is to calculate the sum of all the elements from A[i] to A[j] whenever sumRange(i,j) is called. This method has a time complexity of O(n) per query. It is OK to have this solution for a small scale, but as the number of queries goes up, processing all the numbers from i to j becomes inefficient. Also, imagine a case where the array itself is very large; then O(n) complexity for each query will choke your service.

    Range sum query- Immutable array : thoughts

    There are two hints for optimization in the question: first, the array is immutable, it does not change; second, we have to build a service, which means we have a server with us. These two things allow us to pre-compute and store results even before any query is made.

    Now, the question is what do we pre-compute and how do we store it? We can precompute the sum of all the elements between each i and j and store the sums in a two-dimensional array, where range[i][j] stores the sum of all the elements between index i and j. It will use O(n^2) additional memory; however, the response time for each sumRange query will be constant. The preprocessing step is also O(n^2).

    Can we optimize for space as well? If I know the sum of all the elements from index 0 to index i and sum of all the elements from 0 to j, can I find the sum of all the elements between i and j? Of course, we can do it.

     Sum(i,j) = Sum(0,j) - Sum(0,i) + A[i]. 

    The diagram below explains it.
    [Figure: range sum query array]

    However, the integer array is not passed in the query request, so we cannot use A[i] while calculating the sum. Instead, we will use the formula Sum(i,j) = Sum(0,j) - Sum(0,i-1), which is equivalent to the above.

    We will pre-calculate the sum of all the elements between index 0 and j, for all j >= 0 and j < n.

    Implementation

    class NumArray {
    
        int[] rangeSum;
        public NumArray(int[] nums) {
            rangeSum = new int[nums.length];
            
            if(nums.length>0){
                rangeSum[0] = nums[0]; 
                for(int i=1; i<nums.length; i++){
                    rangeSum[i] = rangeSum[i-1] + nums[i];
                }
            }
        }
        
        public int sumRange(int i, int j) {
            if(i==0) return rangeSum[j];
            return rangeSum[j] - rangeSum[i-1];
        }
    }
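
    A quick, hypothetical usage of the class, mirroring the example at the top of this post (to be placed inside a main method):

    NumArray service = new NumArray(new int[]{-2, 0, 3, -5, 2, -1});
    System.out.println(service.sumRange(0, 2)); // 1
    System.out.println(service.sumRange(2, 5)); // -1
    System.out.println(service.sumRange(0, 5)); // -3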
    

    Now, the preprocessing step is O(n), and O(n) additional space is used. At the same time, the query response time is O(1).

    Please share if there is anything wrong or missing in the post. If you are preparing for an interview and need coaching to prepare for it, please book a free demo session with us.

    Find combinations which add up to a number

    Combination sum problem

    Given an array of integers (candidates) (without duplicates) and a target number (target), find all unique combinations in candidates where the candidate numbers sum to target.

    Also, the same candidate can occur in a combination multiple times.

    For example, Input: candidates = [4,3,5,9], target = 9, a solution set is:[ [9], [3,3,3], [4,5]]

    How do we go about it? What happens if I take the number 4 in the current example? Then I need to find whether there is a combination in the candidates array that adds up to 9 - 4 = 5. Seems like a recursion. For recursion, we need a termination condition: if there are no candidates left to add and the target is still greater than zero, then whatever combination I have till now has no value, so I terminate the recursion in this case.

    Second, what if I have already found a combination which adds up to the target? Then I put that combination in the list of combinations and return.

    What happens in the recursive implementation? Well, we go through each candidate, add it to the current combination and see if it leads to the target. If it does, the combination is added to the result list. If not, we just remove the current candidate (backtrack) from the current combination and try the next one.

    This approach follows the exhaustive search and backtracking paradigm of problem solving, where you search the entire input space to find the answer. However, in this case, we can prune a search path as soon as we know that the current set of candidates adds up to more than the target.

    Combination sum : implementation

    class Solution {
        public List<List<Integer>> combinationSum(int[] candidates,
                                                  int target) {
            /* The result list contains all the combination 
               which add up to target.
            */
            List<List<Integer>> result = new ArrayList<List<Integer>> ();
            
            //We start with the first coin and search exhaustively.
            combinationSumUtil(candidates,
                               target,
                               result,
                               new ArrayList<Integer>(),
                                0
            );
            
            return result;
            
        }
        
        public void combinationSumUtil(int[] candidates, 
                                      int target,
                                      List<List<Integer>> result,
                                      List<Integer> current, 
                                      int index){
            
            /* 
               First termination condition: if there are no coins left
               and required target is more than zero.
            */
            if(target > 0 && index == candidates.length){
                return;    
            }
    
            /* 
               Second termination condition: if target is zero,
               we can add the current combination to the result
            */
            if(target == 0 && index < candidates.length){
                result.add(new ArrayList<>(current));
                return;
            }
            
            /* 
               Start from the current index, and go through
               all the coins.
            */
            for(int i=index; i<candidates.length; i++){
                /* 
                   This is where we prune the branches 
                   of our exhaustive search
                */
                if(target - candidates[i] >=0){
                    current.add(candidates[i]); // add to the list
                    combinationSumUtil(candidates, 
                                       target-candidates[i],
                                       result, current, i);
                    
                    /* Remove the candidate from the list and 
                       check other combinations.
                    */  
                    if(current.size() > 0)
                        current.remove(current.size()-1);
                }
            }
            
        }
    }
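
    A quick, hypothetical usage with the example input above:

    Solution solution = new Solution();
    System.out.println(solution.combinationSum(new int[]{4, 3, 5, 9}, 9));
    // Prints [[4, 5], [3, 3, 3], [9]] (the order of combinations may differ)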
    

    The time complexity is C(n,1) + C(n,2) + … + C(n,n) = 2^n - C(n,0) = O(2^n).

    The beauty of this solution is that it works with negative candidates as well, where the dynamic programming solution may not.

    Maximum area rectangle in a histogram

    A histogram is a diagram consisting of rectangles whose area is proportional to the frequency of a variable and whose width is equal to the class interval. Below is an example of a histogram.

    [Figure: maximum area rectangle in a histogram]

    Given a histogram whose class interval is 1, find the maximum area rectangle in it. Let me explain the problem in more detail.

    In the histogram above, there are at least 6 rectangles, with areas 2, 1, 5, 6, 2, and 3. Are there more rectangles? Yes, we can make more rectangles by combining some of these rectangles. A few are shown below.

    Apparently, the largest area rectangle in the example histogram is the 2 x 5 = 10 rectangle. The task is to find a rectangle with maximum area in a given histogram. The histogram will be given as an array of the height of each block; in the example, the input will be [2,1,5,6,2,3].

    Maximum area rectangle: thoughts

    The first insight after looking at the rectangles above is: a block can be part of a rectangle whose height is less than or equal to its own height. For each block of height h[i], check which blocks on the left can be part of a rectangle with this block. All the blocks on the left side with a height greater than or equal to the current block's height can be part of such a rectangle.
    Similarly, all the blocks on the right side with a height greater than or equal to the current block's height can be part of such a rectangle.
    The idea is to calculate the leftLimit and rightLimit (the nearest smaller bars on each side) and find the area (rightLimit - leftLimit - 1) * h[i].
    Check if this area is greater than the previously known maximum area; if so, update the maximum area, else continue to the next block.

    class Solution {
        public int largestRectangleArea(int[] heights) {
            
            if(heights.length == 0) return 0;
            int maxArea = Integer.MIN_VALUE;
    
            for(int i=0; i<heights.length; i++){
                //Find the left limit for current block
                int leftLimit = findLeftLimit(heights, i);
    
                //Find the right limit for current block
                int rightLimit = findRightLimit(heights, i);
    
                int currentArea = (rightLimit - leftLimit-1) * heights[i];
                maxArea = Integer.max(maxArea, currentArea);
            }
    
            return maxArea;
        }
    
        private int findLeftLimit(int [] heights, int index){
            int j = index-1;
            while (j >= 0 && heights[j] >= heights[index]) j--;
    
            return j;
        }
    
        private int findRightLimit(int [] heights, int index){
            int j = index+1;
            while (j < heights.length && heights[j] >= heights[index])
                j++;
    
            return j;
        }
    }
    

    The time complexity of the implementation is O(n^2): we find the left and right limits of each block, which takes O(n) operations, and we do it for all n blocks, hence the complexity is quadratic. Can we optimize the time complexity?

    If heights[j] >= heights[i] and the leftLimit of index j is already known, can we safely skip all the indices between leftLimit[j] and j and continue the search from leftLimit[j]?
    Can we say the same thing for the rightLimit as well? The answer to both questions is yes. If we store the left and right limits for all the indices already seen, we can avoid re-scanning them.

    class Solution {
        public int largestRectangleArea(int[] heights) {
            
            if(heights.length == 0) return 0;
    
            int maxArea = Integer.MIN_VALUE;
    
            //Finds left limit for each index, complexity O(n)
            int [] leftLimit = getLeftLimits(heights);
            //Find right limit for each index, complexity O(n)
            int [] rightLimit = getRightLimits(heights);
    
            for(int i=0; i<heights.length; i++){
                int currentArea = 
                    (rightLimit[i] - leftLimit[i] -1) * heights[i];
                maxArea = Integer.max(maxArea, currentArea);
            }
    
            return maxArea;
        }
    
        private int[] getLeftLimits(int [] heights){
    
            int [] leftLimit = new int[heights.length];
            leftLimit[heights.length-1] = -1;
    
            for(int i=0; i<heights.length; i++) {
                int j = i - 1;
                while (j >= 0 && heights[j] >= heights[i]) {
                    j = leftLimit[j];
                }
                leftLimit[i] = j;
            }
            return leftLimit;
        }
    
        private int[] getRightLimits (int [] heights){
    
            int [] rightLimit = new int[heights.length];
            rightLimit[heights.length-1] = heights.length;
    
            for(int i=heights.length-2; i>=0; i--){
                int j = i+1;
                while(j<heights.length 
                          && heights[j] > heights[i]){
                    j = rightLimit[j];
                }
                rightLimit[i] = j;
            }
            return rightLimit;
        }
    }
    

    The array leftLimit contains, at index i, the closest index j to the left of i such that heights[j] < heights[i]. You can think of each value of the array as a pointer (or an arrow) pointing to such a j for every i. How do we calculate leftLimit[i]? Just point the arrow one step to the left and, if necessary, follow the arrows from there until you get to the proper j. The key idea to see why this algorithm runs in O(n) is to observe that each arrow is followed at most once.

    Largest area rectangle: stack-based solution

    There is a classic method to solve this problem using a stack as well. Let's see if we can build a stack-based solution using the information we already have. Suppose we do not calculate the area of the rectangle which includes a bar while we are processing it. When should we process it? We should process it once we know the left and right boundaries of the rectangle that uses its height; until then, we put the bar on a stack.
    Now, when we are processing bar j, if heights[j] is less than the bar on the top of the stack, we pop out the bar at the top. Why? Because bar j is the first bar on the right with a height less than the height of the bar at the top of the stack: if we want to make a rectangle with the height of the popped bar, index j is its right boundary. Note that the bars on the stack are always in increasing order of height, as we never put a bar with a smaller height on top of a taller one. That means the next bar on the stack is the first bar on the left with a height lower than the popped bar, i.e. the left boundary. To calculate the area of the rectangle with height h[top], we take the width as j - stack.peek() - 1.

    So the idea is that:

    1. For each bar, take its height as the rectangle's height. Then find the left and right boundaries of this rectangle.
    2. The second-from-top bar in the stack is always the first bar lower than the top bar, on the left.
    3. The bar that j points to is always the first bar lower than the top bar, on the right.
    4. After steps 2 and 3, we know the left and right boundaries, hence the width, hence the area.

    private int maxAreaUsingStack(int[] heights){
    
            Stack<Integer> s = new Stack<>();
    
            int maxArea = 0;
            for(int i=0; i<=heights.length; i++){
                //Handling the last case
                int h = i == heights.length ? 0 : heights[i];
                while(!s.empty() && h < heights[s.peek()]){
                    int top = s.pop();
                    int leftLimit = s.isEmpty() ? -1 : s.peek();
                    int width = i-leftLimit-1;
    
                    int area = width * heights[top];
                    maxArea = Integer.max(area, maxArea);
                }
                s.push(i);
            }
            return maxArea;
        }
    
    The time complexity of the code is O(n), with an additional space complexity of O(n). If you are preparing for a technical interview at companies like Amazon, Facebook, etc. and want help with preparation, please register for a coaching session with us.

    Minimizing maximum lateness

    Minimizing maximum lateness : Greedy algorithm

    Since we have chosen the greed, let's continue with it for at least one more post. Today's problem is to minimize the maximum lateness of a set of tasks. Let me clarify the problem: we are given a processor which processes one process at a time and, as always, a list of processes to be scheduled on that processor, with the intention that the maximum lateness should be minimized. Contrary to previous problems, this time we are not provided with start and end times; instead we are given the length of time t_i that process i will run and the deadline d_i it has to meet; f_i is the actual finish time of the process.

    The lateness of a process is defined as
    l_i = max{0, f_i - d_i}, i.e. the length of time past its deadline by which it finishes.
    The goal here is to schedule all tasks so as to minimize the maximum lateness L = max_i l_i. For example:

    [Figure: minimizing maximum lateness example]

    Minimizing maximum lateness : algorithm

    Let's decide our optimization strategy. There are a few orders in which the jobs could be scheduled: shortest job first, earliest deadline first, or least slack time first.

    Let's see if any of the above strategies leads to the optimal solution. For shortest processing time first, consider the example P1 = (1, 100), P2 = (10, 10), where each pair is (processing time, deadline). If we schedule the shortest job first, i.e. in the order (P1, P2), the maximum lateness will be 1; but if we take them as (P2, P1), the maximum lateness will be 0. So, clearly, taking the shortest process first does not give us an optimal solution.

    Check the smallest slack time approach: see if you can come up with a counterexample showing that it does not work.

    That leaves us with only one option: take the process which has the most pressing deadline, that is, the one with the smallest deadline among those not yet scheduled. If you have noticed, the example given for the problem statement is solved using this method. So, we know it works.

    1. Sort all job in ascending order of deadlines
    2. Start with time t = 0
    3. For each job in the list
      1. Schedule the job at time t
      2. Finish time = t + processing time of job
      3. t = finish time
    4. Return (start time, finish time) for each job

    Minimizing maximum lateness : implementation

    from operator import itemgetter

    # Each job is (id, processing time, deadline)
    jobs = [(1, 3, 6), (2, 2, 9), (3, 1, 8), (4, 4, 9),
            (5, 3, 14), (6, 2, 15)]

    def get_minimum_lateness():
        schedule = []
        max_lateness = 0
        t = 0

        # Earliest deadline first
        sorted_jobs = sorted(jobs, key=itemgetter(2))

        for job in sorted_jobs:
            job_start_time = t
            job_finish_time = t + job[1]

            t = job_finish_time
            if job_finish_time > job[2]:
                max_lateness = max(max_lateness, job_finish_time - job[2])
            schedule.append((job_start_time, job_finish_time))

        return max_lateness, schedule

    max_lateness, sc = get_minimum_lateness()
    print("Maximum lateness will be: " + str(max_lateness))
    for t in sc:
        print(t[0], t[1])
    

    The complexity of the implementation is dominated by the sort, which is O(n log n); the rest of the processing takes O(n).

    Please share your suggestions or if you find something is wrong in comments. We would love to hear what you have to say. If you find this post interesting, please feel free to share or like.

    Coin change problem : Greedy algorithm


    Today, we will learn a very common problem which can be solved using a greedy algorithm. If you are not very familiar with greedy algorithms, here is the gist: at every step of the algorithm, you take the best available option and hope that everything turns out optimal at the end, which usually does. The problem at hand is the coin change problem: given coins of denominations 1, 5, 10, 25 and 100, find a way to give a customer an amount with the fewest number of coins. For example, if I ask you to return me change for 30, there is more than one way to do so:

     
    Amount: 30
    Solutions : 3 X 10  ( 3 coins ) 
                6 X 5   ( 6 coins ) 
                1 X 25 + 5 X 1 ( 6 coins )
                1 X 25 + 1 X 5 ( 2 coins )

    The last solution is the optimal one, as it gives us change for the amount with only 2 coins, whereas all the other solutions provide it in more than two coins.

    The greedy solution for the coin change problem is very intuitive and is called the cashier's algorithm. The basic principle is: at every iteration, take the largest coin which fits into the amount we still need change for. At the end, you will have the optimal solution.

    Coin change problem : Algorithm

    1. Sort the n coin denominations in increasing order of value.
    2. Initialize the set of coins as empty: S = {}
    3. While the amount is not zero:
    3.1 Let Ck be the largest coin such that amount >= Ck
    3.1.1 If there is no such coin, return “no viable solution”
    3.1.2 Else include the coin in the solution S
    3.1.3 Decrease the remaining amount: amount = amount - Ck

    Coin change problem : implementation

    #include <stdio.h>

    int coins[] = { 1, 5, 10, 25, 100 }; /* must be sorted in increasing order */

    /* Returns the index of the largest coin that fits into amount,
       or -1 if no coin fits. */
    int findMaxCoin(int amount, int size){
        for(int i = 0; i < size; i++){
            if(amount < coins[i]) return i - 1;
        }
        return size - 1; /* amount is at least as large as the biggest coin */
    }

    int findMinimumCoinsForAmount(int amount, int change[]){

        int numOfCoins = sizeof(coins)/sizeof(coins[0]);
        int count = 0;
        while(amount){
            int k = findMaxCoin(amount, numOfCoins);
            if(k == -1){
                printf("No viable solution");
                return -1; /* avoid looping forever when no coin fits */
            }
            amount -= coins[k];
            change[count++] = coins[k];
        }
        return count;
    }

    int main(void) {
        int change[10]; // This needs to be dynamic
        int amount = 34;
        int count = findMinimumCoinsForAmount(amount, change);

        printf("\n Number of coins for change of %d : %d", amount, count);
        printf("\n Coins : ");
        for(int i = 0; i < count; i++){
            printf("%d ", change[i]);
        }
        return 0;
    }
    

    What will be the time complexity of the implementation? First of all, sorting the array of coins of size n costs O(n log n) (the code above assumes the denominations are already sorted). The while loop in the worst case runs O(amount) times, for instance when all we have is the coin with denomination 1. The overall complexity for the coin change problem thus becomes O(n log n) + O(amount).

    Will this algorithm work for all sorts of denominations? The answer is no. It will not find any solution if there is no coin with denomination 1, and even when it terminates the answer may not be optimal: with denominations {1, 3, 4} and amount 6, the greedy picks 4 + 1 + 1 (three coins) while the optimum is 3 + 3 (two coins). So be careful while applying this algorithm.

    Please share if you have any suggestion or if you want me to write on a specific topic. If you liked the post, share it!

    Disjoint set data structure


    A disjoint set data structure, also called union-find, maintains a collection S = {S1, S2, ⋯, Sn} of disjoint dynamic sets. Subsets are said to be disjoint if the intersection between them is empty. For example, the sets {1,2,3} and {4,5,6} are disjoint, but {1,2,3} and {1,3,5} are not, as their intersection {1,3} is not empty. Another important thing about a disjoint set is that every set is represented by a member of that set, called its representative.

    Operations on this disjoint set data structure:
    1. Make Set: Creates a new set with one element x; since the sets are disjoint, we require that x not already be in any of the existing sets.
    2. Union: Merges the two sets containing x and y, let's say Sx and Sy, and destroys the original sets.
    3. Find: Returns the representative of the set which the element belongs to.

    Let's take an example and see how disjoint sets can be used to find the connected components of an undirected graph.

    To start with, we will make a set for each vertex by using make-set operation.

    for each vertex v in G(V)
        do makeSet(v)
    

    Next process all the edges in the graph (u,v) and connect set(u) and set(v) if the representatives of the set which contains u and set which contains v are not same.

    for each edge (u,v) in 𝐺(E)
        do if findSet(u) != findSet(v)
            then union(u, v)
    

    Once the above preprocessing steps have run, we can easily answer whether two vertices u and v are part of the same connected component or not:

    boolean isSameComponent(u, v)
     if findSet(u)==findSet(v)
         return True
     else 
         return False
    

    To find how many components there are, we can count how many disjoint sets there are; that gives us the number of connected components in the graph. Let's take an example and see how it works.

    [Figure: example graph]

    The table below shows the processing of each edge in the graph shown in the figure above.

    [Table: processing of each edge]

    Now, how can we implement sets and quickly do union and find operations? There are two ways to do it.

    Disjoint set representation using an array

    A simple implementation of a disjoint set uses an array A which maintains the representative of element i in A[i]. For this implementation to work, all the elements in the set must be in the range 0 to N-1, where N is the size of the array.

    Initially, in the makeSet() operation, we set A[i] = i for each i between 0 and N-1, creating the initial versions of the sets.

    [Figure: initial array representation of the graph]

    for (int i=0; i<N; i++) A[i] = i;
    

    For the union operation on the sets that contain the integers u and v, we scan the array A and change all the elements that have the value A[u] to the value A[v]. For example, if we want to connect an edge between 1 and 2 in the above set, the union operation will replace A[2] with A[1].

    [Figure: array after the union of 1 and 2]

    Now, suppose we want to add an edge between 3 and 1. In this case, u = 3 and v = 1; A[3] = 3 and A[1] = 1. So we will replace all the indices of A where A[i] = 3 with 1. The final array looks like this.

    [Figure: array after the union of 3 and 1]

    Similarly, if we want to add an edge from 6 to 7:

    [Figure: array after the union of 6 and 7]

    //change all elements from A[u] to A[v].
    void union(int A[], int u, int v){
        int temp = A[u];
        for(int i=0; i<A.length; i++){
            if(A[i] == temp)
                A[i] = A[v]; 
        }
    }
    

    findSet(v) operation returns the value of A[v].

    int findSet(int A[], int v){
        return A[v];
    }
    

    The complexity of the makeSet() operation is O(n), as it initializes the entire array. Each union operation takes O(n); if we have to connect n nodes, it adds up to O(n^2) operations. The findSet() operation has constant time complexity.

    We can represent a disjoint set using a linked list too. In that case, each set will be a linked list, and the head of the linked list will be the representative element. Each node contains two pointers: one to the next element in the set, and the other to the representative of the set.

    To initialize, each element is put into a linked list of its own. To union(u, v), we append the linked list which contains u to the end of the linked list which contains v, and change the representative pointer of each node of u's list to point to the representative of the list which contained v.

    The complexity of the union operation is again O(n) in the worst case, while the find operation can be O(1), as each node stores its representative.
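
    A minimal sketch of this linked-list representation (class and method names are mine, for illustration):

    // Each node points to the next element in its set and to the set's
    // representative (the head of the list).
    class ListNode {
        int value;
        ListNode next;
        ListNode rep;

        ListNode(int value) {
            this.value = value;
            this.rep = this; // makeSet: a singleton list represents itself
        }
    }

    class LinkedListDisjointSet {
        ListNode findSet(ListNode x) {
            return x.rep; // O(1): every node knows its representative
        }

        // Append u's list to v's list and re-point the representatives;
        // O(n) in the worst case.
        void union(ListNode u, ListNode v) {
            ListNode repU = u.rep, repV = v.rep;
            if (repU == repV) return;

            ListNode tail = repV;
            while (tail.next != null) tail = tail.next; // tail of v's list

            tail.next = repU;
            for (ListNode cur = repU; cur != null; cur = cur.next)
                cur.rep = repV; // u's nodes now answer findSet() with repV
        }
    }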

    Disjoint set forest

    The disjoint-set forest data structure is implemented by changing the interpretation of the elements of array A. Now each A[i] represents an element of a set and points to another element of that set. The root element points to itself. In short, A[i] now points to the parent of i.

    The makeSet operation does not change: to start with, each element is the parent of itself.
    The union operation changes: if we want to connect u and v with an edge, we update A[root of u] with the root of v. How do we find the root of an element? As we have the relationship that A[i] is the parent of i, we can move up the chain until we find a case where A[i] == i; that i is the root.

    //finding root of an element
    int root(int A[],int i){
        while(A[i] != i){
            i = A[i];
        }
        return i;
    }
    
    /*Changed union function where we connect 
      the elements by changing the root of 
      one of the elements
    */
    
    void union(int A[], int u, int v){
        int rootU = root(A, u);
        int rootV = root(A, v);
        A[rootU] = rootV;
    }
    

    This implementation has a worst-case complexity of O(n) for the union function, and it also makes the worst-case complexity of the findSet operation O(n).

    However, we can do some ranking based on the size of the trees being connected: we make sure that the root of the smaller tree always points to the root of the bigger tree.

    void union(int[] A, int[] sz, int u, int v){

        // Find the roots of u and v
        int i = u;
        while (i != A[i]) i = A[i];
        int j = v;
        while (j != A[j]) j = A[j];

        if (i == j) return;
        // Compare the sizes of the trees to put the smaller tree's root
        // under the bigger tree's root.
        if (sz[i] < sz[j]){
            A[i] = j;
            sz[j] += sz[i];
        }
        else {
            A[j] = i;
            sz[i] += sz[j];
        }
    }
    

    In the next few posts, we will be discussing applications of this method to solve different problems on graphs.
    Please share if there is something wrong or missing. If you are preparing for an interview, and want coaching sessions to prepare for it, please signup for free demo session.

    Connect n ropes with minimum cost


    We are given n ropes of different lengths, and we need to connect these n ropes into one rope. The cost to connect two ropes is equal to the sum of their lengths. We need to connect the ropes with minimum cost.

    For example, suppose there are 4 ropes of lengths 5, 2, 3 and 9. We can connect the ropes in the following way: first, connect the ropes of lengths 2 and 3; the cost of this connection is the sum of the lengths of the ropes, which is 2 + 3 = 5. We are left with three ropes of lengths 5, 5 and 9. Next, connect the ropes of lengths 5 and 5; the cost of the connection is 10, and the total cost till now is 5 + 10 = 15. We have two ropes left, with lengths 10 and 9. Finally, connect the last two ropes; all the ropes are now connected, and the total cost is 15 + 19 = 34.

    Another way of connecting the ropes would be: connect the ropes with lengths 5 and 9 first (we get three ropes of lengths 3, 2 and 14), then connect 14 and 3, which gives us two ropes of lengths 17 and 2. Finally, we connect 17 and 2. The total cost in this way is 14 + 17 + 19 = 50, which is much higher than the optimal cost we had earlier.

    Minimum cost to connect n ropes: algorithm

    When we were doing the calculations in the examples, did you notice one thing? The lengths of the ropes connected first are added again in all the subsequent connections. For example, we connected the ropes with lengths 2 and 3 in the first example; this gets added to the next connection as part of the rope with length 5, and again when we connect the ropes with lengths 10 and 9, the 2 + 3 is already inside the 10.

    Read Huffman coding to understand how to solve this problem from this hint.

    All we have to make sure is that the most frequently re-added rope is the smallest, then the second smallest, and so on. This gives the idea: sort the ropes by their sizes, connect the two smallest, and sort the array again, repeating until there are no more ropes to add. It will always give us the optimal way to connect the ropes.

    What will be the complexity of this implementation? The complexity will be dominated by the sorting algorithm; the best we can achieve is O(n log n) using quicksort or merge sort. But after connecting two ropes, we have to sort the array again. So the overall complexity of this method is O(n^2 log n).

    Can we do better than this? Do we need the array sorted at all times? All we need are the two ropes with the least lengths. Which data structure gives us the minimum element in the least time? A min heap does: it gives us the minimum in O(1), and extracting it takes O(log n).

    1. Create a min heap from the array of rope lengths
    2. Fetch the root, which gives us the smallest rope
    3. Fetch the root again, which gives us the second smallest rope
    4. Add the two ropes and put the sum back into the heap
    5. Go back to step 2 until only one rope is left

    Connect n ropes with minimum cost: implementation

    package com.company;
    
    import java.util.Arrays;
    import java.util.List;
    import java.util.PriorityQueue;
    import java.util.stream.Collectors;
    
    /**
     * Created by sangar on 3.1.19.
     */
    public class ConnectRopes {
    
        public int getMinimumCost(int[] ropeLength){
    
            PriorityQueue<Integer> minHeap = new PriorityQueue<Integer>();
    
            /*
            There is no shortcut for converting from int[] to List<Integer> as Arrays.asList
            does not deal with boxing and will just create a List<int[]>
            which is not what you want.
             */
            List<Integer> list = Arrays.stream(ropeLength).boxed().collect(Collectors.toList());
    
            /*
            Javadoc seems to imply that addAll is inherited from AbstractQueue where
            it is implemented as a sequence of adds.
            So complexity of this operation is O(nlogn)
             */
            minHeap.addAll(list);
    
            int totalLength = 0;
    
            while(minHeap.size() > 1){
                int len1 = (int)minHeap.remove();
                int len2 = (int)minHeap.remove();
    
                totalLength+=(len1 + len2);
    
                minHeap.add(len1+len2);
            }
    
            return totalLength;
        }
    }
    

    Test cases

    package test;
    
    import com.company.ConnectRopes;
    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    /**
     * Created by sangar on 23.9.18.
     */
    public class ConnectRopeTest {
    
        ConnectRopes tester = new ConnectRopes();
    
        @Test
        public void minimumCostTest() {
    
            int[] a = {5,2,3,9};
    
            assertEquals(34, tester.getMinimumCost(a));
        }
        @Test
        public void minimumCostOneRopeTest() {
    
            int[] a = {5};
    
            assertEquals(0, tester.getMinimumCost(a));
        }
    }
    

    The complexity of this implementation is O(n log n) to create the min heap out of the array (using Java's PriorityQueue), plus O(n log n) to repeatedly fetch the two minimums and re-heapify. However, the initial heap construction can be brought down to O(n) by using our own implementation of a min heap.

    Please share if there is something wrong or missing.

    Lowest common ancestor(LCA) using Range Minimum Query(RMQ)

    Lowest common ancestor(LCA) using RMQ

    We have already discussed the lowest common ancestor and the range minimum query. In this post, we will discuss how to use RMQ to find the lowest common ancestor of two given nodes in a binary tree or binary search tree. The LCA of two nodes u and v is the node which is farthest from the root and has both u and v as descendants. For example, the LCA of node(5) and node(9) in the tree below is node(2).

    [Figure: example binary tree]

    In the earlier solutions, we scan the whole binary tree every time we have to find the LCA of two nodes. This has a complexity of O(n) per query. If this query is fired frequently, the operation may become a bottleneck of the algorithm. One way to avoid processing all nodes on each query is to preprocess the binary tree and store precalculated information, so that the LCA of any two nodes can be found in constant time.

    This pattern is very similar to a range minimum query algorithm. Can we reduce the lowest common ancestor problem to a range minimum query problem?

    Reduction of lowest common ancestor problem to RMQ

    Let's revise what RMQ is: given an array A of length n, RMQ(i,j) returns the index of the minimum element in the subarray A[i..j].


    Let's find the LCA of the nodes 5 and 8 manually in the above binary tree. We notice that LCA(u,v) is the shallowest common node (in terms of distance from the root) which is visited when u and v are visited using a depth-first search of the tree. The important thing to note is that we are interested in the shallowest node, which is the one with minimum depth, between u and v. Sounds like RMQ?

    Implementation-wise, the tree is traversed as an Euler tour, which means we visit each node of the tree without lifting the pencil; this is very similar to a preorder traversal of a tree. There can be at most 2n-1 entries in the Euler tour of a tree with n nodes; we store this tour in an array E[1..2n-1].

    As the algorithm requires the shallowest node, i.e. the one closest to the root, we also record the depth of each node while doing the Euler tour, in another array D[1..2n-1].

    We should maintain the value when the node was visited for the first time. Why?

    E[1..2n-1] – stores the nodes visited in an Euler tour of T; E[i] is the ith node visited in the tour.
    D[1..2n-1] – stores the levels of the nodes in the tour; D[i] is the level of the node at E[i] (the level is defined to be the distance from the root).
    F[1..n] – F[i] holds the index in E at which node i is first visited.

    For the example graph, we start from node(1) and do an Euler tour of the binary tree.

    [Figure: binary tree for the Euler tour]

    The Euler tour looks like:

    [Figure: Euler tour array E]

    The depth array looks like:

    [Figure: depth array D]

    The first visit array looks like:

    [Figure: first visit array F]

    To compute LCA(u,v): all the nodes in the Euler tour between the first visits to u and v are E[F[u]..F[v]] (assume F[u] <= F[v]; otherwise swap u and v). The shallowest node in this tour is at index RMQ_D(F[u], F[v]), since D[i] stores the depth of the node at E[i].
    The RMQ function returns the index of the shallowest node between u and v, so the output node is E[RMQ_D(F[u], F[v])], which is LCA(u,v).

    Let’s take an example, find the lowest common ancestor of node(5) and node(8).

    First of all, find the first visit to node(5) and node(8). It will be F[5] which is 2 and F[8] which is 7.

    Now, all the nodes which come between the visits of node(5) and node(8) are in E[2..7]; we have to find the shallowest node among them. This can be done by applying RMQ on the array D with the range 2 to 7.


    The LCA will be E[RMQ_D(2,7)]; in this case, RMQ_D(2,7) is index 3, and E[3] = 2, hence LCA(5,8) is node(2).

    Lowest common ancestor using RMQ: Implementation

    package com.company.BST;
    
    import java.util.Arrays;
    
    /**
     * Created by sangar on 1.1.19.
     */
    public class LowestCommonAncestor {
    
        private int[] E;
        private int[] D;
        private int[] F;
    
        int[][] M;
    
        private int tourCount;
    
        public LowestCommonAncestor(BinarySearchTree tree){
            //Create Euler tour, Depth array and First Visited array
            E = new int[2*tree.getSize()];
            D = new int[2*tree.getSize()];
            F = new int[tree.getSize() + 1];
    
            M = new int[2 * tree.getSize()][2 * tree.getSize()];
    
            Arrays.fill(F, -1);
            getEulerTour(tree.getRoot(), E, D, F, 0);
    
            preProcess(D);
        }
    
        public int findLowestCommonAncestor(int u, int v){
            //This means node is not in tree
        if(u >= F.length || v >= F.length || F[u] == -1 || F[v] == -1)
                return -1 ;
    
        // Swap if needed so that the range passed to rmq() is ordered.
        return E[rmq(D, Math.min(F[u], F[v]), Math.max(F[u], F[v]))];
        }
    
        /* This function does all the preprocessing on the tree and
           creates all required arrays for the algorithm.
        */
        private void getEulerTour(TreeNode node, int[] E, int[] D, int[] F,
                                  int level){
            if(node == null) return;
    
            int val = (int)node.getValue();
    
            E[tourCount] = val; // add to tour
            D[tourCount] =  level; // store depth
    
            if(F[val] == -1) {
                F[(int) node.getValue()] = tourCount;
            }
            tourCount++;
            
            if(node.getLeft() != null ) {
                getEulerTour(node.getLeft(), E, D, F, level + 1);
    
                E[tourCount] = val;
                D[tourCount++] = level;
            }
            if(node.getRight() != null ) {
                getEulerTour(node.getRight(), E, D, F, level + 1);
    
                E[tourCount] = val;
                D[tourCount++] = level;
            }
        }
    
        /*
          This function preprocess the depth array to quickly find 
          RMQ which is used to find shallowest node.
         */
        void preProcess(int[] D) {
    
            for (int i = 0; i < D.length; i++)
                M[i][0] = i;
    
            for (int j = 1; 1 << j <D.length ; j++){
                for (int i = 0; i + (1 << j) - 1 < D.length; i++){
                    if (D[M[i][j - 1]] < D[M[i + (1 << (j - 1))][j - 1]])
                        M[i][j] = M[i][j - 1];
                    else
                        M[i][j] = M[i + (1 << (j - 1))][j - 1];
                }
            }
        }
    
        private int rmq(int a[], int start, int end){
        int j = (int)(Math.log(end - start + 1) / Math.log(2)); // floor of log base 2
    
            if ( a[ M[start][j] ] <= a[M[end-(1<<j)+1][j]] )
                return M[start][j];
    
            else
                return M[end-(1<<j)+1][j];
        }
    }
    

    The beauty of this algorithm is that it can be used to find the LCA of any tree, not only a binary tree or BST. In theory, the complexity of finding the lowest common ancestor using a range minimum query can be brought to (O(n), O(1)) with an additional space complexity of O(n); the simpler sparse-table implementation above preprocesses in O(n log n).

    Reference
    Faster algorithms for finding lowest common ancestors in directed acyclic graphs

    Please share if there is something wrong or missing. If you are preparing for an interview, please signup for free demo class to guide you through the process.