Coin change problem : Greedy algorithm

Today, we will look at a very common problem that can be solved using a greedy algorithm. If you are not familiar with greedy algorithms, here is the gist: at every step of the algorithm, take the best available option and hope that everything turns out optimal at the end, which it usually does. The problem at hand is the coin change problem: given coins of denominations 1, 5, 10, 25 and 100, find a way to give a customer change for an amount using the fewest number of coins. For example, if I ask you to return change for 30, there is more than one way to do so:

 
Amount: 30
Solutions : 3 X 10  ( 3 coins ) 
            6 X 5   ( 6 coins ) 
            1 X 25 + 5 X 1 ( 6 coins )
            1 X 25 + 1 X 5 ( 2 coins )

The last solution is the optimal one, as it makes the change with only 2 coins, whereas all the other solutions need more than two coins.

The greedy solution for the coin change problem is very intuitive and is called the cashier's algorithm. The basic principle is: at every iteration, take the largest coin that fits into the remaining amount to be changed. At the end you will have the optimal solution.

Coin change problem : Algorithm

1. Sort the n coin denominations in increasing order of value.
2. Initialize the set of coins in the solution as empty: S = {}.
3. While the amount is not zero:
3.1 Find the largest coin Ck such that Ck <= amount.
3.1.1 If there is no such coin, return "no viable solution".
3.1.2 Else include the coin in the solution S.
3.1.3 Decrease the remaining amount: amount = amount - Ck.

Coin change problem : implementation

#include <stdio.h>
 
int coins[] = { 1,5,10,25,100 };
 
/* Returns index of the largest coin that fits into amount, or -1 if none does */
int findMaxCoin(int amount, int size){
	for(int i=0; i<size; i++){
	    if(amount < coins[i]) return i-1;
	}
	return size-1;
}

int findMinimumCoinsForAmount(int amount, int change[]){
 
	int numOfCoins = sizeof(coins)/sizeof(coins[0]);
	int count = 0;
	while(amount){
	    int k = findMaxCoin(amount, numOfCoins);
	    /* No coin fits into the remaining amount, so there is no viable solution */
	    if(k == -1){
	        printf("No viable solution");
	        return -1;
	    }
	    amount -= coins[k];
	    change[count++] = coins[k];
	}
	return count;
}
 
int main(void) {
	int change[10]; // This needs to be dynamic
	int amount = 34;
	int count = findMinimumCoinsForAmount(amount, change);
 
	printf("\n Number of coins for change of %d : %d", amount, count);
	printf("\n Coins : ");
	for(int i=0; i<count; i++){
		printf("%d ", change[i]);
	}
	return 0;
}
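For amount 34, the greedy choice picks 25 first, then 5, then four coins of 1, so the expected output is:

 Number of coins for change of 34 : 6
 Coins : 25 5 1 1 1 1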

What will be the time complexity of the implementation? First of all, we sort the array of n coins, which costs O(n log n). The while loop runs O(amount) times in the worst case, which happens when all we have is the coin of denomination 1. The overall complexity for the greedy coin change solution therefore becomes O(n log n) + O(amount).

Will this algorithm work for every set of denominations? The answer is no. It will not find any solution if there is no coin with denomination 1, and even when it finds one, it may not be optimal for arbitrary denominations: for coins {1, 3, 4} and amount 6, greedy picks 4 + 1 + 1 (3 coins) while the optimal answer is 3 + 3 (2 coins). It does work for canonical systems such as 1, 5, 10, 25, 100, so be careful while applying this algorithm.

Please share if you have any suggestion or if you want me to write on a specific topic. If you liked the post, share it!

Coin change problem

Given an amount N and a set of coins S = {v1, v2, ..., vm}, find the number of ways change can be made for N using these coins. We have an infinite supply of each coin. Commonly, this problem is known as the coin change problem.

For example, for N = 4 and S = {1,2,3}, there are four solutions: {1,1,1,1}, {1,1,2}, {2,2}, {1,3}. So the output should be 4.

Coin change problem : Line of thought

As always, let's start with the brute force solution. For every coin we have two choices: either we include the coin in the solution, or we do not. If coin m is included in the solution, the problem reduces to finding the change for value N - V(m), and we can still use all m coins for this reduced problem, since the supply is infinite.
If coin m is not included in the solution, the value to be changed stays the same, but we have one coin fewer to work with, so the problem becomes: change value N with the remaining m-1 coins.
If we look at it, this is a simple recursive formulation.

C(N, m) = C(N - V(m), m) + C(N, m-1)

When do we stop then? When do we know that there is no point going further? If change is required for value zero, there is only one solution possible, which is to include no coin.

C(N,m) = 1 if N == 0

Also, what is the solution if we do not have any coins left and there is still value to be changed? There is no solution then.

C(N, m) = 0 if N > 0 and m == 0

Also, if the value required becomes less than zero, there is again no solution, given that all coins have positive values.

C(N,m) = 0 if N < 0

We see that the problem reduces to smaller subproblems, and there are points where we can stop going further. That's a perfect setup for a recursive solution.

Coin change problem : recursive implementation

package com.company;

/**
 * Created by sangar on 5.5.18.
 */
public class CoinChange {

    public static int coinChange(int value, int[] coins, int consideredCoin){
        /* If value to be changed is negative, since coins are positives,
        there is no way to change them
         */
        if(value < 0) return 0;

        /* When no value is required, there is exactly one solution,
           which is to include no coin, irrespective of how many
           coins are left */
        if(value == 0) return 1;

        /* When there is still value required but no coins are left,
           there is no solution
         */
        if(value > 0 && consideredCoin == coins.length) return 0;
                // When coin is included
        return coinChange(value-coins[consideredCoin],coins, consideredCoin)
               //When coin is not included
            + coinChange(value,coins, consideredCoin + 1);
    }

    public  static void main(String[] args){
        int value = 4;
        int[] coins = {1,2,3};

        int count = coinChange(value, coins, 0);
        System.out.println(count);
    }
}
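For value 4 and coins {1, 2, 3}, this program prints 4, matching the four combinations listed earlier.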

Complexity of the recursive implementation of the coin change solution is exponential, O(2^m), where m is the number of coins.
For every coin we have 2 options, either we include it or exclude it, so if we think in terms of binary, it is 0 (exclude) or 1 (include). For example, if we have 2 coins, the options are [00, 01, 10, 11], that is 2^2. For m coins, it will be 2^m. For each of these options we check whether that selection makes up the change which is required.
The reason for this exponential complexity is that we are calculating the same smaller subproblems again and again.

In the discussion so far, we see two properties of the solution. First, the optimal solution to subproblems gives the optimal solution to the bigger problem; this is known as the optimal substructure property. Second, subproblems are overlapping. These two conditions are necessary for applying the dynamic programming approach.
To avoid calculating the same subproblem again and again, we can use simple memoization. Let's create a two dimensional table Coins, where Coins(i, j) represents the number of ways in which change for i can be made using the first j coins.
If the jth coin is included, the number of ways is Coins(i - v[j], j), since the same coin can be used again.
If the jth coin is not included, the number of ways is Coins(i, j-1). Adding both of them gives us

Coins(i, j) = Coins(i - v[j], j) + Coins(i, j-1)

For i = 0, Coins(i, j) = 1 for all j; for j = 0, Coins(i, j) = 0 for all i > 0.
We can start bottom up and fill the table as per the formula above. Once we have filled the entire table, Coins(N, m) will be the answer to the problem.

package com.company;

/**
 * Created by sangar on 5.5.18.
 */
public class CoinChange {

    public static int coinChangeDP(int value, int[] coins){

        // numWays[i][j] : number of ways to make change for i using the first j coins
        int[][] numWays = new int[value+1][coins.length+1];

        // Base case : there is one way to make change for 0, with any set of coins
        for(int i = 0; i <= coins.length; i++ ){
            numWays[0][i] = 1;
        }

        // Base case : there is no way to make a positive change with no coins
        for(int i = 1; i <= value; i++ ){
            numWays[i][0] = 0;
        }

        for(int i = 1; i <= value; i++){
            for(int j = 1; j <= coins.length; j++){
                // Ways including coin j (when it fits) plus ways excluding coin j
                numWays[i][j] = (i - coins[j-1] >= 0 ? numWays[i-coins[j-1]][j] : 0 )
                                 + numWays[i][j-1];
            }
        }

        return numWays[value][coins.length];
    }

    public  static void main(String[] args){
        int value = 4;
        int[] coins = {1,2,3};

        int count = coinChangeDP(value, coins);

        System.out.println(count);
    }
}

Example: for value = 4 and coins {1, 2, 3}, below is the bottom-up filling of the table.
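Filling the table with the recurrence above (rows are values 0 to 4, columns are the number of coins considered from {1, 2, 3}), we get:

value \ coins     0    1    2    3
    0             1    1    1    1
    1             0    1    1    1
    2             0    1    2    2
    3             0    1    2    3
    4             0    1    3    4

Coins(4, 3) = 4, which matches the expected answer.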

Complexity of the dynamic programming implementation of the coin change problem is O(N*m), where N is the value and m is the number of coins. Additional O(N*m) space is also required for the table.

Please share if there is something missing or wrong. If you want to share your knowledge with thousands of learners across the world, please reach out to us on communications@algorithmsandme.com

0/1 knapsack problem using dynamic programming

0/1 Knapsack problem

0/1 Knapsack is a typical problem that is used to demonstrate the application of greedy algorithms as well as dynamic programming. There are cases when applying the greedy algorithm does not give an optimal solution. There are many flavors in which the knapsack problem can be asked.

1. A thief enters a museum and wants to steal artifacts from there. Every artifact has a weight and a value associated with it. The thief carries a knapsack (bag) which can take only a specific weight. The problem is to find the combination of artifacts the thief steals, so that he gets maximum value and the total weight of all taken artifacts is less than the capacity of the knapsack. The thief cannot take any artifact partially; either he takes it or leaves it. Hence the problem is 0/1 knapsack.

2. We have N files, each of size Si, and a total storage capacity of W bytes. Each file has a re-computation cost Vi. The problem is to store files such that the combined size of the stored files is at most W and their total re-computation value is maximum. We can either store a file or leave it; we cannot store a partial file. Hence this is a case of the 0/1 knapsack problem.

0/1 knapsack problem : Line of thoughts

The brute force method would try all subsets of the set of items whose weight adds up to at most the capacity of the knapsack, and see which one gives the maximum value. Complexity of the brute force algorithm would be of exponential order, as there are 2^n possible subsets for n items.

Can we do better? If we consider each item, there are two possibilities associated with it.
First, the current item is included in the optimal subset. Then we need to find, among the remaining N-1 items, those which optimize the subproblem for weight W-wk. The value of this item is added to the candidate maximum value.

Second, the current item is not included in the optimal subset. In that case, we need to find, among the remaining N-1 items, those which optimize the original problem with unchanged capacity W. The value of the current item is not added to the candidate maximum value.

Inclusion of an item depends on two conditions:

  1. The weight of the item is not more than the remaining capacity of the knapsack.
  2. Adding the item's value to the optimal value of the remaining N-1 items with W-wk capacity gives more than the optimal value of the remaining N-1 items with capacity W.

Since every step reduces the problem to a smaller problem in terms of items or weight, a recursive solution would be our first refuge. To implement this, what are the base cases? First, we cannot add any item if the knapsack capacity is zero, i.e. W == 0. Second, no item can be taken if there are no items remaining.

Recursive implementation of 0/1 knapsack problem

package com.company;
/**
	* Created by sangar on 19.8.18.
*/
public class KnapsackProblem {
	static int knapSack(int W, int[] weights, int[] val, int n) {
		/*
			If there are no items left or the capacity that can
			be carried is zero, return zero
		*/
		if (n < 0 || W == 0)
			return 0;

		/*
			If weight of the nth item is more than the knapsack
			capacity W, then this item cannot be included
			in the optimal solution
		*/
		if (weights[n] > W)
			return knapSack(W, weights, val, n - 1);

		/* Consider two cases, including the item and excluding the item. */
		else return Integer.max(
			(val[n]
				+ knapSack(W - weights[n], weights, val, n - 1)),
			(knapSack(W, weights, val, n - 1))
		);
	}

	public static void main(String args[]) {
		int[] val = {60, 100, 120};
		int[] wt = {10, 20, 30};
		int W = 50;
	
		int n = val.length;
		System.out.println(knapSack(W, wt, val, n - 1));
	}
}
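For the values and weights above and W = 50, this prints 220; the best choice is the two items weighing 20 and 30.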

If we look at the execution trace of the function, it looks like this.

For N = 3, there are 7 subproblems to be solved at the leaf level before we start optimizing for the maximum value. For N items in general, there will be of the order of 2^N subproblems. Hence, the complexity of the recursive implementation is O(2^N).

If we take another example, it becomes evident that some subproblems are solved again and again. Overlapping subproblems is one of the criteria that should make us think about dynamic programming. Also, notice that the optimal solution to a smaller problem leads to the optimal solution of the bigger problem, which is the second condition for DP. This problem satisfies both conditions, so let's design a DP solution for it.

0/1 knapsack problem : Dynamic programming approach

We define a two dimensional array V[N, W] where N is the number of items and W is the capacity. For 1 <= i <= N and 0 <= w <= W, V[i, w] represents the optimal solution for items I1, I2, ..., Ii with maximum weight of w. If we can compute all the entries of this array, then the entry V[N, W] is the solution to our problem.

For i = 0 or w = 0, the value is zero, so the first row and the first column are filled with zeros. Recursively, we can fill the table bottom up as follows.

V[i, w] = max(V[i-1, w], V[i-1, w - w[i]] + val[i])
V[0, w] = 0; there are no items
V[i, 0] = 0; no item can be picked

(The implementation below indexes items from 0, so its first row stores the best value using only the first item rather than zero items; the final answer is the same.)

package com.company;

/**
	* Created by sangar on 19.8.18.
*/
public class KnapsackProblem {
	public static int knapsackDP(int W, int[] weights, int[] val, int n) {
		int[][] V = new int[n+1][W + 1];

		/*
			First row considers only the first item : its value is
			achievable for every capacity that can hold its weight
		*/
		for(int i = 1 ; i < V[0].length; i++){
			if(weights[0] <= i){
				V[0][i] = val[0];
			}else{
				V[0][i] = 0;
			}
		}

		//Loop for all items
		for (int i = 1; i < V.length; i++) {
			for (int j = 1; j < V[i].length; j++) {
				/* if the item's weight is more than the allowed weight,
				the item cannot be picked. */
				if(weights[i] > j){
					V[i][j] = V[i-1][j];
				}else{
					V[i][j] = Math.max(V[i-1][j],
							val[i] + V[i-1][j-weights[i]]);
				}
			}
		}
		return V[V.length-1][W];
	}

	public static void main(String args[]) {
		int[] val = {60, 100, 120};
		int[] wt = {10, 20, 30};
		int W = 50;

		int n = val.length;
		System.out.println(knapsackDP(W, wt, val, n - 1));
	}
}

One similar problem which can be solved with the same approach is finding the minimum number of coins to be used to get change for a particular amount. I am skipping the detailed analysis and only sketching the code below.
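Below is a minimal sketch of that variant (names and structure are illustrative, not taken from an original listing): minCoins[i] holds the fewest coins needed for amount i, and the table is filled bottom up just like the counting table earlier.

import java.util.Arrays;

public class MinimumCoins {
	// Returns the fewest coins needed to make 'amount', or -1 if it cannot be made
	public static int minimumCoins(int amount, int[] coins) {
		int[] minCoins = new int[amount + 1];
		Arrays.fill(minCoins, Integer.MAX_VALUE);
		minCoins[0] = 0; // zero coins are needed for amount 0

		for (int i = 1; i <= amount; i++) {
			for (int coin : coins) {
				// Use this coin only if it fits and the remainder is reachable
				if (coin <= i && minCoins[i - coin] != Integer.MAX_VALUE) {
					minCoins[i] = Math.min(minCoins[i], minCoins[i - coin] + 1);
				}
			}
		}
		return minCoins[amount] == Integer.MAX_VALUE ? -1 : minCoins[amount];
	}

	public static void main(String[] args) {
		int[] coins = {1, 3, 4};
		System.out.println(minimumCoins(6, coins)); // prints 2, i.e. 3 + 3
	}
}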

Complexity of the dynamic programming implementation of the knapsack problem is O(N*W). Space complexity is again O(N*W). It is a thumb rule that we trade space for time in dynamic programming.
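As a side note, for this particular recurrence the space can be reduced to O(W): row i depends only on row i-1, so a single array suffices if capacities are traversed from high to low. A small sketch of this space-optimized variant (not part of the implementation above):

public class KnapsackSpaceOptimized {
	static int knapsackDP1D(int W, int[] weights, int[] val) {
		// best[w] holds the maximum value achievable with capacity w
		int[] best = new int[W + 1];
		for (int i = 0; i < val.length; i++) {
			// Traverse capacities right to left so each item is used at most once
			for (int w = W; w >= weights[i]; w--) {
				best[w] = Math.max(best[w], val[i] + best[w - weights[i]]);
			}
		}
		return best[W];
	}

	public static void main(String[] args) {
		int[] val = {60, 100, 120};
		int[] wt = {10, 20, 30};
		System.out.println(knapsackDP1D(50, wt, val)); // prints 220
	}
}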

Please share if there is something wrong or missing. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com