# Minimum cost path in matrix : Dynamic programming

Given a 2D matrix Cost[][], where Cost[i][j] represents the cost of visiting cell (i,j), **find the minimum cost path** to reach cell (n,m), where any cell can be reached from its left neighbor (by moving one step right) or from the cell above it (by moving one step down).

For example, the minimum cost to reach cell (3,3) would be 16, following the path (0,0), (1,0), (2,0), (3,0), (3,1), (3,2), (3,3).

## Minimum cost path : line of thought

This problem is similar to finding the number of possible paths in a grid. As mentioned there, the grid problem reduces to smaller subproblems once a choice at a cell is made, but here the move is in the reverse direction. To find the minimum cost at cell (i,j), first find the minimum cost to reach cell (i-1,j) and cell (i,j-1). Take the minimum of those two and add the cost of cell (i,j); that gives the minimum cost to reach (i,j).

Solution(n) = Cost of choice + Solution(n-1).

CostToMove(i,j) = Min(CostToMove(i-1,j), CostToMove(i,j-1)) + Cost(i,j)

The above equation can be implemented as a recursive function. What should be the terminating condition for the recursion? Obviously, the starting cell, i.e., i = 0 and j = 0. Cells in the first row or first column are also special cases, since each of them can be reached from only one direction.

findCost(i,j, cost) = cost(i,j) + Min( findCost(i-1,j, cost), findCost(i,j-1, cost))

### Recursive implementation: minimum cost path in matrix

```c
#include <stdio.h>

int min(int a, int b) {
    return (a > b) ? b : a;
}

/* Recursively compute the minimum cost to reach cell (i, j) from (0, 0) */
int findMinimumCostPath(int Cost[3][3], int i, int j) {
    if (i == 0 && j == 0)
        return Cost[0][0];
    if (i == 0)   /* top row: reachable only from the left */
        return findMinimumCostPath(Cost, i, j - 1) + Cost[i][j];
    if (j == 0)   /* leftmost column: reachable only from the top */
        return findMinimumCostPath(Cost, i - 1, j) + Cost[i][j];
    return min(findMinimumCostPath(Cost, i - 1, j),
               findMinimumCostPath(Cost, i, j - 1)) + Cost[i][j];
}

int main(void) {
    int M = 3, N = 3;
    int Cost[3][3] = { {1, 3, 4},
                       {5, 3, 2},
                       {3, 4, 5} };
    printf("Minimum cost of path : %d\n", findMinimumCostPath(Cost, M - 1, N - 1));
    return 0;
}
```

Every possible path in the matrix is tried before we reach the solution. The number of such paths is exponential in the size of the matrix, and hence the complexity of the recursive method is exponential too. Can we do better than that?

We saw that the problem reduces to a subproblem at every cell, and the optimal solution to the bigger problem depends on optimal solutions to the subproblems. This property is known as optimal substructure, and it is one of the conditions for applying dynamic programming.

Look at the execution tree of the recursive method.

Several subproblems are solved again and again in the recursive solution, i.e., there are overlapping subproblems. How can we avoid re-solving them? The answer is memoization: store the solution to each subproblem and look it up when needed again.

These are the perfect conditions to apply dynamic programming. Let's look at the implementation details now.

Create a two-dimensional array to save the solutions of the subproblems. Each cell MinCost[i][j] will store the minimum cost to reach cell (i,j).

The topmost row in the array is special, as any cell in that row can be reached only from its left.

MinCost(0,j) = MinCost(0,j-1) + Cost[0][j]

Similarly, a cell in the leftmost column can only be reached from the top.

MinCost(i,0) = MinCost(i-1,0) + Cost[i][0]

For all other cells,

MinCost(i,j) = Min(MinCost(i-1,j), MinCost(i,j-1)) + Cost[i][j]

Since the solutions of (i-1,j) and (i,j-1) are prerequisites for the solution of (i,j), this filling order is called bottom-up.
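As a worked example, filling the table for the 3×3 cost matrix used in the implementations here ({1,3,4; 5,3,2; 3,4,5}) gives:

    Cost            MinCost
    1  3  4         1   4   8
    5  3  2         6   7   9
    3  4  5         9  11  14

The bottom-right entry, 14, is the minimum cost to reach the destination cell (2,2).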

## Minimum cost path : Dynamic programming implementation

```c
#include <stdio.h>

int min(int a, int b) {
    return (a > b) ? b : a;
}

int findMinimumCostPath(int Cost[3][3], int M, int N) {
    /* MinCost[i][j] stores the minimum cost to reach cell (i, j) */
    int MinCost[M][N];
    MinCost[0][0] = Cost[0][0];

    /* first row: reachable only from the left */
    for (int j = 1; j < N; j++)
        MinCost[0][j] = MinCost[0][j - 1] + Cost[0][j];

    /* first column: reachable only from the top */
    for (int i = 1; i < M; i++)
        MinCost[i][0] = MinCost[i - 1][0] + Cost[i][0];

    /* remaining cells: minimum of top and left neighbors plus own cost */
    for (int i = 1; i < M; i++)
        for (int j = 1; j < N; j++)
            MinCost[i][j] = min(MinCost[i - 1][j], MinCost[i][j - 1]) + Cost[i][j];

    return MinCost[M - 1][N - 1];
}

int main(void) {
    int M = 3, N = 3;
    int Cost[3][3] = { {1, 3, 4},
                       {5, 3, 2},
                       {3, 4, 5} };
    printf("Minimum cost of path : %d\n", findMinimumCostPath(Cost, M, N));
    return 0;
}
```

The complexity of the dynamic programming approach to find the minimum cost path in a grid is O(M×N), with additional space complexity of O(M×N); for a square n×n matrix, both are O(n²).

We can extend this problem by actually finding the path that leads to the destination. The solution is simple: start from the destination cell, as it will be part of the final path anyway, and keep moving to the cell on the left or the cell above, whichever has the smaller cost, until you reach the origin cell.

One more variant of this problem adds the flexibility to move diagonally as well as right and down. Nothing changes in the solution except taking the minimum of three cells (left, top, and diagonal) instead of two.

Please share if there is something wrong or missing. If you want to contribute, please write to us