Given a positive integer N, find the count of distinct binary strings of length N that have no consecutive 1’s.
For example,

Input:
N = 2
Output:
3
Explanation:
There are 3 possible strings: 00, 01, 10. Similarly, for N = 3 there are 5 possible strings: 000, 001, 010, 100, 101.

Thought process to find binary strings with no consecutive 1's

This problem is an easier variation of the digit DP problem. Since these are binary strings, for every position in the string there are just two choices: 0 and 1. To form a string of length N, at any position:

(a) We can choose 0, and then for the next position we again have two choices.

(b) We can choose 1, but then for the next position we cannot choose 1, as we don't want consecutive 1's in the string. So once we choose 1, we are also setting the next position to 0.

So in case (a), we set 0 at the current position and the problem reduces to counting the number of strings of length N-1 with the given condition.

And in case (b), we set 1 at the current position and 0 at the next, hence the problem reduces to counting the number of strings of length N-2 with the given condition.

With this we can write

Count(n) = Count(n-1) + Count(n-2)

Does this formula ring a bell? Yes, it's the same one that is used to find Fibonacci numbers.

#include <iostream>
#include <vector>
using namespace std;

long long int fib(int N)
{
    vector<long long int> DPVec(N + 1, 0);
    DPVec[1] = 1; DPVec[2] = 1;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i - 1] + DPVec[i - 2];
    }
    return DPVec[N];
}

long long int Cnt_strings(int N)
{
    return fib(N + 2);
}

int main()
{
    int n = 3;
    cout << Cnt_strings(n) << endl;
    return 0;
}

public class Count_Strings
{
    static int fib(int N)
    {
        int DPArr[] = new int[N + 1];
        DPArr[1] = 1; DPArr[2] = 1;
        for (int i = 3; i <= N; ++i)
        {
            DPArr[i] = DPArr[i - 1] + DPArr[i - 2];
        }
        return DPArr[N];
    }

    static int Cnt_strings(int N)
    {
        return fib(N + 2);
    }

    public static void main(String[] args)
    {
        int n = 4;
        int num_strings = Cnt_strings(n);
        System.out.println(num_strings);
    }
}

The time complexity of the implementation is O(n) and the space complexity is also O(n).
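Since the recurrence only ever reads the last two values, the O(n) array can be dropped for O(1) space. A minimal sketch of that variant (the class and method names here are my own):

```java
class CountStringsO1 {
    // Count binary strings of length n with no consecutive 1's,
    // using count(n) = fib(n + 2) and keeping only the last two values.
    static long countStrings(int n) {
        long prev = 1, curr = 1; // fib(1), fib(2)
        for (int i = 3; i <= n + 2; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(countStrings(2)); // 3
        System.out.println(countStrings(3)); // 5
    }
}
```

The time complexity stays O(n), but the space complexity drops to O(1).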

Given a 2D binary matrix filled with 0’s and 1’s, find the largest square containing only 1’s and return its area. This problem is known as maximal square area problem. For example,

What is the basic condition for a square? Its length and breadth must be equal. For any cell to be included in a square, the square has to extend from three sides: lengthwise, breadthwise, and diagonally.
So, how do we know if a cell will increase the size of the square?
We can recursively find the longest side of the square ending at each cell (i, j) and keep track of the global maximum while computing the side at each cell.

Show me the implementation

class Solution {
    int max = 0;

    public int maximalSquare(char[][] matrix) {
        int rows = matrix.length;
        if (rows == 0) return 0;
        int cols = matrix[0].length;
        for (int i = rows - 1; i >= 0; i--)
            for (int j = cols - 1; j >= 0; j--)
                max = Math.max(max, maxSquareUtil(matrix, i, j));
        return max * max;
    }

    private int maxSquareUtil(char[][] matrix, int i, int j) {
        // Base case: first row or column, side is the cell value itself
        if (i == 0 || j == 0) {
            return matrix[i][j] - '0';
        }
        int c = (matrix[i][j] == '1')
            ? Math.min(maxSquareUtil(matrix, i - 1, j),
              Math.min(maxSquareUtil(matrix, i, j - 1),
                       maxSquareUtil(matrix, i - 1, j - 1))) + 1
            : 0;
        return c;
    }
}

It is quite obvious from the solution that the optimal solution of a subproblem leads to the optimal solution of the original problem. However, let’s look at the execution tree of the code.
Now, the two conditions necessary for applying dynamic programming are present: optimal subproblem solution leads to the optimal solution to a bigger problem and there are overlapping subproblems.

What can we do to avoid recalculating solutions to subproblems again and again? We can save them in a cache. Whenever we want to solve a subproblem, we first check the cache; if the solution is present, we reuse it instead of solving it again. If not present, we calculate it and store it.

Top-down approach

class Solution {
    int max = 0;

    public int maximalSquare(char[][] matrix) {
        int rows = matrix.length;
        if (rows == 0) return 0;
        int cols = matrix[0].length;
        int[][] cache = new int[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++)
                cache[i][j] = -1;
        }
        for (int i = rows - 1; i >= 0; i--)
            for (int j = cols - 1; j >= 0; j--)
                max = Math.max(max, maxSquareTopDown(matrix, cache, i, j));
        return max * max;
    }

    private int maxSquareTopDown(char[][] matrix, int[][] cache, int i, int j) {
        if (i == 0 || j == 0 || matrix[i][j] == '0') {
            cache[i][j] = matrix[i][j] - '0';
            return cache[i][j];
        }
        if (cache[i][j] != -1) {
            return cache[i][j];
        }
        cache[i][j] = Math.min(
            maxSquareTopDown(matrix, cache, i, j - 1),
            Math.min(maxSquareTopDown(matrix, cache, i - 1, j - 1),
                     maxSquareTopDown(matrix, cache, i - 1, j))) + 1;
        return cache[i][j];
    }
}

The time complexity of the code is O(n * m) with additional space complexity of O(n * m).

Bottom-up approach

We can think of the problem from the bottom-up view. What will be the size of a square ending at each cell? If the cell is 1, the side is at least 1, and if the cell is 0, it is zero.

Construct an array dp[][] for the given matrix[][].

Copy the first row and first column as they are from matrix[][] to dp[][].

For the other entries, use the following expression to construct dp[][]: if matrix[i][j] is '1', then dp[i][j] = min(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) + 1, otherwise dp[i][j] = 0.
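Putting those steps together, a bottom-up sketch of the maximal square solution (class name is my own; the dp construction follows the description above):

```java
class MaximalSquareBottomUp {
    // dp[i][j] = side of the largest all-1 square whose bottom-right corner is (i, j)
    static int maximalSquare(char[][] matrix) {
        int rows = matrix.length;
        if (rows == 0) return 0;
        int cols = matrix[0].length;
        int[][] dp = new int[rows][cols];
        int best = 0;
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                if (matrix[i][j] == '1') {
                    if (i == 0 || j == 0) {
                        dp[i][j] = 1; // first row/column copied from the matrix
                    } else {
                        dp[i][j] = Math.min(dp[i - 1][j],
                                   Math.min(dp[i][j - 1], dp[i - 1][j - 1])) + 1;
                    }
                    best = Math.max(best, dp[i][j]);
                }
            }
        }
        return best * best; // area of the largest square
    }

    public static void main(String[] args) {
        char[][] m = {
            {'1', '0', '1', '0', '0'},
            {'1', '0', '1', '1', '1'},
            {'1', '1', '1', '1', '1'},
            {'1', '0', '0', '1', '0'}
        };
        System.out.println(maximalSquare(m)); // 4
    }
}
```

The time and space complexity are both O(n * m), the same as the top-down version, but without recursion overhead.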

Given an array of integers, write a function that returns the largest sum of non-adjacent numbers. Numbers can be 0 or negative. For example,

Input:
[2, 4, 6, 2, 5]
Output:
13
Explanation:
Since we pick 2, 6, and 5.
Input:
[5, 1, 1, 5]
Output:
10
Explanation:
Since we pick 5 and 5.

Thought process

This problem is very similar to the coin change problem, where for each coin we make a decision, whether to include or exclude a coin in the change or not.

In this problem as well, we make a choice for each number: what if we include the number at index i in the sum, and what if we do not? If we include a[i] in the sum, which may eventually be the maximum sum, what can we do with the remaining numbers? We definitely cannot include a[i+1], due to the constraint of non-adjacent numbers, so the problem reduces to finding the maximum sum of non-adjacent numbers from index i+2 to a.length-1.

What if we do not include a[i] in the sum? In that case, we can choose a[i+1], so the problem reduces to finding the largest sum of non-adjacent numbers from index i+1 to a.length-1.

We do not know which choice (to include or exclude a[i]) will give us the largest sum, so we try both and take the maximum of both.

Recursive implementation

public int sum(int[] a){
    return sumUtil(a, 0);
}

private int sumUtil(int[] a, int index){
    if(index > a.length - 1){
        return 0;
    }
    return Math.max(a[index] + sumUtil(a, index + 2),
                    sumUtil(a, index + 1));
}

For each number we take two choices and follow both, so the overall complexity of the above implementation is O(2^{n}), where n is the length of the array.
Let’s see the execution tree of the recursive implementation with one of the examples, it looks like this:

It is evident from the execution tree that there are many subproblems colored red, blue, and light blue as groups, which are solved again and again. This is called overlapping subproblems and is a necessary condition to think in dynamic programming terms. We already know that an optimal solution to subproblem leads to an optimal solution to the original problem, hence, we can apply the dynamic programming approach here.

The best way to avoid calculating subproblems again and again is to memoize what is already calculated, so let's modify the code to use a cache; this is called the top-down approach.

Top down implementation

public int sum(int[] a){
    int[] cache = new int[a.length];
    return sumUtil(a, 0, cache);
}

private int sumUtil(int[] a, int index, int[] cache){
    if(index > a.length - 1){
        return 0;
    }
    if(cache[index] > 0){
        return cache[index];
    }
    cache[index] = Math.max(a[index] + sumUtil(a, index + 2, cache),
                            sumUtil(a, index + 1, cache));
    return cache[index];
}

There will be at most n distinct calls to the sumUtil() function, so the time complexity reduces to O(n), along with a space complexity of O(n).

How can we implement the bottom-up solution for this problem? Define a 1D array dp[], where dp[i] represents the maximum sum that can be achieved up to index i of the array. To include a[i] in that sum, we look at the maximum sum achievable up to index i-2, i.e. dp[i-2]. If we exclude index i, we get the maximum sum up to index i-1, i.e. dp[i-1]. We take whichever is larger.
The recurrence relation is as follows.

dp[i] = max(dp[i-2] + a[i], dp[i-1]);

Bottom up implementation

private int sumDP(int[] a){
    if(a.length == 0) return 0;
    if(a.length == 1) return a[0];
    if(a.length == 2) return Math.max(a[0], a[1]);
    int[] dp = new int[a.length];
    dp[0] = a[0];
    dp[1] = Math.max(a[0], a[1]);
    int max = 0;
    for(int i = 2; i < a.length; i++){
        dp[i] = Math.max(a[i] + dp[i - 2], dp[i - 1]);
        max = Math.max(max, dp[i]);
    }
    return max;
}

The time complexity of the bottom-up approach is also O(n), along with a space complexity of O(n).
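As with the Fibonacci-style problem earlier, dp[i] only ever reads dp[i-1] and dp[i-2], so two rolling variables suffice and the array can be dropped. A sketch of that constant-space variant (class and method names are my own):

```java
class NonAdjacentSum {
    // prev2 holds dp[i-2], prev1 holds dp[i-1]; both start at 0,
    // matching the recursive version where picking nothing yields 0.
    static int maxSum(int[] a) {
        int prev2 = 0, prev1 = 0;
        for (int x : a) {
            int curr = Math.max(prev2 + x, prev1); // include x or skip it
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }

    public static void main(String[] args) {
        System.out.println(maxSum(new int[]{2, 4, 6, 2, 5})); // 13
        System.out.println(maxSum(new int[]{5, 1, 1, 5}));    // 10
    }
}
```

This keeps time complexity O(n) but reduces space to O(1).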

Given a rod of length ‘n’ units and list of prices of pieces of lengths ‘1’ to ‘n’, the problem is to determine the maximum value that can be obtained by cutting the rod and selling the pieces.
For example,

Input:
Rod is of length 4 and the list of prices is:
Piece length: 1  2  3  4
Price:        2  5  7  8
Output:
10
Explanation:
The maximum value that can be obtained in this case is ‘10’ by cutting the rod in pieces of length {2, 2}.

Thought process

One might get tempted to use a Greedy approach for problems where we need to find a minimum or maximum value, but remember that a Greedy approach has to come with a proof of correctness, and it takes just a single counterexample to show that it fails. Hence, in general, such problems are solved using DP.
Solution approach:
To find the maximum value using the price list and the length of the rod, instead of considering only the extremes (as we do in Greedy), we consider all possible cuts. In a Greedy approach one might cut the rod only into pieces with the highest available price in order to obtain the maximum value, but this approach fails for the example above.

Hence, to find the maximum value we will consider cutting the rod into pieces of all possible lengths. For a rod of length 4, if get_max_val(price_list, rod_length = 4) gives the maximum value, then:
We’ll first consider making a cut at length 1: This gives us a piece of length 1 and we can sell this piece to get value 2.
Now we are left with the remaining rod of length 4 – 1 = 3. The problem now reduces to finding the maximum value that can be obtained from a rod of length 3 with the same input price list. The total value that can be obtained in this configuration will be:

2 + get_max_val(price_list, rod_length = 3).

Similarly, we can make a cut at length 2: this gives us a piece of length 2 and we can sell this piece to get value 5.

Now we are left with the remaining rod of length 4 – 2 = 2. The problem now reduces to finding the maximum value that can be obtained from a rod of length 2 with the same input price list. The total value that can be obtained in this configuration will be:

5 + get_max_val(price_list, rod_length = 2).

Also, we can make a cut at length 3: This gives us a piece of length 3 and we can sell this piece to get value 7.

Now we are left with the remaining rod of length 4 – 3 = 1. The problem now reduces to finding the maximum value that can be obtained from the rod of length 1 with the same input price list. The total value that can be obtained in this configuration will be:

7 + get_max_val(price_list, rod_length = 1).

We can also sell the complete rod of length 4 as it is, which will fetch us a total value of 8.

For each sub-problem, we’ll take a similar approach of cutting the current piece of a rod into pieces of length ranging from ‘1’ to ‘current rod length’. We keep cutting the rod and solving the resulting sub-problems till the length of the rod reaches zero. While trying out these different configurations we will keep a track of the maximum of the total value of each configuration.

The approach discussed above shows that this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).

Let’s try to implement this using recursion:
C++:

#include <iostream>
#include <vector>
using namespace std;

int get_max_value(vector<int>& price_list, int rod_length)
{
    if (rod_length <= 0)
        return 0;
    int max_value = 0;
    for (int i = 0; i < rod_length; i++)
    {
        max_value = max(max_value,
                        price_list[i]
                        + get_max_value(price_list, rod_length - i - 1));
    }
    return max_value;
}

int main(void)
{
    vector<int> price_list{2, 5, 7, 8};
    int rod_length = 4;
    cout << get_max_value(price_list, rod_length) << endl;
    return 0;
}

Java:

import java.io.*;

class rod_cutting
{
    static int max(int a, int b)
    {
        return (a < b) ? b : a;
    }

    static int get_max_value(int price_list[], int rod_length)
    {
        if (rod_length <= 0)
            return 0;
        int max_value = 0;
        for (int i = 0; i < rod_length; ++i)
        {
            max_value = max(max_value,
                            price_list[i]
                            + get_max_value(price_list, rod_length - i - 1));
        }
        return max_value;
    }

    public static void main(String args[])
    {
        int price_list[] = new int[] {2, 5, 7, 8};
        int rod_length = price_list.length;
        System.out.println(get_max_value(price_list, rod_length));
    }
}

But this recursive implementation has exponential time complexity. Let’s have a look at the function call tree to understand the reason behind this:

In this example of finding max value for a rod of length = 4 and a list of prices for its pieces, subproblems such as get_max_value(price_list, 2) and get_max_value(price_list, 1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems.
As the two properties required for using Dynamic Programming, optimal substructure and overlapping subproblems, hold, we can use DP for this problem. Unlike plain recursion, the tabulated DP solution works bottom-up.

Dynamic Programming Approach

In DP we start calculating from the bottom and move up towards the final solution. The solutions of all subproblems are stored in a 2D array / DP table so that they can be reused when required.

#include <iostream>
#include <vector>
using namespace std;

int get_max_value(vector<int>& price_list, int rod_length)
{
    // dp_table[i][j]: max value for a rod of length j using piece lengths 1..i
    vector<vector<int>> dp_table(rod_length + 1, vector<int>(rod_length + 1, 0));
    for (int i = 1; i <= rod_length; i++)
    {
        for (int j = 1; j <= rod_length; j++)
        {
            if (j >= i)
            {
                dp_table[i][j] = max(dp_table[i - 1][j],
                                     price_list[i - 1] + dp_table[i][j - i]);
            }
            else
            {
                dp_table[i][j] = dp_table[i - 1][j];
            }
        }
    }
    return dp_table[rod_length][rod_length];
}

int main(void)
{
    vector<int> price_list{2, 5, 7, 8};
    int rod_length = 4; // price_list has prices for lengths 1..4
    cout << get_max_value(price_list, rod_length) << endl;
    return 0;
}

The time complexity of the solution is O(n^{2}), along with a space complexity of O(n^{2}), where n is the rod length.

The space complexity of the DP code can be reduced by not storing the solution of every subproblem separately, so that instead of a 2D array we can work with a 1D array.

With a 2D array, we were storing solutions for subproblems that have a particular rod length and a subset of the given price_list. Instead, for a particular rod length we can compare the total values obtained from all subproblems and store only the maximum, so a 1D array suffices.
Note, however, that the question of which cuts were actually made to get the maximum value cannot be answered with the 1D solution. With the 2D array, we can traverse back starting from dp_table[rod_length][rod_length] and check whether the value in the current cell was calculated from the same row; if so, the current row number is one of the piece lengths that were cut to get the maximum value.
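A sketch of that traceback, assuming the same 2D dp_table layout as the C++ version above (row = largest piece length considered, column = rod length; the class and method names are my own):

```java
import java.util.ArrayList;
import java.util.List;

class RodCutTrace {
    // Build the 2D table, then walk it back to recover which pieces were cut.
    static List<Integer> bestCuts(int[] priceList, int rodLength) {
        int[][] dp = new int[rodLength + 1][rodLength + 1];
        for (int i = 1; i <= rodLength; i++)
            for (int j = 1; j <= rodLength; j++)
                dp[i][j] = (j >= i)
                    ? Math.max(dp[i - 1][j], priceList[i - 1] + dp[i][j - i])
                    : dp[i - 1][j];

        // Traceback: a value inherited from the row above means piece i was
        // not used; a same-row transition means a piece of length i was cut.
        List<Integer> cuts = new ArrayList<>();
        int i = rodLength, j = rodLength;
        while (i > 0 && j > 0) {
            if (dp[i][j] == dp[i - 1][j]) {
                i--;               // piece of length i not used
            } else {
                cuts.add(i);       // cut a piece of length i
                j -= i;            // remaining rod shrinks by i
            }
        }
        return cuts;
    }

    public static void main(String[] args) {
        // prices for lengths 1..4: {2, 5, 7, 8} -> best value 10 via cuts {2, 2}
        System.out.println(bestCuts(new int[]{2, 5, 7, 8}, 4)); // prints [2, 2]
    }
}
```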

import java.io.*;

class rod_cutting
{
    static int max(int a, int b)
    {
        return (a < b) ? b : a;
    }

    static int get_max_value(int price_list[], int rod_length)
    {
        int dp_table[] = new int[rod_length + 1];
        dp_table[0] = 0;
        for (int i = 1; i <= rod_length; ++i)
        {
            int max_value = 0;
            for (int j = 0; j < i; ++j)
            {
                max_value = max(max_value,
                                price_list[j] + dp_table[i - j - 1]);
            }
            dp_table[i] = max_value;
        }
        return dp_table[rod_length];
    }

    public static void main(String args[])
    {
        int price_list[] = new int[] {2, 5, 7, 8};
        int rod_length = price_list.length;
        System.out.println(get_max_value(price_list, rod_length));
    }
}

The time complexity remains the same, O(n^{2}), but the space complexity reduces to O(n), where n is the rod length.

There are a number of envelopes with widths and heights given as pairs of integers (w, h). An envelope can fit into another if and only if both its width and height are smaller than those of the other envelope. Given the list of envelope dimensions, find the maximum number of envelopes you can Russian doll (put one inside another). For example:

Input: [[5,4],[6,4],[6,7],[2,3]]
Output: 3
Explanation: The maximum number of envelopes you can Russian doll is 3 ([2,3] => [5,4] => [6,7]).

How should we go about the solution? One way is to generate all subsets of the envelopes and check for every subset whether its envelopes fit into one another. This solution has exponential time complexity. Can we do better?

Let's look at the conditions again: an envelope can fit into another only if both its width and height are smaller. Let's simplify this: an envelope cannot fit into another if its width is larger. What if we sort the envelopes by width? Then we know the i-th envelope can only contain envelopes at indices 0 to i-1, as far as the heights permit.

Once we order the envelopes by width, our problem reduces to one dimension: height. If we want to find how many envelopes fit into each other up to envelope i, we look for envelopes from 0 to i-1 with strictly increasing heights. Why? Because of the sorted order, an envelope at index j can only go inside a later envelope k > j, so a valid chain of envelopes is exactly an increasing subsequence of heights.

To find the maximum number, all we have to do is find the longest increasing subsequence of envelope heights.

One thing to take care of: what if the widths of two envelopes are equal? In that case they will not fit into each other even if the heights are increasing. To handle this, when widths are equal we put the envelope with the bigger height first, so that the two cannot both be part of an increasing subsequence.

Show me the implementation

import java.util.Arrays;

class Solution {
    public int maxEnvelopes(int[][] envelopes) {
        if (envelopes == null || envelopes.length == 0) return 0;
        /* Sort the array based on width in increasing order;
           if the widths are equal, sort based on height
           in decreasing order. */
        Arrays.sort(envelopes, (a, b) -> a[0] == b[0]
                    ? b[1] - a[1] : a[0] - b[0]);
        int n = envelopes.length;
        int[] dp = new int[n];
        int len = 0;
        // Longest increasing subsequence of heights via binary search
        for (int[] envelope : envelopes) {
            int index = Arrays.binarySearch(dp, 0, len, envelope[1]);
            if (index < 0)
                index = -(index + 1);
            dp[index] = envelope[1];
            if (index == len)
                len++;
        }
        return len;
    }
}

The time complexity of the implementation is O(n log n): we do n binary searches over at most n numbers. The space complexity is O(n).

You can solve another problem with the same approach: the box stacking problem, where one box can be put on another only if the area of its base is smaller than that of the other.

Find the count of n-digit numbers whose sum of digits is equal to given sum s.
For example,

Input:
n = 2 and s = 5
Output:
5
Explanation:
The 2-digit numbers whose digits sum to 5 are 14, 23, 32, 41, and 50. Note that numbers with leading zeroes are not considered n-digit numbers, i.e. we do not count 05 as a 2-digit number.

Sum of n-digit numbers thought process

A brute force solution that first comes to mind is to consider each n-digit number, calculate the sum of its digits, and count the ones where the sum equals s. But this solution has exponential time complexity; we need a better one. Let's consider a few examples, denoting by cnt(n, s) the count of n-digit numbers whose digits sum to s.

If we look closely at the above example, we can see that the numbers in the case of (n=3, sum=5) are generated from the numbers in the cases of (n=2, sum=4), (n=2, sum=3), (n=2, sum=2), (n=2, sum=1) and (n=2, sum=0) by fixing the most significant digit. As stated in the question, leading zeroes are not allowed, hence for the most significant digit the available choices range from 1 to 9 or the given sum (whichever is less):

1 _ _ 2 _ _ 3 _ _ 4 _ _ 5 _ _

After fixing one digit we have just 2 (n-1, i.e. 3-1 = 2) digits left, and the sum reduces by the digit value i, where i ranges from 1 to 9.

Let's consider the case of (1 _ _): after fixing the most significant digit to 1, the problem reduces to the remaining n - 1 = 3 - 1 = 2 digits with reduced sum s - 1 = 5 - 1 = 4.
But note that when fixing any digit other than the most significant one, 0 can also be used as a digit, so the sub-problem is instead denoted by cntWithZero(n=2, s=4). The second digit can similarly be fixed, but this time 0 is also a candidate:

1 0 _ 1 1 _ 1 2 _ 1 3 _ 1 4 _

After fixing 2 digits, the problem now reduces to 1-digit numbers and reduced sum (s-i) where i ranges from 0 to 9 or last reduced sum (whichever is less).

In this way, for any n-digit number we can keep fixing one digit at a time and keep solving the sub-problems, until all n digits are fixed, or until s reaches zero before all digits are fixed (in case s was small).
As observed, the numbers for cnt(n=3, s=5) are generated by fixing the most significant digit; to find the solution for cnt(n=3, s=5) from its sub-problems:

Considering this observation, we can say that cnt(n, s) can be calculated from sub-problems of the form cntWithZero(n-1, s-i):

cnt(n, s) = SUM(cntWithZero(n-1, s-i)), where 1 <= i <= min(9, s)
cntWithZero(n, s) = SUM(cntWithZero(n-1, s-i)), where 0 <= i <= min(9, s)

This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems), which is the first condition for application of dynamic programming.

Recursive implementation C++

#include <cstdio>

long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        if (sum == 0)
            return 1;
        return 0;
    }
    if (sum == 0)
        return 1;
    long long int ans = 0;
    for (int i = 0; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

long long int cnt(int n, int sum)
{
    long long int ans = 0;
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Recursive implementation Java

import java.io.*;

class digit_sum
{
    static int cntWithZero(int n, int sum)
    {
        if (n == 0)
        {
            if (sum == 0)
                return 1;
            return 0;
        }
        if (sum == 0)
            return 1;
        int ans = 0;
        for (int i = 0; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    static int cnt(int n, int sum)
    {
        int ans = 0;
        for (int i = 1; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    public static void main(String args[])
    {
        int n = 3, s = 5;
        System.out.println(cnt(n, s));
    }
}

This implementation has exponential time complexity. Let’s have a look at the recursive function call tree to find the reason behind this:

In this example of cnt(n=3, s=5), subproblems such as cntWithZero(n=1, s=2) and cntWithZero(n=1, s=1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems, the second condition to apply dynamic programming.

As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. But before jumping to DP solution, there’s another way to resolve the issue of overlapping subproblems in a recursive approach: Memoized approach.

Memoized approach C++

#include <cstdio>
#include <cstring>

long long int memoized_table[101][101];

long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        if (sum == 0)
            return 1;
        return 0;
    }
    if (memoized_table[n][sum] == -1)
    {
        long long int ans = 0;
        for (int i = 0; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        memoized_table[n][sum] = ans;
    }
    return memoized_table[n][sum];
}

long long int cnt(int n, int sum)
{
    long long int ans = 0;
    // -1 byte pattern marks every entry as uncomputed
    memset(memoized_table, -1, sizeof(memoized_table));
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Memoized approach Java

import java.io.*;

class digit_sum
{
    static int memoized_table[][] = new int[101][101];

    static int cntWithZero(int n, int sum)
    {
        if (n == 0)
        {
            if (sum == 0)
                return 1;
            return 0;
        }
        if (memoized_table[n][sum] == -1)
        {
            int ans = 0;
            for (int i = 0; i < 10; ++i)
            {
                if (sum - i >= 0)
                {
                    ans += cntWithZero(n - 1, sum - i);
                }
            }
            memoized_table[n][sum] = ans;
        }
        return memoized_table[n][sum];
    }

    static int cnt(int n, int sum)
    {
        int ans = 0;
        // mark all entries as uncomputed
        for (int i = 0; i <= 100; ++i)
        {
            for (int j = 0; j <= 100; ++j)
            {
                memoized_table[i][j] = -1;
            }
        }
        for (int i = 1; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    public static void main(String[] args)
    {
        int n = 3, s = 5;
        System.out.println(cnt(n, s));
    }
}

The time complexity of the above implementation is O(ns), along with a space complexity of O(ns).

Sum of digits Dynamic Programming Approach

We go bottom-up in the dynamic programming approach. We will use a 2D array / DP table in the implementation: start from the bottom, i.e. i = 0 and j = 0, and keep solving each sub-problem and storing its result in the DP table until we reach i = n and j = s.

Dynamic programming Java solution of sum of digits problem
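A sketch of that bottom-up table, using the same cnt/cntWithZero split as above (dp[k][j] counts k-digit sequences with leading zeroes allowed that sum to j; the final answer fixes the leading digit from 1 to 9; the class name is my own):

```java
class DigitSumDP {
    // dp[k][j]: count of k-digit sequences (0 allowed everywhere) with digit sum j
    static long cnt(int n, int s) {
        long[][] dp = new long[n][s + 1];
        dp[0][0] = 1; // the empty sequence sums to 0
        for (int k = 1; k < n; k++)
            for (int j = 0; j <= s; j++)
                for (int d = 0; d <= Math.min(9, j); d++)
                    dp[k][j] += dp[k - 1][j - d];
        // the most significant digit ranges 1..9 (no leading zero)
        long ans = 0;
        for (int d = 1; d <= Math.min(9, s); d++)
            ans += dp[n - 1][s - d];
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(cnt(2, 5)); // 5: the numbers 14, 23, 32, 41, 50
        System.out.println(cnt(3, 5)); // 15
    }
}
```

The time complexity is O(ns) (with a constant factor of 10 for the digit loop) and the space complexity is O(ns), matching the memoized version.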

In mathematics, Binomial Coefficients are the positive integers that occur as coefficients in the binomial theorem.

A binomial coefficient is the coefficient of the term x^{k} in the polynomial expansion of (1 + x)^{n}, indexed by a pair of integers n >= k >= 0.

In this post, we will denote it as bc(n, k).

For example: The fourth power of (1 + x) is –

(1 + x)^{4} = 1 + 4x + 6x^{2} + 4x^{3} + x^{4}
Writing this expression in terms of Binomial Coefficients -
(1 + x)^{4} = bc(4,0) * x^{0} + bc(4,1) * x^{1} + bc(4,2) * x^{2} + bc(4,3) * x^{3} + bc(4,4) * x^{4}
(1 + x)^{4} = 1 * x^{0} + 4 * x^{1} + 6 * x^{2} + 4 * x^{3} + 1 * x^{4}
The Binomial coefficient for n=4 and k=2 is bc(4, 2) = 6, the coefficient of (x^{2}) in the expansion of (1+x)^{4}.

Problem is – given values of n and k, find the binomial coefficient bc(n, k).

The binomial coefficients from these expansions can be arranged to form following triangle like structure called Pascal’s Triangle:

From Pascal’s triangle, we observe that an entry in any of the rows is addition of two entries from the row above –

bc(n, k) = bc(n-1, k-1) + bc(n-1, k)

This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems). Also, we can observe that the first and last binomial coefficients of each row are 1, i.e.:

bc(n, 0) = bc(n, n) = 1

Recursive Approach C++ implementation

#include <cstdio>

long long int findBinomialCoeff(int n, int k){
    if(k == 0 || n == k) return 1;
    return findBinomialCoeff(n-1, k-1) + findBinomialCoeff(n-1, k);
}

int main(void){
    int n = 4, k = 2;
    long long int binomialCoeff = findBinomialCoeff(n, k);
    printf("%lld", binomialCoeff);
    return 0;
}

The same can be implemented in Java as well.

Recursive Approach Java implementation

import java.util.*;

class BinomialCoefficient
{
    static int findBinomialCoeff(int n, int k){
        if(k == 0 || n == k) return 1;
        return findBinomialCoeff(n-1, k-1) + findBinomialCoeff(n-1, k);
    }

    public static void main(String[] args){
        int n = 4, k = 2;
        int binomialCoeff = findBinomialCoeff(n, k);
        System.out.printf("%d", binomialCoeff);
    }
}

The recursive code has exponential time complexity, let’s have a look at the reasoning behind this:

In this example of finding Binomial coefficient for n=5 and k=3, subproblems such as bc(3,2) and bc(2,2) are calculated again and again. Our recursive algorithm for this problem solves the same subproblem over and over. These are called overlapping subproblems.

As the two properties required for using Dynamic Programming : optimal substructure and overlapping subproblems are holding here, we can use DP for this problem.

In DP, we start calculating from the bottom and move up towards the final solution. Solutions of all subproblems are stored in a 2D array / DP table so that they can be reused when required.

Binomial coefficient with dynamic programming C++

long long int findBinomialCoeff(int n, int k){
    long long int dp[n+1][k+1];
    for(int i = 0; i <= n; ++i){
        for(int j = 0; j <= min(i, k); ++j){
            if (j == 0 || i == j) dp[i][j] = 1;
            else dp[i][j] = dp[i-1][j-1] + dp[i-1][j];
        }
    }
    return dp[n][k];
}

Java implementation is very similar.

Java implementation

import java.util.*;

class BinomialCoefficient{
    static int min(int a, int b){
        return (a < b) ? a : b;
    }

    static int findBinomialCoeff(int n, int k){
        int dp[][] = new int[n+1][k+1];
        for(int i = 0; i <= n; ++i){
            for(int j = 0; j <= min(i, k); ++j){
                if (j == 0 || i == j) dp[i][j] = 1;
                else dp[i][j] = dp[i-1][j-1] + dp[i-1][j];
            }
        }
        return dp[n][k];
    }
}

The time complexity of the dynamic programming approach to the binomial coefficient is O(nk), and the space complexity is O(nk).
The space complexity can be reduced by storing just the last row of Pascal's triangle, and only the first (k+1) elements of that row are required; with this approach, instead of a 2D array we can work with a 1D array.
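A sketch of that 1D reduction: process the rows of Pascal's triangle in place, updating from right to left so that dp[j-1] still holds the previous row's value when dp[j] is computed (the class name is my own):

```java
class BinomialCoeff1D {
    static long binomial(int n, int k) {
        long[] dp = new long[k + 1];
        dp[0] = 1; // bc(i, 0) = 1 for every row
        for (int i = 1; i <= n; i++)
            for (int j = Math.min(i, k); j >= 1; j--)
                dp[j] += dp[j - 1]; // bc(i, j) = bc(i-1, j) + bc(i-1, j-1)
        return dp[k];
    }

    public static void main(String[] args) {
        System.out.println(binomial(4, 2)); // 6
    }
}
```

The right-to-left update order is the key design choice: going left to right would overwrite dp[j-1] with the current row's value before dp[j] reads it.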

The egg dropping problem goes like this: given n eggs and f floors, find the minimum number of trials required to determine the highest floor from which it is safe to drop eggs (i.e., the eggs don't break), considering the worst-case scenario and a few conditions:

An egg can be reused only if it survives the fall

If an egg breaks when dropped from the x^{th} floor, it will also break from all floors above it (floors x+1 to f)

If an egg survives a fall from the x^{th} floor, it will also survive falls from all floors below it (floors 1 to x-1)

A couple of variations of this problem, where the number of eggs and the number of floors is fixed, are famous puzzles, such as the 2 eggs and 100 floors puzzle or the 2 eggs and 36 floors puzzle. Finding solutions to such puzzles takes a more mathematical approach.

Solution
Let’s say we have 1 egg and 4 floors and we need to find the minimum number of trials required to determine the highest floor from which it is safe to drop eggs. In this case, if we try to drop the egg from a random floor, it might break and we would not be able to conduct further trials to find the required floor.
Hence, in such a case we have to be careful: start from the 1st floor and keep checking each floor above; as soon as the egg breaks on a floor, we have found the required floor.

In this example, say we start from the 1st floor; even after reaching the 3rd floor (the (f-1)th floor) we still couldn't determine the required floor. Only after the trial on the 4th floor would we find the answer. Hence, in the worst case, we require 4 trials.

Now instead of 1 egg if we have 2 eggs and 4 floors, will it reduce the minimum number of trials required? As we have an extra egg we can afford to start from any floor. If we start from the 4th floor, the egg might break or it might not –

If it doesn’t break we can say that it won’t break on any of the lower floors and hence we were lucky enough to have found the answer in the first trial itself.

If the egg breaks, we now have 1 egg fewer, but we also know the 4^{th} floor is not the required floor, and hence we would perform further trials only on the floors below it.

But now since we have just 1 egg, we again have to be careful and fall back to the strategy of starting trials on the 1st floor and checking each floor one by one until the 3rd floor. In the worst case, we will check each floor from 1st to 3rd and thus need 3 trials for that.

So in total, we need (1 trial on the 4th floor) + (3 trials from the 1st to 3rd floor) = 4 trials if we start from the 4th floor. Note that the worse of the two possibilities here is 4, and we always consider the worst-case scenario.

But what if we choose some other floor?

If we start from the 3rd floor, the egg might break or it might not –

If it doesn’t break, we now have to perform trials on floors above the 3rd floor. In this case, there’s only one floor above: 4th floor and we have two eggs. When we have just one floor to check we just have to drop an egg from it and we can determine our answer from just that one trial.

So in total we will have to do 2 trials = (1 trial on 3rd floor) + (1 trial on floors above it i.e. 4th floor).

If the egg breaks, we now have 1 egg fewer, but we also know that the 3rd floor is not the required floor (and the floors above it would give the same outcome), and hence we would perform further trials only on the floors below it.

Now since we have just 1 egg, we again have to be careful and fall back to the strategy of starting trials on the 1st floor and checking each floor one by one until the 2nd floor. Again in the worst case, we will check each floor from 1st to 2nd and in this way, we will need 2 trials for that.

So in total, we need (1 trial on the 3rd floor) + (2 trials from the 1st to 2nd floor) = 3 trials if we start from the 3rd floor. Note that the worse of the two possibilities here is 3, and we always consider the worst-case scenario.

Similarly, we can start trials from the 2nd floor –

If the egg breaks on the 2nd floor, we just have to check the 1st floor. Hence total trials required are: 2

If the egg doesn’t break, then we have to check for the floors above that are 3rd and 4th floor, i.e. we have 2 eggs and 2 floors for trials.

Note that it does not matter which actual floors the trials happen on; after the current trial we only need to know in which direction (above or below the current floor) the next trials will be done and how many floors remain in that direction, so that we can count the number of required trials accordingly.

Using the same approach we can find that with 2 eggs and 2 floors we’ll need 2 trials in the worst case. Hence total trials required are: (1 trial on the 2nd floor) + (2 trials on the 2 floors above the 2nd floor) = 3 trials.

Also note in this case the worse of the two possibilities is 3 and we will consider the worst-case scenario always.

Similarly, when we start trials from the 1st floor –

If the egg breaks on the 1st floor, we don’t need more trials as we know there’s no safe floor. Hence total trials required are 1.

If the egg doesn’t break, then we have to check for the floors above that are 2nd, 3rd and 4th floor, i.e. we have 2 eggs and 3 floors for trials.

Using the same approach we can find that with 2 eggs and 3 floors we’ll need 2 trials in the worst case. Hence total trials required are: (1 trial on the 1st floor) + (2 trials on the 3 floors above the 1st floor) = 3 trials. The worse of the two possibilities is 3.

So we tried starting from each of the 4 floors: except for the 4th floor, every starting floor requires 3 trials in the worst case to find the required answer. So our answer is min_total_trials(2 eggs, 4 floors) = 3.

Considering the above simulation and our approach we can, in general, say that when we conduct a trial at the x^{th} floor, there are two possible outcomes:

The egg breaks or it does not break

In both cases, we have to conduct more trials: if the egg breaks at the x^{th} floor, we need to conduct trials on the floors below x, and if it does not break, on the floors above x. The only exceptions to the “conducting more trials” statement are:

When we are doing a trial on the 1st floor and the egg breaks, we don’t need more trials as we now know that there are no safe floors.

When we are doing a trial at the f^{th} floor (i.e. highest floor) and the egg doesn’t break, we don’t need more trials as we now know that all floors are safe.

And obviously, if we run out of eggs, no more trials can be conducted.

Note that while conducting a trial at any floor, we must consider the worst possible scenario: we cannot make assumptions about the outcome of a trial and have to take the worse of the two.
Also, from our approach in the example above, observe that the task is less about finding the highest safe floor itself and more about minimizing the number of trials: we focus on minimizing trials by simulating drops from different floors.

We’ll consider doing trials on all the floors, take the worst case for every floor, and find our final answer as the minimum over all floors. In this way we reduce the current problem to sub-problems, which shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of sub-problems).

Also, we can derive a formula for our finalized approach that will give us the minimum number of trials required to find the highest safe floor given n eggs and f floors:

min_total_trials(n, f) = 1 + MIN( MAX( min_total_trials(n-1, x-1), min_total_trials(n, f-x) ) for each x in {1…f} )

We conduct trials on each floor from 1 to f; for each floor we take the worse (MAX) of the two possible outcomes, and over all floors we take the minimum. The added 1 accounts for the trial conducted on the current floor itself, just as we added 1 at each step of the simulation above.

Using this formula and the exceptions for “conducting more trials” described above as base cases we write a recursive implementation:

#include <cstdio>
#include <climits>
#include <algorithm>
using namespace std;

int min_total_trials(int e, int f)
{
    /* for 0 floors no trials are required
       for 1 floor only 1 trial is required
       if only 1 egg is available then we have to be
       careful and perform trials on each of the f floors,
       hence f trials */
    if (f == 0 || f == 1 || e == 1) return f;
    int min_trials = INT_MAX, current_floor_min_trials;
    for (int current_floor = 1; current_floor <= f; ++current_floor)
    {
        current_floor_min_trials = 1 +
            max(min_total_trials(e - 1, current_floor - 1),
                min_total_trials(e, f - current_floor));
        min_trials = min(min_trials, current_floor_min_trials);
    }
    return min_trials;
}
int main(void)
{
    int e = 2, f = 4;
    int min_trials = min_total_trials(e, f);
    printf("%d", min_trials);
    return 0;
}

import java.io.*;

class MinTrials
{
    static int max(int a, int b) { return (a > b) ? a : b; }
    static int min(int a, int b) { return (a < b) ? a : b; }

    static int min_total_trials(int e, int f)
    {
        /* for 0 floors no trials are required
           for 1 floor only 1 trial is required
           if only 1 egg is available then we have to
           be careful and perform trials on each of
           the f floors, hence f trials */
        if (f == 0 || f == 1 || e == 1) return f;
        int min_trials = Integer.MAX_VALUE, current_floor_min_trials;
        for (int current_floor = 1; current_floor <= f; ++current_floor)
        {
            current_floor_min_trials = 1
                + max(min_total_trials(e - 1, current_floor - 1),
                      min_total_trials(e, f - current_floor));
            min_trials = min(min_trials, current_floor_min_trials);
        }
        return min_trials;
    }

    public static void main(String[] args)
    {
        int e = 2, f = 4;
        System.out.println(min_total_trials(e, f));
    }
}

This implementation takes more than 3 secs for e = 2 and f = 28, and the execution time increases exponentially with increasing inputs.
To find the reason behind such high runtime let’s have a look at the recursive function call tree:

As we can see, while calculating min_total_trials(2,4), sub-problems such as min_total_trials(2,2) are calculated repeatedly.
This means the recursive solution solves the same sub-problem over and over rather than always generating new sub-problems. These are called overlapping sub-problems.
As the two properties required for using Dynamic Programming: optimal substructure and overlapping sub-problems hold, we can use dynamic programming for this problem.
Let’s look at the DP solution for this problem: we’ll use a 2D array / DP table to store solutions to the sub-problems and reuse them when required. Moving bottom-up, we will find a solution for each of the sub-problems where –

e = 1 and f = 1, e = 1 and f = 2, e = 1 and f = 3,….. e = 1 and f = F
e = 2 and f = 1, e = 2 and f = 2, e = 2 and f = 3,….. e = 2 and f = F
.
.
.
e = E and f = 1, e = E and f = 2, e = E and f = 3,…… e = E and f = F

DP_table[E][F] will give the final answer for ‘E’ eggs and ‘F’ floors.

#include <cstdio>
#include <climits>
#include <algorithm>
#include <vector>
using namespace std;

int min_total_trials(int e, int f)
{
    vector<vector<int>> DP_table(e + 1, vector<int>(f + 1, 0));
    for (int i = 1; i <= e; ++i)
    {
        /* for 0 floors no trials are required */
        DP_table[i][0] = 0;
        /* for 1 floor only 1 trial is required */
        DP_table[i][1] = 1;
    }
    for (int i = 1; i <= f; ++i)
    {
        /* if only 1 egg is available then we have to
           be careful and perform trials on each of
           the f floors, hence f trials */
        DP_table[1][i] = i;
    }
    for (int i = 2; i <= e; ++i)
    {
        for (int j = 2; j <= f; ++j)
        {
            int min_trials = INT_MAX, current_floor_min_trials;
            for (int current_floor = 1; current_floor <= j; ++current_floor)
            {
                current_floor_min_trials = 1
                    + max(DP_table[i - 1][current_floor - 1],
                          DP_table[i][j - current_floor]);
                min_trials = min(min_trials, current_floor_min_trials);
            }
            DP_table[i][j] = min_trials;
        }
    }
    return DP_table[e][f];
}
int main(void)
{
    int e = 2, f = 100;
    int min_trials = min_total_trials(e, f);
    printf("%d", min_trials);
    return 0;
}

Java code:

import java.io.*;

class MinTrials
{
    static int max(int a, int b) { return (a > b) ? a : b; }
    static int min(int a, int b) { return (a < b) ? a : b; }

    static int min_total_trials(int e, int f)
    {
        int DP_table[][] = new int[e + 1][f + 1];
        for (int i = 1; i <= e; ++i)
        {
            DP_table[i][0] = 0;
            DP_table[i][1] = 1;
        }
        for (int i = 1; i <= f; ++i)
        {
            DP_table[1][i] = i;
        }
        for (int i = 2; i <= e; ++i)
        {
            for (int j = 2; j <= f; ++j)
            {
                int min_trials = Integer.MAX_VALUE, current_floor_min_trials;
                for (int current_floor = 1; current_floor <= j; ++current_floor)
                {
                    current_floor_min_trials = 1
                        + max(DP_table[i - 1][current_floor - 1],
                              DP_table[i][j - current_floor]);
                    min_trials = min(min_trials, current_floor_min_trials);
                }
                DP_table[i][j] = min_trials;
            }
        }
        return DP_table[e][f];
    }

    public static void main(String[] args)
    {
        int n = 2, f = 100;
        System.out.println(min_total_trials(n, f));
    }
}

The time complexity of the DP solution is O(n * f * f) and the space complexity is O(n * f), where n is the number of eggs and f is the number of floors.

Even though this works for small inputs, the above implementation times out when you run it at Leetcode. We need to change our thinking process: what if we do not think of floors and eggs together, but instead think in terms of moves and the number of eggs? How about we find, given m moves and k eggs, how many floors we can test?

If we have 1 egg and 1 move only, then we can test only 1 floor. Now suppose we have k eggs and m moves and we drop an egg from floor x. There are two possibilities: either the egg breaks or it survives. If the egg breaks, we are left with k-1 eggs and m-1 moves, so below floor x we can distinguish at most dp[m-1][k-1] floors. If it survives, we still have k eggs but m-1 moves, so above floor x we can test dp[m-1][k] more floors.
To see why the coverage adds up: suppose the first drop is from floor x and the egg survives. The same egg can make its second try from floor x + (x-1), because if it breaks there, the remaining egg still has x-2 opportunities to check floors x+1 to x+(x-2) one by one. If it survives again, the third try can start from floor x + (x-1) + (x-2), and so on. Hence the DP recurrence comes up as

dp[m][k] = dp[m-1][k-1] + 1 + dp[m-1][k]

Below is the implementation.

public int superEggDrop(int K, int N) {
    int[][] dp = new int[N + 1][K + 1];
    int m = 0;
    while (dp[m][K] < N) {
        ++m;
        for (int k = 1; k <= K; ++k)
            dp[m][k] = dp[m - 1][k - 1] + dp[m - 1][k] + 1;
    }
    return m;
}

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 2 steps at a time, in how many possible ways can we climb to the top? For example, let N = 3 i.e. there are 3 steps to climb. In this case, we can reach the top by:
– 1 step, 1 step, 1 step
– 1 step, 2 steps
– 2 steps, 1 step
So there are 3 possible ways to reach the top if we can take {1, 2} steps at a time.

How can we think of a solution? At the position we are currently standing (be it the start of the staircase or somewhere in the middle), we have two choices: climb 1 step or climb 2 steps. This also means that we reached our current position by climbing either 1 step or 2 steps, i.e. if we are at step x, we got here from either the (x-1)^{th} step or the (x-2)^{th} step. So to find in how many ways we can reach x, we need to know in how many ways we reached x-1 and x-2, and then add those counts. Let’s look at some examples to make this easier to understand.

Consider we are at step 0 (ground) and the staircase has just 1 step, so the number of ways to reach the top is:
1st way : [0🡪1] => take 1 step and we reach the top.

Consider the staircase has 2 steps, so the number of ways to reach the top is:
1st way: [0🡪1🡪2] => take 1 step at a time and climb one by one step to reach the top
2nd way: [0🡪2] => take 2 steps and reach the top

Consider the staircase has 3 steps, so the number of ways to reach the top is:
Now consider we are at the top, at step 3. According to the reasoning above, we can reach step 3 (x) either from step 2 (x-1) or from step 1 (x-2). Consider step 1 (x-2) first: we already know the number of ways to reach step 1, and that is 1 =>
1) [0🡪1]: using this we reach step 3 => [0🡪1🡪3].

Similarly, we already know the number of ways to reach step 2 and that are 2 =>
1) [0🡪1🡪2]: using this we get another way to reach step 3 => [0🡪1🡪2🡪3]
2) [0🡪2]: using this we get another way to reach step 3 => [0🡪2🡪3].

So finally we have our three ways to reach step 3:
[0🡪1🡪3], [0🡪1🡪2🡪3], and [0🡪2🡪3].

Notice that we just appended step 3 to the already known paths for step 1 and step 2.

Consider the staircase has 4 steps, so the number of ways to reach the top is:
number of ways to reach step 2 (x-2) + number of ways to reach step 3 (x-1) = 2 + 3 = 5
From step 2: [0🡪1🡪2🡪4], [0🡪2🡪4]
From step 3: [0🡪1🡪3🡪4], [0🡪1🡪2🡪3🡪4] and [0🡪2🡪3🡪4].

So from the examples we understand that: total_ways(n) = total_ways(n-1) + total_ways(n-2).

Note that this means this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).
This formula is the same as the one we used to find the N^{th} Fibonacci number. In fact, total_ways(n) = findNthFib(n+1).

Notice it is the (N+1)^{th} Fibonacci number and not the N^{th}; that’s because of the base cases from where we start our calculations. Consider the case where the staircase has 0 steps to climb and we start from the ground (i.e. step 0): in this case we argue that there’s only 1 way to reach the top, i.e. [0].
With this argument:
total_ways(0) = 1, total_ways(1) = 1…
findNthFib(0) = 0, findNthFib(1) = 1, findNthFib(2) = 1…
and hence the formula total_ways(n) = findNthFib(n+1).

There’s another variation of this problem where the number of steps that can be taken at a time is not limited to just {1, 2}.

Total possible ways to climb a staircase (variation 2)

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 2 or 3 steps at a time, in how many possible ways can we climb to the top?
For example, let n = 3 i.e. there are 3 steps to climb. In this case, we can reach the top by:
– 1 step, 1 step, 1 step
– 1 step, 2 steps
– 2 steps, 1 step
– 3 steps
So there are 4 possible ways to reach the top of a staircase of 3 steps if we can take {1, 2, 3} steps at a time

Similar to the approach taken above, we can reach our current position on the staircase by climbing either 1 step, 2 steps or 3 steps, i.e. if we are at step x, we got here from either the (x-1)^{th}, (x-2)^{th} or (x-3)^{th} step. So to find in how many ways we can reach x, we need to know in how many ways we reached x-1, x-2 and x-3; adding those counts gives the number of ways to reach x.

total_ways(n) = total_ways(n-1) + total_ways(n-2) + total_ways(n-3)
This formula maps to a recursive solution:

#include <cstdio>

long long int findTotalWays(int n)
{
    if (n == 0 || n == 1) return 1;
    if (n == 2) return 2;
    return findTotalWays(n-1) + findTotalWays(n-2) + findTotalWays(n-3);
}
int main(void)
{
    int N = 5;
    long long int total_ways = findTotalWays(N);
    printf("%lld", total_ways);
    return 0;
}

But this solution is not feasible as it has exponential time complexity. To find the reason behind this let’s look at the function call tree:

As we can see here, subproblems such as total_ways(3) and total_ways(2) are calculated repeatedly. This means that the subproblems are overlapping.
As the two properties required for using dynamic programming : optimal substructure and overlapping subproblems hold true, we can use DP for this problem.

#include <cstdio>
#include <vector>
using namespace std;

int main(void)
{
    int N = 5;
    vector<long long int> DPVec(N+1, 0);
    DPVec[0] = 1; DPVec[1] = 1; DPVec[2] = 2;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i-1] + DPVec[i-2] + DPVec[i-3];
    }
    printf("%lld", DPVec[N]);
    return 0;
}

The time and space complexity of the implementation is O(n).

Total possible ways to climb a staircase (variation 3)

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 3 or 5 steps at a time, in how many possible ways can we climb to the top? To generalize: we can be given any set of allowed step sizes, and we have to find the number of possible ways to climb to the top.

Keeping in mind the approach described above, the formula here is: total_ways(n) = total_ways(n-1) + total_ways(n-3) + total_ways(n-5). As a negative number of remaining steps does not make sense, we recur for non-negative values only.

#include <cstdio>
#include <vector>
using namespace std;

long long int findTotalWays(int n, vector<int> &Step_Set)
{
    if (n == 0) return 1;
    long long int total_ways = 0;
    for (size_t i = 0; i < Step_Set.size(); ++i)
    {
        if ((n - Step_Set[i]) >= 0)
            total_ways += findTotalWays(n - Step_Set[i], Step_Set);
    }
    return total_ways;
}
int main(void)
{
    int N = 5;
    vector<int> Step_Set{1, 3, 5};
    long long int total_ways = findTotalWays(N, Step_Set);
    printf("%lld", total_ways);
    return 0;
}

Dynamic programming solution for the problem

#include <cstdio>
#include <vector>
using namespace std;

int main(void)
{
    int N = 5;
    vector<int> Step_Set{1, 3, 5};
    vector<long long int> DPVec(N+1, 0);
    DPVec[0] = 1;
    for (int i = 1; i <= N; ++i)
    {
        for (size_t j = 0; j < Step_Set.size(); ++j)
        {
            if ((i - Step_Set[j]) >= 0)
            {
                DPVec[i] += DPVec[i - Step_Set[j]];
            }
        }
    }
    printf("%lld", DPVec[N]);
    return 0;
}

The time complexity is O(m*n) and the space complexity is O(m + n), where m is the size of the set of allowed steps and n is the number of steps in the staircase. These implementations work for any set of allowed steps.

Given a Fibonacci series: 1, 1, 2, 3, 5, 8, 13 … which is defined as fib(n) = fib(n-1) + fib(n-2), find N^{th} number in this series. For example, 5^{th} Fibonacci number is 5. 11^{th} Fibonacci number is 89.

By definition of the Fibonacci series, it is clear that every number in the series is a sum of the last two numbers in the series. So to find nth number we just need to know (n-1)^{th} and (n-2)^{th} number, once we find these two numbers just adding them will give us the answer.

fib(n) = fib(n-1) + fib(n-2)

But how do we find these numbers? We keep taking the same approach for (n-1)^{th} and (n-2)^{th} number, i.e.
fib(n-1) = fib(n-2) + fib(n-3) and
fib(n-2) = fib(n-3) + fib(n-4)
…

We stop only when we hit fib(1) = 1 and fib(2) = 1.
This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).

Recursive approach

The explanation/formula given above maps to simple recursion:

#include <cstdio>

long long int findNthFib(int n)
{
    if (n == 1 || n == 2) return 1;
    return findNthFib(n-1) + findNthFib(n-2);
}
int main(void)
{
    int N = 11;
    long long int NthFib = findNthFib(N);
    printf("%lld", NthFib);
    return 0;
}

The recursive code looks extremely simple. This is one of the advantages of recursion, it saves the efforts of writing lots of code.
Everything looks fine and we are happy with our solution until we try to find 40th or so Fibonacci number using this approach. This implementation takes over 3 secs to find 43rd Fibonacci number and the execution time increases exponentially with increasing inputs.
To find the reason behind such high runtime let’s have a look at the recursive function call tree:

In this example of finding the 6th Fibonacci number, subproblems such as fib(4) and fib(3) are calculated repeatedly. Imagine how many such repeated calculations there would be when we use this implementation to find the 43rd Fibonacci number! Hence the 3-second runtime.

Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems. As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. But before jumping to a dynamic programming solution, there’s another way to resolve the issue of overlapping subproblems in a recursive approach: Memoised approach. Let’s have a look at it first.

Memoised approach for Fibonacci series

To avoid repeated calculation in the recursive approach, we can store the base cases and the results of each fib() call in an array. The code would look like this:

#include <cstdio>
#include <vector>
using namespace std;

long long int findNthFib(int n, vector<long long int> &memo)
{
    if (memo[n] != 0)
        return memo[n];
    memo[n] = findNthFib(n-1, memo) + findNthFib(n-2, memo);
    return memo[n];
}
int main(void)
{
    int N = 43;
    vector<long long int> memo(N+1, 0);
    memo[1] = 1; memo[2] = 1;
    long long int NthFib = findNthFib(N, memo);
    printf("%lld", NthFib);
    return 0;
}

With the memoized approach the function call tree looks like this:

By memoizing intermediate results, we avoid repeated calculations. The time complexity of the memoized approach is O(n) and Space complexity is O(n). In both the approaches described above, observe that we took a top-down approach, i.e. we started from n and went down till 1.
Unlike recursion, Dynamic Programming uses a bottom-up approach, let’s see how it’s done in DP.

Dynamic Programming approach

In DP we start calculating from the bottom and move up towards the final solution. For this problem we first find 1st Fibonacci number, then 2nd, then 3rd and so on until N^{th} Fibonacci number. To aid this approach we use an array/vector where we will store the intermediate results while we move towards the final solution. The code looks like this:

#include <cstdio>
#include <vector>
using namespace std;

int main(void)
{
    int N = 43;
    vector<long long int> DPVec(N+1, 0);
    DPVec[1] = 1; DPVec[2] = 1;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i-1] + DPVec[i-2];
    }
    printf("%lld", DPVec[N]);
    return 0;
}

Time complexity: O(n) Space complexity: O(n)
The space complexity of the DP code can be reduced by storing just the last two results instead of every result and in this way array/vector is no more required.
Moreover, there are other solutions for finding the N^{th} Fibonacci number in O(log N) time using matrix exponentiation and in O(1) constant time using the golden ratio, but this article is limited to the DP approach.
