Given a positive integer N, find the count of distinct binary strings of length N that have no consecutive 1’s.
For example,

Input:
N = 2
Output:
3
Explanation:
There are 3 possible strings: 00, 01, 10.
For N = 3 there are 5 possible strings: 000, 001, 010, 100, 101.

Thought process to find binary strings with no consecutive 1’s

This problem is an easier variation of the digit DP problem. Since these are binary strings, for every position in the string there are just two choices: 0 and 1. To form a string of length N, at any position:

(a) We can choose 0, and then for the next position we again have two choices.

(b) We can choose 1, but then for the next position we cannot choose 1, as we don’t want consecutive 1’s in the string. So once we choose 1, we are also setting the next position to 0.

So in case (a), we set 0 at the current position, and the problem reduces to counting the number of strings of length N-1 under the given condition.

And in case (b), we set 1 at the current position and 0 at the next position, hence the problem reduces to counting the number of strings of length N-2 under the given condition.

With this we can write

Count(n) = Count(n-1) + Count(n-2)

Does this formula ring a bell? Yes, it’s the same one that is used to compute Fibonacci numbers.

#include <iostream>
#include <vector>
using namespace std;

long long int fib(int N)
{
    vector<long long int> DPVec(N+1, 0);
    DPVec[1] = 1; DPVec[2] = 1;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i-1] + DPVec[i-2];
    }
    return DPVec[N];
}

long long int Cnt_strings(int N)
{
    return fib(N+2);
}

int main()
{
    int n = 3;
    cout << Cnt_strings(n) << endl;
    return 0;
}

public class Count_Strings
{
    static int fib(int N)
    {
        int DPArr[] = new int[N+1];
        DPArr[1] = 1; DPArr[2] = 1;
        for (int i = 3; i <= N; ++i)
        {
            DPArr[i] = DPArr[i-1] + DPArr[i-2];
        }
        return DPArr[N];
    }

    static int Cnt_strings(int N)
    {
        return fib(N+2);
    }

    public static void main (String[] args)
    {
        int n = 4;
        int num_strings = Cnt_strings(n);
        System.out.println(num_strings);
    }
}

The time complexity of this implementation is O(n), and the space complexity is also O(n).

BSTs are used to organize a set of search keys for fast access: the tree maintains the keys in-order so that comparison with the query at any node either results in a match, or directs us to continue the search in left or right sub-tree.

For this problem we are given a set of search keys (0, 1, … n) along with the search frequency count (f0, f1, …. fn) of each key. The set of keys is sorted. A BST can be constructed from such a set of keys, so that keys can be searched quickly but there’s a cost associated with the search operation on BST. Searching cost for a key/node in the BST is defined as – level of that key/node multiplied by its frequency. Level of root node is 1. Total searching cost of the BST is the sum of searching cost of all the keys/nodes in the BST. Given a set of keys the problem is to arrange the keys in a BST that minimizes the total searching cost.

For example: Keys: {0 ,1} and Freq: {10, 20} Possible BSTs created from this set of keys are:

1) Total cost of BST = (level of key0 * freq of key0) +
(level of key1 * freq of key1)
= (1 * 10) + (2 * 20)
= 50
2) Total cost of BST = (level of key1 * freq of key1) +
(level of key0 * freq of key0)
= (1 * 20) + (2 * 10)
= 40

Hence, the minimum total searching cost for given set of keys is 40.

Thought Process:

As per the definition of the searching cost for a key in the BST, (level of key (L) * freq of key (F)), we can observe that from level 1 down to level L the key contributes F to the total cost at each level, and that’s why its searching cost is (L * F).

To minimize the total search cost, a simple Greedy approach comes to mind, where we try to keep keys with higher frequency near the top of the tree: we choose the key with the highest frequency as the root, then from the keys on its left we again choose the key with the highest frequency and make it the left child of the root, and similarly we choose the right child of the root from the keys on its right, and so on until the BST is built.

But will this approach build a BST that gives the minimum total search cost? To prove that a Greedy approach works, we have to give a proof. But to prove that a Greedy approach fails, we just have to give one example where it doesn’t work.

Let’s consider this example:

Keys:  0   1   2   3   4   5   6
Freq: 22  18  20   5  25   2   8

Using the Greedy approach discussed above, let’s build a BST and calculate its total cost. Key ‘4’ has the highest frequency (25), so it will be the root. Keys {0,…,3} will be used to build the left sub-tree and keys {5, 6} the right sub-tree. Among keys {0,…,3}, key ‘0’ has the highest frequency, hence it will be the left child of the root. Among keys {5, 6}, key ‘6’ has the highest frequency, hence it will be the right child of the root. We keep doing this for all the remaining keys, and the final BST would look like this:

If Level of Key(k) is Lk and Frequency of Key(k) is Fk, then -
Total cost of Greedy BST = (L4 * F4)+(L0 * F0)+(L6 * F6)+
(L2 * F2)+(L5 * F5)+(L1 * F1)+
(L3 * F3)
= (1 * 25)+(2 * 22)+(2 * 8) +
(3 * 20)+(3 * 2) +(4 * 18)+
(4 * 5)
= 243

But is there any other possible BST that has lower total searching cost? Let’s consider this BST:

This BST has lower total cost (215) than the BST created using Greedy approach (243). Hence the Greedy approach fails to solve this problem. But then how do we find the optimal BST?

Solution:

Let’s consider a given set of keys {Ki, …, Kj}, and let Min_Total_Cost(i, j) return the total searching cost of the optimal BST for this set. Say we have created the optimal BST for this set of keys and ‘Kr’ is its root, with i <= r <= j; the tree would look like this:

            Kr
           /  \
 Ki,…,Kr-1    Kr+1,…,Kj

The keys on the left of Kr in the given set will be part of left sub-tree and the keys on the right will be part of right sub-tree. If Total_Cost(i, j) gives the total searching cost for this BST, then it includes –

The searching cost of the root which is – level of root (1) * frequency of root key,

The total cost of the left sub-tree and the total cost of the right sub-tree (the sub-problems),

And, as explained earlier, making the keys on the left and right of Kr part of its sub-trees increases their path lengths by 1, so all these keys add that extra level’s cost to the total: every key that is yet to be included in the BST contributes a cost equal to its frequency at every level, hence at each level we add the sum of the frequencies of all such keys/nodes.

Total_Cost(i, j) = (Level of Kr * Freq of Kr)
                 + (Total searching cost of left sub-tree)
                 + (Total searching cost of right sub-tree)
                 + (Sum of frequencies of all keys in the left sub-tree)
                 + (Sum of frequencies of all keys in the right sub-tree)

                 = Total_Cost(i, r-1) + Total_Cost(r+1, j)
                 + (Sum of frequencies of all keys {Ki,…,Kj})

Since we do not know the key Kr, we have to try each key in the set as the root of the BST, keeping track of the minimum total searching cost as we calculate them. Using the formula above, we can write Min_Total_Cost(i, j) as:

Min_Total_Cost(i, j) = min over all r in {i,…,j} of
                       ( Min_Total_Cost(i, r-1)
                       + Min_Total_Cost(r+1, j)
                       + Sum of Fx for x in {i,…,j} )

If i > j, which is not a valid set of keys, Min_Total_Cost(i, j) = 0.

Also this shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).

Recursive Approach:

Using this we can write a recursive implementation:

C++:

#include <bits/stdc++.h>
using namespace std;

int Min_Total_Cost(int freq[], int i, int j)
{
    if (i > j)
        return 0;
    int min_total_cost = INT_MAX;
    for (int k = i; k <= j; ++k)
    {
        int total_cost = Min_Total_Cost(freq, i, k-1)
                       + Min_Total_Cost(freq, k+1, j)
                       + accumulate(freq+i, freq+j+1, 0);
        if (total_cost < min_total_cost)
            min_total_cost = total_cost;
    }
    return min_total_cost;
}

int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
{
    return Min_Total_Cost(freq, 0, num_keys-1);
}

int main()
{
    int keys[] = {0, 1, 2};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);
    cout << "Total cost of Optimal BST: "
         << getTotalCostOfOptimalBST(keys, freq, n) << endl;
    return 0;
}

Java:

import java.io.*;

class OptimalBST
{
    static int sum(int freq[], int left_idx, int right_idx)
    {
        int sum = 0;
        for (int i = left_idx; i <= right_idx; ++i)
        {
            sum += freq[i];
        }
        return sum;
    }

    static int Min_Total_Cost(int freq[], int i, int j)
    {
        if (i > j)
            return 0;
        int min_total_cost = Integer.MAX_VALUE;
        for (int k = i; k <= j; ++k)
        {
            int total_cost = Min_Total_Cost(freq, i, k-1)
                           + Min_Total_Cost(freq, k+1, j)
                           + sum(freq, i, j);
            if (total_cost < min_total_cost)
                min_total_cost = total_cost;
        }
        return min_total_cost;
    }

    static int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
    {
        return Min_Total_Cost(freq, 0, num_keys-1);
    }

    public static void main (String[] args)
    {
        int keys[] = {0, 1, 2};
        int freq[] = {34, 8, 50};
        int n = keys.length;
        System.out.println("Total cost of Optimal BST: " +
                           getTotalCostOfOptimalBST(keys, freq, n));
    }
}

But this implementation has exponential time complexity. To find the reason behind such high time complexity let’s have a look at the recursive function call tree:

In this example of a set consisting of 3 keys {0, 1, 2}, we can see that subproblems such as Min_Total_Cost(freq, 2, 2) and Min_Total_Cost(freq, 1, 1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems.

As the two properties required for using Dynamic Programming : ‘optimal substructure’ and ‘overlapping subproblems’ hold, we can use DP for this problem.

Dynamic Programming Solution:

In DP we start calculating from the bottom and move up towards the final solution. Hence we first solve the sub-problem {i=0, j=0}, then we skip all the sub-problems where (i > j), then next we solve {i=1, j=1}, and reuse solutions to these sub-problems to solve {i=0, j=1} and so on. Finally we solve the sub-problem {i=0, j=(n-1)} and this gives us the final answer.

Solutions to all subproblems are stored in a 2D array / DP table so that they can be reused when required.

C++:

#include <bits/stdc++.h>
using namespace std;

long long int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
{
    // Variable-length arrays are not standard C++, so use a vector instead.
    vector<vector<long long int>> DP_Table(num_keys,
                                           vector<long long int>(num_keys, 0));
    for (int j = 0; j < num_keys; ++j)
    {
        for (int i = j; i >= 0; --i)
        {
            long long int min_total_cost = LLONG_MAX;
            long long int sum_freq = accumulate(freq+i, freq+j+1, 0);
            for (int k = i; k <= j; ++k)
            {
                long long int total_cost_left_subtree = 0;
                long long int total_cost_right_subtree = 0;
                if (k > i)
                {
                    total_cost_left_subtree = DP_Table[i][k-1];
                }
                if (k < j)
                {
                    total_cost_right_subtree = DP_Table[k+1][j];
                }
                long long int total_cost = total_cost_left_subtree
                                         + total_cost_right_subtree
                                         + sum_freq;
                if (total_cost < min_total_cost)
                    min_total_cost = total_cost;
            }
            DP_Table[i][j] = min_total_cost;
        }
    }
    return DP_Table[0][num_keys-1];
}

int main()
{
    int keys[] = {0, 1, 2, 3, 4, 5, 6};
    int freq[] = {22, 18, 20, 5, 25, 2, 8};
    int num_keys = sizeof(keys) / sizeof(keys[0]);
    cout << "Total cost of Optimal BST: "
         << getTotalCostOfOptimalBST(keys, freq, num_keys) << endl;
    return 0;
}

Java:

import java.io.*;

class OptimalBST
{
    static int sum(int freq[], int left_idx, int right_idx)
    {
        int sum = 0;
        for (int i = left_idx; i <= right_idx; ++i)
            sum += freq[i];
        return sum;
    }

    static int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
    {
        int DP_Table[][] = new int[num_keys][num_keys];
        for (int j = 0; j < num_keys; ++j)
        {
            for (int i = j; i >= 0; --i)
            {
                int min_total_cost = Integer.MAX_VALUE;
                int sum_freq = sum(freq, i, j);
                for (int k = i; k <= j; ++k)
                {
                    int total_cost_left_subtree = 0;
                    int total_cost_right_subtree = 0;
                    if (k > i)
                        total_cost_left_subtree = DP_Table[i][k-1];
                    if (k < j)
                        total_cost_right_subtree = DP_Table[k+1][j];
                    int total_cost = total_cost_left_subtree
                                   + total_cost_right_subtree
                                   + sum_freq;
                    if (total_cost < min_total_cost)
                        min_total_cost = total_cost;
                }
                DP_Table[i][j] = min_total_cost;
            }
        }
        return DP_Table[0][num_keys-1];
    }

    public static void main (String[] args)
    {
        int keys[] = {0, 1, 2, 3, 4, 5, 6};
        int freq[] = {22, 18, 20, 5, 25, 2, 8};
        int num_keys = keys.length;
        System.out.println("Total cost of Optimal BST is "
                           + getTotalCostOfOptimalBST(keys, freq, num_keys));
    }
}

Given an array of integers, write a function that returns the largest sum of non-adjacent numbers. Numbers can be 0 or negative. For example,

Input:
[2, 4, 6, 2, 5]
Output:
13
Explanation:
Since we pick 2, 6, and 5.
Input:
[5, 1, 1, 5]
Output:
10
Explanation:
Since we pick 5 and 5.

Thought process

This problem is very similar to the coin change problem, where for each coin we make a decision whether to include it in the change or not.

In this problem as well, we make a choice for each number: what if we include the number at index i in the sum, and what if we do not? If we include a[i] in the sum, which may eventually be the maximum sum, what can we do with the remaining numbers in the array? We definitely cannot include a[i+1], due to the constraint of non-adjacent numbers. So after choosing to include a[i], the problem reduces to finding the maximum sum of non-adjacent numbers from index i+2 to a.length-1.

What if we do not include a[i] in the sum? In that case we can choose a[i+1], so the problem reduces to finding the largest sum of non-adjacent numbers from index i+1 to a.length-1.

We do not know which choice (to include or exclude a[i]) will give us the largest sum, so we try both and take the maximum of both.

Recursive implementation

public int sum(int[] a){
    return sumUtil(a, 0);
}

private int sumUtil(int[] a, int index){
    if(index > a.length - 1){
        return 0;
    }
    return Math.max(a[index] + sumUtil(a, index + 2),
                    sumUtil(a, index + 1));
}

For each number we have two choices and we follow both, so the overall complexity of the above implementation is O(2^n), where n is the length of the array.
Let’s see the execution tree of the recursive implementation with one of the examples; it looks like this:

It is evident from the execution tree that there are many subproblems colored red, blue, and light blue as groups, which are solved again and again. This is called overlapping subproblems and is a necessary condition to think in dynamic programming terms. We already know that an optimal solution to subproblem leads to an optimal solution to the original problem, hence, we can apply the dynamic programming approach here.

The best way to avoid calculating subproblems again and again is to memoize what has already been calculated, so let’s modify the code to use a cache. This approach is called the top-down approach.

Top down implementation

public int sum(int[] a){
    int[] cache = new int[a.length];
    return sumUtil(a, 0, cache);
}

// Note: the cache must be passed through the recursive calls.
// A cached value of 0 is treated as "not yet computed", which works
// as long as the achievable sums are positive.
private int sumUtil(int[] a, int index, int[] cache){
    if(index > a.length - 1){
        return 0;
    }
    if(cache[index] > 0){
        return cache[index];
    }
    cache[index] = Math.max(a[index] + sumUtil(a, index + 2, cache),
                            sumUtil(a, index + 1, cache));
    return cache[index];
}

There will be at most n calls to the sumUtil() function, so the time complexity reduces to O(n), along with a space complexity of O(n).

How can we implement the bottom-up solution for this problem? We define a 1D array dp[], where dp[i] represents the maximum sum that can be achieved up to index i of the array. To include a[i] in that sum, we look at the maximum sum achievable up to index i-2, i.e. dp[i-2]. If we exclude the index, we get the maximum sum up to index i-1, i.e. dp[i-1]. We take whichever is larger.
The recurrence relation is as follows.

dp[i] = max(dp[i-2] + a[i], dp[i-1]);

Bottom up implementation

private int sumDP(int[] a){
    if(a.length == 0) return 0;
    if(a.length == 1) return a[0];
    if(a.length == 2) return Math.max(a[0], a[1]);
    int[] dp = new int[a.length];
    dp[0] = a[0];
    dp[1] = Math.max(a[0], a[1]);
    int max = 0;
    for(int i = 2; i < a.length; i++){
        dp[i] = Math.max(a[i] + dp[i-2], dp[i-1]);
        max = Math.max(max, dp[i]);
    }
    return max;
}

The time complexity of the bottom-up approach is also O(n), along with a space complexity of O(n).

Given a string s and a string t, check if s is a subsequence of t. A subsequence of a string is a new string formed from the original string by deleting some (possibly none) of the characters without disturbing the relative positions of the remaining characters (i.e., “ace” is a subsequence of “abcde” while “aec” is not).

This problem looks similar to the edit distance problem, which can be solved using dynamic programming. The only difference is that only deletions are allowed in this problem.

Recursive approach

What happens if we compare the character at index i of string s with the character at index j of string t? There are two possibilities: either these characters are equal or they are not. If the characters are equal, then we have found the corresponding character in the target string, so we look for the remaining substring of s from index i+1 in the substring of t from index j+1.
What if the characters are not equal? In that case, we keep looking in the substring of t from index j+1, but the index into s does not change; it remains at i, because we have not yet found a match for this character in the target string.

Implementation note: s.substring(1) gets the substring of the string from the second character to the last character, which achieves the increase in the indices mentioned above.
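The original snippet is not included in this text; a minimal sketch of the recursive check described above (class and method names are mine) might look like this:

```java
class IsSubsequence {
    // Recursive check: is s a subsequence of t?
    static boolean isSubsequence(String s, String t) {
        // An empty s is a subsequence of any t.
        if (s.isEmpty()) return true;
        // s still has characters but t is exhausted: no match possible.
        if (t.isEmpty()) return false;
        if (s.charAt(0) == t.charAt(0)) {
            // Characters match: look for the rest of s in the rest of t.
            return isSubsequence(s.substring(1), t.substring(1));
        }
        // No match: keep scanning t for the current character of s.
        return isSubsequence(s, t.substring(1));
    }

    public static void main(String[] args) {
        System.out.println(isSubsequence("ace", "abcde")); // true
        System.out.println(isSubsequence("aec", "abcde")); // false
    }
}
```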

If you run the above code on Leetcode, it will give a Time Limit Exceeded error. That becomes obvious when we draw the execution tree of the function: we notice that we are solving the same problems multiple times.

Top down approach

The first thing we should do is avoid solving the same subproblem again and again, which can be achieved using memoization. We introduce a cache to store whether we have already solved a subproblem and, if yes, what the result was. If we have already solved the problem, we just use the stored result instead of solving it again. Below is the memoized code.
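The memoized code itself is not reproduced in this text; a sketch that caches results by index pair instead of copying substrings (class and field names are mine) could look like this:

```java
class IsSubsequenceMemo {
    // cache[i][j] stores whether s[i..] is a subsequence of t[j..];
    // null means "not computed yet".
    static Boolean[][] cache;

    static boolean isSubsequence(String s, String t) {
        cache = new Boolean[s.length() + 1][t.length() + 1];
        return solve(s, t, 0, 0);
    }

    static boolean solve(String s, String t, int i, int j) {
        if (i == s.length()) return true;   // all of s matched
        if (j == t.length()) return false;  // t exhausted first
        if (cache[i][j] != null) return cache[i][j]; // reuse stored result
        boolean result;
        if (s.charAt(i) == t.charAt(j)) {
            result = solve(s, t, i + 1, j + 1); // match: advance both
        } else {
            result = solve(s, t, i, j + 1);     // no match: advance only in t
        }
        cache[i][j] = result;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(isSubsequence("ace", "abcde")); // true
        System.out.println(isSubsequence("aec", "abcde")); // false
    }
}
```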

This code passes all the test cases on Leetcode. This approach is called the top-down approach in dynamic programming: we start from the top and keep solving smaller problems until we find the solution to the original problem.

Bottom up approach

Since the optimal solutions to the subproblems lead to the optimal solution of the original problem, we can apply another approach, called the bottom-up approach. In this approach, we start from the smallest possible problem and build our solution upwards.

What is the smallest possible problem in this case? If string s is empty, then s is definitely a subsequence of any target string t, albeit an empty one.
So, if I create a two-dimensional table where rows represent the number of characters of t considered and columns represent the number of characters of s considered, then I can initialize the first column (which represents a source string of length zero) to true.

The other way around: if string t is empty and string s is not, there is no way s can be a subsequence of t, so the first row of the table is filled with false.

What if we move up a bit? Say we have i characters of string t and j characters of string s. We compare the i-th character of t with the j-th character of s and check whether they are equal.

If they are equal, and string t with i-1 characters already has string s with j-1 characters as a subsequence, then string t with i characters has string s with j characters as a subsequence too.

If the characters are not equal, then string s with j characters must already be a subsequence of the first i-1 characters of string t; otherwise it cannot be a subsequence of the first i characters either.

We go through each character of both strings in this way. One implementation note: here i and j represent the lengths of the strings, not indices, which is why we compare the characters at indices i-1 and j-1 when solving for dp[i][j]. Also, if the length of t is less than the length of s, there is no way s can be a subsequence of t.
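Putting the table rules above together, a bottom-up sketch in Java (names are mine) might look like this:

```java
class IsSubsequenceDP {
    static boolean isSubsequence(String s, String t) {
        int m = s.length(), n = t.length();
        // If t is shorter than s, s cannot be a subsequence of t.
        if (m > n) return false;
        // dp[i][j]: are the first j characters of s a subsequence
        // of the first i characters of t?
        boolean[][] dp = new boolean[n + 1][m + 1];
        // An empty s is a subsequence of every prefix of t.
        for (int i = 0; i <= n; i++) dp[i][0] = true;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (t.charAt(i - 1) == s.charAt(j - 1)) {
                    // Match: consume one character of both strings.
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    // No match: s's prefix must already fit in t's shorter prefix.
                    dp[i][j] = dp[i - 1][j];
                }
            }
        }
        return dp[n][m];
    }

    public static void main(String[] args) {
        System.out.println(isSubsequence("ace", "abcde")); // true
        System.out.println(isSubsequence("aec", "abcde")); // false
    }
}
```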

The time complexity of the dynamic programming solution is O(n * m), where n and m are the lengths of strings t and s respectively. There is also an additional space complexity of O(n * m).

We can reduce this by using a stack data structure, since the relative order of characters in the source and target strings must remain the same and matched characters can be discarded.

Using Stack

Push the characters of the source string in reverse order onto the stack.

Iterate through the characters of the target string, checking whether the current character matches the top element of the stack. If it matches, pop the element from the stack.

Obtain the end result: if the size of the stack is zero, return true; otherwise return false.
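The three steps above can be sketched in Java as (names are mine):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class IsSubsequenceStack {
    static boolean isSubsequence(String s, String t) {
        Deque<Character> stack = new ArrayDeque<>();
        // Step 1: push the characters of s in reverse, so the first
        // character of s ends up on top of the stack.
        for (int i = s.length() - 1; i >= 0; i--) {
            stack.push(s.charAt(i));
        }
        // Step 2: scan t and pop whenever the next needed character appears.
        for (char c : t.toCharArray()) {
            if (!stack.isEmpty() && stack.peek() == c) {
                stack.pop();
            }
        }
        // Step 3: every character of s was matched in order iff the stack is empty.
        return stack.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isSubsequence("ace", "abcde")); // true
        System.out.println(isSubsequence("aec", "abcde")); // false
    }
}
```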

The time complexity of the stack-based solution is O(n + m) and the space complexity is O(m), where n and m are the lengths of strings t and s respectively.

The space complexity can be further reduced to O(1) by using two pointers approach.

Two Pointers

Take two pointers: i, used to traverse string t, and j, used to traverse string s. Increment j whenever there is a match, and at the end compare the value of j with the length of the source string.
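A minimal two-pointer sketch (names are mine):

```java
class IsSubsequenceTwoPointers {
    static boolean isSubsequence(String s, String t) {
        int i = 0, j = 0; // i traverses t, j traverses s
        while (i < t.length() && j < s.length()) {
            if (t.charAt(i) == s.charAt(j)) {
                j++; // matched one more character of s
            }
            i++;
        }
        // s is a subsequence iff every character of s was matched.
        return j == s.length();
    }

    public static void main(String[] args) {
        System.out.println(isSubsequence("ace", "abcde")); // true
        System.out.println(isSubsequence("aec", "abcde")); // false
    }
}
```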

Given a rod of length ‘n’ units and list of prices of pieces of lengths ‘1’ to ‘n’, the problem is to determine the maximum value that can be obtained by cutting the rod and selling the pieces.
For example,

Input:
Rod is of length 4 and the list of prices is:
Piece length:  1  2  3  4
Price:         2  5  7  8
Output:
10
Explanation:
The maximum value that can be obtained in this case is ‘10’ by cutting the rod in pieces of length {2, 2}.

Thought process

One might be tempted to use a Greedy approach for problems where we need to find a minimum or maximum value, but remember that a Greedy approach requires a proof of correctness, while a single counterexample is enough to show that it fails. Hence, in general, such problems are solved using DP.
Solution Approach:
To find the maximum value using the price list and the length of the rod, instead of considering only the extremes of the given input (as we do in Greedy), we consider all the input values. That is, a Greedy approach might cut the rod only into the pieces with the highest available price in order to obtain the maximum value, but this approach fails on the example above.

Hence to find the maximum value we will consider cutting the rod in pieces of all possible lengths, i.e. for a rod of length ‘4’, if get_max_val(price_list, rod_length = 4) gives the maximum value, then:
We’ll first consider making a cut at length 1: This gives us a piece of length 1 and we can sell this piece to get value 2.
Now we are left with the remaining rod of length 4 – 1 = 3. The problem now reduces to finding the maximum value that can be obtained from a rod of length 3 with the same input price list. The total value that can be obtained in this configuration will be:

2 + get_max_val(price_list, rod_length = 3).

Similarly, we can make a cut at length 2: this gives us a piece of length 2 and we can sell this piece to get value 5.

Now we are left with the remaining rod of length 4 – 2 = 2. The problem now reduces to finding the maximum value that can be obtained from a rod of length 2 with the same input price list. The total value that can be obtained in this configuration will be:

5 + get_max_val(price_list, rod_length = 2).

Also, we can make a cut at length 3: This gives us a piece of length 3 and we can sell this piece to get value 7.

Now we are left with the remaining rod of length 4 – 3 = 1. The problem now reduces to finding the maximum value that can be obtained from the rod of length 1 with the same input price list. The total value that can be obtained in this configuration will be:

7 + get_max_val(price_list, rod_length = 1).

We can also sell the complete rod of length 4 as it is, which fetches a total value of 8.

For each sub-problem, we’ll take a similar approach of cutting the current piece of a rod into pieces of length ranging from ‘1’ to ‘current rod length’. We keep cutting the rod and solving the resulting sub-problems till the length of the rod reaches zero. While trying out these different configurations we will keep a track of the maximum of the total value of each configuration.

The approach discussed above shows that this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).

Let’s try to implement this using recursion:
C++:

#include <iostream>
#include <vector>
using namespace std;

int get_max_value(vector<int>& price_list, int rod_length)
{
    if (rod_length <= 0)
        return 0;
    int max_value = 0;
    for (int i = 0; i < rod_length; i++)
    {
        max_value = max(max_value,
                        price_list[i]
                        + get_max_value(price_list, rod_length - i - 1));
    }
    return max_value;
}

int main(void)
{
    vector<int> price_list{2, 5, 7, 8};
    int rod_length = 4;
    cout << get_max_value(price_list, rod_length) << endl;
    return 0;
}

Java:

import java.io.*;

class rod_cutting
{
    static int max(int a, int b)
    {
        return (a < b) ? b : a;
    }

    static int get_max_value(int price_list[], int rod_length)
    {
        if (rod_length <= 0)
            return 0;
        int max_value = 0;
        for (int i = 0; i < rod_length; ++i)
        {
            max_value = max(max_value,
                            price_list[i]
                            + get_max_value(price_list, rod_length - i - 1));
        }
        return max_value;
    }

    public static void main(String args[])
    {
        int price_list[] = new int[] {2, 5, 7, 8};
        int rod_length = price_list.length;
        System.out.println(get_max_value(price_list, rod_length));
    }
}

But this recursive implementation has exponential time complexity. Let’s have a look at the function call tree to understand the reason behind this:

In this example of finding max value for a rod of length = 4 and a list of prices for its pieces, subproblems such as get_max_value(price_list, 2) and get_max_value(price_list, 1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems.
As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. Unlike recursion, Dynamic Programming uses a bottom-up approach.

Dynamic Programming Approach

In DP we start calculating from the bottom and move up towards the final solution. The solutions of all subproblems are stored in a 2D array / DP table so that they can be reused when required.

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int get_max_value(vector<int>& price_list, int rod_length)
{
    // Variable-length arrays are not standard C++, so use a vector instead.
    vector<vector<int>> dp_table(rod_length+1, vector<int>(rod_length+1, 0));
    for (int i = 1; i <= rod_length; i++)
    {
        for (int j = 1; j <= rod_length; j++)
        {
            if (j >= i)
            {
                dp_table[i][j] = max(dp_table[i-1][j],
                                     price_list[i-1] + dp_table[i][j-i]);
            }
            else
            {
                dp_table[i][j] = dp_table[i-1][j];
            }
        }
    }
    return dp_table[rod_length][rod_length];
}

int main(void)
{
    vector<int> price_list{2, 5, 7, 8};
    // The rod length must not exceed the size of price_list.
    int rod_length = 4;
    cout << get_max_value(price_list, rod_length) << endl;
    return 0;
}

The time complexity of this solution is O(n^2), along with a space complexity of O(n^2), where n is the rod length.

The space complexity of the DP code can be reduced by not storing the solution of every subproblem separately, so that instead of a 2D array we can work with a 1D array.

With the 2D array we stored solutions for subproblems over a particular rod length combined with a subset of the given price_list. Instead, for a particular rod length we can compare the total values obtained from all its subproblems and store only the maximum value; this way a 1D array suffices.
Note, however, that the question of which cuts were actually made to obtain the maximum value cannot be answered with the 1D solution. With the 2D array we can traverse back, starting from dp_table[rod_length][rod_length], and check whether the value in the current cell was calculated from the same row; if so, the current row number is one of the piece lengths that were cut to obtain the maximum value.
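As a hypothetical illustration of that back-traversal (the class and method names are mine, and the table is built the same way as in the 2D solution above), the actual cuts can be recovered like this:

```java
import java.util.ArrayList;
import java.util.List;

class RodCuttingWithCuts {
    // Returns the piece lengths of one optimal cutting.
    static List<Integer> optimalCuts(int[] priceList, int rodLength) {
        int[][] dp = new int[rodLength + 1][rodLength + 1];
        for (int i = 1; i <= rodLength; i++)
            for (int j = 1; j <= rodLength; j++)
                dp[i][j] = (j >= i)
                    ? Math.max(dp[i - 1][j], priceList[i - 1] + dp[i][j - i])
                    : dp[i - 1][j];
        // Traverse back from dp[rodLength][rodLength]: if the cell's value
        // was calculated from the same row, a piece of length i (the row
        // number) was cut, and the remaining rod shrinks by i.
        List<Integer> cuts = new ArrayList<>();
        int i = rodLength, j = rodLength;
        while (i > 0 && j > 0) {
            if (dp[i][j] == dp[i - 1][j]) {
                i--;              // a piece of length i was not used here
            } else {
                cuts.add(i);      // a piece of length i was cut
                j -= i;
            }
        }
        return cuts;
    }

    public static void main(String[] args) {
        int[] priceList = {2, 5, 7, 8};
        System.out.println(optimalCuts(priceList, 4)); // [2, 2]
    }
}
```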

import java.io.*;

class rod_cutting
{
    static int max(int a, int b)
    {
        return (a < b) ? b : a;
    }

    static int get_max_value(int price_list[], int rod_length)
    {
        int dp_table[] = new int[rod_length+1];
        dp_table[0] = 0;
        for (int i = 1; i <= rod_length; ++i)
        {
            int max_value = 0;
            for (int j = 0; j < i; ++j)
            {
                max_value = max(max_value,
                                price_list[j] + dp_table[i - j - 1]);
            }
            dp_table[i] = max_value;
        }
        return dp_table[rod_length];
    }

    public static void main(String args[])
    {
        int price_list[] = new int[] {2, 5, 7, 8};
        int rod_length = price_list.length;
        System.out.println(get_max_value(price_list, rod_length));
    }
}

The time complexity remains O(n^2), but the space complexity reduces to O(n), where n is the rod length.

Find the count of n-digit numbers whose sum of digits is equal to given sum s.
For example,

Input:
n = 2 and s = 5
Output:
5
Explanation:
We have to find the 2-digit numbers whose sum of digits is 5: these are 14, 23, 32, 41 and 50, so the count is 5. Note that leading zeroes are not considered digits, i.e. we do not count 05 as a 2-digit number.

Count of n-digit numbers thought process

A brute-force solution that first comes to mind is to consider each of the n-digit numbers, calculate the sum of digits of each number, and count the ones where the sum of digits is s. But this solution has exponential time complexity, so we need to find a better one. Let’s consider a few examples, where Cnt(n, s) denotes the count of n-digit numbers whose sum of digits is s.

If we look closely at the example above, we can see that the numbers for (n=3, sum=5) are generated from the numbers for (n=2, sum=4), (n=2, sum=3), (n=2, sum=2), (n=2, sum=1) and (n=2, sum=0) by fixing the most significant digit. As stated in the question, leading zeroes are not allowed, hence for the most significant digit the available choices range from 1 to 9 or the given sum, whichever is less:

1 _ _ 2 _ _ 3 _ _ 4 _ _ 5 _ _

After fixing one digit we have n-1 (i.e. 3-1 = 2) digits left, and the sum reduces by the digit value i, where i ranges from 1 to 9.

Let’s consider the case of (1 _ _): after fixing the most significant digit to 1, the problem reduces to the remaining n-1 = 3-1 = 2 digits with the reduced sum s-1 = 5-1 = 4, which is nothing but the sub-problem cnt(n=2, s=4).
Note, though, that when fixing any digit other than the most significant one, 0 can also be used as a digit, so that sub-problem is instead denoted cntWithZero(n=2, s=4). The second digit can similarly be fixed, but this time 0 is also a candidate:

1 0 _ 1 1 _ 1 2 _ 1 3 _ 1 4 _

After fixing 2 digits, the problem now reduces to 1-digit numbers and reduced sum (s-i) where i ranges from 0 to 9 or last reduced sum (whichever is less).

In this way, for any n-digit number, we keep fixing one digit at a time and solving the resulting sub-problems, until all n digits are fixed or, if s is small, the remaining sum reaches zero before all digits are fixed.
As observed, the numbers for cnt(n=3, s=5) are generated by fixing the most significant digit. Generalizing this observation, cnt(n, s) can be calculated from the sub-problems cntWithZero(n-1, s-i):

cnt(n, s)         = SUM(cntWithZero(n-1, s-i)), where 1 <= i <= min(9, s)
cntWithZero(n, s) = SUM(cntWithZero(n-1, s-i)), where 0 <= i <= min(9, s)

This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems), which is the first condition for application of dynamic programming.

Recursive implementation C++

#include <cstdio>

long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        if (sum == 0)
            return 1;
        return 0;
    }
    if (sum == 0)
        return 1;
    long long int ans = 0;
    for (int i = 0; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n-1, sum-i);
        }
    }
    return ans;
}

long long int cnt(int n, int sum)
{
    long long int ans = 0;
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n-1, sum-i);
        }
    }
    return ans;
}

int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Recursive implementation Java

import java.io.*;
class digit_sum
{
static int cntWithZero(int n, int sum)
{
if (n == 0)
{
if (sum == 0)
return 1;
return 0;
}
if (sum == 0)
return 1;
int ans = 0;
for(int i=0; i<10; ++i)
{
if(sum - i >= 0)
{
ans += cntWithZero(n-1, sum-i);
}
}
return ans;
}
static int cnt(int n, int sum)
{
int ans = 0;
for(int i=1; i<10; ++i)
{
if(sum - i >= 0)
{
ans += cntWithZero(n-1, sum-i);
}
}
return ans;
}
public static void main(String args[])
{
int n=3, s=5;
System.out.println(cnt(n, s));
}
}

This implementation has exponential time complexity. Let’s have a look at the recursive function call tree to find the reason behind this:

In this example of cnt(n=3, s=5), subproblems such as cntWithZero(n=1, s=2) and cntWithZero(n=1, s=1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems, the second condition to apply dynamic programming.

As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. But before jumping to DP solution, there’s another way to resolve the issue of overlapping subproblems in a recursive approach: Memoized approach.

Memoised approach C++

#include <cstdio>
#include <cstring>
long long int memoized_table[101][101];
long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        return (sum == 0) ? 1 : 0;
    }
    if (memoized_table[n][sum] == -1)
    {
        long long int ans = 0;
        for (int i = 0; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        memoized_table[n][sum] = ans; /* store only after trying all digits */
    }
    return memoized_table[n][sum];
}
long long int cnt(int n, int sum)
{
    long long int ans = 0;
    memset(memoized_table, -1, sizeof(memoized_table));
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}
int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Memoised approach Java

import java.io.*;
class digit_sum
{
    static int memoized_table[][] = new int[101][101];
    static int cntWithZero(int n, int sum)
    {
        if (n == 0)
        {
            return (sum == 0) ? 1 : 0;
        }
        if (memoized_table[n][sum] == -1)
        {
            int ans = 0;
            for (int i = 0; i < 10; ++i)
            {
                if (sum - i >= 0)
                {
                    ans += cntWithZero(n - 1, sum - i);
                }
            }
            memoized_table[n][sum] = ans; // store only after trying all digits
        }
        return memoized_table[n][sum];
    }
    static int cnt(int n, int sum)
    {
        int ans = 0;
        for (int i = 0; i <= 100; ++i)
        {
            for (int j = 0; j <= 100; ++j)
            {
                memoized_table[i][j] = -1;
            }
        }
        for (int i = 1; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }
    public static void main(String[] args)
    {
        int n = 3, s = 5;
        System.out.println(cnt(n, s));
    }
}

The time complexity of the above implementation is O(n*s) and the space complexity is also O(n*s).

Sum of digits Dynamic Programming Approach

We go bottom-up in a dynamic programming approach. We will use a 2D array / DP table in the implementation. Start from the bottom, i.e. i=0, j=0, and keep solving each sub-problem, storing its result in the DP table, until we reach i=n and j=s.

Dynamic programming solution of the sum of digits problem

Egg dropping problem

The egg dropping problem goes like this: given n eggs and f floors, our task is to find the minimum number of trials required to find the highest floor from which it is safe to drop eggs (i.e. the eggs don’t break), considering the worst-case scenario and a few conditions:

An egg can be reused only if it survives the fall

If an egg breaks from the x^{th} floor, it will also break from all floors above it (x+1^{th} floor to f^{th} floor)

If an egg survives a fall from the x^{th} floor, it will also survive the falling from all floors below it (1st floor to x-1^{th} floor)

A couple of variations of this problem where the number of eggs and the number of floors is fixed are famous puzzles, like the 2 eggs and 100 floors puzzle or the 2 eggs and 36 floors puzzle. Finding solutions to such puzzles takes a more mathematical approach.

Solution
Let’s say we have 1 egg and 4 floors and we need to find the minimum number of trials required to determine the highest floor from which it is safe to drop eggs. In this case, if we try to drop the egg from a random floor, it might break and we would not be able to conduct further trials to find the required floor.
Hence in such a case we have to be careful: start from the 1st floor and keep checking each floor above; as soon as the egg breaks on a floor, we have found the required floor.

In this example, let’s say that starting from the 1st floor and reaching the 3rd floor (the (f-1)^{th} floor) we still couldn’t determine the required floor; only after the trial on the 4^{th} floor would we find the answer. Hence in the worst case, we require 4 trials.

Now instead of 1 egg if we have 2 eggs and 4 floors, will it reduce the minimum number of trials required? As we have an extra egg we can afford to start from any floor. If we start from the 4th floor, the egg might break or it might not –

If it doesn’t break we can say that it won’t break on any of the lower floors and hence we were lucky enough to have found the answer in the first trial itself.

If the egg breaks, we now have 1 egg fewer, but we also know the 4^{th} floor is not the required floor and hence we would perform further trials only on the floors below it.

But now since we have just 1 egg, we again have to be careful and fall back to the strategy of starting trials on the 1st floor and checking each floor one by one until the 3rd floor. Again in the worst case, we will check each floor from 1st to 3rd and in this way, we will need 3 trials for that.

So in total, we need (1 trial on 4th floor) + (3 trials from 1st to 3rd floor) = 4 trials if we start trials from the 4th floor. Also note in this case the worse of the two possibilities is 4 and we will consider the worst-case scenario always.

But what if we choose some other floor?

If we start from the 3rd floor, the egg might break or it might not –

If it doesn’t break, we now have to perform trials on floors above the 3rd floor. In this case, there’s only one floor above: 4th floor and we have two eggs. When we have just one floor to check we just have to drop an egg from it and we can determine our answer from just that one trial.

So in total we will have to do 2 trials = (1 trial on 3rd floor) + (1 trial on floors above it i.e. 4th floor).

If the egg breaks, we now have 1 egg fewer, but we also know that the 3rd floor is not the required floor (and floors above it would give the same outcome), and hence we would perform further trials only on the floors below it.

Now since we have just 1 egg, we again have to be careful and fall back to the strategy of starting trials on the 1st floor and checking each floor one by one until the 2nd floor. Again in the worst case, we will check each floor from 1st to 2nd and in this way, we will need 2 trials for that.

So in total, we need (1 trial on 3rd floor) + (2 trials from 1st to 2nd floor) = 3 trials if we start trials from the 3rd floor. Also note in this case the worse of the two possibilities is 3 and we will consider the worst-case scenario always.

Similarly, we will start trials from 2nd floor –

If the egg breaks on the 2nd floor, we just have to check the 1st floor. Hence total trials required are: 2

If the egg doesn’t break, then we have to check for the floors above that are 3rd and 4th floor, i.e. we have 2 eggs and 2 floors for trials.

Note that we do not care about which particular floors (above or below us) we will be doing trials on after the current trial; we only consider in which direction the next trials go and how many floors remain in that direction, so that we can calculate the number of required trials accordingly.

Using this similar approach we can find that with 2 eggs and 2 floors we’ll need 2 trials in the worst case. Hence total trials required are: (1 trial on 2nd floor) + (2 trials on 2 floors above 2nd floor) = 3 trials.

Also note in this case the worse of the two possibilities is 3 and we will consider the worst-case scenario always.

Similarly when we start trials from 1st floor-

If the egg breaks on the 1st floor, we don’t need more trials as we know there’s no safe floor. Hence total trials required are 1.

If the egg doesn’t break, then we have to check for the floors above that are 2nd, 3rd and 4th floor, i.e. we have 2 eggs and 3 floors for trials.

Using this similar approach we can find that with 2 eggs and 3 floors we’ll need 2 trials in the worst case. Hence total trials required are: (1 trial on 1st floor) + (2 trials on 3 floors above 1st floor) = 3 trials. The worse of the two possibilities is 3.

So we tried on all 4 floors: except for starting from the 4th floor all others require 3 trials to find the required answer. So our answer for min_total_trials(2 eggs, 4 floors): 3.

Considering the above simulation and our approach we can, in general, say that when we conduct a trial at x^{th} floor, there are two possible outcomes:

The egg breaks or it does not break

In both cases, we have to conduct more trials, but if the egg breaks at the x^{th} floor, we need to conduct trials on the floors below x, and if the egg does not break at the x^{th} floor, we need to conduct trials on the floors above x. The only exceptions to the “conducting more trials” statement are:

When we are doing a trial on the 1st floor and the egg breaks, we don’t need more trials as we now know that there are no safe floors.

When we are doing a trial at the f^{th} floor (i.e. highest floor) and the egg doesn’t break, we don’t need more trials as we now know that all floors are safe.

And obviously, if we run out of eggs, no more trials can be conducted.

Note that while conducting a trial at any floor we need to consider the worst possible scenario meaning that we cannot make assumptions about the outcomes of the trial conducted and we have to consider the worse of the two.
Also, from our approach in the above example, one can observe that we don’t actually have to find the highest safe floor itself; we have to focus on minimizing the number of trials by simulating drops from different floors.

We’ll consider doing trials on all the floors and take the worst case for every floor and find our final answer by getting the minimum of all the trials from all floors we just calculated. We aim to reduce our current problem to sub-problems. This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of sub-problems).

Also, we can derive a formula for our finalized approach that will give us the minimum number of trials required to find the highest safe floor given n eggs and f floors:

min_total_trials(n, f) = 1 + MIN( MAX(min_total_trials(n-1, x-1), min_total_trials(n, f-x)) for each x in {1…f} )

We conduct trials on each floor from 1 to f, and for each trial we consider the worse (MAX) of the two possible outcomes; of all those we take the minimum. You may have observed in our simulation that we added 1 to the outcome of the sub-problem for the trial done on the current floor; that is the same 1 that appears in the formula here.

Using this formula and the exceptions for “conducting more trials” described above as base cases we write a recursive implementation:

#include <cstdio>
#include <climits>
#include <algorithm>
using namespace std;
int min_total_trials(int e, int f)
{
    /* for 0 floors no trials are required,
       for 1 floor only 1 trial is required;
       if only 1 egg is available then we have to be
       careful and perform trials on each of the f floors,
       hence f trials */
    if (f == 0 || f == 1 || e == 1) return f;
    int min_trials = INT_MAX, current_floor_min_trials;
    for (int current_floor = 1; current_floor <= f; ++current_floor)
    {
        current_floor_min_trials = 1 +
            max( min_total_trials(e - 1, current_floor - 1),
                 min_total_trials(e, f - current_floor) );
        min_trials = min(min_trials, current_floor_min_trials);
    }
    return min_trials;
}
int main(void)
{
    int e = 2, f = 4;
    int min_trials = min_total_trials(e, f);
    printf("%d", min_trials);
    return 0;
}

import java.io.*;
class MinTrials
{
static int max(int a, int b) { return (a > b)? a: b; }
static int min(int a, int b) { return (a < b)? a: b; }
static int min_total_trials(int e, int f)
{
/* for 0 floors no trials are required
for 1 floor only 1 trial is required
if only 1 egg is available then we have to
be careful and perform trials on each of
the f floors, hence f trials
*/
if (f == 0 || f == 1 || e == 1) return f;
int min_trials = Integer.MAX_VALUE, current_floor_min_trials;
for (int current_floor = 1 ; current_floor <= f ; ++current_floor)
{
current_floor_min_trials = 1
+ max( min_total_trials(e-1, current_floor-1),
min_total_trials(e, f - current_floor) );
min_trials = min(min_trials, current_floor_min_trials);
}
return min_trials;
}
public static void main (String[] args)
{
int e = 2, f = 4;
System.out.println(min_total_trials(e, f));
}
}

This implementation takes more than 3 secs for e = 2 and f = 28, and the execution time increases exponentially with increasing inputs.
To find the reason behind such high runtime let’s have a look at the recursive function call tree:

As we can see, while calculating min_total_trials(2,4), sub-problems such as min_total_trials(2,2) are calculated repeatedly.
This means recursive solution solves the same sub-problem over and over rather than always generating new sub-problems. These are called overlapping sub-problems.
As the two properties required for using Dynamic Programming: optimal substructure and overlapping sub-problems hold, we can use dynamic programming for this problem.
Let’s look at the DP solution for this problem: We’ll use a 2D array / DP table to store solutions to the sub-problems and reuse it when required. In DP when moving from bottom-up we will find a solution for each of the sub-problems where-

e = 1 and f = 1, e = 1 and f = 2, e = 1 and f = 3,….. e = 1 and f = F
e = 2 and f = 1, e = 2 and f = 2, e = 2 and f = 3,….. e = 2 and f = F
.
.
.
e = E and f = 1, e = E and f = 2, e = E and f = 3,…… e = E and f = F

DP_table[E][F] will give the final answer for ‘E’ eggs and ‘F’ floors.

#include <cstdio>
#include <climits>
#include <algorithm>
#include <vector>
using namespace std;
int min_total_trials(int e, int f)
{
    vector<vector<int>> DP_table(e + 1, vector<int>(f + 1, 0));
    for (int i = 1; i <= e; ++i)
    {
        /* for 0 floors no trials are required */
        DP_table[i][0] = 0;
        /* for 1 floor only 1 trial is required */
        DP_table[i][1] = 1;
    }
    for (int i = 1; i <= f; ++i)
    {
        /* if only 1 egg is available then we have to
           be careful and perform trials on each of
           the f floors, hence f trials */
        DP_table[1][i] = i;
    }
    for (int i = 2; i <= e; ++i)
    {
        for (int j = 2; j <= f; ++j)
        {
            int min_trials = INT_MAX, current_floor_min_trials;
            for (int current_floor = 1; current_floor <= j; ++current_floor)
            {
                current_floor_min_trials = 1
                    + max( DP_table[i - 1][current_floor - 1],
                           DP_table[i][j - current_floor] );
                min_trials = min(min_trials, current_floor_min_trials);
            }
            DP_table[i][j] = min_trials;
        }
    }
    return DP_table[e][f];
}
int main(void)
{
    int e = 2, f = 100;
    int min_trials = min_total_trials(e, f);
    printf("%d", min_trials);
    return 0;
}

Dynamic programming implementation Java

import java.io.*;
class MinTrials
{
static int max(int a, int b) { return (a > b)? a: b; }
static int min(int a, int b) { return (a < b)? a: b; }
static int min_total_trials(int e, int f)
{
int DP_table[][] = new int[e+1][f+1];
for(int i=1; i<=e; ++i)
{
DP_table[i][0] = 0;
DP_table[i][1] = 1;
}
for(int i=1; i <= f; ++i)
{
DP_table[1][i] = i;
}
for(int i=2; i <= e; ++i)
{
for(int j=2; j <= f; ++j)
{
int min_trials = Integer.MAX_VALUE, current_floor_min_trials;
for (int current_floor=1; current_floor<=j; ++current_floor)
{
current_floor_min_trials = 1
+ max( DP_table[i-1][current_floor-1],
DP_table[i][j - current_floor] );
min_trials = min(min_trials, current_floor_min_trials);
}
DP_table[i][j] = min_trials;
}
}
return DP_table[e][f];
}
public static void main (String[] args)
{
int n = 2, f = 100;
System.out.println(min_total_trials(n, f));
}
}

The time complexity of the DP solution is O(n * f * f) and the space complexity is O(n * f), where n is the number of eggs and f is the number of floors.

Even though this works for small inputs, the above implementation times out when you run it on LeetCode. We need to change our thinking process: what if we do not think of floors and eggs together, but instead think in terms of the number of moves and the number of eggs? How about we find, given m moves and k eggs, how many floors we can test?

If we have 1 egg and 1 move only, then we can test only 1 floor. Let dp[m][k] be the number of floors we can test with m moves and k eggs. Imagine we make a move on floor x: there are two possibilities, either the egg breaks or it survives. If the egg breaks, we are left with k-1 eggs and m-1 moves, so we can still test the dp[m-1][k-1] floors below x. If it does not break, we still have k eggs and m-1 moves, so we can test the dp[m-1][k] floors above x.
For example, if the egg doesn’t break, the same egg can make its next try from floor x + (x-1): if it breaks there, the second egg still has x-2 opportunities and can try from floor x+1 up to floor x + (x-2); if it doesn’t break again, the third try can start from floor x + (x-1) + (x-2). Hence, the DP recurrence comes out as

dp[m][k] = dp[m-1][k-1] + 1 + dp[m-1][k]

Below is the implementation.

public int superEggDrop(int K, int N) {
int[][] dp = new int[N + 1][K + 1];
int m = 0;
while (dp[m][K] < N) {
++m;
for (int k = 1; k <= K; ++k)
dp[m][k] = dp[m - 1][k - 1] + dp[m - 1][k] + 1;
}
return m;
}

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 2 steps at a time, in how many possible ways can we climb to the top? For example, let N = 3 i.e. there are 3 steps to climb. In this case, we can reach the top by:
– 1 step, 1 step, 1 step
– 1 step, 2 steps
– 2 steps, 1 step
So there are 3 possible ways to reach the top if we can take {1, 2} steps at a time.

How can we think of a solution? At the position that we are currently standing (be it start of the staircase or in the middle somewhere on the staircase) we have two choices, we can either climb 1 step or we can climb 2 steps. This also means that we reached our current position on the staircase by climbing either 1 step or 2 steps, i.e. if we are at step say x we reached here by climbing from either (x-1)^{th} step or (x-2)^{th} step. To actually find in how many ways we can reach x, we need to know in how many ways we reached x-1 and x-2 in the first place and then add them to find the number of ways to reach x. To make this easier to understand let’s look at some examples.

Consider we are at step 0 (ground) and the staircase has just 1 step, so the number of ways to reach the top is:
1st way : [0🡪1] => take 1 step and we reach the top.

Consider the staircase has 2 steps, so the number of ways to reach the top is:
1st way: [0🡪1🡪2] => take 1 step at a time and climb one by one step to reach the top
2nd way: [0🡪2] => take 2 steps and reach the top

Consider the staircase has 3 steps, so the number of ways to reach the top is:
Now consider we are actually at the top, at step 3. According to our reasoning above, we can reach step 3 (x) either from step 2 (x-1) or from step 1 (x-2). Let’s consider the case of step 1 (x-2) first: we already know the number of ways to reach step 1, and that is 1 =>
1) [0🡪1]: using this we reach step 3 => [0🡪1🡪3].

Similarly, we already know the number of ways to reach step 2, and that is 2 =>
1) [0🡪1🡪2]: using this we get another way to reach step 3 => [0🡪1🡪2🡪3]
2) [0🡪2]: using this we get another way to reach step 3 => [0🡪2🡪3].

So finally we have our three ways to reach step 3:
[0🡪1🡪3], [0🡪1🡪2🡪3], and [0🡪2🡪3].

Notice that we just added 3 to already known paths for step 1 and step 2.

Consider the staircase has 4 steps, so the number of ways to reach the top is:
number of ways to reach step 2 (x-2) + number of ways to reach step 3 (x-1) = 5
From step 2: [0🡪1🡪2🡪4], [0🡪2🡪4]
From step 3: [0🡪1🡪3🡪4], [0🡪1🡪2🡪3🡪4] and [0🡪2🡪3🡪4].

So from the examples we understand that: total_ways(n) = total_ways(n-1) + total_ways(n-2).

Note that this means this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).
This formula is similar to the one we used to find N^{th} Fibonacci number. Actually total_ways(n) = findNthFib(n+1).

Notice it is the (N+1)^{th} Fibonacci number and not the N^{th}; that’s because of the base cases from where we start our calculations. Consider the case that the staircase has 0 steps to climb and we start from the ground (i.e. step 0); in this case we argue that there’s only 1 way to reach the top, i.e. [0].
With this argument:
total_ways(0) = 1, total_ways(1) = 1…
findNthFib(0) = 0, findNthFib(1) = 1, findNthFib(2) = 1…
and hence the formula total_ways(n) = findNthFib(n+1).

There’s another variation of this problem where the number of steps that can be taken at a time is not limited to just {1, 2}.

Total possible ways to climb a staircase (variation 2)

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 2 or 3 steps at a time, in how many possible ways can we climb to the top?
For example, let n = 3 i.e. there are 3 steps to climb. In this case, we can reach the top by:
– 1 step, 1 step, 1 step
– 1 step, 2 steps
– 2 steps, 1 step
– 3 steps
So there are 4 possible ways to reach the top of a staircase of 3 steps if we can take {1, 2, 3} steps at a time

Similar to the approach taken above, we can reach our current position on staircase by climbing either 1 step or 2 steps or 3 steps, i.e. if we are at step say x we reached here by climbing from either x-1^{th} step or x-2^{th} step or x-3^{th} step. So to actually find in how many ways we can reach x, we need to know in how many ways we reached x-1, x-2 and x-3 in the first place and then adding them will give number of ways to reach x.

total_ways(n) = total_ways(n-1) + total_ways(n-2) + total_ways(n-3)
This formula maps to a recursive solution:

#include <cstdio>
long long int findTotalWays(int n)
{
    if (n == 0 || n == 1) return 1;
    if (n == 2) return 2;
    return findTotalWays(n - 1) + findTotalWays(n - 2) + findTotalWays(n - 3);
}
int main(void)
{
    int N = 5;
    long long int total_ways = findTotalWays(N);
    printf("%lld", total_ways);
    return 0;
}

But this solution is not feasible as it has exponential time complexity. To find the reason behind this let’s look at the function call tree:

As we can see here that the subproblems such as total_ways(3) and total_ways(2) are calculated repeatedly. This means that the subproblems are overlapping.
As the two properties required for using dynamic programming : optimal substructure and overlapping subproblems hold true, we can use DP for this problem.

#include <cstdio>
#include <vector>
using namespace std;
int main(void)
{
    int N = 5;
    vector<long long int> DPVec(N + 1, 0);
    DPVec[0] = 1; DPVec[1] = 1; DPVec[2] = 2;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i - 1] + DPVec[i - 2] + DPVec[i - 3];
    }
    printf("%lld", DPVec[N]);
    return 0;
}

The time and space complexity of the implementation is O(n).

Total possible ways to climb a staircase (variation 3)

We have to climb a staircase and it takes N steps to reach the top. Given that we can climb 1 or 3 or 5 steps at a time, in how many possible ways can we climb to the top? To generalize, we can be given any set of allowed numbers of steps that can be taken at a time, and we have to find the number of possible ways to climb to the top.

Keeping in mind the above described approach, the formula again here is: total_ways(n) = total_ways(n-1) + total_ways(n-3) + total_ways(n-5) As negative step does not make any sense, we will recur for non-negative steps only.

#include <cstdio>
#include <vector>
using namespace std;
long long int findTotalWays(int n, vector<int> &Step_Set)
{
    if (n == 0) return 1;
    long long int total_ways = 0;
    for (int i = 0; i < (int)Step_Set.size(); ++i)
    {
        if ((n - Step_Set[i]) >= 0)
            total_ways += findTotalWays(n - Step_Set[i], Step_Set);
    }
    return total_ways;
}
int main(void)
{
    int N = 5;
    vector<int> Step_Set{1, 3, 5};
    long long int total_ways = findTotalWays(N, Step_Set);
    printf("%lld", total_ways);
    return 0;
}

Dynamic programming solution for the problem

#include <cstdio>
#include <vector>
using namespace std;
int main(void)
{
    int N = 5;
    vector<int> Step_Set{1, 3, 5};
    vector<long long int> DPVec(N + 1, 0);
    DPVec[0] = 1;
    for (int i = 1; i <= N; ++i)
    {
        for (int j = 0; j < (int)Step_Set.size(); ++j)
        {
            if ((i - Step_Set[j]) >= 0)
            {
                DPVec[i] += DPVec[i - Step_Set[j]];
            }
        }
    }
    printf("%lld", DPVec[N]);
    return 0;
}

The time complexity is O(m*n) and the space complexity is O(m + n), where m is the size of the set of allowed steps and n is the number of steps in the staircase. These implementations work in general for any set of allowed steps.

Given a Fibonacci series: 1, 1, 2, 3, 5, 8, 13 … which is defined as fib(n) = fib(n-1) + fib(n-2), find N^{th} number in this series. For example, 5^{th} Fibonacci number is 5. 11^{th} Fibonacci number is 89.

By definition of the Fibonacci series, it is clear that every number in the series is a sum of the last two numbers in the series. So to find nth number we just need to know (n-1)^{th} and (n-2)^{th} number, once we find these two numbers just adding them will give us the answer.

fib(n) = fib(n-1) + fib(n-2)

But how do we find these numbers? We keep taking the same approach for (n-1)^{th} and (n-2)^{th} number, i.e.
fib(n-1) = fib(n-2) + fib(n-3) and
fib(n-2) = fib(n-3) + fib(n-4)
…

We stop only when we hit fib(1) = 1 and fib(2) = 1.
This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems).

Recursive approach

The explanation/formula given above maps to simple recursion:

#include <cstdio>
long long int findNthFib(int n)
{
    if (n == 1 || n == 2) return 1;
    return findNthFib(n - 1) + findNthFib(n - 2);
}
int main(void)
{
    int N = 11;
    long long int NthFib = findNthFib(N);
    printf("%lld", NthFib);
    return 0;
}

The recursive code looks extremely simple. This is one of the advantages of recursion, it saves the efforts of writing lots of code.
Everything looks fine and we are happy with our solution until we try to find 40th or so Fibonacci number using this approach. This implementation takes over 3 secs to find 43rd Fibonacci number and the execution time increases exponentially with increasing inputs.
To find the reason behind such high runtime let’s have a look at the recursive function call tree:

In this example of finding the 6th Fibonacci number, subproblems such as fib(4) and fib(3) are calculated repeatedly. Imagine how many such repeated calculations there would be when we use this implementation for finding the 43rd Fibonacci number! Hence the 3-second runtime.

Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems. As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. But before jumping to a dynamic programming solution, there’s another way to resolve the issue of overlapping subproblems in a recursive approach: Memoised approach. Let’s have a look at it first.

Memoised approach for Fibonacci series

To avoid repeated calculation in the recursive approach, we can store the base cases and the results of each fib() call in an array. The code would look like this:

#include <cstdio>
#include <vector>
using namespace std;
long long int findNthFib(int n, vector<long long int> &memo)
{
    if (memo[n] != 0)
        return memo[n];
    memo[n] = findNthFib(n - 1, memo) + findNthFib(n - 2, memo);
    return memo[n];
}
int main(void)
{
    int N = 43;
    vector<long long int> memo(N + 1, 0);
    memo[1] = 1; memo[2] = 1;
    long long int NthFib = findNthFib(N, memo);
    printf("%lld", NthFib);
    return 0;
}

With the memoized approach the function call tree looks like this:

By memoizing intermediate results, we avoid repeated calculations. The time complexity of the memoized approach is O(n) and Space complexity is O(n). In both the approaches described above, observe that we took a top-down approach, i.e. we started from n and went down till 1.
Unlike recursion, Dynamic Programming uses a bottom-up approach, let’s see how it’s done in DP.

Dynamic Programming approach

In DP we start calculating from the bottom and move up towards the final solution. For this problem we first find 1st Fibonacci number, then 2nd, then 3rd and so on until N^{th} Fibonacci number. To aid this approach we use an array/vector where we will store the intermediate results while we move towards the final solution. The code looks like this:

#include <cstdio>
#include <vector>
using namespace std;
int main(void)
{
    int N = 43;
    vector<long long int> DPVec(N + 1, 0);
    DPVec[1] = 1; DPVec[2] = 1;
    for (int i = 3; i <= N; ++i)
    {
        DPVec[i] = DPVec[i - 1] + DPVec[i - 2];
    }
    printf("%lld", DPVec[N]);
    return 0;
}

Time complexity: O(n) Space complexity: O(n)
The space complexity of the DP code can be reduced by storing just the last two results instead of every result; in this way the array/vector is no longer required.
Moreover, there are other solutions for finding N^{th} Fibonacci number in O(log N) time using matrices and in O(1) constant time using the golden ratio, but this article is limited to DP approach.

There is a wall with 4 x N dimensions and we have bricks with 4 x 1 dimensions. We have to fill the wall with the given bricks and find out how many ways are possible to fill that wall.

For example, if there is a wall with N = 3, we have only one way to fill the wall: three bricks laid horizontally.

Whereas with N = 4, there are two ways: either put four bricks horizontally, or four bricks vertically.

Actually, the examples themselves give away the answer to our problem. Let’s start small and build on top of it. What if N = 1? Then the wall dimensions are 4 x 1, and there is only one way to fill that wall with a 4 x 1 brick, which is to lay the brick horizontally.

What if N = 2, i.e. the wall is 4 x 2? Again, there is only one way possible: put two bricks horizontally. We cannot put bricks vertically. Why? Because a vertical placement needs four bricks side by side to cover the wall’s width, consuming 4 units of height, which is only possible when N >= 4.

Take N = 3, i.e. a wall of 4 x 3: the only way we can fill the wall is to put three bricks horizontally; we can’t use a vertical brick.

What if N = 4, a wall with 4 x 4 dimensions? In this scenario, we have two options: put four bricks horizontally or four bricks vertically, so there are two ways to fill a 4 x 4 wall with 4 x 1 bricks.

Now, if the number of ways to fill a wall of dimension 4 x N is f(N), then f(N) for the values 1, 2 and 3 is as follows:

f(1) = 1, f(2) = 1, f(3) = 1

For a wall of size greater than 4 x 3, we have two choices for each brick: place it vertically or place it horizontally.

If we place bricks vertically, each brick covers four of the N units of the wall’s height, and four vertical bricks are required side by side to cover the width, so the problem reduces to N-4 units.

If we place a brick horizontally, it covers only 1 unit of the wall’s height, hence we still need to cover N-1 units.
So, for N we have the relationship

f(N) = f(N-1) + f(N-4)

We have the recurrence relation and the base conditions, let’s implement it.

Fill wall with brick : recursive implementation

int findWays(int n){
if(n == 0 || n == 1 || n == 2 || n == 3) return 1;
return findWays(n-1) + findWays(n-4);
}
int main(void) {
int N = 5;
int ways = findWays(N);
printf("%d", ways);
return 0;
}

Do you think this solution is optimized? Why do you think it can be optimized, and how? If you look closely at the recursion tree of the implementation, you will see the problem: some of the subproblems are solved repeatedly. Can we avoid solving them again and again? Yes, that’s called memoization.

Well, this problem can be solved using dynamic programming, because two properties hold: first, optimal substructure (the solution to the original problem is built from solutions to its subproblems); second, overlapping subproblems.

The dynamic programming approach would be to fill a table bottom up, where table[N] will be the solution. table[0] = table[1] = table[2] = table[3] = 1, as discussed above.

Now from N = 4, we can fill the table bottom up as

table[N] = table[N-1] + table[N-4]

Fill wall with brick : dynamic programming implementation

#include <stdio.h>

/* fills table[4..n] bottom up; table[0..3] must already be set to 1 */
void find_ways(int n, int table[]) {
    for (int i = 4; i <= n; i++) {
        table[i] = table[i - 1] + table[i - 4];
    }
}

int main(void) {
    int N = 5;
    int table[N + 1];
    table[0] = 1;
    table[1] = 1;
    table[2] = 1;
    table[3] = 1;
    find_ways(N, table);
    printf("%d\n", table[N]);
    return 0;
}

The complexity of the dynamic programming approach is O(N), with a space complexity of O(N).

Please share if there is something wrong or missing. If you are willing to share your knowledge and help thousands of learners across the world, please reach out to us on [email protected]
