Given an array of integers A[], find the length of the longest subarray with the sum at most k where k is an integer.
For example:

Input:
A[] = [10, 5, 2, 7, 1, 9],
k = 15
Output:
4
Explanation
The longest subarray with sum at most k is [5, 2, 7, 1].
Input:
A[] = [-5, 8, -14, 2, 4, 12],
k = 5
Output:
5

Longest subarray with sum at most k

First of all, if you are not aware of the fundamentals of a sliding window, I would strongly recommend reading: Sliding window concept

The question is: how do we know that this is a sliding window problem? Two hints are present in the problem itself. First, we are looking for a subarray, i.e., a window. Second, that window has a certain property, i.e., the sum of its elements is at most k.

We will follow the standard pattern for the sliding window: define two pointers, left (l) and right (r). r expands the window, and l shrinks the window whenever the property of the window is violated. Every time we expand the window, we add the new number to the current sum. If the sum becomes greater than k, we shrink the window from the left, subtracting numbers from the sum, until the sum is at most k again. After each expansion (and any shrinking it triggers), we note the length of the current window and update the maximum length if it is greater than the previous maximum.

Let’s take an example and see how it works.

Show me the implementation

package AlgorithmsAndMe;

public class SubarrayWithSumK {
    public int lengthOfSubarray(int[] a, int k) {
        int l = 0;
        int r = 0;
        int len = a.length;
        int max = 0;
        int currentSum = 0;

        while (r < len) {
            currentSum += a[r];
            //Shrink the window until the sum is at most k again
            while (l <= r && currentSum > k) {
                currentSum -= a[l];
                l++;
            }
            //Window [l, r] now has sum at most k; record its length
            max = Math.max(max, r - l + 1);
            r++;
        }
        return max;
    }
}

The time complexity of the above code is linear, i.e., O(n), with O(1) space complexity.

If you are interested, there is a similar problem on LeetCode called Subarray Product Less Than K; you can try it there.

Show me the answer

class Solution {
    public int numSubarrayProductLessThanK(int[] nums, int k) {
        if (k == 0) return 0;
        int l = 0;
        int r = 0;
        int len = nums.length;
        int count = 0;
        int currentProd = 1;

        while (r < len) {
            currentProd *= nums[r];
            while (l <= r && currentProd >= k) {
                currentProd /= nums[l];
                l++;
            }
            //Every subarray ending at r with start in [l, r] qualifies
            count += r - l + 1;
            r++;
        }
        return count;
    }
}


Given a string, remove consecutive duplicate characters from the string. For example:

Input:
S = "aabccacc"
Output:
"ba"
Input:
S = "accacc"
Output:
""

The solution lies in the idea that we need two pointers: one to read a character and another to write a character. Let's take two pointers, i and j, where i points to the index where the next character will be written, and j is the position of the current character being read.

As strings in most languages are immutable objects and we plan to modify the string, it is a better idea to convert it to a character array first. For each character at index j, we copy it to index i and then check whether chars[i-1] and chars[i] are equal. If yes, we have found consecutive duplicate characters, and we move i back by 2 to remove both instances (the loop increment then brings i to the position just after the removed pair). If not, nothing changes.

Remove consecutive duplicates from string: implementation

package AlgorithmsAndMe;

public class RemovedDuplicates {
    public String removeDuplicates(String s) {
        char[] res = s.toCharArray();
        int i = 0;
        for (int j = 0; j < res.length; j++, i++) {
            res[i] = res[j];
            //Found a consecutive duplicate pair; step back to drop both
            if (i > 0 && res[i] == res[i - 1]) {
                i -= 2;
            }
        }
        return String.valueOf(res).substring(0, i);
    }
}

Sometimes the same question is asked with k as a parameter: given a string, remove all runs of k consecutive duplicate characters.
For example:

Input:
S = deeedbbcccbdaa, K = 3.
Output:
"aa"

The idea is the same: keep two pointers i and j, one for writing and another for reading. We copy each character onto index i. If the new character is the same as the previous character, we increment the count; if not, we reset the count. If the count equals k, we remove all k instances of the character by moving i back by k.

package AlgorithmsAndMe;

public class RemovedDuplicates {
    public String removeDuplicates(String s, int k) {
        char[] res = s.toCharArray();
        int i = 0;
        int[] count = new int[res.length];
        for (int j = 0; j < res.length; j++, i++) {
            res[i] = res[j];
            count[i] = (i > 0 && res[i - 1] == res[j]) ? count[i - 1] + 1 : 1;
            //A full run of k duplicates ends here; step back to drop it
            if (count[i] == k) {
                i -= k;
            }
        }
        return String.valueOf(res).substring(0, i);
    }
}

The time complexity of both implementations is O(n).

Given a singly linked list, reverse it. For example:

Input:
1->2->3->4->5
Output:
5->4->3->2->1

This problem is rarely asked standalone in an interview, but it is very important for answering several questions commonly asked in Amazon and Facebook interviews, like reversing K nodes in a linked list or reversing K alternating nodes.

The idea to reverse a linked list is simple: take each node, make it point to its previous node, and go to the next node. For each node, we maintain three pointers: current, which points to the node we are working on; prev, the node immediately before current; and next, the node immediately after current.

How do we change the pointers? If we change current.next to point to the previous node, we cannot go forward. So, the first thing to do is secure our way forward by storing current.next in next.

Now that we have secured the next pointer, we can change current.next to point to prev. As we will be moving forward, the new previous node will be the current node, so change prev to current.
The last step: move forward by making current equal to next.

Here is an example of reversing a linked list

Keep doing this until no nodes remain in the linked list. What should we return now: current, prev, or next? When current is null, next is also null, and we cannot return null for a list that was non-empty, right? The only pointer that now points to the last node processed (the new head) is prev, so prev should be returned.

There is another point of confusion: how to initialize these three pointers. Which node will be processed first? The head. So current should be initialized to the head of the linked list. What will the next pointer of the head be in the reversed linked list? Null. So initialize prev as null. For next, it does not matter; you can initialize it with null or with head, as it is overwritten in the first iteration.

Reverse a linked list implementation

package AlgorithmsAndMe;

/**
 * Definition for singly-linked list.
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode(int x) { val = x; }
 * }
 */
public class ReverseLinkList {
    public ListNode reverseList(ListNode head) {
        ListNode prev = null;
        ListNode next = head;
        ListNode current = head;

        while (current != null) {
            next = current.next;   //Secure the way forward
            current.next = prev;   //Reverse the link
            prev = current;        //Move prev forward
            current = next;        //Move current forward
        }
        return prev;
    }
}

The time complexity of reversing a linked list is O(n).

Given an array nums of n integers, are there elements a, b, c in nums such that a + b + c = 0? Find all unique triplets in the array that sum to zero.

The solution set must not contain duplicate triplets.

This problem is commonly known as the three-sum problem.

This problem is commonly asked in Amazon and Google interviews.

Three Sum problem: thought process

Before going into the details of finding three numbers with a given sum, do you know how to find two numbers with a given sum s? The idea is simple: we keep a map of all the numbers we have already seen. When we see a number a[i], we check in the map whether s - a[i] is present. If yes, we have found the pair; if not, we put a[i] in the map and move forward.

This solution has linear time complexity, i.e., O(n), with a space complexity of O(n) as well. There is a way to avoid the extra space by sorting the input array first and using the two-pointer approach.
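The map-based lookup described above can be sketched as follows (the class and method names here are mine, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

class TwoSum {
    //Returns indices of two numbers in a that add up to s, or null if no pair exists.
    static int[] findPairWithSum(int[] a, int s) {
        //Map from each value already seen to its index
        Map<Integer, Integer> seen = new HashMap<>();
        for (int i = 0; i < a.length; i++) {
            //If s - a[i] was seen before, we have found the pair
            if (seen.containsKey(s - a[i])) {
                return new int[]{seen.get(s - a[i]), i};
            }
            seen.put(a[i], i);
        }
        return null;
    }
}
```

For example, findPairWithSum(new int[]{10, 5, 2, 7, 1, 9}, 9) finds the pair 2 and 7 at indices 2 and 3.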

Can we use our understanding of the two-sum problem to solve this three-sum problem?

Hint 1:
What happens if we take one number from the array and say that this number will be one of the three numbers which add up to zero? What happens to our problem statement now?

Hint 2:
If you have decided that A[0] will be part of the three numbers which add up to zero, all we need is to find two numbers from index 1..len-1 which add up to 0 - A[0]. What does that mean? It looks like a 2-sum problem, doesn't it?

Can you please explain more?

We will iterate index i from 0 to len-1, each time claiming that A[i] is part of the solution. Once we fix A[i], we find two numbers which add up to 0 - A[i], a problem we already know how to solve. If we find those two numbers, say A[j] and A[k], then A[i], A[j], and A[k] form the triplet. If we do not find them, we discard A[i] and move forward.

Show me the implementation of three sum problem

class Solution {
    public List<List<Integer>> threeSum(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        //O(n log n)
        Arrays.sort(nums);

        for (int i = 0; i < nums.length; i++) {
            //Skip duplicate values of nums[i] to avoid duplicate triplets
            if (i > 0 && nums[i] == nums[i - 1]) {
                continue;
            }
            //In a sorted array, a positive nums[i] cannot start a zero-sum triplet
            if (nums[i] > 0) continue;

            int j = i + 1;
            int k = nums.length - 1;
            int target = 0 - nums[i];

            /*
              Below code is exactly the same as finding
              two numbers with a given sum using two pointers
            */
            while (j < k) {
                if (nums[j] + nums[k] == target) {
                    /* Found j and k such that elements at
                       indices i, j and k add up to zero; put them in the result */
                    result.add(new ArrayList<Integer>(
                        Arrays.asList(nums[i], nums[j], nums[k])));
                    j++;
                    k--;
                    while (j < k && nums[j] == nums[j - 1]) j++; //skip duplicates
                    while (j < k && nums[k] == nums[k + 1]) k--; //skip duplicates
                } else if (nums[j] + nums[k] > target) {
                    k--;
                } else {
                    j++;
                }
            }
        }
        return result;
    }
}

If you do not come up with the idea of moving i, j, and k to get unique triplets, it is OK. You can always use a HashSet to filter out duplicates and then copy the HashSet into the result list. It will take more time but will give the correct result.

The time complexity of the solution is O(n^2) plus the complexity of the sorting algorithm (never forget to mention that!). However, sorting complexity is O(n log n), so the overall complexity is dominated by the quadratic term.

Find the count of n-digit numbers whose sum of digits equals a given sum s.
For example,

Input:
n = 2 and s = 5
Output:
5
Explanation:
The 2-digit numbers whose digits sum to 5 are 14, 23, 32, 41, and 50. Note that numbers with leading zeroes are not counted as n-digit numbers, i.e., in the above example we do not consider 05 a 2-digit number.

Sum of n-digit numbers thought process

A brute force solution that first comes to mind is to consider each n-digit number, calculate its sum of digits, and count the ones where the sum is s. But this solution has exponential time complexity, so we need to find something better. Let cnt(n, s) denote the count of n-digit numbers whose digits sum to s, and let's consider a few examples.

If we look closely, we can see that the numbers in the case (n=3, sum=5) are generated from the numbers in the cases (n=2, sum=4), (n=2, sum=3), (n=2, sum=2), (n=2, sum=1), and (n=2, sum=0) by fixing the most significant digit. As stated in the question, leading zeroes are not allowed, so for the most significant digit the available choices range from 1 to 9 or up to the given sum (whichever is less):

1 _ _ 2 _ _ 3 _ _ 4 _ _ 5 _ _

After fixing one digit, we have n-1 (i.e., 3-1 = 2) digits left, and the sum reduces by the digit value i, where i ranges from 1 to 9.

Let's consider the case of (1 _ _): after fixing the most significant digit to 1, the problem reduces to the remaining n-1 = 3-1 = 2 digits with reduced sum s-1 = 5-1 = 4, which is nothing but the sub-problem cnt(n=2, s=4).
Note, however, that when fixing any digit other than the most significant one, 0 can also be used as a digit, so that sub-problem is instead denoted cntWithZero(n=2, s=4). The second digit can be fixed similarly, but this time 0 is also a candidate:

1 0 _ 1 1 _ 1 2 _ 1 3 _ 1 4 _

After fixing 2 digits, the problem reduces to 1-digit numbers with reduced sum s-i, where i ranges from 0 to 9 or up to the last reduced sum (whichever is less).

In this way, for any n-digit number, we can keep fixing one digit at a time and solving the sub-problems until all n digits are fixed, or until s reaches zero before all digits are fixed.
As observed, the numbers for cnt(n=3, s=5) are generated by fixing the most significant digit, so the solution for cnt(n=3, s=5) can be built from its sub-problems.

Considering this observation, we can say that cnt(n, s) can be calculated from the sub-problems cntWithZero(n-1, s-i):

cnt(n, s) = SUM(cntWithZero(n-1, s-i)), where 1 <= i <= min(9, s)
cntWithZero(n, s) = SUM(cntWithZero(n-1, s-i)), where 0 <= i <= min(9, s)

This shows this problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems), which is the first condition for application of dynamic programming.

Recursive implementation C++

#include <cstdio>

long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        if (sum == 0)
            return 1;
        return 0;
    }
    if (sum == 0)
        return 1;

    long long int ans = 0;
    for (int i = 0; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

long long int cnt(int n, int sum)
{
    long long int ans = 0;
    //The most significant digit cannot be zero
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Recursive implementation Java

import java.io.*;

class digit_sum
{
    static int cntWithZero(int n, int sum)
    {
        if (n == 0)
        {
            if (sum == 0)
                return 1;
            return 0;
        }
        if (sum == 0)
            return 1;

        int ans = 0;
        for (int i = 0; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    static int cnt(int n, int sum)
    {
        int ans = 0;
        for (int i = 1; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    public static void main(String args[])
    {
        int n = 3, s = 5;
        System.out.println(cnt(n, s));
    }
}

This implementation has exponential time complexity. Let’s have a look at the recursive function call tree to find the reason behind this:

In this example of cnt(n=3, s=5), subproblems such as cntWithZero(n=1, s=2) and cntWithZero(n=1, s=1) are calculated repeatedly. Our recursive algorithm for this problem solves the same subproblem over and over rather than always generating new subproblems. These are called overlapping subproblems, the second condition to apply dynamic programming.

As the two properties required for using Dynamic Programming: optimal substructure and overlapping subproblems hold, we can use DP for this problem. But before jumping to DP solution, there’s another way to resolve the issue of overlapping subproblems in a recursive approach: Memoized approach.

Memoized approach C++

#include <cstdio>
#include <cstring>

long long int memoized_table[101][101];

long long int cntWithZero(int n, int sum)
{
    if (n == 0)
    {
        if (sum == 0)
            return 1;
        return 0;
    }
    if (memoized_table[n][sum] == -1)
    {
        long long int ans = 0;
        for (int i = 0; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        memoized_table[n][sum] = ans;
    }
    return memoized_table[n][sum];
}

long long int cnt(int n, int sum)
{
    long long int ans = 0;
    memset(memoized_table, -1, sizeof(memoized_table));
    for (int i = 1; i < 10; ++i)
    {
        if (sum - i >= 0)
        {
            ans += cntWithZero(n - 1, sum - i);
        }
    }
    return ans;
}

int main(void)
{
    int n = 3, s = 5;
    printf("%lld", cnt(n, s));
    return 0;
}

Memoized approach Java

import java.io.*;

class digit_sum
{
    static int memoized_table[][] = new int[101][101];

    static int cntWithZero(int n, int sum)
    {
        if (n == 0)
        {
            if (sum == 0)
                return 1;
            return 0;
        }
        if (memoized_table[n][sum] == -1)
        {
            int ans = 0;
            for (int i = 0; i < 10; ++i)
            {
                if (sum - i >= 0)
                {
                    ans += cntWithZero(n - 1, sum - i);
                }
            }
            memoized_table[n][sum] = ans;
        }
        return memoized_table[n][sum];
    }

    static int cnt(int n, int sum)
    {
        int ans = 0;
        for (int i = 0; i <= 100; ++i)
        {
            for (int j = 0; j <= 100; ++j)
            {
                memoized_table[i][j] = -1;
            }
        }
        for (int i = 1; i < 10; ++i)
        {
            if (sum - i >= 0)
            {
                ans += cntWithZero(n - 1, sum - i);
            }
        }
        return ans;
    }

    public static void main(String[] args)
    {
        int n = 3, s = 5;
        System.out.println(cnt(n, s));
    }
}

The time complexity of the above implementation is O(ns), along with a space complexity of O(ns).

Sum of digits Dynamic Programming Approach

In the dynamic programming approach we go bottom-up, using a 2D array (DP table). Start from the bottom, i.e., i = 0, j = 0, and keep solving each sub-problem, storing its result in the DP table, until we reach i = n and j = s.

Dynamic programming Java solution of sum of digits problem
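A minimal bottom-up sketch of the table-filling idea described above (class and variable names here are mine, for illustration):

```java
class DigitSumCount {
    //dp[i][j] = count of i-digit strings (leading zeros allowed) with digit sum j,
    //i.e., the iterative equivalent of cntWithZero(i, j) from the recursive approach.
    static long cnt(int n, int s) {
        long[][] dp = new long[n][s + 1];
        dp[0][0] = 1; //the empty string has digit sum 0
        for (int i = 1; i < n; i++) {
            for (int j = 0; j <= s; j++) {
                //Fix the last digit d and reuse the (i-1)-digit sub-problem
                for (int d = 0; d < 10 && d <= j; d++) {
                    dp[i][j] += dp[i - 1][j - d];
                }
            }
        }
        //The most significant digit cannot be zero: it ranges from 1 to 9
        long ans = 0;
        for (int d = 1; d < 10 && d <= s; d++) {
            ans += dp[n - 1][s - d];
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(cnt(3, 5)); // prints 15, matching the recursive cnt(3, 5)
    }
}
```

Like the memoized version, this runs in O(ns) time (with a constant factor of 10 for the digit loop) and O(ns) space.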

There are some problems that do not appear to be binary search problems at first. For example, the ship capacity problem on LeetCode. The problem statement is:
A conveyor belt has packages that must be shipped from one port to another within D days.
The i-th package on the conveyor belt has a weight of weights[i]. Each day, we load the ship with packages on the conveyor belt (in the order given by weights). We may not load more weight than the maximum weight capacity of the ship.
Return the least weight capacity of the ship that will result in all the packages on the conveyor belt being shipped within D days.

For example:

Input: weights = [1,2,3,4,5,6,7,8,9,10], D = 5
Output: 15
Explanation:
A ship capacity of 15 is the minimum to ship all the packages in 5 days like this:
1st day: 1, 2, 3, 4, 5
2nd day: 6, 7
3rd day: 8
4th day: 9
5th day: 10
Note that the cargo must be shipped in the order given, so using a ship of capacity 14 and splitting the packages into parts like (2, 3, 4, 5), (1, 6, 7), (8), (9), (10) is not allowed.

Ship capacity problem and binary search algorithm

At first glance, it does not appear to be a binary search problem. Where is the input we will search on? That's where we have to remember that to apply a binary search, we do not actually need an input set; all we need are a lower and an upper bound.

Hint 1
What is the minimum capacity you would need to ship this cargo? You could choose the lowest possible capacity if you had an infinite number of days to ship all the weights.

Hint 2
What if you have only one day to ship all the weights? In that case, what will be the capacity of the ship? Would you ever need more capacity than that?

To find these bounds, try to push the constraint to extremes. In the ship capacity example, push the number-of-days constraint to an extreme. What if we had an infinite number of days to ship the cargo?

Can you please explain more?

In that case, we would still need a ship with capacity at least equal to the heaviest single package; otherwise, no matter how many days we have, we cannot ship that package. That gives the lower bound.

Again, what if we had only 1 day to ship the whole cargo? In that case, we need a ship that can take all the weights in one day, so the ship capacity should be the sum of all the weights. That gives the upper bound.

Now that we know the lower and upper bounds of the ship capacity, all we have to do is adjust the capacity and check whether we can ship the cargo in D days. Start with the mid capacity and test it. If it works, try a smaller capacity; if not, try a greater capacity. All we are doing is finding the first feasible capacity between the lower and upper bounds. It looks like the first-occurrence problem now.

Understood the concept but still need help with the implementation? Please see the code below.

Ship capacity problem implementation

public int shipWithinDays(int[] weights, int D) {
    int upperLimit = 0;
    int lowerLimit = 0;
    for (int i = 0; i < weights.length; i++) {
        upperLimit += weights[i];
        //Lower bound: the ship must at least carry the heaviest package
        lowerLimit = Math.max(lowerLimit, weights[i]);
    }

    //Find the first (smallest) feasible capacity in [lowerLimit, upperLimit]
    while (lowerLimit < upperLimit) {
        int shipCapacity = lowerLimit + (upperLimit - lowerLimit) / 2;
        if (isItPossible(D, shipCapacity, weights)) {
            upperLimit = shipCapacity;
        } else {
            lowerLimit = shipCapacity + 1;
        }
    }
    return lowerLimit;
}

private boolean isItPossible(int D, int shipCapacity, int[] weights) {
    int currentLoad = 0;
    int days = 1;
    for (int i = 0; i < weights.length; i++) {
        if (weights[i] > shipCapacity) return false;
        if (currentLoad + weights[i] > shipCapacity) {
            //Current package does not fit; it goes on the next day's ship
            days++;
            currentLoad = 0;
        }
        currentLoad += weights[i];
    }
    return days <= D;
}

The time complexity of the solution is O(n log c), where n is the number of weights and c is the size of the capacity range.

Please share if you find anything wrong or missing. If you are looking for coaching to prepare for your technical interviews, please book a free session with us.

Given a string S, partition the string into as many parts as possible so that each letter appears in at most one part, and return a list of integers representing the size of each part.
For example:

Input:
S = "ababcbacadefegdehijhklij"
Output:
[9,7,8]
Explanation:
The partition is "ababcbaca", "defegde", "hijhklij".
This is a partition so that each letter appears in at most one part.
A partition like "ababcbacadefegde", "hijhklij" is incorrect, because it splits S into fewer parts.

The fundamental idea is that all instances of a character must be in the same partition. So, we start with the first character and see where we can close the partition; the earliest we can do so is at the last instance of this character.
What if we find another character between the first character and that last instance? In that case, we may have to extend the partition. Why? If we did not extend it, the new character could end up in two partitions, which violates the constraint of the problem.

Once we have gone through all the characters between the start of the partition and the maximum of the last instances of those characters, we can close the partition and start a new one.

Let's take an example and see how it works. Start with the first character of the string and find the last instance of that character; in this case, it is a.

Go through all the characters up to that last instance; if the last instance of any of them is greater than the current last instance, expand the partition to include the new last instance. In this example, none of the characters has a last index beyond the last index of a.

Once we reach the last index, we can close the current partition and start a new one. Find the last instance of the first character of the new partition.

Go through the characters in between. In this example, character e has a last instance greater than the current one, so we extend the partition up to e.

There is no character in between with a last instance greater than the last instance of e, so we close the partition there and start the next one from the next character in the string, which is h.

Next, character i expands the partition to index 22, and the character after that, j, pushes it to 23. No other character has a last occurrence after that index, so we close the partition there.

Whenever we close a partition, we add its length, end - start + 1, to the list to be returned.

Partition labels implementation

class Solution {
    public List<Integer> partitionLabels(String S) {
        Map<Character, Integer> map = new HashMap<>();
        //Record the last index of each character
        for (int i = 0; i < S.length(); i++) {
            map.put(S.charAt(i), i);
        }

        int last = 0;
        int start = 0;
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < S.length(); i++) {
            last = Math.max(last, map.get(S.charAt(i)));
            //Reached the farthest last index seen so far: close the partition
            if (last == i) {
                list.add(last - start + 1);
                start = last + 1;
            }
        }
        return list;
    }
}

The time complexity of the implementation is O(n); note that we scan the string twice. We also need only constant space. Do not be confused by the map: its maximum size is bounded by 52 (upper- and lower-case characters) irrespective of the size of the string.


Tarjan's algorithm is very useful for finding strongly connected components in a directed graph. What is a strongly connected component?

What is a Strongly Connected Component?

A directed graph is said to be strongly connected if we can reach any node from every other node in the graph. A strongly connected component of a directed graph is a subset of the nodes in the graph such that any two nodes of this subset are reachable from each other.
For any two nodes u and v in such a component, there exists a path from u to v and vice versa.

For example, in the graph given below, there are 4 strongly connected components.

There are two well-known algorithms to find strongly connected components: Kosaraju's algorithm and Tarjan's algorithm.
Tarjan's algorithm requires only one depth-first search traversal to find all strongly connected components in the graph, making it more efficient than Kosaraju's algorithm, which requires two DFS traversals.
An important thing to know in order to understand Tarjan's algorithm is that depth-first traversal creates a DFS tree: when we traverse a graph, no node is visited twice.

Tarjan's algorithm to find Strongly Connected Components

The fundamental idea is simple: from a node u, if any of its descendants (including u itself) can reach any of u's ancestors, then there must be a cycle containing these nodes, and all of them belong to the same strongly connected component.

For example, if x is the root of the tree, y is a descendant of x, and there is an edge from y to another node z that is an ancestor of x, then the edge y-z together with the tree path from z down through x to y forms a cycle, and all nodes on it must be part of the same component.

The edge from y to z is called a back edge.

How can we know if there is a back-edge from a node to another? We keep track of two things for each node v of the graph:

tin[v]: the entry time of v, i.e., the number of nodes visited before v in the DFS.
low[v]: the smallest tin[x] of any node x reachable via a back edge from the subtree of v (including v itself).

The figure below displays tin[i] for each node. In the simplest terms, it is the entry time of the node in the depth-first traversal.

The interesting part is how to calculate low[i]. To start with, low[i] is the entry time tin[i] of node i itself. Going forward, for each node u, go through its neighbors:
1. If the neighbor v is not visited yet, the edge (u, v) is a tree edge, not a back edge. We go forward and run the DFS on v. Once that DFS is finished, we update low[u] as follows:

low[u] = min(low[u], low[v])

2. If the node v is already visited, there is a back edge from node u to its ancestor v. In that case, low[u] is updated as follows:

low[u] = min(low[u], tin[v])

For a node v, if tin[v] == low[v], then v is the root of a strongly connected component. If you want to print the nodes of each SCC, keep them on a stack; as soon as we find a node i with low[i] == tin[i], pop all nodes from the stack until node i itself is popped. All of these nodes belong to the same SCC.

If low[v] > tin[u], then the subtree of v cannot reach back above u; nodes u and v belong to different SCCs, and the edge (u, v) is a bridge.

Tarjan's algorithm to find strongly connected components

package AlgorithmsAndMe;

import java.util.*;

public class TarjansAlgorithm {

    Set<Integer> visited = new HashSet<>();
    //Nodes currently on the stack, i.e., in a component still being built
    Set<Integer> onStack = new HashSet<>();

    /* tin[v] stores the time when node v is visited */
    int[] tin;

    /*
      low[v] stores the minimum of:
        tin[v],
        tin[p] for all p for which (v, p) is a back edge,
        low[to] for all to for which (v, to) is a tree edge
    */
    int[] low;

    //To maintain a monotonically increasing time.
    int timer;

    void dfs(Graph g, int u, Stack<Integer> stack) {
        visited.add(u);
        //Entry time of the node; low starts out as the entry time
        tin[u] = timer;
        low[u] = timer;
        timer++;

        stack.push(u);
        onStack.add(u);

        //Go through all the neighbors
        for (int to : g.getNeighbors(u)) {
            if (!visited.contains(to)) {
                //Tree edge: do the DFS, then take the minimum of the lows
                dfs(g, to, stack);
                low[u] = Math.min(low[u], low[to]);
            } else if (onStack.contains(to)) {
                /* Back edge to a node still on the stack:
                   take the minimum of its entry time and the current low */
                low[u] = Math.min(low[u], tin[to]);
            }
            //Otherwise 'to' belongs to an already finished SCC; ignore it.
        }

        if (low[u] == tin[u]) {
            //u is the root of an SCC; pop and print all its nodes
            while (!stack.isEmpty() && stack.peek() != u) {
                int v = stack.pop();
                onStack.remove(v);
                System.out.print(v + " ");
            }
            stack.pop();
            onStack.remove(u);
            System.out.println(u);
        }
    }

    public void findConnectedComponents(Graph g) {
        timer = 0;
        Stack<Integer> stack = new Stack<>();
        tin = new int[g.getSize() + 1];
        low = new int[g.getSize() + 1];
        Arrays.fill(tin, Integer.MAX_VALUE);
        Arrays.fill(low, Integer.MAX_VALUE);

        for (Object node : g.getNodes()) {
            int i = (int) node;
            if (!visited.contains(i))
                dfs(g, i, stack);
        }
    }
}

As we can see from the code, we only need one DFS traversal; therefore, the complexity of the algorithm is O(|V| + |E|), where V and E are the sets of vertices and edges of the graph.

Given a 2d grid map of ‘1’s (land) and ‘0’s (water), count the number of islands. An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water.

Input:
11110
11010
11000
00000
Output: 1

The first challenge of this problem is to identify that it is a graph problem. To do that, we have to check whether the cells of the matrix can represent nodes and whether there is a relationship between cells that can act as the edges of the graph.
One hint is the relationship between the cells: in the island problem, 0s are water and 1s are land. Land connects only to adjacent land, and a piece of land is isolated if surrounded by water on all four sides.

To find the number of islands in the matrix, we have to find the number of connected components in the graphs as explained above. Which traversal can we use to find the number of connected components of a graph? It is a depth-first search problem.

In this case, the nodes are the cells with value 1 (land). Most of the time in these kinds of problems, each cell of the matrix with a certain value is a node; for an n x m matrix, we are looking at n x m possible nodes. Also, be aware that we are not interested in all the cells; based on the problem statement, we can discard some. For example, in the island problem, we only care about cells that represent land.

Now, what will be the edges? In these kinds of problems, a statement is given like an island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. It means that a cell with value 1 is connected to neighbor cells only if they also contain 1.

Now we know that cells containing 1 are the nodes of the graph, connected to neighboring cells if they also contain 1. To count the number of islands, all we have to do is count the connected components of this graph; we have reduced a matrix problem to a graph problem. Should we build the entire graph explicitly? No; at any point in time, we do not need the entire graph.

Number of islands implementation

public int numIslands(char[][] grid) {
    int count = 0;
    Set<Integer> visited = new HashSet<>();
    int n = grid.length;
    if (n <= 0) return 0;
    int m = grid[0].length;

    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            //Encode the cell (i, j) as a single integer
            int cell = i * m + j;
            if (!visited.contains(cell) && grid[i][j] == '1') {
                count++;
                dfs(i, j, visited, grid);
            }
        }
    }
    return count;
}

void dfs(int i, int j, Set<Integer> visited, char[][] a) {
    //Check if the node we are traversing is actually a valid node.
    if (i < 0 || i >= a.length
            || j < 0 || j >= a[0].length
            || a[i][j] == '0') return;

    int cell = i * a[0].length + j;
    if (visited.contains(cell)) return;
    visited.add(cell);

    //Visit neighbors
    dfs(i + 1, j, visited, a);
    dfs(i - 1, j, visited, a);
    dfs(i, j + 1, visited, a);
    dfs(i, j - 1, visited, a);
}

The time complexity of the above code is O(n * m), where n and m are the number of rows and columns of the matrix, since every cell is visited at most a constant number of times.
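To see the method in action, here is a small self-contained driver (the class name and grid values are illustrative, not from the original): a 3 x 3 grid whose 1-cells form two separate islands.

```java
import java.util.HashSet;
import java.util.Set;

public class NumIslandsDemo {
    public int numIslands(char[][] grid) {
        int count = 0;
        Set<Integer> visited = new HashSet<>();
        int n = grid.length;
        if (n <= 0) return 0;
        int m = grid[0].length;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                int cell = i * m + j;
                if (!visited.contains(cell) && grid[i][j] == '1') {
                    count++; // a new, unvisited land cell starts a new island
                    dfs(i, j, visited, grid);
                }
            }
        }
        return count;
    }

    private void dfs(int i, int j, Set<Integer> visited, char[][] a) {
        // Stop at the grid boundary or at water.
        if (i < 0 || i >= a.length || j < 0 || j >= a[0].length || a[i][j] == '0') return;
        int cell = i * a[0].length + j;
        if (visited.contains(cell)) return;
        visited.add(cell);
        dfs(i + 1, j, visited, a);
        dfs(i - 1, j, visited, a);
        dfs(i, j + 1, visited, a);
        dfs(i, j - 1, visited, a);
    }

    public static void main(String[] args) {
        // The three 1s in the top-left corner form one island; (2,2) is a second one.
        char[][] grid = {
            {'1', '1', '0'},
            {'0', '1', '0'},
            {'0', '0', '1'}
        };
        System.out.println(new NumIslandsDemo().numIslands(grid)); // prints 2
    }
}
```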

In mathematics, Binomial Coefficients are the positive integers that occur as coefficients in the binomial theorem.

A binomial coefficient is the coefficient of the term x^{k} in the polynomial expansion of (1 + x)^{n}, and it is indexed by a pair of integers n >= k >= 0.

In this post, we will denote it as bc(n, k).

For example: The fourth power of (1 + x) is –

(1 + x)^{4} = 1 + 4x + 6x^{2} + 4x^{3} + x^{4}
Writing this expression in terms of Binomial Coefficients -
(1 + x)^{4} = bc(4,0) * x^{0} + bc(4,1) * x^{1} + bc(4,2) * x^{2} + bc(4,3) * x^{3} + bc(4,4) * x^{4}
(1 + x)^{4} = 1 * x^{0} + 4 * x^{1} + 6 * x^{2} + 4 * x^{3} + 1 * x^{4}
The Binomial coefficient for n=4 and k=2 is bc(4, 2) = 6, the coefficient of (x^{2}) in the expansion of (1+x)^{4}.

Problem is – given values of n and k, find the binomial coefficient bc(n, k).

The binomial coefficients from these expansions can be arranged to form the following triangle-like structure, called Pascal's Triangle:

From Pascal’s triangle, we observe that an entry in any of the rows is addition of two entries from the row above –

bc(n, k) = bc(n-1, k-1) + bc(n-1, k)

This shows that the problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of subproblems). We can also observe that the first and last binomial coefficients of each row are 1, i.e. –

bc(n, 0) = bc(n, n) = 1

Recursive Approach C++ implementation

#include <cstdio>

long long int findBinomialCoeff(int n, int k){
    if(k == 0 || n == k) return 1;
    return findBinomialCoeff(n - 1, k - 1) + findBinomialCoeff(n - 1, k);
}

int main(void){
    int n = 4, k = 2;
    long long int binomialCoeff = findBinomialCoeff(n, k);
    printf("%lld\n", binomialCoeff);
    return 0;
}

The same can be implemented in Java as well.

Recursive Approach Java implementation

class BinomialCoefficient {
    static int findBinomialCoeff(int n, int k) {
        if (k == 0 || n == k) return 1;
        return findBinomialCoeff(n - 1, k - 1) + findBinomialCoeff(n - 1, k);
    }

    public static void main(String[] args) {
        int n = 4, k = 2;
        int binomialCoeff = findBinomialCoeff(n, k);
        System.out.printf("%d", binomialCoeff);
    }
}

The recursive code has exponential time complexity; let's have a look at the reasoning behind this:

In this example of finding Binomial coefficient for n=5 and k=3, subproblems such as bc(3,2) and bc(2,2) are calculated again and again. Our recursive algorithm for this problem solves the same subproblem over and over. These are called overlapping subproblems.

As the two properties required for dynamic programming — optimal substructure and overlapping subproblems — both hold here, we can use DP for this problem.
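Before building the bottom-up table, note that the overlapping subproblems can also be eliminated by memoizing the recursion above. The sketch below (class, method, and cache names are mine, not from the original) caches each bc(n, k) so every subproblem is computed exactly once:

```java
public class BinomialMemo {
    // memo[i][j] holds bc(i, j) once computed; 0 means "not computed yet".
    private static long[][] memo;

    private static long bc(int n, int k) {
        if (k == 0 || k == n) return 1;          // base cases of Pascal's triangle
        if (memo[n][k] != 0) return memo[n][k];  // reuse a cached subproblem
        memo[n][k] = bc(n - 1, k - 1) + bc(n - 1, k);
        return memo[n][k];
    }

    static long findBinomialCoeff(int n, int k) {
        memo = new long[n + 1][k + 1];
        return bc(n, k);
    }

    public static void main(String[] args) {
        System.out.println(findBinomialCoeff(5, 3)); // prints 10
    }
}
```

With memoization, each pair (n, k) is solved once, so the time complexity drops from exponential to O(nk), matching the bottom-up table that follows.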

In DP, we start calculating from the bottom and move up towards the final solution. The solutions of all subproblems are stored in a 2D array (the DP table) so that they can be reused when required.

Binomial coefficient with dynamic programming C++

#include <algorithm>
#include <vector>
using namespace std;

long long int findBinomialCoeff(int n, int k){
    // dp[i][j] stores bc(i, j)
    vector<vector<long long int>> dp(n + 1, vector<long long int>(k + 1, 0));
    for(int i = 0; i <= n; ++i){
        for(int j = 0; j <= min(i, k); ++j){
            if (j == 0 || i == j) dp[i][j] = 1;
            else dp[i][j] = dp[i-1][j-1] + dp[i-1][j];
        }
    }
    return dp[n][k];
}

Java implementation is very similar.
Java implementation

class BinomialCoefficient{
    static int min(int a, int b){
        return (a < b) ? a : b;
    }

    static int findBinomialCoeff(int n, int k){
        int[][] dp = new int[n + 1][k + 1];
        for(int i = 0; i <= n; ++i){
            for(int j = 0; j <= min(i, k); ++j){
                if (j == 0 || i == j) dp[i][j] = 1;
                else dp[i][j] = dp[i-1][j-1] + dp[i-1][j];
            }
        }
        return dp[n][k];
    }
}

The time complexity of the dynamic programming approach to the binomial coefficient is O(nk), and its space complexity is also O(nk).
The space complexity can be reduced by storing just the last row of Pascal's triangle; in fact, only (k + 1) elements of that row are required, so instead of a 2D array we can work with a 1D array.
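A sketch of that 1D-array optimization in Java (class and method names are mine): each row of Pascal's triangle is computed in place, iterating j from right to left so that dp[j-1] still holds the previous row's value when it is read.

```java
public class BinomialCoefficient1D {
    static long findBinomialCoeff(int n, int k) {
        long[] dp = new long[k + 1];
        dp[0] = 1; // bc(i, 0) = 1 for every row
        for (int i = 1; i <= n; i++) {
            // Right to left: dp[j-1] is still the value from row i-1.
            for (int j = Math.min(i, k); j >= 1; j--) {
                dp[j] += dp[j - 1]; // bc(i, j) = bc(i-1, j) + bc(i-1, j-1)
            }
        }
        return dp[k];
    }

    public static void main(String[] args) {
        System.out.println(findBinomialCoeff(4, 2)); // prints 6
    }
}
```

This keeps the O(nk) running time while cutting the space to O(k).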
