## Meeting Rooms II

Given a list of intervals representing the start and end time of N meetings, find the minimum number of rooms required to hold all the meetings.

```
Input:
Meetings: [[4,5], [2,3], [2,4], [3,5]]
Output:
2
Explanation:
We will need one room for [2,3] and [3,5], and another room for [2,4] and [4,5].
```


## Meeting Rooms II Thought Process

A person cannot attend two or more meetings at the same time in one room. If a meeting is going on in a room and another meeting starts before the previous one ends, we need to allocate a different room to that meeting.
So, in this problem, we have to find the minimum number of rooms required to hold all the meetings. A simple way to solve this is to allocate a different room whenever the current meeting’s timing clashes with a previous meeting’s timing. We sort the meetings by their start time so that we can schedule them and allocate rooms in order.

We will schedule the first meeting M1 in room R1. If the next meeting M2 does not clash with M1, we can hold it in the same room R1 (no need to allocate a different room). If the next meeting M3 clashes with M2, we cannot allocate the same room R1 to it; we have to allocate a different room R2. If meeting M4 overlaps with M3, we need to check whether room R1 has become free. For this, we keep track of the end time of the meeting happening in each room. If the end time of M2 is before the start time of M4, we can reuse room R1; otherwise, we need to schedule M4 in another room R3.

One way is to iterate through all the rooms and check whether each room is free. If any previously allocated room is free, schedule the meeting there; otherwise allocate a new room. Think of it this way: the meeting most likely to have finished is the one with the earliest end time. Are you getting the point? It means we only need to check, among the ongoing meetings, which one ends earliest. There is a data structure that gives us the smallest element in constant time: the min-heap. We can keep all the rooms in a min-heap where the key is the end time of the meeting in each room. Before allocating a new room to a meeting, we check the top element of the min-heap: if the top element (the earliest end time among all rooms) is not later than the start time of the meeting, we allocate that particular room to the meeting; otherwise we allocate a new room.

#### Algorithm

1. Sort all the meetings on the basis of their start time.
2. Create a min-heap and add the first meeting’s end time to it. The heap keeps track of when a meeting room will get free.
3. For every meeting, check the top element of the heap (the smallest end time).
• If the top element of the heap is not later than the current meeting’s start time, extract the top element and push the end time of the current meeting.
• Else, allocate a new room and push the end time of the current meeting into the heap.
4. Every time we allocate a new room, increment a counter that tells us how many rooms we needed.
```
class Solution {
public:
    int minMeetingRooms(vector<vector<int>>& intervals) {
        if (intervals.size() == 0) {
            return 0;
        }
        // Sort meetings by start time.
        sort(intervals.begin(), intervals.end());
        // Min-heap of meeting end times, one entry per room in use.
        priority_queue<int, vector<int>, greater<int>> pq;
        int count = 0;

        for (int i = 0; i < (int)intervals.size(); i++) {
            if (pq.size() == 0) {
                // No rooms yet: allocate the first one.
                count = count + 1;
                pq.push(intervals[i][1]);
            } else if (pq.top() <= intervals[i][0]) {
                // The earliest-ending meeting is over: reuse that room.
                pq.pop();
                pq.push(intervals[i][1]);
            } else {
                // All rooms are busy: allocate a new one.
                count = count + 1;
                pq.push(intervals[i][1]);
            }
        }
        return count;
    }
};
```

The time complexity of this implementation is O(N log N), where N is the total number of meetings. The sort at the beginning costs O(N log N), and while iterating over the meetings each heap push or pop takes O(log N), so the loop is O(N log N) as well. Overall it is O(N log N). The space complexity is O(N) for the heap.

## The DS/Algorithm Interview Script

Disclaimer: This post deals with the traditional “coding” interview where the focus is on solving one or more coding problems. I have interviewed in a double-digit number of companies from India, the US, and Europe and this post is a culmination of those experiences. This is the stable, generic approach I use in my interviews.

Interviews have certain aspects of randomness due to factors like the company you’re interviewing for, the interviewer, job role etc. Despite this, there are certain steps you can take which will help the interviewer judge the clarity of your thought process, thoroughness while solving problems, and overall problem solving ability. While the two scripts below try to provide a general path on approaching problems during an interview, remember that you have to improvise depending on how the interview is going, and the best way to achieve good results is to be yourself during the interview. A calm mindset while approaching the interview helps significantly, and there is no better way to establish a rapport with the interviewer than to be sincere :).

Another important aspect of interviewing that I have noticed a lack of is this: most people approach an interview as a showcase-my-abilities experience, or a me-vs-the-company battle. It is not. An interview is a journey you and the interviewer take together, where you solve a problem and get to know each other better (not just your abilities, but also who you are). Don’t be afraid to ask questions or speak your mind.

As an interviewee, you are not always on the back foot. An example of this is concepts I’m not completely sure about: I’ll often let the interviewer know that I’ll be looking them up, or straight up Googling them. I follow it up by explaining exactly what it was that I looked up and how that concept helps with my solution. Actively guiding the interview showcases that you can take charge when needed and bring the problem to its logical conclusion. After all, you are interviewing the company and judging it as much as they are you. Don’t come on too strong though ;).

## Solving a problem

Remember: Think out loud! (more tips on this at the end)

Solving a particular coding problem usually involves a series of steps. I discuss the flow of the interview and each step in detail below:

• Question – This is the question that is provided by the interviewer. You need to read it in thorough detail. Read it multiple times if necessary. There are multiple ways to approach it at this point. If you have a general idea of how to solve the problem, good for you. If not, try to identify if the problem is a mutation of a known, classical problem(e.g. searching in a graph), or some combination of those. If this doesn’t work either, try to come up with the simplest way you can solve the problem. Don’t think about complexities and optimizations at this point.

• Discussion – Your first step should be telling the interviewer exactly what you’re thinking and how you are working on refining the idea. It is important that you talk to the interviewer because they will help guide you in the right direction and correct any obvious mistakes you might be making. You might also have missed some details previously, and the interviewer can point those out to you. You know that feeling when someone you’re talking to is obviously wrong and you can’t help it but try and correct them? Yeah, that’s what we’re going for.
ASK QUESTIONS. I cannot stress this enough. Interviewers are often banking on the interviewee asking clarification questions and might leave out a detail or two to see if you understand the problem enough to notice it. They might have made a mistake and won’t know till you ask them. Any assumptions you make should be clarified. And the most important thing discussing the problem with the interviewer does at this point, is open up two-way communication and build a rapport between you two from the get go. Both of you are more comfortable and will have a better overall interview experience. Win-win!

• Test Case check – At this point in the interview when I understand the question really well, I will often look at the test cases to identify if the idea I’m thinking of works with the given test cases. Are there any obvious gotchas that they show me? Can I come up with a test case where my solution is still in a grey area? Can I ask the interviewer about this grey area and get more test cases? And as always, think out loud.

• Discuss Brute-force – Unless I am sure I have an efficient solution completely figured out in my head, I always, always, always, discuss the brute-force solution with the interviewer. The brute-force solution is usually easy to implement, gets working (pseudo?)code written down which the interviewer can use to score you/come back to later, and has easier time and space complexity discussions. Usually when discussing the brute-force solution, basic optimizations become obvious, and it gives us future paths which are useful in coming up with an efficient solution.
My process with this is to explain all the steps in my algorithm to the interviewer as a gist, since we will be going into more detail while writing actual code.

Edit: Divye has correctly pointed out that over time, as you practice giving interviews, you develop an intuition of developing the efficient solution in the first go. “But if a preconceived notion of needing to start out with brute force exists, often you lose out on that spark.”, as he wisely put it. I have noticed that I tend to rely more on my instinct, now that I know that I will follow a process and only need to worry about coming up with a solution. Experience helps with this too, I’m sure. Others have pointed out that an efficient solution can strike you midway at any step, and depending on the time remaining, you should judge whether to switch to that solution. Sound advice. But you always have this process to fall back on in case anything starts going awry ;).
Another good point brought up is thinking about the competition. Somebody out there is going to come up with an optimal solution no matter what. It’s a question of showcasing technical ability vs superiority. I am fine with the former, but you do you.

• Implement Brute-force – There is a decision to be made here: Should you implement the brute-force solution, or is it better to discuss and implement a more efficient solution? The more efficient solution will score you more points in that vertical, but if you are unable to come up with a complete efficient solution, it is better to have complete brute-force code. Completeness scores high as a measure because it showcases that you could correctly assess the constraints, including your abilities and the problem, and come up with a solution that works. Taking this decision involves quick judgment on your abilities and how deeply you understand the problem. I tend to not give myself the benefit of doubt and implement the brute-force if I cannot map out an efficient solution in my head yet. Leave a few minutes if the time for the interview is ending to complete the final discussions.
There are some general coding tips at the end of this section.

• Discuss Efficient solution – Wooo! Look at you discussing an efficient solution! You can get here either before implementing a brute-force solution, or after. In either case, you should be able to talk out loud how you came up with the approach or optimizations (if you knew it before, why didn’t you just implement it before?). Depending on the time remaining, and whether I have implemented a brute-force solution already, I will go into significant detail explaining this solution if I feel I will not be able to implement it completely, or less detail otherwise.

• Implement Efficient solution – If you’re here, you’re in a good spot in the interview :). This is you putting your best foot/solution forward, so you can confidently go forward. Again, leave some time at the end for further discussions. Coding tips are at the end.

• Dry run cases – Use the given test cases to verbally, STEP-BY-STEP walk through each line of your code and how it will execute. You don’t need to write down the intermediate values if you’re confident, but in some complex cases, I prefer writing those variable values down. This catches basic errors in your code, makes it easier for the interviewer to understand exactly what you’ve done, and shows the interviewer that you care about testing your code.

• Edge cases – The next step is stress-testing your code. Try out null values. Check for overflows. Any complex logic in your code? Make an edge case for that. Try out as many crazy inputs in your code as you can. DRY RUN THEM VERBALLY so your interviewer knows what you’re trying out. Increases your confidence in your code, and shows the interviewer how well you understand the system and underlying constraints.

• Time & Space complexity – I prefer having this discussion before discussing scalability, concurrency etc. so that we have bolted down this problem as solved and completed our discussion. Everything else is a perk. I usually approach time and space complexity in a bottom-up fashion, starting from the most basic operation and going up the tree. This approach verifies my values, and shows that I understand the process of calculating the complexity. I usually write this down at the end of my code in comments.

• Further discussion – Depending on the company, the time remaining and how the interview has gone, I will often discuss further with the interviewer, for e.g., how do I implement such an algorithm concurrently? How would I scale this approach to X users? How would I manage memory in case this grows too large?

### Coding tips

• Please use good variable names! var1 is useless when you come back to the code 2 months later and try to understand what the variable does. activeUserCounter is a much better descriptor. The typing-speed argument should not matter (auto-complete, copy-paste), and the benefits are absolutely worth it.

• Follow a standard coding style. I don’t care what you use unless you’re doing something unheard of, but follow a consistent pattern. This is often language and convention dependent.

• Don’t be afraid of classes/functions etc. Sure, you might feel that they add extra time not spent on writing logic, but if it simplifies understanding of your code, it will often be worth it. Not just for the interviewer, if you structure your code well, you have less cognitive load when writing other parts of your code.

## The Technical Interview Script

A technical interview is not just problem-solving, though that is primary. The basic script looks like:

• YOU NEED AN ELEVATOR PITCH. Don’t worry, it doesn’t make you one of those MBA-types. And you will invariably be asked to describe yourself at the beginning of the interview. The elevator pitch for an engineer is usually a 2 minute monologue on your work, or your journey, interspersed with some technical details and some real-life impact. You don’t need to explain everything. You shouldn’t say too little either. Remember the skirt analogy. If you don’t have one, make one and then tell it to as many people as you can and ask them what they think. In a few iterations, you should have something ready. Also, practice it till you can speak it confidently, since this is the first impression you’re going to leave on the interviewer.

• N Questions – Depending on the company, you will be expected to solve some questions. We’ve already discussed the question approach beforehand.

• Your questions – Depending on the time remaining, you can ask all the questions that might be plaguing you about the company. Don’t hesitate. Don’t be an idiot, but if you have a genuine doubt, no matter what it is, you shouldn’t hesitate. I usually ask about the creative freedom and ownership responsibilities because I am always curious about those, but your mileage may vary. Depending on how my interview went, I usually ask the interviewer if I performed well in all verticals they were judging me on.

• Call me maybe? – Well, that’s it. You’re done with the interview and now the only thing to do is wait. I’m not good at that, so best of luck! 🙂

### Practicing

These are my practices(?) for practicing interviews:

• Nothing replaces the real thing. Give mock interviews. I’m lucky enough to have friends who are already interviewers, and know how to create an atmosphere of a real interview. We try to not be friends during that time and recreate an interview as much as possible. Finding people who can interview you is a difficult personal journey, but I would spend significant time finding these people so that I can get working on this part of the process.
I started interviewing after I had revised all my concepts, and solved a large number of problems of all types on an online judge(I used Leetcode). Thus, I learnt to jump from question to solution in my head, detracting from training in the process of solving a question. In hindsight, that is the wrong approach, and I should have started right after I had revised most of my theory concepts and was practicing problems. A lot of steps in the question script came about as a result of feedback from my mock interviews. I had to consciously change how I approached solving problems while interviewing, and that is definitely the hard way. Why not just train yourself to think the interview way when practicing from the first time?

• If (and this is not a political statement) in the years post 2020, paper-pen and whiteboard interviews are in fashion again, you need to practice solving problems on the medium. You can still pull off a decently similar experience using video calls and paper-pen, or a whiteboard, I imagine. Most of my post quarantine interviews have been completely over telephone or video, though.

## Thinking out loud

Ok, I know this is the bane of coding rounds in interviews. If you say things out loud, you’re gonna think slower and might not be able to finish, but if you don’t, the interview is just a black hole where sometimes you might discuss something, but there is a major communication gap between the two of you. It is hard in the beginning, but if you remember to do a few simple things, you should be doing this in no time!

• Practice by teaching someone in extreme detail. It doesn’t matter what algorithm or concept you teach, this will help you in being able to talk about detailed aspects of the work using simple language and analogous concepts which will help you when you’re trying to explain what you’re thinking of on the fly. I notice that I clear the idea out in my head the fastest if I explain a concept to a friend from a widely different field, e.g. binary to decimal conversion and vice versa to my doctor friend.

• You don’t have to say everything – Knowing what to say is an art form in itself, but a simple tradeoff I make: if I know I’m not going to be able to talk while performing a task, e.g. dry-running a case, I prefix my silent computations by explaining what I will be doing in my head, e.g. “I’m about to try and prove if collinearity exists in this case, so I might fall silent, but I’ll try to keep you updated on what I’m thinking”. In any case, every few minutes I will let them know where I am.

• You don’t have to be completely coherent – They know you’re thinking on the fly and might not have things figured out in your head. You might talk about unrelated topics, concepts which you’re skimming to see if they fit the problem, or weird ideas which pop up in your head. They don’t have to be coherent together. But you should speak in complete sentences as much as possible :p. And once you’ve ideated and have an approach ready, you should be fairly coherent then.

• Ask them questions – You don’t have to be the only one talking. I often ask validation questions when talking, e.g. “Does that sound good?”. The responses often let me know how confidently the interviewer understands what I’m talking about, and I can course correct if I make any mistakes. These questions also buy you some time while making the interviewer pay active attention to you.

That does it for this post. Phew. That was a long one, wasn’t it? This is a very subjective post, so let me know any comments/suggestions/improvements.

Authored by Khushman, with feedback from too many people to list. I’m grateful for my smart friends. The rest of you, you know I love y’all :).

## Optimal Binary Search Trees

BSTs are used to organize a set of search keys for fast access: the tree maintains the keys in-order so that comparison with the query at any node either results in a match, or directs us to continue the search in left or right sub-tree.

For this problem we are given a set of search keys (0, 1, …, n) along with the search frequency count (f0, f1, …, fn) of each key. The set of keys is sorted. A BST can be constructed from such a set of keys so that keys can be searched quickly, but there is a cost associated with a search operation on a BST. The searching cost for a key/node in the BST is defined as the level of that key/node multiplied by its frequency, where the level of the root node is 1. The total searching cost of the BST is the sum of the searching costs of all the keys/nodes in it. Given a set of keys, the problem is to arrange the keys in a BST that minimizes the total searching cost.

For example:
Keys: {0, 1} and Freq: {10, 20}
Possible BSTs created from this set of keys are:

```
1) Total cost of BST = (level of key0 * freq of key0) +
                       (level of key1 * freq of key1)
                     = (1 * 10) + (2 * 20)
                     = 50
2) Total cost of BST = (level of key1 * freq of key1) +
                       (level of key0 * freq of key0)
                     = (1 * 20) + (2 * 10)
                     = 40
```

Hence, the minimum total searching cost for given set of keys is 40.

## Thought Process:

As per the definition of the searching cost for a key in the BST – (level of key (L) * freq of key (F)) – we can observe that at each level, from level 1 down to level L, the key contributes F to the total cost, and that is why its searching cost is (L * F).

In order to minimize the total search cost, a simple Greedy approach comes to mind: keep the keys with higher frequency near the top of the tree. We choose the key with the highest frequency as the root, then from the keys on its left we again choose the key with the highest frequency and make it the left child of the root; similarly we choose the right child of the root from the keys on its right, and so on, building a BST.

But will this approach build a BST that gives the minimum total search cost? To prove that a Greedy approach works, we have to give a proof. But to show that a Greedy approach fails, we just have to give one example where it doesn’t work.

Let’s consider this example: Keys: {0, 1, 2, 3, 4, 5, 6} and Freq: {22, 18, 20, 5, 25, 2, 8}.

Using the Greedy approach discussed above let’s build a BST and calculate its total cost:
Key ‘4’ has highest frequency: 25, so it will be the root. Keys {0,…,3} will be used to build the left sub-tree and keys {5, 6} will be used to build the right sub-tree.
Among keys {0,…,3}, key ‘0’ has the highest frequency, hence it will be the left child of the root.
Among keys {5, 6}, key ‘6’ has the highest frequency, hence it will be the right child of the root.
We keep doing this for all the remaining keys, and the final BST would look like this:

```
If Level of Key(k) is Lk and Frequency of Key(k) is Fk, then -

Total cost of Greedy BST = (L4 * F4)+(L0 * F0)+(L6 * F6)+
                           (L2 * F2)+(L5 * F5)+(L1 * F1)+
                           (L3 * F3)
                         = (1 * 25)+(2 * 22)+(2 * 8) +
                           (3 * 20)+(3 * 2) +(4 * 18)+
                           (4 * 5)
                         = 243
```

But is there any other possible BST that has lower total searching cost?
Let’s consider this BST:

```
Total cost of this BST = (1 * 20) + (2 * 22) + (2 * 25) +
                         (3 * 18) + (3 * 5) + (3 * 8) +
                         (4 * 2)
                       = 215
```

This BST has lower total cost (215) than the BST created using Greedy approach (243). Hence the Greedy approach fails to solve this problem. But then how do we find the optimal BST?

## Solution:

Let’s consider a given set of keys {Ki, …, Kj}, and let Min_Total_Cost(i, j) return the total searching cost of the optimal BST for this set.
Let’s say we have created the optimal BST for this set of keys and Kr is the root of this BST, such that i <= r <= j. The tree would look like this:

```
              Kr
             /  \
    Ki,…,Kr-1    Kr+1,…,Kj
```

The keys to the left of Kr in the given set will form the left sub-tree, and the keys to the right will form the right sub-tree. If Total_Cost(i, j) gives the total searching cost for this BST, then it includes –

1. The searching cost of the root, which is level of root (1) * frequency of the root key,
2. The total cost of the left sub-tree and the total cost of the right sub-tree (the sub-problems),
3. And, as explained earlier, placing the keys to the left and right of Kr one level deeper (inside the sub-trees) increases each of their levels by 1, so every such key incurs an extra cost equal to its frequency. Over both sub-trees, these extra contributions add up to the sum of the frequencies of all the keys except the root.
```
Total_Cost(i, j) = (Level of Kr * Freq of Kr)
                 + (Total searching cost of left sub-tree)
                 + (Total searching cost of right sub-tree)
                 + (Sum of frequencies of all the keys in the left sub-tree)
                 + (Sum of frequencies of all the keys in the right sub-tree)

                 = Total_Cost(i, r-1)
                 + Total_Cost(r+1, j)
                 + (Sum of frequencies of all the keys {Ki,…,Kj})
```

Since we do not know the key Kr, we will have to try each key in the set as the root of the BST, keeping track of the minimum total searching cost as we calculate them.
Using the formula above, we can write Min_Total_Cost(i, j) as –

```
Min_Total_Cost(i, j) = min over all r in {i,…,j} of
                       ( Min_Total_Cost(i, r-1)
                       + Min_Total_Cost(r+1, j)
                       + Sum of Fx for all x in {i,…,j} )

If i > j, which is not a valid set of keys, Min_Total_Cost(i, j) = 0.
```

This also shows that the problem has optimal substructure (i.e. an optimal solution can be constructed from optimal solutions of its subproblems).

### Recursive Approach:

Using this we can write a recursive implementation:

C++:

```
#include <bits/stdc++.h>
using namespace std;

int Min_Total_Cost(int freq[], int i, int j)
{
    // An empty set of keys costs nothing.
    if (i > j)
        return 0;

    int min_total_cost = INT_MAX;

    // Try every key in {i,..,j} as the root.
    for (int k = i; k <= j; ++k)
    {
        int total_cost = ( Min_Total_Cost(freq, i, k-1)
                         + Min_Total_Cost(freq, k+1, j)
                         + accumulate(freq+i, freq+j+1, 0));

        if (total_cost < min_total_cost)
            min_total_cost = total_cost;
    }

    return min_total_cost;
}

int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
{
    return Min_Total_Cost(freq, 0, num_keys-1);
}

int main()
{
    int keys[] = {0, 1, 2};
    int freq[] = {34, 8, 50};
    int n = sizeof(keys) / sizeof(keys[0]);

    cout<<"Total cost of Optimal BST: "<<getTotalCostOfOptimalBST(keys, freq, n)<<endl;

    return 0;
}
```

Java:

```
import java.io.*;

class OptimalBST
{
    static int sum(int freq[], int left_idx, int right_idx)
    {
        int sum = 0;
        for (int i = left_idx; i <= right_idx; ++i)
        {
            sum += freq[i];
        }
        return sum;
    }

    static int Min_Total_Cost(int freq[], int i, int j)
    {
        if (i > j)
            return 0;

        int min_total_cost = Integer.MAX_VALUE;

        for (int k = i; k <= j; ++k)
        {
            int total_cost = ( Min_Total_Cost(freq, i, k-1)
                             + Min_Total_Cost(freq, k+1, j)
                             + sum(freq, i, j));

            if (total_cost < min_total_cost)
                min_total_cost = total_cost;
        }

        return min_total_cost;
    }

    static int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
    {
        return Min_Total_Cost(freq, 0, num_keys-1);
    }

    public static void main (String[] args)
    {
        int keys[] = {0, 1, 2};
        int freq[] = {34, 8, 50};
        int n = keys.length;

        System.out.println("Total cost of Optimal BST: " +
                           getTotalCostOfOptimalBST(keys, freq, n));
    }
}
```

But this implementation has exponential time complexity. To find the reason behind such high time complexity let’s have a look at the recursive function call tree:

In this example of a set consisting of 3 keys {0, 1, 2}, we can see that subproblems such as Min_Total_Cost(freq, 2, 2) and Min_Total_Cost(freq, 1, 1) are calculated repeatedly.
Our recursive algorithm solves the same subproblems over and over rather than always generating new ones. These are called overlapping subproblems.

As the two properties required for using Dynamic Programming (‘optimal substructure’ and ‘overlapping subproblems’) both hold, we can use DP for this problem.

### Dynamic Programming Solution:

In DP we start calculating from the bottom and move up towards the final solution.
We first solve the sub-problem {i=0, j=0}, skip all sub-problems where i > j, then solve {i=1, j=1}, and reuse the solutions of these sub-problems to solve {i=0, j=1}, and so on.
Finally we solve the sub-problem {i=0, j=(n-1)}, which gives us the final answer.

Solutions of all subproblems are stored in a 2D array (DP table) so that they can be reused when required.

C++:

```
#include <bits/stdc++.h>
using namespace std;

long long int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
{
    // DP_Table[i][j] holds the optimal cost for the key range {i,..,j}.
    vector<vector<long long int>> DP_Table(num_keys,
                                           vector<long long int>(num_keys, 0));

    for (int j = 0; j < num_keys; ++j)
    {
        for (int i = j; i >= 0; --i)
        {
            long long int min_total_cost = INT_MAX,
                          sum_freq = accumulate(freq+i, freq+j+1, 0);

            for (int k = i; k <= j; ++k)
            {
                long long int total_cost = 0,
                              total_cost_left_subtree = 0,
                              total_cost_right_subtree = 0;

                if (k > i)
                {
                    total_cost_left_subtree = DP_Table[i][k-1];
                }

                if (k < j)
                {
                    total_cost_right_subtree = DP_Table[k+1][j];
                }

                total_cost = ( total_cost_left_subtree
                             + total_cost_right_subtree
                             + sum_freq );

                if (total_cost < min_total_cost)
                    min_total_cost = total_cost;
            }

            DP_Table[i][j] = min_total_cost;
        }
    }

    // The full range {0,..,num_keys-1} holds the final answer.
    return DP_Table[0][num_keys-1];
}

int main()
{
    int keys[] = {0, 1, 2, 3, 4, 5, 6};
    int freq[] = {22, 18, 20, 5, 25, 2, 8};
    int num_keys = sizeof(keys) / sizeof(keys[0]);

    cout<<"Total cost of Optimal BST: "
        <<getTotalCostOfOptimalBST(keys, freq, num_keys)<<endl;

    return 0;
}
```

Java:

```
import java.io.*;

class OptimalBST
{
    static int sum(int freq[], int left_idx, int right_idx)
    {
        int sum = 0;
        for (int i = left_idx; i <= right_idx; ++i)
            sum += freq[i];
        return sum;
    }

    static int getTotalCostOfOptimalBST(int keys[], int freq[], int num_keys)
    {
        // DP_Table[i][j] holds the optimal cost for the key range {i,..,j}.
        int DP_Table[][] = new int[num_keys][num_keys];

        for (int j = 0; j < num_keys; ++j)
        {
            for (int i = j; i >= 0; --i)
            {
                int min_total_cost = Integer.MAX_VALUE,
                    sum_freq = sum(freq, i, j);

                for (int k = i; k <= j; ++k)
                {
                    int total_cost = 0,
                        total_cost_left_subtree = 0,
                        total_cost_right_subtree = 0;

                    if (k > i)
                        total_cost_left_subtree = DP_Table[i][k-1];

                    if (k < j)
                        total_cost_right_subtree = DP_Table[k+1][j];

                    total_cost = ( total_cost_left_subtree
                                 + total_cost_right_subtree
                                 + sum_freq );

                    if (total_cost < min_total_cost)
                        min_total_cost = total_cost;
                }

                DP_Table[i][j] = min_total_cost;
            }
        }
        // The full range {0,..,num_keys-1} holds the final answer.
        return DP_Table[0][num_keys-1];
    }

    public static void main (String[] args)
    {
        int keys[] = {0, 1, 2, 3, 4, 5, 6};
        int freq[] = {22, 18, 20, 5, 25, 2, 8};
        int num_keys = keys.length;

        System.out.println("Total cost of Optimal BST is "
                           + getTotalCostOfOptimalBST(keys, freq, num_keys));
    }
}
```

Time complexity: O(n^3)
Space complexity: O(n^2)

## Meeting Rooms

Given an array of intervals representing N meetings, find out if a person can attend all the meetings.

```
Input:
[[6,7],[2,4],[8,12]]
Output:
true
Explanation:
None of the meetings overlap with each other.

Input:
[[1,4],[2,5],[7,9]]
Output:
false
Explanation:
Meetings [1,4] and [2,5] overlap with each other.
```

## Thought Process

A person cannot attend two or more meetings at the same time. It means that if the timings of two meetings overlap, she/he will not be able to attend both.
Now the question that comes to mind is: how do we check whether two meetings overlap? We use their time intervals for this.

So we have to check every pair of meeting intervals; if any two overlap, it is impossible to attend all the meetings.

Brute Force

The simple solution is to compare every pair of meetings using a nested for loop and check whether their intervals overlap. Two meetings overlap if one meeting is going on and the other meeting starts before the first one finishes.

```class Solution {
public:
    // Two intervals overlap when one starts while the other is still going on
    bool check_overlap(vector<int> first, vector<int> second)
    {
        if ((first[0] >= second[0] && first[0] < second[1])
            || (second[0] >= first[0] && second[0] < first[1])) {
            return true;
        }
        return false;
    }
    bool canAttendMeetings(vector<vector<int>>& intervals) {
        int i, j;

        for (i = 0; i < intervals.size(); i++)
        {
            for (j = i + 1; j < intervals.size(); j++)
            {
                if (check_overlap(intervals[i], intervals[j]))
                {
                    return false;
                }
            }
        }
        return true;
    }
};
```

The time complexity of the brute force implementation is O(n^2) due to the nested for loop, whereas the space complexity is O(1).

Using Merge Intervals Technique

In this approach, we merge all the overlapping intervals. After merging, we compare the number of intervals before merging and after merging. If the count is the same, there are no conflicts in the meetings and everything runs smoothly (no overlapping situation). If the total number of intervals is less after merging, it means some intervals were overlapping, so there will be conflicts in the meetings.

If you want to learn merging intervals in detail, go here.

First, sort the intervals in ascending order. Then:
1. Initialize a 2-D vector array.
2. Add the first interval into it.
3. For every other interval:
• If the last interval in the vector array overlaps with the current interval, pop the last interval from the vector array, merge both intervals, and push the result into the vector array.
• If the last interval in the vector array does not overlap with the current interval, push the current interval into the vector array.
4. If (size of vector array formed < size of initial intervals array given)
• return false
• else return true;
```class Solution {
public:
    bool canAttendMeetings(vector<vector<int>>& intervals) {
        int i;

        vector<vector<int>> v;
        if (intervals.size() == 0)
        {
            return true;
        }
        sort(intervals.begin(), intervals.end());
        vector<int> l;
        l.push_back(intervals[0][0]);
        l.push_back(intervals[0][1]);
        v.push_back(l);

        for (i = 1; i < intervals.size(); i++)
        {
            vector<int> prev = v.back();
            // time to merge
            if (intervals[i][0] < prev[1])
            {
                l.clear();
                v.pop_back();
                l.push_back(prev[0]);
                l.push_back(max(prev[1], intervals[i][1]));
                v.push_back(l);
            }
            else
            {
                v.push_back(intervals[i]);
            }
        }
        if (intervals.size() == v.size())
        {
            return true;
        }
        return false;
    }
};
```

The time complexity is O(n log n) and the space complexity is O(n).

Sorting

What we do here is sort the array in ascending order. After sorting, we compare each meeting with the previous meeting and make sure they do not overlap. If any pair overlaps, return false; otherwise return true.

```class Solution {
    // Sort by start time; break ties by end time
    static bool compare(vector<int> v1, vector<int> v2) {
        return v1[0] == v2[0] ? v1[1] < v2[1] : v1[0] < v2[0];
    }
public:
    bool canAttendMeetings(vector<vector<int>>& intervals) {

        sort(intervals.begin(), intervals.end(), compare);

        for (int i = 1; i < intervals.size(); i++) {
            // previous meeting ends after the current one starts
            if (intervals[i-1][1] > intervals[i][0])
                return false;
        }

        return true;
    }
};
```
```

The time complexity is O(n log n) and the space complexity is O(1).

Please write to us if something is missing or wrong, we will be happy to fix it.

## Subarrays with given sum

We are given an unsorted array containing positive and negative numbers, we need to find the number of subarrays with the given sum, k. For example,

```Input:
arr = [2, 4, -5, -5, 6] and k = -10
Output:
1
Explanation:
We can see that there is only one subarray (index 2 to 3) whose sum is -10. ```

## Thought process

If the sum of elements in the range [0, i] and the sum in the range [0, j] are the same, then we can say that the sum in the range [i + 1, j] will be zero. This statement will be clearer after understanding the example below.

Let’s take arr = [2, 4, 4, -10, 4, 6, 5, 6]. Sum of elements in the range [0, 2] = 2 + 4 + 4 = 10
Sum of elements in the range [0, 5] = 2 + 4 + 4 + (-10) + 4 + 6 = 10

The above two sums are the same. Here, i = 2 and j = 5. Let’s check whether the sum in the range [3, 5] is 0 or not. Sum of elements in the range [3, 5] = -10 + 4 + 6 = 0

Hence, the statement explained above is true.
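The claim above is easy to verify in code. Here is a small sketch of mine (not part of the original article) using Python's `itertools.accumulate` to build the prefix sums:

```python
from itertools import accumulate

arr = [2, 4, 4, -10, 4, 6, 5, 6]
prefix = list(accumulate(arr))  # prefix[i] = sum of arr[0..i]

print(prefix[2], prefix[5])  # both are 10
print(sum(arr[3:6]))         # the sum in the range [3, 5] is 0
```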

Read more here: subarray with sum zero

We can extend this idea to this question. The modified statement is as follows: if the difference between the sum of elements in the range [0, j] and the sum in the range [0, i] is k, then we can say that the sum in the range [i + 1, j] is k. Mathematically,

```If Sum[0, j] - Sum[0, i] = k, where j > i
Then, Sum[i + 1, j] = k or Sum[0, j] - k = Sum[0, i]```

For each index j in the array, if we have seen Sum[0, j] – k in the past, then we have found a subarray. But how do we achieve this?

Let’s say the number of elements in the array is n. So, we need to maintain sum[0, 0], sum[0, 1], sum[0, 2], sum[0, 3] … sum[0, n-1].
These sums are known as prefix sums. If the same sum occurs many times, we store its frequency of occurrence. We can use a hashmap where the key denotes the sum and the value denotes its frequency of occurrence.

Any corner cases? What if k = sum[0, j] for some index j? Then sum[0, j] – k is 0, and we will look for 0 as a sum seen in the past. To handle this case, we insert one entry into the map in advance: sum 0 with frequency 1.

## Algorithm

```1. For each index i in the array.
1.1 sum = sum in the range [0, i]
1.2 Check if sum - k has already been encountered,
i.e. have we seen any prefix in the past whose
sum is sum - k.
1.3 If yes, add the frequency of sum - k to the answer.
2. Finally, put the sum in the map with frequency 1 if the sum has not been seen in the past.
3. If the sum has been seen in the past, update its frequency as old frequency + 1.```

### Show me the implementation

```// nums = given array
// k = sum
int subarraySum(vector<int>& nums, int k) {
    int ans = 0;  // to store the final count
    int sum = 0;  // to store the running prefix sum

    // to store prefix sums along with their frequencies
    map<int, int> freq;
    freq[0] = 1;  // for cases like k = sum[0, i]

    for (int i = 0; i < nums.size(); i++)
    {
        // calculating sum in the range [0, i]
        sum += nums[i];
        // if sum - k is already seen
        if (freq.find(sum - k) != freq.end())
            ans += freq[sum - k];  // add its frequency

        // if sum is not seen in the past
        if (freq.find(sum) == freq.end())
            freq[sum] = 1;
        else
            freq[sum] = freq[sum] + 1;
    }

    return ans;
}
```

The time complexity of the implementation is O(n), along with a space complexity of O(n).

## Rotten oranges problem

Rotten oranges problem goes like: In a given grid, each cell can have one of three values:

• the value 0 representing an empty cell;
• the value 1 representing a fresh orange;
• the value 2 representing a rotten orange.

Every minute, any fresh orange that is adjacent (4-directionally) to a rotten orange becomes rotten. Return the minimum number of minutes that must elapse until no cell has a fresh orange.  If this is impossible, return `-1` instead.

```Input: [[2,1,1],[1,1,0],[0,1,1]]
Output: 4

Input: [[2,1,1],[0,1,1],[1,0,1]]
Output: -1
Explanation: The orange in the bottom left corner (row 2, column 0) is never rotten because rotting only happens 4-directionally.
```

## Thoughts

Rotting oranges problem offers a unique perspective on identifying a graph search problem. At first glance, it seems like the solution to the problem lies in changing the status of the given grid on multiple time steps while counting the steps and making sure that we come to a conclusion to our iterations, akin to solving a sudoku puzzle, where the state of the entire grid matters.

On further inspection though, we can see that we do not need to worry about the state of the entire grid, just the fresh oranges adjacent newly rotting oranges at each time step. We can consider the oranges as nodes of a graph, and the fresh oranges around them as connected to these nodes. We do not know the entire state of the graph beforehand, but we know that the adjacent nodes will expose themselves as time passes. This pushes us towards the idea of a Breadth-First Search, where we topologically move level by level through a graph.

This problem also has some interesting edge cases, with it being necessary to parse the graph to identify such cases:

### Show me the implementation

```class Solution:
    def orangesRotting(self, grid: List[List[int]]) -> int:
        # Make sure we have a valid grid
        if len(grid) < 1 or len(grid[0]) < 1:
            return -1
        sources = []
        ones = 0
        for i in range(len(grid)):
            for j in range(len(grid[i])):
                if grid[i][j] == 2:
                    sources.append((i, j))
                elif grid[i][j] == 1:
                    ones += 1
        if len(sources) == 0:
            # In case of no rotten oranges, the return value depends
            # on the state of the grid
            if ones == 0:
                return 0
            else:
                return -1
        directions = [[-1, 0], [1, 0], [0, -1], [0, 1]]
        # Grid is initially at t = -1
        count = -1
        visited = set(sources)
        # End the BFS when there are no new sources left
        while len(sources) > 0:
            count += 1
            newsources = []
            for source in sources:
                i, j = source
                grid[i][j] = 2
                for direction in directions:
                    row = i + direction[0]
                    col = j + direction[1]
                    # Make sure it is a valid row and column,
                    # it is not visited, and it is a fresh orange
                    if row >= 0 and row < len(grid) \
                            and col >= 0 and col < len(grid[0]) \
                            and (row, col) not in visited and grid[row][col] == 1:
                        visited.add((row, col))
                        newsources.append((row, col))
            sources = newsources
        # Make sure there are no fresh oranges left in the grid
        for i in grid:
            for j in i:
                if j == 1:
                    return -1
        return count
```

The runtime complexity of BFS is O(V + E), which in our scenario translates to visiting every cell, i.e. O(n * m) where n and m are the dimensions of the grid. The space complexity of BFS is O(V), which similarly translates into O(n * m).

## Find friend groups

There are N students in a class. Some of them are friends, while some are not. Their friendship is transitive in nature. For example, if A is a direct friend of B, and B is a direct friend of C, then A is an indirect friend of C. We define a friend circle as a group of students who are direct or indirect friends.

Given an N*N matrix M representing the friend relationship between students in the class: if M[i][j] = 1, then the `i`th and `j`th students are direct friends with each other, otherwise not. You have to output the total number of friend circles among all the students. For example,

```Input:
[[1,1,0,0,0,0],
[1,1,0,0,0,0],
[0,0,1,1,0,0],
[0,0,1,1,1,0],
[0,0,0,1,1,0],
[0,0,0,0,0,1]
]
Output: 3```

## Thought process

When we talk about connections or relationships, we immediately think of the graph data structure. Each node is a person and each edge is the relationship between two persons. First, we have to figure out whether it will be a directed graph or an undirected graph. In this problem, friendship is a mutual relationship, thus the graph is undirected.

When you read this problem, the concept of connected components will come into your mind. Why? If A is a friend of B, and B is a friend of C, then A will be a friend of C. What does that mean? A is indirectly connected to C. It means that every friend can reach every other friend through a path if they are directly or indirectly connected. In this way, they form a group or circle in which every vertex is connected, directly or indirectly, to the others in its group/circle. Notice that all friends (both direct and indirect) who should be in one friend circle are also in one connected component in the graph.

A particular group of friends forms a single component. In this problem, we are going to find out how many components there are in the graph. When you make a graph out of the matrix, it will look like the figure below. It means there are 3 friend circles. We can solve this problem using two methods: depth-first search and the disjoint set method.

## Using Depth first traversal method

Finding connected components in an undirected graph is very easy. We can do either BFS or DFS starting from every unvisited vertex, and we get all the connected components.

```1. Initialize all nodes as not visited.
2. Initialize variable count as 0.
3. For every vertex 'v':
(i) If 'v' is unvisited
Call DFS(v)
count = count + 1
DFS(v)
1. Mark 'v' as visited.
2. For every unvisited neighbor 'u' of 'v',
recursively call DFS(u)```

### DFS based approach implementation

```class Solution {
public:
    void DFS(int node, vector<vector<int>>& edges, vector<bool>& visited)
    {
        visited[node] = true;
        for (int i = 0; i < edges[node].size(); i++)
        {
            if (edges[node][i] == 1 && node != i && visited[i] == false)
            {
                DFS(i, edges, visited);
            }
        }
    }

    //Main Method
    int findCircleNum(vector<vector<int>> edges) {
        int i;

        int n = edges.size();
        int count = 0;
        vector<bool> visited(n);

        //mark all the nodes as unvisited
        for (i = 0; i < n; i++)
        {
            visited[i] = false;
        }

        for (i = 0; i < n; i++)
        {
            if (visited[i] == false)
            {
                DFS(i, edges, visited);
                count = count + 1;
            }
        }

        return count;
    }
};
```

Complexity

• Time Complexity: Since the graph is given as an adjacency matrix, the DFS over all nodes examines every matrix entry once, so it is O(n^2).
• Space Complexity: O(n), to store the visited nodes.

## Using Disjoint Sets(Union Find)

So, how to think that this problem is solved by Disjoint Sets(union-find algorithm)?

The answer is simple: we need to keep track of a set of elements (here, friends) partitioned into a number of non-overlapping subsets, and Disjoint Sets (Union Find) does this work very efficiently. We will use the union by rank algorithm to solve this problem.

To join two nodes, we will compare the rank of parents of both nodes.

• If the ranks are equal, we can make either parent node the new parent and increment its rank by 1.
• If the ranks are not the same, we make the parent with the greater rank the parent of the other. Let’s start solving this.

Union(1,2): 1 is the parent of itself and 2 is the parent of itself. As they have different parents, we have to connect them; we pick either parent as the root. In this case we chose 1 and made it the parent. Union(2,1): 1 is the parent of itself and 1 is the parent of 2. As both of them have the same parent, they are already joined.

Union(3,4): 3 is the parent of itself and 4 is the parent of itself. Both of them have different parents, so we need to join them. Union(4,3): 3 is the parent of itself and 3 is the parent of 4. Both of them have the same parent, already joined.

Union(4,5): 3 is the parent node of 4 and 5 is the parent node of 5. Since the parents are different, we compare the ranks of the parents of nodes 4 and 5. 3 has a higher rank than 5, so it becomes the parent of 5 (using path compression), as shown in the figure below. Union(5,4): now 4 and 5 have the same parent, already joined. Last is node 6, which is connected only to itself, so there is nothing to do there. At the end of this exercise, we find that there are three different sets in the graph, and that is our answer for the number of groups of friends in this graph or matrix.

### Disjoint set based approach implementation

```class Solution {
public:
    class Node
    {
    public:
        int data;
        Node* parent;
        int rank = 0;
    };

    map<int, Node*> mapy;

    // make a set with only one element.
    void make(int data)
    {
        Node* node = new Node();
        node->data = data;
        node->parent = node;
        node->rank = 0;
        mapy[data] = node;
    }

    // To return the address of the particular node having data as `data`
    Node* find(int data)
    {
        auto k = mapy.find(data);
        if (k == mapy.end())
        {
            // There is no node created yet, create the node
            make(data);
        }
        return mapy[data];
    }

    /* Find the representative (parent) recursively and do path
       compression as well */
    Node* parents(Node* node)
    {
        if (node->parent == node)
        {
            return node;
        }
        return node->parent = parents(node->parent);
    }

    //Main Method
    int findCircleNum(vector<vector<int>> edges) {
        int i, j;

        int m = edges.size();
        int n = edges.size();
        for (i = 0; i < m; i++)
        {
            for (j = 0; j < n; j++)
            {
                if (edges[i][j] == 1)
                {
                    Node* A = find(i);
                    Node* B = find(j);

                    Node* PA = parents(A);
                    Node* PB = parents(B);

                    if (PA != PB)
                    {
                        if (PA->rank >= PB->rank)
                        {
                            // increment rank if both sets have the same rank
                            PA->rank = (PA->rank == PB->rank) ? PA->rank + 1 : PA->rank;
                            PB->parent = PA;
                        }
                        else
                        {
                            PA->parent = PB;
                        }
                    }
                }
            }
        }

        // Each node that is its own parent is the root of one set
        int number = 0;
        for (auto k : mapy)
        {
            if (k.second->parent == k.second)
            {
                number = number + 1;
            }
        }
        return number;
    }
};
```

Complexity

• Time Complexity: For each edge, we need to find the parents and do the union, which is O(m log n) overall.
• Space Complexity: We used a map to store the parent information, O(n).

This post is contributed by Monika Bhasin

## Trapping rain water

Given n non-negative integers representing an elevation map where the width of each bar is 1, compute how much water it is able to trap after rain. For example,

```Input: [0,1,0,2,1,0,1,3,2,1,2,1]
Output: 6```

## Basic thought

We know water stays at the highest level it can, and it always maintains a flat surface. Using this, we can infer that we need to find holes in the elevation where water would be able to rest at a level. To calculate how much water these holes can store, we need elevations on both sides, and we also need to track how much space a particular hole has available to trap the water. The figure below is an example of a hole which holds 4 units of water. Upon further breakdown of these holes, we notice that we do not need to track the entire hole to find the capacity it holds; we can parse each unit of the hole individually. Thus, the amount of water in each unit of the hole is

`min(leftHeight, rightHeight) - currentUnitHeight.`

What remains now is to calculate the leftHeight and the rightHeight. We could parse through them individually to find these out, but we can see a general pattern here: The highest elevation to the left inclusive of the current unit will become the leftHeight, and the highest elevation to the right inclusive of the current unit will become the rightHeight. The problem has been greatly simplified into maintaining track of highest heights on both sides of every unit.
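The per-unit formula above can be sketched as a tiny helper (a toy example of mine, not the article's code); the `max(0, ...)` guard covers units that stand taller than one of their walls:

```python
def water_at_unit(left_height, right_height, current_height):
    # Water rests up to the lower of the two bounding walls;
    # a unit at least as tall as either wall holds nothing.
    return max(0, min(left_height, right_height) - current_height)

# A unit of height 0 between walls of heights 2 and 3 holds 2 units.
print(water_at_unit(2, 3, 0))  # 2
```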

## Brute force solution

From our observations in the previous section, the simplest brute force approach is to calculate the highest elevation on both sides of every unit, and sum up the water stored at each unit. Each unit takes O(n) time with this approach, and there are n units to calculate in total. Thus, this approach takes O(n^2) time.

### Show me the brute force implementation

```class Solution:
    def trap(self, height: List[int]) -> int:
        if len(height) < 3:
            return 0
        ans = 0
        for i in range(len(height)):
            left_max = 0
            right_max = 0
            # highest bar to the left, inclusive of i
            for j in range(i + 1):
                left_max = max(left_max, height[j])
            # highest bar to the right, inclusive of i
            for j in range(i, len(height)):
                right_max = max(right_max, height[j])
            ans += min(left_max, right_max) - height[i]
        return ans
```

## Dynamic Programming approach

As we can see from the brute force solution, we calculate the leftHeight and the rightHeight multiple times for the same node, i.e. the problem has overlapping subproblems. Thus a dynamic programming approach should optimize the brute force approach further. We can store the leftHeight and rightHeight elements till each index we have iterated, and thus the water storage calculation for each unit will now take O(1) time. Overall, this approach passes over the array thrice, and thus has a runtime of O(n) with a space complexity of O(n).

### Show me the dynamic programming implementation

```class Solution:
    def trap(self, height: List[int]) -> int:
        if len(height) < 3:
            return 0
        ans = 0
        left_max = [0] * len(height)
        right_max = [0] * len(height)
        left_max[0] = height[0]
        right_max[-1] = height[-1]
        for i in range(1, len(height)):
            left_max[i] = max(height[i], left_max[i - 1])
        for i in reversed(range(len(height) - 1)):
            right_max[i] = max(height[i], right_max[i + 1])
        for i in range(1, len(height) - 1):
            ans += min(left_max[i], right_max[i]) - height[i]
        return ans
```

## Stacks

Since we need to keep track of the highest elevations up to a point, stacks are a good approach to perform this operation in one pass of the array. The basic idea is since we need to store the largest elevations in the stack, as we iterate through the array, we can find the amount of water stored till the currentHeight is higher than elements of the stack and move on to the next value, i.e. water stored will always be of a hole shape, thus we can find the amount of water that can be stored between two high values. This operation passes over the array once, and each element can only have two operations maximum: Pushing and popping from the stack, thus its time complexity is O(n). The space complexity will be O(n), in case the entire array is stored on the stack.

### Show me the stack implementation

```class Solution:
    def trap(self, height: List[int]) -> int:
        if len(height) < 3:
            return 0
        ans = 0
        stack = []
        for i in range(len(height)):
            while len(stack) > 0 and height[i] > height[stack[-1]]:
                top = stack.pop()
                if len(stack) == 0:
                    break
                # Distance between the larger value still in the
                # stack with a hole the height of the top element
                # and the current element
                distance = i - stack[-1] - 1
                # Water that can be stored is the smaller of the
                # heights at these bounds, minus the height
                # of the intermediate region between the top of
                # the stack and the current index
                curr_height = min(height[i], height[stack[-1]]) - height[top]
                ans += distance * curr_height
            stack.append(i)
        return ans
```

## Simple Optimization

Another way to approach the problem is that since we need to find the maximum elevations on either side to calculate the current water stored, we can calculate the global maximum in one pass, and once we have the index for the same, we can iterate to it and from it to the rest of the array knowing that we have one bounded measurement which is the highest elevation in the array. The time complexity for this operation is O(n), but there are two passes over the array(once to calculate global maximum, and once to calculate the water amount). The space complexity is O(1).

### Show me the implementation

```class Solution:
    def trap(self, height: List[int]) -> int:
        if len(height) < 3:
            return 0
        ans = 0
        gMax = 0  # Global max
        for i in range(len(height)):
            gMax = i if height[gMax] < height[i] else gMax
        lMax = 0  # Left max yet
        for i in range(1, gMax):
            lMax = i if height[lMax] <= height[i] else lMax
            ans += max(0, height[lMax] - height[i])
        lMax = len(height) - 1  # Right max yet
        for i in reversed(range(gMax, len(height) - 1)):
            lMax = i if height[lMax] <= height[i] else lMax
            ans += max(0, height[lMax] - height[i])
        return ans
```

## Reverse a linked list

Given a singly linked list, reverse it. For example:

```Input:
1->2->3->4->5
Output:
5->4->3->2->1
```

This problem is never asked standalone in an interview but very important to answer a few of the questions which are commonly asked in Amazon and Facebook interviews like reverse K nodes in a linked list or reverse K alternating nodes.

The idea to reverse a linked list is simple, you take each node, make it point to its previous node, and go to the next node. For each node, we need to maintain three different pointers: current which points to the node we are working on, prev, the node which is immediately previous to the current node and next, the node which is immediately next to the current node.

How do we change the pointers? If we change current.next to point to the previous node, we cannot go forward. So, the first thing to secure is that we can move forward, by storing current.next in next.

Now, that we have secured the next pointer, we can change current.next to point to the prev. As we will be moving forward to the next node, the new previous node will be the current node, change prev to current.
The last step, move forward, make current as next.

Here is an example of reversing a linked list. Keep doing this till there are no nodes remaining in the linked list. What should we return now: current, prev or next? When current is at null, next is already null, and we cannot return a null value for a linked list which was non-null, right? The only pointer which now points to the last node is prev, and that is what should be returned.

There is another confusion: how to initialize these three pointers. Which node will you process first? The head. So, current should be initialized to the head of the linked list. What would be the next pointer of the head in the reversed linked list? It will be null, so initialize prev as null. For next, it does not matter; you can simply initialize it with null.

## Reverse a linked list implementation

```package AlgorithmsAndMe;
/**
 * public class ListNode {
 *     int val;
 *     ListNode next;
 *     ListNode(int x) { val = x; }
 * }
 */
class Solution {
    public ListNode reverseList(ListNode head) {
        ListNode current = head;
        ListNode next = null;
        ListNode prev = null;

        while (current != null) {
            next = current.next;  // secure the forward pointer
            current.next = prev;  // reverse the link
            prev = current;       // move prev forward
            current = next;       // move current forward
        }

        return prev;
    }
}
```

The time complexity of reversing a linked list is O(n).

## Three Sum Problem

Given an array nums of n integers, are there elements a, b,c in nums such that a+ b + c= 0? Find all unique triplets in the array which gives the sum of zero.

The solution set must not contain duplicate triplets.

This problem is commonly known as three sum problem.

```Input: = [-1, 0, 1, 2, -1, -4],
Output:
[
[-1, 0, 1],
[-1, -1, 2]
]
```

## Three Sum problem : thought process

Before going into the details of finding three numbers with a given sum, do you know how to find two numbers with a given sum S? The idea is simple: we keep a map of all the numbers we have already seen; when we see a number a[i], we check in the map whether S - a[i] is present or not. If yes, then we have found the pair; if not, we put a[i] into the map and move forward.

This solution has linear time complexity, i.e. O(n), with a space complexity of O(n) as well. There is a way to avoid the extra space by sorting the input array first and using the two pointer approach.

Details of 2 sum problem, you can read here: 2 Sum problem: Find pair with given sum in array
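As a refresher, the hashmap version of the two-sum check described above can be sketched like this (my sketch; the function name is mine, and it returns the first matching pair of indices):

```python
def find_pair_with_sum(nums, s):
    # Map from value already seen to its index.
    seen = {}
    for i, a in enumerate(nums):
        if s - a in seen:  # have we seen the complement before?
            return seen[s - a], i
        seen[a] = i
    return None

print(find_pair_with_sum([2, 7, 11, 15], 9))  # (0, 1)
```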

Can we use our understanding of the two-sum problem to solve this three-sum problem?

Hint 1:
What happens if we take one number from the array and say that this number will be one of the three numbers which add up to zero? What happens to our problem statement now?

Hint 2:
If you have decided that A will be part of the three numbers which add up to zero, all we need is to find two numbers from index 1..len-1 which add up to 0-A. What does that mean? It looks like a 2-sum problem, doesn’t it?

We will move index i from 0 to len-1, each time claiming that A[i] is part of the solution. Once we fix A[i] in the solution, we will find two numbers which add up to 0-A[i], a problem we already know how to solve. If we find those two numbers, let’s say A[j] and A[k], then A[i], A[j] and A[k] form the triplet. If we do not find those two numbers, discard A[i] and move forward.

### Show me the implementation of three sum problem

```class Solution {

    public List<List<Integer>> threeSum(int[] nums) {

        List<List<Integer>> result = new ArrayList<>();
        //O(n log n)
        Arrays.sort(nums);

        for (int i = 0; i < nums.length; i++) {
            // skip duplicate values of i to keep triplets unique
            if (i > 0 && nums[i] == nums[i - 1]) {
                continue;
            }
            // array is sorted: a positive nums[i] cannot start a zero triplet
            if (nums[i] > 0) break;
            int j = i + 1;
            int k = nums.length - 1;

            int target = 0 - nums[i];
            /*
              Below code is exactly the same as finding
              two numbers with a given sum using two pointers
            */
            while (j < k) {
                if (nums[j] + nums[k] == target) {
                    /* If we find j and k such that the elements at
                       indices i, j and k add to zero, put them in result
                    */
                    ArrayList<Integer> triplet = new ArrayList<Integer>(
                        Arrays.asList(nums[i], nums[j], nums[k]));
                    result.add(triplet);
                    j++;
                    k--;
                    while (j < k && nums[j] == nums[j - 1]) j++;  // skip same result
                    while (j < k && nums[k] == nums[k + 1]) k--;  // skip same result
                } else if (nums[j] + nums[k] > target) {
                    k--;
                } else {
                    j++;
                }
            }
        }
        return result;
    }
}
```

If you do not come up with the idea of moving i, j and k to get unique triplets, that is OK. You can always use a HashSet to filter out duplicates and then copy the HashSet to the result list. It will take more time but will give the correct result.

``` if(uniqueSet.add(triplet)){