Segregate 0s and 1s in an array

Given an array of 0s and 1s, segregate the 0s and 1s in such a way that all 0s come before all 1s. For example, for the input [0,1,0,1,0,1], the output will be [0,0,0,1,1,1].

This problem is very similar to the Dutch national flag problem.

Different methods to segregate 0s and 1s in an array

Counting 0s and 1s
The first method is to count the occurrences of 0s and 1s in the array and then rewrite 0 and 1 onto the original array that many times. The complexity of this method is `O(n)` with no added space complexity. The only drawback is that we traverse the array twice.

```
package com.company;

/**
 * Created by sangar on 9.1.19.
 */
public class SegregateZerosAndOnes {

    public void segregate(int[] a) throws IllegalArgumentException {

        if (a == null) throw new IllegalArgumentException();
        int zeroCount = 0;
        int oneCount = 0;

        for (int i = 0; i < a.length; i++) {
            if (a[i] == 0) zeroCount++;
            else if (a[i] == 1) oneCount++;
            else throw new IllegalArgumentException();
        }

        for (int i = 0; i < zeroCount; i++) {
            a[i] = 0;
        }

        for (int i = zeroCount; i < zeroCount + oneCount; i++) {
            a[i] = 1;
        }
    }
}
```

Using two indices
The second method solves the problem with the same complexity, but traverses the array only once. The idea is to maintain two indices: left, which starts at index 0, and right, which starts at the end (n-1), where n is the number of elements in the array.
Move left forward until it encounters a 1; similarly, decrement right until a 0 is encountered. If left is less than right, swap the elements at these two indices and continue.

1. Set left = 0 and right = n-1.
2. While left < right:
   2.a. If a[left] is 0, then left++.
   2.b. If a[right] is 1, then right--.
   2.c. If left < right, `swap(a[left], a[right])`.

Segregate 0s and 1s: implementation

```
public void segregateOptimized(int[] a) throws IllegalArgumentException {

    if (a == null) throw new IllegalArgumentException();
    int left = 0;
    int right = a.length - 1;

    while (left < right) {
        while (left < a.length && a[left] == 0) left++;
        while (right >= 0 && a[right] == 1) right--;

        if (left >= a.length || right <= 0) return;

        if (a[left] > 1 || a[left] < 0 || a[right] > 1 || a[right] < 0)
            throw new IllegalArgumentException();

        if (left < right) {
            // Swap: left points to a 1 and right points to a 0.
            a[left] = 0;
            a[right] = 1;
        }
    }
}
```

The complexity of this method to segregate 0s and 1s in an array is `O(n)` and the array is traversed only once.

Test cases

```
package test;

import com.company.SegregateZerosAndOnes;
import org.junit.Test;

import java.util.Arrays;

import static org.junit.Assert.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class SegregateZerosAndOnesTest {

    SegregateZerosAndOnes tester = new SegregateZerosAndOnes();

    @Test
    public void segregateZerosAndOnesOptimizedTest() {
        int[] a = {0, 1, 0, 1, 0, 1};
        int[] output = {0, 0, 0, 1, 1, 1};

        tester.segregateOptimized(a);
        assertEquals(Arrays.toString(output), Arrays.toString(a));
    }

    @Test
    public void segregateZerosAndOnesAllZerosOptimizedTest() {
        int[] a = {0, 0, 0, 0, 0, 0};
        int[] output = {0, 0, 0, 0, 0, 0};

        tester.segregateOptimized(a);
        assertEquals(Arrays.toString(output), Arrays.toString(a));
    }

    @Test
    public void segregateZerosAndOnesAllOnesOptimizedTest() {
        int[] a = {1, 1, 1, 1, 1};
        int[] output = {1, 1, 1, 1, 1};

        tester.segregateOptimized(a);
        assertEquals(Arrays.toString(output), Arrays.toString(a));
    }

    @Test(expected = IllegalArgumentException.class)
    public void segregateZerosAndOnesOptimizedIllegalArgumentTest() {
        int[] a = {1, 1, 1, 1, 2};
        tester.segregateOptimized(a);
    }

    @Test(expected = IllegalArgumentException.class)
    public void segregateZerosAndOnesOptimizedNullArrayTest() {
        tester.segregateOptimized(null);
    }
}
```

Range minimum query (RMQ)

Given an array A[0..n-1], find the index of the element with the minimum value in a given range. This problem is known as Range Minimum Query, or RMQ.
For example, given the array below, the index of the minimum value between index 2 and 7 is 5, which is the index of element 1.

Going by the brute force, every time a query is fired, we scan the range and find the minimum in a given range in the same way as we do for an entire array. The complexity of each query being answered is `O(n)` wherein the worst-case range is the entire array.

Can we preprocess our data, so that our query operations are less costly? If we do so, there are two parts to the solution now, first preprocessing and the second query. Let’s assume complexity of each step is `f(n)` and `g(n)` respectively, then the complexity of solution can be denoted as `(f(n), g(n))`.

What kind of preprocessing can be done? The basic idea is to calculate the minimum index of all possible ranges in the array. How many ranges are possible for an array with n elements? There are `O(n²)` of them: every pair of indices (i, j) with i ≤ j defines a range, which gives n(n+1)/2 ranges.

So, to store the index of the minimum value element of each range, `O(n²)` space is required, and the preprocessing time complexity is `O(n³)`. However, the complexity of each query is `O(1)`. So the overall complexity of the solution is `(O(n³), O(1))`.

```
#include <stdio.h>

int M[100][100];

int findMinimum(int a[], int start, int end, int size){
    if(start >= size || end >= size) return -1;
    int min = start;
    for(int i = start; i <= end; i++){
        if(a[i] < a[min]){
            min = i;
        }
    }
    return min;
}

void preprocess(int a[], int size){
    for(int i = 0; i < size; i++){
        for(int j = i; j < size; j++){
            M[i][j] = findMinimum(a, i, j, size);
        }
    }
}

int rmq(int start, int end){
    return M[start][end];
}

int main(void) {
    int a[] = { 2,3,1,5,9,7,10,5,6,3 };
    int size = sizeof(a)/sizeof(a[0]);

    //Preprocessing step
    preprocess(a, size);
    printf("\n Minimum index in range is : %d ", rmq(3,9));
    printf("\n Minimum index in range is : %d ", rmq(2,7));

    return 0;
}
```

With application of dynamic programming, the complexity of the preprocessing step can be reduced to `O(n²)`.

```
#include <stdio.h>

int M[100][100];

void preprocess(int a[], int size)
{
    int i, j;
    for (i = 0; i < size; i++)
        M[i][i] = i;

    for (i = 0; i < size; i++){
        for (j = i + 1; j < size; j++){
            if (a[M[i][j-1]] < a[j])
                M[i][j] = M[i][j-1];
            else
                M[i][j] = j;
        }
    }
}

int rmq(int start, int end){
    return M[start][end];
}

int main(void) {
    int a[] = { 2,3,1,5,9,7,10,5,6,3 };
    int size = sizeof(a)/sizeof(a[0]);

    //Preprocessing step
    preprocess(a, size);
    printf("\n Minimum index in range is : %d ", rmq(3,9));
    printf("\n Minimum index in range is : %d ", rmq(2,7));

    return 0;
}
```

Range minimum query with O(n), O(√n) complexity solution

Can we do better in the preprocessing step while trading off the query step? If we divide the array into smaller chunks and store the index of the minimum value element of each chunk, will it help? And what should be the size of the chunks? How about dividing the array into √n parts, each of size √n.

Now, find the minimum element index in each of these chunks and store it. The extra space required is O(√n). Finding the minimum for each chunk costs O(√n) per chunk across √n chunks, i.e. O(n) overall.

To find the minimum element index in a given range, follow these steps:
1. Find the index of the element with the minimum value among all chunks lying entirely between the start and end of the given range (at most √n operations if all chunks fall in the range).
2. Find the minimum index in the chunk where the start of the range lies (at most √n comparisons from the start of the range to the end of that chunk).
3. Find the minimum index in the chunk where the end of the range lies, from the start of that chunk to the end of the range.
4. Compare all these values and return the index of the minimum of them.

No matter how big or small the range is, the worst case to find the index of the element with the minimum value is `O(√n)`, as there are only 3·√n operations.

Let’s take an example and see how it works. Find minimum in range (2,7)

To get `RMQ(2,7)`, which chunks lie entirely within the range? There is only one: chunk 1. The minimum index of chunk 1 is M[1], which is 5, so the minimum element in those chunks is A[5].

Find the index of the minimum value in chunk 0, where the start of the range lies (starting from the start of the range, which is 2). There is only one element to consider, at index 2, so the element to compare is A[2].

Find the minimum from the start of the chunk where the end of the range lies, up to the end of the range. So, we will compare A[6] and A[7].

At the end, compare A[5] (minimum of all chunks between the start and end of the range), A[2] (minimum in the chunk where the start of the range lies), and A[6], A[7] (candidates in the chunk where the end of the range lies); the answer is 5, as A[5] is the minimum of all these values.

Aggregating all of this, we found a way to optimize the solution of range minimum query with complexity `(O(n), O(√n))`.
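The post shows no code for this chunk-based method, so here is a minimal Java sketch of it. The class and method names are illustrative assumptions, not from the original; it uses the same sample array as the C snippets above.

```java
import java.util.Arrays;

// Sketch of sqrt decomposition for RMQ: O(n) preprocessing, O(√n) per query.
class SqrtDecompositionRMQ {
    private final int[] a;
    private final int[] chunkMin; // index of the minimum element in each chunk
    private final int chunkSize;

    SqrtDecompositionRMQ(int[] a) {
        this.a = a;
        this.chunkSize = (int) Math.ceil(Math.sqrt(a.length));
        this.chunkMin = new int[(a.length + chunkSize - 1) / chunkSize];
        Arrays.fill(chunkMin, -1);
        // Preprocessing: one pass, O(√n) extra space.
        for (int i = 0; i < a.length; i++) {
            int chunk = i / chunkSize;
            if (chunkMin[chunk] == -1 || a[i] < a[chunkMin[chunk]]) {
                chunkMin[chunk] = i;
            }
        }
    }

    // Returns the index of the minimum element in a[start..end].
    int rmq(int start, int end) {
        int min = start;
        int i = start;
        while (i <= end) {
            if (i % chunkSize == 0 && i + chunkSize - 1 <= end) {
                // Whole chunk inside the range: use the precomputed minimum.
                int candidate = chunkMin[i / chunkSize];
                if (a[candidate] < a[min]) min = candidate;
                i += chunkSize;
            } else {
                // Partial chunk at either boundary: compare element by element.
                if (a[i] < a[min]) min = i;
                i++;
            }
        }
        return min;
    }
}
```

For the sample array { 2,3,1,5,9,7,10,5,6,3 }, `rmq(2, 7)` returns 2 (element 1) and `rmq(3, 9)` returns 9 (element 3).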

RMQ using sparse table

Method 3 uses only O(√n) space; however, query time complexity is also `O(√n)`. To reduce query time at the expense of space, there is another method called the sparse table method. It combines features of method 2 (dynamic programming) and method 3 (finding minimums of chunks).

In this approach, split the input array into chunks of size 2^j, where j varies from 0 to log n and n is the number of elements in the array. There are `O(n log n)` such chunks, and hence the space complexity is `O(n log n)`.

After splitting, find the index of the minimum element in each chunk and store it in a lookup table.

M[i][j] stores the index of the minimum in the range starting at i with size 2^j.

For example, M[0][3] stores the index of the minimum value between 0 and 7 (2³ = 8 elements).

Now, how do we create this lookup table? It can be built bottom-up using dynamic programming. Specifically, we find the index of the minimum value in a block of size `2^j` by comparing the two minima of its two constituent blocks of size `2^(j-1)`. More formally,

```
M[i][j] = M[i][j-1]             if A[M[i][j-1]] <= A[M[i + 2^(j-1)][j-1]]
M[i][j] = M[i + 2^(j-1)][j-1]   otherwise
```

How to find the index of the minimum value in a given range? The idea is to find two subranges which together cover the entire range, and then take the minimum of the minimums of these two ranges.
For example, to find RMQ(i,j): if 2^k is the size of the largest block that fits into the range from i to j, then `k = floor(log2(j - i + 1))`.

Now, we have two parts to look in: from i to i + 2^k - 1 (already computed as M[i][k]) and from j - 2^k + 1 to j (already computed as M[j - 2^k + 1][k]).

Formally,

```
RMQ(i,j) = M[i][k]              if A[M[i][k]] <= A[M[j - 2^k + 1][k]]
RMQ(i,j) = M[j - 2^k + 1][k]    otherwise
```

RMQ implementation using sparse table

```
#include <stdio.h>
#include <math.h>

int M[100][100];

void preprocess(int a[], int size)
{
    int i, j;

    for (i = 0; i < size; i++)
        M[i][0] = i;

    for (j = 1; 1 << j <= size; j++){
        for (i = 0; i + (1 << j) - 1 < size; i++){
            if (a[M[i][j-1]] < a[M[i + (1 << (j-1))][j-1]])
                M[i][j] = M[i][j-1];
            else
                M[i][j] = M[i + (1 << (j-1))][j-1];
        }
    }
}

int rmq(int a[], int start, int end){
    int j = (int)floor(log2(end - start + 1));

    if (a[M[start][j]] <= a[M[end - (1 << j) + 1][j]])
        return M[start][j];
    else
        return M[end - (1 << j) + 1][j];
}

int main(void) {
    int a[] = { 2,3,1,5,9,7,10,5,6,3 };
    int size = sizeof(a)/sizeof(a[0]);

    //Preprocessing step
    preprocess(a, size);
    printf("\n Minimum index in range is : %d ", rmq(a,3,9));
    printf("\n Minimum index in range is : %d ", rmq(a,2,7));

    return 0;
}
```

These two blocks entirely cover the range, and since only one comparison is required, the complexity of a lookup is `O(1)`.

In this post, we discussed various ways to implement range minimum query based on space and time complexity tradeoff. In future posts, we will discuss applications of RMQ such as segmented trees and least common ancestor problem.

Please share if something is wrong or missing, we would love to hear from you.

Number of occurrences of element

Given a sorted array and a key, find the number of occurrences of the key in that array. For example, in the array [3,10,11,15,17,17,17,20], the number of occurrences of 17 is 3.

The brute force method is to scan through the array, find the first instance of the element, then find the last instance, and do the math. The complexity of that method is O(n). Can we do better than that?

Did you get a hint when the brute force method was described? Yes, we have already cracked the problems of finding the first occurrence and the last occurrence in O(log n) complexity earlier. We will be using those two methods; all we need to do now is the math.

```
occurrences = lastInstance - firstInstance + 1
```

Number of occurrences of element : Implementation.

```
package com.company;

/**
 * Created by sangar on 25.3.18.
 */
public class BinarySearchAlgorithm {

    private static boolean isGreaterThanEqualTo(int[] a, int index, int key){
        return a[index] >= key;
    }

    private static boolean isLessThanEqualTo(int[] a, int index, int key){
        return a[index] <= key;
    }

    private static int findFirstOccurrence(int[] nums, int target){
        int start = 0;
        int end = nums.length - 1;

        while(start < end){
            int mid = start + (end - start) / 2;

            if(isGreaterThanEqualTo(nums, mid, target)){
                end = mid;
            }
            else{
                start = mid + 1;
            }
        }
        return start < nums.length && nums[start] == target ? start : -1;
    }

    private static int findLastOccurrence(int[] nums, int target){
        int start = 0;
        int end = nums.length - 1;

        while(start <= end){
            int mid = start + (end - start) / 2;

            if(isLessThanEqualTo(nums, mid, target)){
                start = mid + 1;
            }
            else {
                end = mid - 1;
            }
        }
        return end >= 0 && nums[end] == target ? end : -1;
    }

    public static int numberOfOccurrences(int[] a, int key){
        int firstInstance = findFirstOccurrence(a, key);
        int lastInstance = findLastOccurrence(a, key);

        return (firstInstance != -1) ? lastInstance - firstInstance + 1 : 0;
    }

    public static void main(String[] args) {
        int[] input = {3,10,11,15,17,17,17,20};

        int count = numberOfOccurrences(input, 3);
        System.out.print("Number of occurrences: " + count);
    }
}
```

The worst case time complexity of the algorithm to find the number of occurrences of an element in a sorted array is O(log n). We are using the iterative method to find the first and last instances, therefore, there is no hidden space complexity of the algorithm.

You can test the code at leetcode
Please share if there is something wrong or missing. Also if you want to contribute to algorithms and me, please drop an email at communications@algorithmsandme.com

Longest Substring Without Repeating Characters

Given a string, find the longest substring without repeating characters in it. For example, for S = "abcaabaca", the longest substring without repeating characters is "abc".

The brute force solution is to scan all substrings of the given string and check which one is longest with no repeating characters. A string of size n has n(n+1)/2 substrings, and checking each for unique characters takes n comparisons in the worst case. So, the worst-case complexity of this algorithm is O(n³) with additional space of O(n). The code is simple enough.

```
package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 1.1.18.
 */
public class NonRepeatingCharacters {

    private static boolean allUniqueCharacters(String s, int start, int end) {

        HashMap<Character, Boolean> characters = new HashMap<>();

        for (char c : s.substring(start, end).toCharArray()) {
            if (characters.containsKey(c)) return false;
            characters.put(c, Boolean.TRUE);
        }
        return true;
    }

    private static int longestSubstringWithoutRepeatingCharacters(String s) {
        int len = s.length();
        int maxLength = 0;

        // j is exclusive, so the substring s[i..j-1] has length j - i.
        for (int i = 0; i < len; i++){
            for (int j = i + 1; j <= len; j++){
                int length = j - i;
                if (allUniqueCharacters(s, i, j)){
                    maxLength = Integer.max(maxLength, length);
                }
            }
        }
        return maxLength;
    }

    public static void main(String[] args) {

        String s = "abcdabcbb";
        System.out.println("Longest substring without repeating characters: " +
            longestSubstringWithoutRepeatingCharacters(s));
    }
}
```

Longest Substring Without Repeating Characters : Sliding window approach

A sliding window is an abstract concept commonly used in array/string problems. A window is a range of elements in an array/string defined by start and end indices. A sliding window "slides" its two boundaries in a certain direction.

In the brute force approach, we repeatedly checked each substring for unique characters. Do we need to check each substring? If a substring s[i..j-1] contains no repeating characters, then while adding the jth character, we only need to check whether that character is already present in s[i..j-1]. Since we scan the substring to ascertain the uniqueness of the new character, the complexity of this algorithm is O(n²).
How about optimizing the scanning part? What if a hash is used to store the characters already seen in s[i..j-1]? Then checking the uniqueness of a new character is done in O(1) and the overall algorithm becomes linear.

```
public static int longestSubstringWithoutRepeatingCharacters(String s) {
    int len = s.length();
    HashMap<Character, Boolean> characters = new HashMap<>();

    int maxLength = 0;
    int start = 0;
    int end = 0;
    while (start < len && end < len) {
        //Check only the last character.
        if(!characters.containsKey(s.charAt(end))){
            characters.put(s.charAt(end), Boolean.TRUE);
            end++;
            //Window [start, end) has unique characters; record its length.
            maxLength = Integer.max(maxLength, end - start);
        }
        else {
            //Move start of window one position ahead.
            characters.remove(s.charAt(start));
            start++;
        }
    }
    return maxLength;
}
```

If a character is already present in substring s[i..j-1], it cannot be added to the current substring. Find the length of the substring (j-i) and compare it with the current maximum length; if it is greater, the new maximum length of the longest substring without repeating characters is (j-i).
Then move the start of the window past the duplicate.

Below is an example execution of the above code.

```
Current Character : a
Substring (  ) does not contain a
New length of substring without repeating character : 1
Current Character : b
Substring ( a ) does not contain b
New length of substring without repeating character : 2

Current Character : c
Substring ( ab ) does not contain c
New length of substring without repeating character : 3

Current Character : a
Substring (abc) contains a

Current Character : a
Substring ( bc ) does not contain a
New length of substring without repeating character : 3

Current Character : b
Substring (bca) contains b

Current Character : b
Substring ( ca ) does not contain b
New length of substring without repeating character : 3

Current Character : c
Substring (cab) contains c

Current Character : c
Substring ( ab ) does not contain c
New length of substring without repeating character : 3

Current Character : b
Substring (abc) contains b

Current Character : b
Substring (bc) contains b

Current Character : b
Substring ( c ) does not contain b
New length of substring without repeating character : 3

Current Character : b
Substring (cb) contains b

Current Character : b
Substring (b) contains b

Current Character : b
Substring (  ) does not contain b
New length of substring without repeating character : 3

Longest substring without repeating characters : 3
```

There is a small optimization that helps us skip more characters when a repeating character is found, instead of skipping one at a time. Store the index of each character seen in substring [i..j-1]. While processing the jth character, if it is already in the hash, we know the index j' where that character occurs in the string. No substring containing both j' and j can have all unique characters. So, we can skip all indices from i to j' and start from j'+1 instead of i+1 as in the above method.

```
public static int longestSubstringWithoutRepeatingCharacters3(String s) {
    int len = s.length();
    HashMap<Character, Integer> characters = new HashMap<>();

    int maxLength = 0;

    for (int start = 0, end = 0; end < len; end++) {
        if (characters.containsKey(s.charAt(end))) {
            //find the index of the duplicate character.
            int currentIndex = characters.get(s.charAt(end));
            //Skip the window start past the duplicate.
            start = Integer.max(currentIndex + 1, start);
        }
        //Window [start, end] is inclusive on both sides.
        int currentLength = end - start + 1;
        maxLength = Integer.max(maxLength, currentLength);
        //Update the latest location of this character.
        characters.put(s.charAt(end), end);
    }
    return maxLength;
}
```

The complexity of finding the longest substring without repeating characters is hence O(n), with additional space complexity of O(n).
Please share if something is wrong or missing. We would love to hear from you.

Merge overlapping intervals

Given N intervals S = {E1, E2, ..., En}, where each Ei has start time `si` and end time `ei`. Some of these intervals may overlap. Just to clarify, Ei and Ej overlap when the start time of Ej, i.e. `sj`, is less than the end time of Ei, i.e. `ei`. For example, [(1,3),(2,4),(5,8),(6,9)] should transform into [(1,4),(5,9)], as intervals (1,3) and (2,4) overlap, and intervals (5,8) and (6,9) also overlap.

Merge overlapping intervals  : Thought process

As we always do, first try to come up with a brute force solution: given enough time, space, and money, how would you solve this?
The natural course is to take the ith interval and compare the start time of every jth interval with its end time; if the start time of the jth interval is less than the end time of the ith interval, the two intervals can be merged. What should the end time of the merged interval be? It should be the maximum of the end times of the two merged intervals.

What is the time complexity of this approach? We are not using any additional space; however, the worst-case time complexity is O(n²). Can we do better?

Which two times are we comparing in the brute force solution? The start time of one interval and the end time of another. If we arrange the input in a specific order, can we avoid processing some entries?

If we sort all intervals based on their start time, then `si` < `si+1` < `si+2`. Also, an interval always looks forward: `ei` > `si`, `ei+1` > `si+1`, and so on.

If `si` is greater than `ei-1`, then `si+1` will also be greater than `ei-1`, so there is no need to compare `si+1` with `ei-1`; that is, no interval Ei needs to look beyond its immediate predecessor. If `si` is less than `ei-1`, update `ei-1` with the maximum of `ei-1` and `ei` and move to Ei+1.
Notice that we need the last interval Ei-1 to decide whether to merge the new interval into the previous one or keep it standalone. A stack is the best data structure to use here. The algorithm looks like:

1. Consider interval Ei.
2. If stack is empty, push Ei to stack.
3. If stack is not empty, then pop interval at top of stack call it Ei-1.
4. Compare `si`, start time of Ei with `ei-1`, end time of Ei-1.
5. If `si` is less than `ei-1`, update `ei-1` as max(`ei-1`, `ei`), i.e. the maximum of the end times of the two intervals, and push Ei-1 back on to the stack.
6. Else push Ei on to stack.
7. Continue till all events are considered.
8. At the end of processing, stack will contain all merged interval.

Let’s take an example and see how this algorithm works. We have following intervals and we have to merge overlapping intervals.

First of all, sort all interval based on their start time.

Create a stack, start with the first interval, since the stack is empty, we will push the first event on to the stack.

After pushing the first event, the problem state looks like this

Take the second interval, start time (2) of the second interval is less than the end time of the previous event on the stack (3), hence, find the maximum of end times of these two intervals and update the last interval with that end time and push back on to the stack.

Look at the third interval, the start time of it is greater than the end time of interval on top of the stack, just push interval on to the stack.

Last interval, this time, the start time of the new interval is less than the end time of interval on top of the stack.

Find the maximum of end times of two intervals and update the previous interval with that end time and push it back on to stack.

At this point, when there is no more interval remaining, stack contains all merged overlapping intervals.

Merge overlapping intervals : Implementation

```
package com.company;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Stack;

/**
 * Created by sangar on 8.4.18.
 */
public class OverlappingIntervals {
    public static ArrayList<Interval>
            mergeOverlappingIntervals(ArrayList<Interval> intervals){

        ArrayList<Interval> mergedIntervals = new ArrayList<>();
        Stack<Interval> s = new Stack();

        //Sort the ArrayList of intervals based on start time.
        Collections.sort(intervals, Comparator.comparing(p -> p.getStartTime()));
        for(Interval currentInterval : intervals){
            if(s.empty()) s.push(currentInterval);
            else {
                Interval previousInterval = s.pop();
                if(previousInterval.getEndTime() >
                        currentInterval.getStartTime()){
                    /*
                    If current interval's start time is less than end time of
                    previous interval, find max of end times of two intervals
                    and push new interval on to stack.
                    */
                    int endTime = Integer.max(previousInterval.getEndTime(),
                            currentInterval.getEndTime());
                    /* Notice that we have created a new interval and
                       did not update the old one.
                       This concept is called immutability of class. */
                    s.push(new Interval(previousInterval.getStartTime(),
                            endTime));
                }
                else{
                    s.push(previousInterval);
                    s.push(currentInterval);
                }
            }
        }
        //Pop merged intervals off the stack into the result list.
        while(!s.empty()){
            mergedIntervals.add(s.pop());
        }
        //Stack pops in reverse order; restore ascending order by start time.
        Collections.reverse(mergedIntervals);

        return mergedIntervals;
    }

    public static void main(String[] args) {
        ArrayList<Interval> intervals = new ArrayList<>();
        intervals.add(new Interval(1, 3));
        intervals.add(new Interval(2, 4));
        intervals.add(new Interval(5, 8));
        intervals.add(new Interval(6, 9));

        ArrayList<Interval> mergedIntervals = mergeOverlappingIntervals(intervals);
        for (Interval interval : mergedIntervals){
            System.out.print("(" + interval.getStartTime() + "," + interval.getEndTime() + ")");
        }
    }
}
```

The complexity of the algorithm to merge overlapping intervals is `O(n log n)` due to sorting, with `O(n)` extra space for the stack; copying into the list to return also takes `O(n)` space.

There is another way to implement the same function without using the stack. Here we use the fact that ArrayList in Java is backed by an array, so getting an element at a particular index is an `O(1)` operation. The code looks more or less the same; however, there is no traversal of the stack at the end to create the list to return.

```
public List<Interval> mergeOptimized(List<Interval> intervals) {

    if(intervals.size() == 0) return intervals;

    Collections.sort(intervals,
        (Interval a, Interval b) -> a.getStartTime() - b.getStartTime());

    List<Interval> mergedIntervals = new ArrayList<Interval>();
    for(Interval interval : intervals){
        /* If the merged list is empty, add the interval to it; also add
           it when the last interval in the merged list does not overlap
           with the current one.
           Remember the get function on ArrayList is an O(1) operation
           because ArrayLists in Java are backed by arrays. */
        if(mergedIntervals.isEmpty()
                || mergedIntervals.get(mergedIntervals.size()-1).getEndTime() <
                   interval.getStartTime() ){
            mergedIntervals.add(interval);
        }
        else {
            //Overlap: extend the end time of the last merged interval.
            int lastEndTime = Math.max(
                mergedIntervals.get(mergedIntervals.size()-1).getEndTime(),
                interval.getEndTime()
            );
            mergedIntervals.get(mergedIntervals.size()-1).setEndTime(lastEndTime);
        }
    }

    return mergedIntervals;
}
```
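The `Interval` class referenced by both snippets is not shown in the post; a minimal version, consistent with the constructor, getters, and setter used above, might look like this (an assumption, not the author's original class):

```java
// Hypothetical minimal Interval class matching the accessors used above.
class Interval {
    private int startTime;
    private int endTime;

    Interval(int startTime, int endTime) {
        this.startTime = startTime;
        this.endTime = endTime;
    }

    int getStartTime() { return startTime; }
    int getEndTime() { return endTime; }
    void setEndTime(int endTime) { this.endTime = endTime; }
}
```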

You can use the above snippet of code to submit for this leetcode problem and it should be accepted.

Please share if there is something missing or wrong. Also, please reach out to us at communications@algorithmsandme.com if you want to contribute to the website and help others to learn by sharing your knowledge. If you are preparing for an interview and need some coaching to prepare for it, please sign up for the free session with us.

Subarray with sum zero

Given an array of positive and negative integers, find a subarray with sum zero in that array. For example, in the array [2, -3, -1, 4], the subarray [-3, -1, 4] sums to zero.

The brute force method to find a subarray with sum zero is to enumerate all subarrays of the array and sum each one to see if any adds up to zero. There are `O(n²)` subarrays for a given array of size n, so the complexity of the brute force solution is `O(n²)`.

```
package com.company;

import java.util.Arrays;

/**
 * Created by sangar on 3.12.18.
 */
public class SubarrayWithZeroSum {
    public int[] findSubarrayWithZeroSumBrute(int[] a){
        int len = a.length;

        for(int i = 0; i < len; i++){
            int sum = 0;
            for(int j = i; j < len; j++){
                sum += a[j];
                if(sum == 0){
                    return Arrays.copyOfRange(a, i, j+1);
                }
            }
        }
        return new int[0];
    }
}
```

Test cases

```
package test;

import com.company.SubarrayWithZeroSum;
import org.junit.jupiter.api.Test;

import java.util.Arrays;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class SubarrayWithSumZeroTest {

    SubarrayWithZeroSum tester = new SubarrayWithZeroSum();

    @Test
    public void subarrayWithZeroSumBruteTest() {
        int[] a = {2, -3, -1, 4};
        int[] output = {-3, -1, 4};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSumBrute(a)));
    }

    @Test
    public void subarrayWithZeroSumBruteNoSubArrayTest() {
        int[] a = {2, -3, -2, 4};
        int[] output = {};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSumBrute(a)));
    }

    @Test
    public void subarrayWithZeroSumBruteOneElementTest() {
        int[] a = {2, 0, -1, 4};
        int[] output = {0};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSumBrute(a)));
    }
}
```

Find subarray with sum zero: thoughts

A subarray is a contiguous part of an array. Let's say we compute the sum of the subarray starting at 0 and ending at any index i. So, T[i] represents the sum of subarray A[0..i].

What if we have two indices i and j, such that `i < j` and `T[i] = T[j]`? In this case, all the elements between index i+1 and index j add up to zero, and that is our subarray with sum zero.
The length of this subarray with sum zero is j-i.

Implementation

```
package com.company;

import java.util.Arrays;

/**
 * Created by sangar on 3.12.18.
 */
public class SubarrayWithZeroSum {
    public int[] findSubarrayWithZeroSum(int[] a){

        int len = a.length;

        int[] T = new int[len];

        T[0] = a[0];
        for(int i = 1; i < len; i++){
            T[i] = T[i-1] + a[i];
        }

        //Complexity of below code is O(n^2)
        for(int i = 0; i < len; i++){
            //Prefix sum zero: subarray a[0..i] itself sums to zero.
            if(T[i] == 0){
                return Arrays.copyOfRange(a, 0, i+1);
            }
            for(int j = i+1; j < len; j++){
                if(T[i] == T[j]){
                    return Arrays.copyOfRange(a, i+1, j+1);
                }
            }
        }
        return new int[0];
    }
}
```

Test cases

```
package test;

import com.company.SubarrayWithZeroSum;
import org.junit.jupiter.api.Test;

import java.util.Arrays;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class SubarrayWithSumZeroTest {

    SubarrayWithZeroSum tester = new SubarrayWithZeroSum();

    @Test
    public void subarrayWithZeroSumTest() {
        int[] a = {2, -3, -1, 4};
        int[] output = {-3, -1, 4};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSum(a)));
    }

    @Test
    public void subarrayWithZeroSumNoSubArrayTest() {
        int[] a = {2, -3, -2, 4};
        int[] output = {};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSum(a)));
    }

    @Test
    public void subarrayWithZeroSumOneElementTest() {
        int[] a = {2, 0, -1, 4};
        int[] output = {0};
        assertEquals(Arrays.toString(output),
            Arrays.toString(tester.findSubarrayWithZeroSum(a)));
    }
}
```

The complexity of the algorithm to find a subarray with zero sum in a given array of integers is `O(n²)`, with an additional space complexity of `O(n)` to store the sum up to index i.

We can optimize it further by keeping a hash of all the sums seen while adding. When we add the element at index i to the sum calculated up to index i-1, we check whether the new sum is zero. If yes, then the subarray from 0 to index i adds up to zero. If the current sum has already been seen, then there is a subarray with sum zero between the index where we last saw that sum and the current index.

```
package com.company;

import java.util.Arrays;
import java.util.HashMap;

/**
 * Created by sangar on 3.12.18.
 */
public class SubarrayWithZeroSum {

    public int[] findSubarrayWithZeroSumOptimized(int[] a){

        int len = a.length;

        HashMap<Integer, Integer> T = new HashMap<Integer, Integer>();
        //Seed with sum 0 before index 0, so a zero-sum prefix is detected.
        T.put(0, -1);

        int sum = 0;
        for(int i = 0; i < len; i++){
            sum += a[i];
            if(T.get(sum) != null){
                return Arrays.copyOfRange(a, T.get(sum)+1, i+1);
            }
            T.put(sum, i);
        }

        return new int[0];
    }
}
```
```

Test cases

```package test;

import com.company.SubarrayWithZeroSum;
import org.junit.jupiter.api.Test;

import java.util.Arrays;

import static org.junit.Assert.assertEquals;

/**
* Created by sangar on 23.9.18.
*/
public class SubarrayWithSumZeroTest {

SubarrayWithZeroSum tester = new SubarrayWithZeroSum();

@Test
public void subarrayWithZeroSumOptimizedTest() {

int[] a = {2, -3, -1, 4};
int [] output = {-3, -1, 4};
assertEquals(Arrays.toString(output),
Arrays.toString(tester.findSubarrayWithZeroSumOptimized(a)));
}

@Test
public void subarrayWithZeroSumOptimizedNoSubArrayTest() {

int[] a = {2, -3, -2, 4};
int [] output = {};
assertEquals(Arrays.toString(output),
Arrays.toString(tester.findSubarrayWithZeroSumOptimized(a)));
}

@Test
public void subarrayWithZeroSumOptimizedOneElementTest() {

int[] a = {2, 0, -1, 4};
int [] output = {0};
assertEquals(Arrays.toString(output),
Arrays.toString(tester.findSubarrayWithZeroSumOptimized(a)));
}

}
```

The complexity of this method is `O(n)`, with additional space of `O(n)` in the worst case.

Sliding window problem

Given a large integer array of size x, a window size n and a number k, find the k smallest numbers in every window of n elements of the array. This is commonly known as the sliding window problem. For example, for the array [2,3,1,5,6,4,2,5,4,3,8] with k = 2 and n = 6, the output should be [1,2], [1,2], [1,2], [2,4], [2,3], [2,3].

This problem regularly features in Amazon interviews.

Find k numbers in sliding window : thoughts

If we break down the problem, it reduces to finding the k smallest elements in an array, which can easily be solved in multiple ways. All we have to take care of is moving the window and storing the results for each window.

Quick sort method
The first way is to use quicksort's partitioning: we randomly pick a pivot and put it in its right place. When the pivot is at the right place, all elements on its right side are greater than the pivot and all elements on its left side are smaller. If the pivot ends up at the kth position in the array, all elements on its left side are automatically the k smallest elements of the given array. Done this way (quickselect), each window takes `O(n)` on average, but `O(n^2)` in the worst case.

Using heaps
What we are interested in is k elements. What if, from the current window, we take out the first k numbers and consider them the k smallest elements? This set of k numbers may change based on the values of the following numbers in the window. In which way? If a new number is smaller than any of the numbers chosen initially, the new number has to be added to the k-smallest-element set. However, we have only k slots there, so someone has to move out.

If a new number is less than any number in the set, it must be less than the maximum number in the set.

Given the above fact, we can always swap the new number with the maximum of the set. Now the problem is how to find the max in the set. This set will be modified repeatedly, so we cannot just sort it once and read off the max. For use cases where data keeps changing and we repeatedly need the max of the set, heaps are the best data structure to use. In this case we will use a max heap, a heap in which every node is greater than or equal to its children, so the maximum sits at the root. A max heap gives us O(1) complexity to find the max and O(log n) complexity to heapify after removing the old max and inserting a new number.
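To make these costs concrete, here is a small illustrative snippet (assuming Java; the class name `MaxHeapDemo` is purely for demonstration) that uses `PriorityQueue` with a reversed comparator as a max heap:

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class MaxHeapDemo {
    public static void main(String[] args) {
        // PriorityQueue is a min heap by default; reverse the order for a max heap
        PriorityQueue<Integer> maxHeap =
                new PriorityQueue<>(Collections.reverseOrder());
        maxHeap.add(4);
        maxHeap.add(1);
        maxHeap.add(7);

        System.out.println(maxHeap.peek()); // find max in O(1): prints 7
        maxHeap.poll();                     // remove max in O(log n)
        maxHeap.add(3);                     // insert in O(log n)
        System.out.println(maxHeap.peek()); // prints 4
    }
}
```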

Algorithm

1. Create a max heap with the first k elements of the window.
2. Scan through the remaining elements in the window: if the new number is less than the root of the max heap, remove the root and add the new number to the heap.
3. At the end of processing, the elements in the heap are the k smallest numbers in the window.
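The steps above can be sketched in Java using a `PriorityQueue` as the max heap (an illustrative sketch that rebuilds the heap for every window; class and method names are assumptions, and the C implementation below reuses heap nodes instead):

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class KSmallestInWindow {
    // Returns the k smallest elements of every n-sized window, in ascending order
    public static int[][] kSmallestPerWindow(int[] a, int n, int k) {
        int windows = a.length - n + 1;
        int[][] result = new int[windows][];
        for (int w = 0; w < windows; w++) {
            PriorityQueue<Integer> maxHeap =
                    new PriorityQueue<>(k, Collections.reverseOrder());
            // Step 1: seed the max heap with the first k elements of the window
            for (int i = w; i < w + k; i++) maxHeap.add(a[i]);
            // Step 2: a smaller incoming element evicts the current maximum
            for (int i = w + k; i < w + n; i++) {
                if (a[i] < maxHeap.peek()) {
                    maxHeap.poll();
                    maxHeap.add(a[i]);
                }
            }
            // Step 3: drain the heap (largest first), filling from the back
            int[] smallest = new int[k];
            for (int i = k - 1; i >= 0; i--) smallest[i] = maxHeap.poll();
            result[w] = smallest;
        }
        return result;
    }
}
```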

Sliding window algorithm to find k smallest elements : Implementation

```#include<stdio.h>
#include<stdlib.h>
#include <math.h>

typedef struct node {
struct node * left;
struct node * right;
int data;
} heapNode;

int leftChild(int i){
return 2*i + 1;
}

int rightChild(int i){
return 2*i + 2;
}

void swapPtr(heapNode *a[], int i, int largest){
heapNode *temp = a[i];
a[i] = a[largest];
a[largest] = temp;
}
/* This function heapifies heap after removal of root
or at time of building heap from an array */
void max_heapify_ptr(heapNode *a[], int i, int len){
int largest = i;
int left, right;

left = leftChild(i);
right = rightChild(i);

if(left <= len && a[i]->data <a[left]->data){
largest = left;
}
if(right <= len && a[largest]->data < a[right]->data){
largest = right;
}
if(largest != i){
swapPtr(a, i, largest);
max_heapify_ptr(a, largest, len);
}
}

/* Building heap from given elements */
void build_max_heap_ptr(heapNode *a[], int len){
int i = len/2 +1;
for(; i>=0; i--){
max_heapify_ptr(a,i, len);
}
}

/* This function allocates node of heap */
heapNode * create_node(int data){
heapNode *node = (heapNode *)(malloc)(sizeof(heapNode));
if(node){
node->data = data;
}
return node;

}

/* This function is real implementation of
the sliding window algorithm */
void slide_window(int buffer[], int N, int K, int buffer_len){

int i =0, j =0,s;
heapNode *max_heap[K+1];

/* Initialize pointers so the reuse check below is well defined */
for(i=0; i<=K; i++) max_heap[i] = NULL;

for(j=0 ; j + N <= buffer_len; j++){
/* Window starts at index 0 and is of size N */
printf("\nCurrent window :");
for(s =j; s<j+N; s++){
printf("%d ", buffer[s]);
}
printf("\n");
/* Put K element from N element window */
for(i=0;i<K; i++){
/* Since we would be doing this for every window,
avoid reallocating nodes */
if(max_heap[i]){
max_heap[i]->data = buffer[i+j];
}
else{
max_heap[i] = create_node(buffer[i+j]);
}
}
/* Build max heap with those entered elements */
build_max_heap_ptr(max_heap,K-1);

/*Now for all remaining N-K elements in window,
check if they fit in max heap */
for(i=K+j; i< N+j; i++){
heapNode * root = max_heap[0];
if(buffer[i] < root->data){
root->data = buffer[i];
max_heapify_ptr(max_heap, 0, K-1);
}
}

/*Print the current max heap, it will contain K smallest
element in current window */
printf("K minimum elements in this window :");
for(int x=0; x< K; x++){
printf("%d ", max_heap[x]->data);
}

}
}
/* Driver Program to execute above code */
int main(){
int buffer[10] = {1,4,5,6,3,2,4,8,9,6};

int K= 4;
int N =5;

int size = sizeof(buffer)/ sizeof(buffer[0]);

slide_window(buffer,N, K,size);
return 0;
}
```

The following figures explain how the window slides and how the heap is updated.
1. The window starts at index 0 and ends at N. We take the k minimum elements among the N elements and store them in a max heap. The array is shown in the picture below, with a window size of 9 and k = 4.
The first step is to create a max heap with the first 4 elements of the window.

Next we are looking at 4, which is less than max in max heap. So we remove the max from heap and add the new element(4) to heap.

Next is 2, which is less than max in max heap. So we remove the max from heap and add the new element(2) to heap.

Next is 3, which is less than max in max heap. So we remove the max from heap and add the new element(3) to heap.

Next we have 10 and 11 which are greater than root of max heap, so nothing happens.

We come to the end of the window. Therefore, the 4 smallest elements in the window are [1,2,3,4].

Next, the window moves one step ahead; that's where you discard the max heap, create a new empty one, and repeat the process.

We can actually avoid discarding the entire heap when the window moves; however, the complexity of the overall algorithm remains the same. This problem is also asked in a different form: find the maximum in each sliding window, which can be solved in `O(n)` using a deque that keeps indices of elements in decreasing order of value.

```#include <iostream>
#include<deque>
using namespace std;

void slidingWindow(int buffer[], int n, int w, int output[])
{
deque<int> Q;
int i;
/* Initialize deque Q for the first window: push all w elements,
removing elements which cannot be the maximum in this window */
for (i = 0; i < w; i++)
{
//This is where we are removing all less than elements
while (!Q.empty() && buffer[i] >= buffer[Q.back()])
Q.pop_back();
// Pushing the index
Q.push_back(i);
}

for (i = w; i < n; i++)
{
output[i-w] = buffer[Q.front()];

//update Q for new window
while (!Q.empty() && buffer[i] >= buffer[Q.back()])
Q.pop_back();

//Pop older element outside window from Q
while (!Q.empty() && Q.front() <= i-w)
Q.pop_front();

//Insert current element in Q
Q.push_back(i);
}
output[n-w] = buffer[Q.front()];
}

int main(){
int a[]={3,5,4,2,-1,4,0,-3};
int n = sizeof(a)/sizeof(a[0]);
int output[n];

slidingWindow(a,n,4,output);
return 0;
}
```

The worst-case complexity of this sliding window algorithm is `O(x * n log k)`: there are about x windows, and in each window up to n elements are compared against the heap root, with an `O(log k)` heapify on the k-element heap whenever the root is replaced. Building the initial heap of k elements per window costs `O(k)`.

Please share if there is something wrong or missing.

Merge k sorted arrays

Given k sorted arrays, each having n elements, merge them into one sorted array of n*k elements. For example, given 3 sorted arrays, the result is a single array containing all the elements of the three arrays in sorted order.

Merge k sorted arrays

Since all the input arrays are sorted, the first element of the result array will be among the first elements of the input arrays. How can we find the minimum among all the elements plucked from the first index of each array? Easy: take those k elements (there are k arrays, so k first elements) and build a min heap. The root of the min heap is the least element among the first elements of all arrays, so it will be the first element of the result array.

Once we add the first element into the result array, we have to find the second element. The second element can come from the set of first elements of all input arrays, except the array from which the first element of the result array was taken; from that array, we take the second element instead.

In order to know which array gave the minimum element at a particular time, we store additional information with each heap node: the array number and the index of the element in that array.

If i represents the array number and j the index of the minimum element from the ith array, then we next add the (j+1)th element of that array to the min heap and re-heapify. If j goes out of bounds for the ith array, we continue with a min heap of size k-1, and so on, until no elements are left in the heap.

Follow this procedure until the heap is empty, `n*k` extractions in total. When all array elements are processed, the result array is in sorted order.

Merge k sorted arrays: algorithm

• Build a min heap with the first element of each of the k arrays.
• Pick the root of the min heap and put it in the result array.
• If the array that contributed the root has remaining elements, put its next element at the root of the min heap and heapify again.
• If all elements of that array are already processed, reduce the size of the min heap by 1.
• Repeat steps 2, 3 and 4 till the min heap is empty.

Merge k sorted arrays: implementation

```package com.company;

import java.util.PriorityQueue;

/**
* Created by sangar on 2.12.18.
*/
public class MergeKSortedArrays {
private class HeapNode{
public int arrayNum;
public int index;
public int value;

public HeapNode(int arrayNum, int index, int value){
this.arrayNum = arrayNum;
this.index = index;
this.value = value;
}
}

public int [] mergeKSortedArrays(int[][] arrays){

if(arrays == null) return null;

PriorityQueue<HeapNode> minHeap =
new PriorityQueue<>(arrays.length,
(HeapNode a,HeapNode b)-> a.value - b.value);

int size = 0;
for(int i =0; i<arrays.length; i++){
size += arrays[i].length;
}
int[] result = new int[size]; // k * n

//add the first element of each array to the min heap
for(int i=0; i<arrays.length; i++){
if(arrays[i].length > 0){
minHeap.add(new HeapNode(i, 0, arrays[i][0]));
}
}

//Complexity O(n * k * log k)
for(int i=0; i< size; i++){
//Take the minimum value and put into result
HeapNode node = minHeap.poll();

if(node != null){
result[i] = node.value;
if(node.index + 1 < arrays[node.arrayNum].length) {
//Complexity of O(log k)
minHeap.add(new HeapNode(node.arrayNum,
node.index + 1,
arrays[node.arrayNum][node.index + 1]));
}
}
}
return result;
}
}
```

Test cases

```package test;

import com.company.MergeKSortedArrays;
import org.junit.jupiter.api.Test;

import java.util.Arrays;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
* Created by sangar on 23.9.18.
*/
public class MergeKSortedArraysTest {

MergeKSortedArrays tester = new MergeKSortedArrays();

@Test
public void mergeKSortedArraysTest() {

int[][] input  ={
{ 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 9, 10, 11, 12 }
};

int[] expectedOutput = {1,2,3,4,5,6,7,8,9,10,11,12};

int [] output = tester.mergeKSortedArrays(input);

System.out.println(Arrays.toString(output));
assertEquals(Arrays.toString(expectedOutput),
Arrays.toString(output));
}

@Test
public void mergeKSortedArraysWithUnequalSizeTest() {

int[][] input  ={
{ 1, 2 }, { 5, 6, 7}, { 9, 10, 11, 12 }
};

int[] expectedOutput = {1,2,5,6,7,9,10,11,12};

int [] output = tester.mergeKSortedArrays(input);

System.out.println(Arrays.toString(output));
assertEquals(Arrays.toString(expectedOutput),
Arrays.toString(output));
}

@Test
public void mergeKSortedArraysWithNullTest() {

int [] output = tester.mergeKSortedArrays(null);

assertEquals(null, output);
}
}

```

The complexity of this code to merge k sorted arrays is `O(n * k * log k)`, with a space complexity of `O(k)` for the heap.

Kth smallest element in array

Given an unsorted array of integers, find the kth smallest element in that array. For example, if the input array is A = [3,5,1,2,6,9,7], the 4th smallest element of A is 5, because if you sort the array it looks like A = [1,2,3,5,6,7,9], where you can easily see that the 4th element is 5.

This problem is commonly asked in Microsoft and Amazon interviews, as it has multiple layers and there are many things that can be measured with this one problem.

Kth smallest element : Line of thought

First of all, in any interview, try to come up with a brute force solution. The brute force solution to find the kth smallest element in an array of integers is to sort the array and return A[k-1] (k-1 because the array is zero-based indexed).

What is the complexity of the brute force solution? With sorting algorithms like merge sort and heap sort, it is `O(n log n)`. The problem with both of these sorts is that they use additional space. Quicksort is another option; its drawback is that its worst-case complexity is `O(n^2)`, which happens (with the first element as pivot) when the input is already sorted.
In our case, the input is unsorted, so we can expect quicksort to run with `O(n log n)` complexity, its average case. The advantage of quicksort is that there is no additional space complexity.

Optimising quick sort

Let’s see how quicksort works and whether we can optimize the solution further.
The idea behind quicksort is to put the selected pivot at its correct position. Once the pivot is at the correct position, all the elements on its left side are smaller than the pivot and all elements on its right side are greater. This step is called partitioning.

If, after partitioning, the pivot is at position j, can we say that the pivot is actually the jth smallest element of the array? What if j is equal to k? Well, problem solved: we have found the kth smallest element.

If j is less than k, the left subarray has fewer than k elements, so we need to include elements from the right subarray: the kth smallest element is in the right subarray somewhere. We have already found the j smallest elements; all we need to find now is the (k-j)th smallest element of the right subarray.

What if j is greater than k? In this case, we have to drop some elements from left subarray, so our search space would be left subarray after partition.

Theoretically, this algorithm (known as quickselect) still has a worst case of `O(n^2)`, but on average it is faster than a full sort, because we recurse into only one side of the partition instead of sorting the entire array.

Algorithm to find K smallest elements in array

1. Select a pivot and partition the array with pivot at correct position j
2. If position of pivot, j, is equal to k, return A[j].
3. If j is less than k, discard array from start to j, and look for (k-j)th smallest element in right sub array, go to step 1.
4. If j is greater than k, discard array from j to end and look for kth element in left subarray, go to step 1

Let’s take an example and see how this algorithm works. A = [4, 2, 1, 7, 5, 3, 8, 10, 9, 6], and we have to find the fifth smallest element in array A.

Start with pivot as first index of array, so pivot = 0, partition the array into two parts around pivot such that all elements on left side of pivot element, i.e. `A[pivot]` are smaller and all elements on right side are greater than `A[pivot]`.

In our example, array A will look like below after the pivot has found its correct position.

If pivot == k-1 (the array is zero-based indexed), then `A[pivot]` is the kth smallest element. Since pivot (3) is less than k-1 (4), look for the kth smallest element on the right side of the pivot.

k remains as it is, as opposed to the k-j mentioned in the algorithm, because the pivot position is given with respect to the entire array, not the subarray.

In the second iteration, pivot = 4 (the index, not the element). After the second partition, array A looks like below.

Pivot (4) is equal to k-1 (5-1), so the 5th smallest element in array A is 5.

Kth smallest element : Implementation

```package com.company;

/**
* Created by sangar on 30.9.18.
*/
public class KthSmallest {
private void swap(int[] a, int i, int j){
int temp = a[i];
a[i] = a[j];
a[j] = temp;
}
private int partition(int[] a, int start, int end){
int pivot = a[start];
int i  = start+1;
int j  = end;

while(i <= j){
while(i <= end && a[i] < pivot) i++;
while(a[j] > pivot) j--;

if(i < j) {
swap(a, i, j);
i++;
j--;
} else {
break;
}
}
swap(a, start, j);
return j;
}

public int findKthSmallestElement(int a[], int start,
int end, int k){
if(a == null || a.length == 0 || k > a.length) return -1;
if(start <= end){
int p = partition(a, start, end);
if(p == k-1){
return a[p];
}
if(p > k-1)
return findKthSmallestElement(a, start, p-1, k);
if(p < k-1)
return findKthSmallestElement(a, p+1, end, k);
}
return -1;
}
}
```
```package test;

import com.company.KthSmallest;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
* Created by sangar on 28.8.18.
*/
public class KthSmallestTest {

KthSmallest tester = new KthSmallest();
private int[] a = {4, 2, 1, 7, 5, 3, 8, 10, 9};
@Test
public void kthSmallest() {
assertEquals(7, tester.findKthSmallestElement(a,0,8,6));
}

@Test
public void firstSmallest() {
assertEquals(1, tester.findKthSmallestElement(a,0,8,1));
}

@Test
public void lastSmallest() {
assertEquals(10, tester.findKthSmallestElement(a,0,8,9));
}

@Test
public void kGreaterThanSize() {
assertEquals(-1, tester.findKthSmallestElement(a,0,8,15));
}
@Test
public void emptyArray() {
int[] a = {};
assertEquals(-1, tester.findKthSmallestElement(a,0,0,1));
}

@Test
public void nullArray() {
assertEquals(-1, tester.findKthSmallestElement(null,0,0,1));
}
}
```

The worst-case complexity of using quicksort-style partitioning to find the kth smallest element in an array of integers is still `O(n^2)`, although on average it does far less work than sorting the whole array.

Kth smallest element using heaps

Imagine a case where there are a billion integers in the array and you have to find the 5 smallest elements from that array. A complexity of `O(n log n)` for sorting is too costly for that use case, and the quicksort-based algorithm above does not take the disparity between k and n into consideration.

We want the k smallest elements. How about we choose k elements at random, call them `set A`, and then go through the other n-k elements, call them `set B`, checking whether an element from set B should displace an element in set A?

What is the condition for an element from set B to replace an element in set A? Well, if the new element is less than the maximum of set A, then that maximum cannot be among the k smallest elements, right? So the maximum element of set A is replaced by the new element from set B.

Now the problem is how to quickly find the maximum of set A. A heap is the best data structure for that. What kind of heap: min heap or max heap? A max heap, as it stores the maximum of the set at its root.

Let’s define concrete steps to find the k smallest elements using a max heap.

1. Create a max heap of size k from the first k elements of the array.
2. Scan the remaining elements of the array one by one: if the current element is less than the max on the heap, remove the root, add the current element to the heap and heapify; otherwise move to the next element.
3. At the end, the max heap contains the k smallest elements of the array, and its root is the kth smallest element.

Let’s take an example and see how this algorithm works. The input array is shown below, and we have to find the 6th smallest element in this array.

Step 1 : Create a max heap with first 6 elements of array.

Step 2: Take the next element from set B and check if it is less than the root of the max heap. In this case it is, so remove the root and insert the new element into the max heap.

Step 3: The scan continues: at 10 nothing happens, as the new element is greater than the root of the max heap; same for 9. At 6, the root of the max heap is again greater, so remove the root and add 6 to the max heap.

Array scan is finished, so just return the root of the max heap, 6 which is the sixth smallest element in given array.

```	public int findKthSmallestElementUsingHeap(int a[], int k){
//https://stackoverflow.com/questions/11003155/change-priorityqueue-to-max-priorityqueue

PriorityQueue<Integer>  maxHeap =
new PriorityQueue<>(k, Collections.reverseOrder());

if(a == null || k > a.length) return -1;
//Create max heap with first k elements
for(int i=0; i<k; i++){
maxHeap.add(a[i]);
}

/*Keep updating max heap based on new element
If new element is less than root,
remove root and add new element
*/

for(int i=k; i<a.length; i++){
if(maxHeap.peek() > a[i]){
maxHeap.remove();
maxHeap.add(a[i]);
}
}
return maxHeap.peek();
}
```

Can you calculate the complexity of the above algorithm? `heapify()` has complexity `O(log k)` with k elements on the heap. In the worst case we call `heapify()` for every element in the array, that is n times, so the overall complexity becomes `O(n log k)`. There is also an additional space complexity of `O(k)` to store the heap.
Even when k is very small compared to n, the running time still depends on the size of the array, since every element must be scanned.

We want the k smallest elements; if we pop the first k elements from a min heap, will that solve the problem? It will. Build a min heap of all n elements in place from the given array, then pop k times.
Building a heap has complexity `O(n)` (worth reading up on why). Each of the k deletions then triggers a `heapify()` of `O(log n)` on an n-element heap, so the overall complexity is `O(n + k log n)`.
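A minimal sketch of this approach, assuming Java and relying on `PriorityQueue`'s collection constructor to heapify in `O(n)` (the class and method names here are illustrative, not part of the earlier implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class KthSmallestMinHeap {
    public int findKthSmallestUsingMinHeap(int[] a, int k) {
        if (a == null || k <= 0 || k > a.length) return -1;

        // Build the min heap from the whole array; heapifies in O(n)
        List<Integer> elements = new ArrayList<>(a.length);
        for (int value : a) elements.add(value);
        PriorityQueue<Integer> minHeap = new PriorityQueue<>(elements);

        // Pop the minimum k-1 times, O(log n) each; the new root is the answer
        for (int i = 0; i < k - 1; i++) minHeap.poll();
        return minHeap.peek();
    }
}
```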

Depending on what you want to optimize, select correct method to find kth smallest element in array.

Please share if there is something wrong or missing. If you are interested in taking coaching sessions from our experienced teachers, please reach out to us at communications@algorithmsandme.com