
Check if a binary tree is a BST or not

This is one of the most frequently asked programming interview questions: how do you check or validate that a given binary tree is a binary search tree (BST)? For example, of the three binary trees below, the first and second are BSTs but the third is not.

(figure: three example binary trees; the first two are BSTs, the third is not)

In the binary tree introduction, we touched upon the recursive structure of a binary search tree. The first property a tree must satisfy to qualify as a BST is: the values of all nodes in the left subtree of the root are smaller than the root's value, and the values of all nodes in the right subtree are greater. This property must hold at every node.

Check if a binary tree is a BST or not: thoughts

So, for the binary tree rooted at a particular node to be a BST, the root must be greater than all nodes in its left subtree and less than all nodes in its right subtree. However, is that a sufficient condition? Let's take a counterexample and show that even when the root is greater than all nodes in the left subtree and smaller than all nodes in the right subtree, the binary tree may not be a binary search tree. Look at the tree below.

In the tree above the condition is satisfied, but we cannot call this binary tree a BST.

This is where the recursive structure of a binary search tree plays an important role. For the binary tree rooted at a node to be a BST, its left subtree and right subtree should also be BSTs. So, there are three conditions which should be satisfied:

  1. Left subtree is BST
  2. Right subtree is BST
  3. Value of the root node is greater than the maximum in the left subtree and less than the minimum in the right subtree

Check if a binary tree is a BST or not: recursive implementation

#include<stdio.h>
#include<stdlib.h>

#define true 1
#define false 0

struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;

Node * findMaximum(Node * root){
	if( !root ) return root;
	while( root->right ){
		root = root->right;
	}
	return root;
}

Node * findMinimum(Node * root){
	if( !root ) return root;
	while( root->left ){
		root = root->left;
	}
	return root;
}

int isBST(Node * node){

  	if(!node)
  		return true;
    
    if( ! ( node->left || node->right ) ) return true;   
  	int isLeft  = isBST(node->left);
  	int isRight = isBST(node->right);

  	if(isLeft && isRight){
  		/* Since we already know that left sub tree and
 		right sub tree are Binary search tree, finding min and max in them would be easy */
   	
   		Node *max  =  NULL;
   		Node *min  =  NULL;
   		if( node->left )
   			max = findMaximum(node->left);
   		if( node->right )
   			min = findMinimum(node->right);

   		//Case 1 : only left sub tree is there
    	if(max && !min)
        	return node->value > max->value;
   		//Case 2 : Only right sub tree is there
    	if(!max && min)
       		return node->value < min->value;
   		//Case 3 : Both left and right sub tree are there
    	return (node->value > max->value && node->value < min->value);
   }
   return false;
}

Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  printf("%s", isBST(root ) ? "Yes" : "No" );
  
  return 0;
}

Check if a binary tree is a BST or not: optimized implementation

The above implementation that checks whether a binary tree is a binary search tree is correct but inefficient because, for every node, its left and right subtrees are scanned to find the min and max. That makes the implementation non-linear, as the same nodes are visited repeatedly.

How can we avoid re-scanning the left and right subtrees? We can, if we keep track of the allowed maximum and minimum while checking those subtrees for the BST property, and pass that range down with us.

Start with INT_MAX and INT_MIN; at every node, check that the node's value is less than the max and greater than the min. If it is, go down the left subtree with the max changed to the node's value, and go down the right subtree with the min changed to the node's value. It is a similar implementation to the one above, except that it avoids revisiting nodes.

#include<stdio.h>
#include<stdlib.h>
#include<limits.h> /* for INT_MIN and INT_MAX */

#define true 1
#define false 0

struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;

int isBSTHelper(Node *root, int max, int min){
    if(!root) return true;

    if(root->value < min || root->value > max){
        return false;
    }

    return isBSTHelper(root->left, root->value, min) &&
           isBSTHelper(root->right, max, root->value);
}

int isBST(Node *root){
    return isBSTHelper(root, INT_MAX, INT_MIN);
}


Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  printf("%s", isBST(root ) ? "Yes" : "No" );
  
  return 0;
}

The complexity of the above implementation is O(n) as we are traversing each node only once.

Another method to check whether a binary tree is a BST is to do an inorder traversal of the tree while keeping track of the previously visited node. As we know, the inorder traversal of a binary search tree yields nodes in sorted order, so the previously visited node should always be smaller than the current node. If all nodes satisfy this property, the binary tree is a binary search tree; if the property is violated at any node, it is not.

The complexity of this implementation is also O(n), as we traverse each node only once.
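The inorder approach described above can be sketched in C as follows. This is a minimal sketch reusing the Node structure from the listings above; the function names and the use of INT_MIN as the initial "previous" value (assuming no node stores INT_MIN itself) are illustrative choices, not from the original post.

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

struct node {
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

Node *createNode(int value) {
    Node *newNode = (Node *)malloc(sizeof(Node));
    newNode->value = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

/* Inorder traversal, remembering the previously visited value.
   prev is passed by pointer so it persists across recursive calls. */
int isBSTInorder(Node *node, int *prev) {
    if (!node) return 1;

    /* Left subtree must be a BST */
    if (!isBSTInorder(node->left, prev)) return 0;

    /* The previously visited value must be smaller than the current one */
    if (node->value <= *prev) return 0;
    *prev = node->value;

    /* Right subtree must be a BST */
    return isBSTInorder(node->right, prev);
}

int isBSTByInorder(Node *root) {
    int prev = INT_MIN; /* assumes no node stores INT_MIN */
    return isBSTInorder(root, &prev);
}
```

A driver similar to the ones above can build a tree and call isBSTByInorder on its root.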

Please share if there is something missing or not correct. If you want to contribute and share your knowledge with thousands of learners around the world, please reach out to us at communications@algorithmsandme.com

Closest element in a binary search tree

The problem statement is: given a binary search tree and a value, find the element closest to that value in the tree. For example, if the binary search tree below is given and the value is 16, the function should return 15.

(figure: closest element in a binary search tree)

Closest element in BST: thoughts

A simple approach is to visit every node of the BST and compute the difference between the given value and the node's value. The node with the minimum absolute difference is the closest element in the BST.
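For comparison, this brute-force scan can be sketched in C (matching the C listings earlier in the post); the function names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

Node *createNode(int value) {
    Node *newNode = (Node *)malloc(sizeof(Node));
    newNode->value = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

/* Visit every node and remember the value with the smallest
   absolute difference from k */
void closestHelper(Node *node, int k, int *best) {
    if (!node) return;
    if (abs(node->value - k) < abs(*best - k))
        *best = node->value;
    closestHelper(node->left, k, best);
    closestHelper(node->right, k, best);
}

int findClosestBruteForce(Node *root, int k) {
    int best = root->value; /* assumes a non-empty tree */
    closestHelper(root, k, &best);
    return best;
}
```

This visits all n nodes; the point of the discussion below is that a BST lets us discard half the tree at each step.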

Do we really need to scan all the nodes? No, we don't. Let's say the given value is k.

What if current.value is equal to k? Then it is the closest node to k and we cannot do better. Return the node's value as the closest value.

If current.value is greater than k, then by the BST property, all nodes in the right subtree of the current node are greater than current.value. Hence the difference between k and any node in the right subtree can only be larger. So, there is no node on the right side closer than the current node itself, and the right subtree is discarded. If there is no left subtree, return current.value.

If current.value is less than k, by the same logic as above, all elements in the left subtree are less than current.value, so the difference between k and any node in the left subtree would only increase. So, we can safely discard the left subtree. If there is no right subtree, return current.value.

When we get the closest element back from a subtree, we check whether the difference between current.value and k is less than the difference between the returned value and k. If it is, we return current.value; otherwise, we return the value returned from the subtree.

Closest element in a binary search tree: algorithm

  1. If k == current.value, return current.value.
  2. If k < current.value, search the closest element in left subtree including the root as current is still a candidate.
    1. Return current.value if there is no left subtree.
  3. If k > current.value, search the closest element in the right subtree.
    1. Return current.value if there is no right subtree.
  4. If abs(current.value - k ) < abs(returnedValue - k ), return current.value else return returnedValue

Closest element in binary search tree: implementation

package com.company.BST;

/**
 * Created by sangar on 3.11.18.
 */
public class ClosestElement {

    public int findClosest(TreeNode node, int k){
        if(node == null) return -1;

        int currentValue = (int)node.getValue();

        if(currentValue == k) return currentValue;

        if(currentValue > k){
            if(node.getLeft() == null) return currentValue;

            //Find closest on left subtree
            int closest = findClosest(node.getLeft(), k);
            if(Math.abs(closest - k) > Math.abs(currentValue - k)){
                return currentValue;
            }
            return closest;
        }
        else {
            if (node.getRight() == null) return currentValue;

            //Find closest on right subtree
            int closest = findClosest(node.getRight(), k);
            if (Math.abs(closest - k) 
				> Math.abs(currentValue - k)) {
                return currentValue;
            }
            return closest;
        }
    }
}
package com.company.BST;

/**
 * Created by sangar on 21.10.18.
 */
public class TreeNode<T> {
    private T value;
    private TreeNode left;
    private TreeNode right;

    public TreeNode(T value) {
        this.value = value;
        this.left = null;
        this.right = null;
    }

    public T getValue(){
        return this.value;
    }
    public TreeNode getRight(){
        return this.right;
    }
    public TreeNode getLeft(){
        return this.left;
    }

    public void setValue(T value){
        this.value = value;
    }

    public void setRight(TreeNode node){
        this.right = node;
    }

    public void setLeft(TreeNode node){
        this.left = node;
    }
}

package test;

import com.company.BST.BinarySearchTree;
import com.company.BST.ClosestElement;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class ClosestElementTest {

    ClosestElement tester = new ClosestElement();

    @Test
    public void closestElementTest() {

        BinarySearchTree<Integer> binarySearchTree =
						new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        assertEquals(6,
			tester.findClosest(binarySearchTree.getRoot(),1));
    }
}

The complexity of the algorithm to find the closest element in a binary search tree is O(n) if the tree is completely skewed.

Please share if there is something wrong or missing. If you are preparing for interviews, please sign up for free interview material.

Two nodes with a given sum in a binary search tree

Given a binary search tree and an integer k, find all pairs of nodes (a, b) such that a + b = k. In other words, find two nodes in a binary search tree which add up to a given number. For example, in the binary search tree given below, if k = 24, node(5) and node(19) should be the result.

(figure: sum of two nodes in a binary search tree)

Two nodes with given sum in binary search tree: thoughts

We have solved a similar problem with arrays, where we had to find pairs with a given sum k, the basic idea was that if the array is sorted, we will scan it from start(moving right) and end(moving left), and see if elements at two ends add up to k. If the sum of those numbers is greater than k, we move to right from start. If the sum of those two numbers is less than k, we move to left from end. We continue till two pointers cross each other.

Is there a way in which a binary search tree can serve as a sorted array? Yes: if a BST is traversed inorder, the output is its nodes in sorted order. So, if we scan the BST and store the nodes in an array, the problem becomes the same as finding pairs in an array with a given sum. However, this solution requires a full scan of all nodes and additional space of O(n).
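The inorder-plus-two-pointers idea can be sketched in C as follows (kept in C to match the earlier listings). The caller is assumed to know the node count n; the function names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

Node *createNode(int value) {
    Node *newNode = (Node *)malloc(sizeof(Node));
    newNode->value = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

/* Inorder traversal of a BST fills arr in sorted order */
void inorderToArray(Node *node, int *arr, int *index) {
    if (!node) return;
    inorderToArray(node->left, arr, index);
    arr[(*index)++] = node->value;
    inorderToArray(node->right, arr, index);
}

/* Two-pointer scan on the sorted array; returns 1 and fills
   *a and *b if some pair of node values adds up to k */
int findPairWithSum(Node *root, int n, int k, int *a, int *b) {
    int *arr = (int *)malloc(n * sizeof(int));
    int index = 0;
    inorderToArray(root, arr, &index);

    int left = 0, right = index - 1;
    while (left < right) {
        int sum = arr[left] + arr[right];
        if (sum == k) { *a = arr[left]; *b = arr[right]; free(arr); return 1; }
        if (sum < k) left++;   /* need a bigger sum: move left pointer right */
        else right--;          /* need a smaller sum: move right pointer left */
    }
    free(arr);
    return 0;
}
```

The two-stack solution described below achieves the same result without materializing the array.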

Another solution could be to use a hash map. Each node is stored in the hashmap and at each node, we check if there is a key (sum-node.value) present in the hashmap. If yes, then two nodes are added into the result set. This still requires additional space, however, there is only one traversal of the tree.

How can we avoid additional space? Look at the binary search tree property: all the nodes on the left subtree of a root node are less than and all the nodes on right subtree are greater than the root node.
We know that the minimum node in BST is the leftmost node and the maximum node is the rightmost node.
If we start an inorder traversal, the leftmost node is the first node to be visited, and if we do a reverse inorder traversal, the rightmost node is the first to be visited. If the sum of the minimum and the maximum is less than k, we have to go to the second minimum (the next node in forward inorder traversal). Similarly, if the sum of the minimum and the maximum is greater than k, we have to go to the second maximum (the next node in reverse inorder traversal).

How can we do a forward and reverse inorder traversal at the same time? 

Sum of two nodes in binary search tree: stacks

We know how to traverse a tree using stack data structure. We will use two stacks, one stack stores the nodes to be visited in forward inorder traversal and second stores the nodes to be visited in reverse inorder traversal. When we reach the leftmost and the rightmost node, we pop from the stacks and check for equality with the given sum.
If the sum of the two nodes is less than k, we increase one of the nodes. We will go to the right subtree of the node popped from the forward inorder stack. Why? Because that’s where we will find the next greater element.

If the sum of the two nodes is greater than k, we decrease one of the nodes. We will go to the left subtree of the node popped from the reverse inorder stack. Why? Because that’s where we will find the next smaller element.

We will continue till both forward and reverse inorder traversal do not meet.

Two nodes with given sum in BST: Implementation

package com.company;

import com.company.BST.TreeNode;
import javafx.util.Pair;

import java.util.ArrayList;
import java.util.Stack;

/**
 * Created by sangar on 26.11.18.
 */
public class TwoNodesWithGivenSum {
    public ArrayList<Pair<Integer, Integer>>
		findPairsWithGivenSum(TreeNode root, int sum) {

        Stack<TreeNode> s1 = new Stack<>();
        Stack<TreeNode> s2 = new Stack<>();

        ArrayList<Pair<Integer, Integer>> result
			= new ArrayList<>();

        TreeNode cur1 = root;
        TreeNode cur2 = root;

        while (!s1.isEmpty() || !s2.isEmpty() ||
                cur1 != null || cur2 != null) {
            if (cur1 != null || cur2 != null) {
                if (cur1 != null) {
                    s1.push(cur1);
                    cur1 = cur1.getLeft();
                }

                if (cur2 != null) {
                    s2.push(cur2);
                    cur2 = cur2.getRight();
                }
            } else {
                int val1 = (int)s1.peek().getValue();
                int val2 = (int)s2.peek().getValue();

                if (s1.peek() == s2.peek()) break;

                if (val1 +  val2 == sum)
					result.add(new Pair(val1, val2)) ;

                if (val1 + val2 < sum) {
                    cur1 = s1.pop();
                    cur1 = cur1.getRight();
                } else {
                    cur2 = s2.pop();
                    cur2 = cur2.getLeft();
                }
            }
        }

        return result;
    }
}

Test cases

package test;

import com.company.BST.BinarySearchTree;
import com.company.TwoNodesWithGivenSum;
import javafx.util.Pair;
import org.junit.jupiter.api.Test;

import java.util.ArrayList;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class TwoNodesWithGivenSumTest {

    TwoNodesWithGivenSum tester = new TwoNodesWithGivenSum();

    @Test
    public void twoNodesWithGivenSumTest() {

        BinarySearchTree<Integer> binarySearchTree
			= new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        ArrayList<Pair<Integer, Integer>> result
			= new ArrayList<>();
        result.add(new Pair(12,15));

        assertEquals(result, 
			tester.findPairsWithGivenSum(
				binarySearchTree.getRoot(),
				27
			)
		);
    }

    @Test
    public void twoNodesWithGivenSumNotPossibleTest() {

        BinarySearchTree<Integer> binarySearchTree
			= new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        ArrayList<Pair<Integer, Integer>> result 
			= new ArrayList<>();
        assertEquals(result,
			tester.findPairsWithGivenSum(
				binarySearchTree.getRoot(),
				45
			)
		);
	}

    @Test
    public void twoNodesWithGivenSumNullTreeTest() {

        ArrayList<Pair<Integer,Integer>> result
			= new ArrayList<>();

        System.out.println(result);

        assertEquals(result,
			tester.findPairsWithGivenSum(
				null,
				45
			)
		);
    }
}

The complexity of this method to find two nodes with a given sum in a binary search tree is O(n). We store nodes in stacks, which gives a space complexity of O(n); however, this is less than the previous solutions, as it essentially just makes the recursive call stack explicit.

Please share if you have any doubts or if something is missing or wrong. If you are preparing for an interview, sign up to receive a free interview kit.

Lowest common ancestor (LCA) in a binary tree

Given a binary search tree and two nodes, find the lowest node which is an ancestor of both given nodes, that is, the lowest common ancestor (LCA). For example, in the following tree, the LCA of 6 and 1 is node(5), whereas the lowest common ancestor of node 17 and node 6 would be node(10).

(figure: lowest common ancestor example)

Lowest common ancestor: thoughts

What is the condition for a node to be the LCA of two given nodes? If the paths to the two nodes diverge at that node, then it is the lowest common ancestor. While the path is common to both nodes, every node on it is a common ancestor, but not the lowest. How can we find where the paths diverge?

Paths diverge at a node when one target node is in its left subtree and the other is in its right subtree. The brute force solution would be to find one node, then go up the tree and see at which parent node the other given node falls in the opposite subtree.

Implementation-wise, traverse to node 1 and node 2 and store both paths on stacks. Then pop from the two stacks until you find the same node on both paths; that node is the lowest common ancestor. This requires two scans of the tree and additional space to store the paths, which in the worst case is O(n).
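The path-comparison brute force can be sketched in C as follows; arrays stand in for the stacks, n is an upper bound on path length assumed known by the caller, and the function names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

Node *createNode(int value) {
    Node *newNode = (Node *)malloc(sizeof(Node));
    newNode->value = value;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}

/* Record the root-to-target path into path[0..*len-1];
   returns 1 if a node with the given value was found. */
int findPath(Node *root, int value, Node **path, int *len) {
    if (!root) return 0;
    path[(*len)++] = root;
    if (root->value == value) return 1;
    if (findPath(root->left, value, path, len) ||
        findPath(root->right, value, path, len))
        return 1;
    (*len)--;  /* value not in this subtree, backtrack */
    return 0;
}

/* Compare the two root-to-node paths; the last common node is the LCA */
Node *lcaByPaths(Node *root, int val1, int val2, int n) {
    Node **path1 = (Node **)malloc(n * sizeof(Node *));
    Node **path2 = (Node **)malloc(n * sizeof(Node *));
    int len1 = 0, len2 = 0;
    Node *lca = NULL;

    if (findPath(root, val1, path1, &len1) &&
        findPath(root, val2, path2, &len2)) {
        for (int i = 0; i < len1 && i < len2 && path1[i] == path2[i]; i++)
            lca = path1[i];
    }
    free(path1);
    free(path2);
    return lca;
}
```

Note this works for any binary tree, not just a BST, but at the cost of two full traversals and O(n) extra space.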

However, the brute force solution does not use the property of a binary search tree: all the nodes on the left side of a node are smaller, and all the nodes on the right side are greater, than the node. Can we use that property to solve this problem?

The basic idea is to return a node upwards if it is found in either subtree. At any node, search for both given nodes in the left subtree. If we get a non-empty node returned from the left subtree, at least one of the two nodes is in the left subtree.

Again, search the right subtree for these two nodes; if a non-empty node is returned from the right subtree, at least one of the nodes is in the right subtree.

What does it mean if we have a non-empty node from both the left and the right subtree? It means the two nodes are on opposite sides, one on each. Hence, the current node is the lowest common ancestor.

What if one of the returned nodes is empty? It means both nodes are on one side of the current node, and we should propagate the non-empty returned node upwards.

Let's take an example and see how it works. Given the tree below, find the lowest common ancestor of node(1) and node(9).


Start with node(10) and look in the left subtree for both node(1) and node(9). Go down all the way to node(1); at that point, we return node(1), because node.value equals one of the values we are searching for.

At node(5), we got node(1) returned from the left subtree. We will now search for node(1) and node(9) in the right subtree. We go all the way to node(6), which is a leaf node.


At node(8), the left subtree returns nothing, as neither node is in the left subtree of node(8). However, the right subtree returns node(9).


As per our algorithm, if only one subtree returns a non-empty node, we pass that node upwards.


At node(5), we get a non-empty node from the right subtree, and we already got node(1) from the left subtree. With both subtrees returning non-empty nodes, node(5) is returned as the lowest common ancestor.


The two nodes are then searched for in the right subtree of node(10), which returns nothing; hence, the final lowest common ancestor is node(5).

Lowest common ancestor: implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;

Node * findLCA(Node *root, int val1, int val2)
{
    // Base case
    if (root == NULL) return NULL;
 
    /* If either val1 or val2 matches with root's key, 
       report the presence by returning the root
       (Note that if a key is the ancestor of other,
       then the ancestor key becomes LCA 
   */
    if (root->value == val1 || root->value == val2)
        return root;
 
    // Look for keys in left and right subtrees
    Node *left  = findLCA(root->left, val1, val2);
    Node *right = findLCA(root->right, val1, val2);
 
    /* If both of the above calls return non-NULL,
       then one key is present in one subtree
       and the other is present in the other,
       so this node is the LCA */
    if (left && right)  return root;
 
    // Otherwise check if left subtree or right subtree is LCA
    return (left != NULL)? left : right;
}

Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *lca = findLCA(root, 15, 25);
  if (lca) printf("\n lowest common ancestor: %d ", lca->value);
  
  return 0;
}

The implementation below works only for a binary search tree, not for any binary tree as the method above does.

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;

int leastCommonAncestor(Node *root, int val1, int val2){

 	if(!root)
       return -1;

 	if(root->value == val1 || root->value == val2)
    	return root->value;

 	/* Case 3: If one value is less and other greater
             than the current node
             Found the LCS return */
 	if((root->value > val1 && root->value <= val2) ||
  		(root->value <= val1 && root->value >val2)){
             return root->value;
 	}
  	/*Case 2 : If Both values are greater than current node, 
           look in right subtree */
 	else if(root->value < val1 && root->value <val2){
        return leastCommonAncestor(root->right, val1, val2);
 	}
 	/*Case 1 : If Both values are less than current node,
           look in left subtree */
 	else if(root->value > val1 && root->value > val2){
        return leastCommonAncestor(root->left, val1, val2);
 	}
}

Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  printf("\n least common ancestor: %d ",
      leastCommonAncestor(root, 15, 25));
  
  return 0;
}

The worst-case complexity of the algorithm to find the lowest common ancestor in a binary tree is O(n). Also, keep in mind that recursion is involved: the more skewed the tree, the more frames on the call stack, and the greater the chance of a stack overflow.

This problem is solved using one traversal of the tree and managing state when returning from the recursive calls.

Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Kth smallest element in a binary search tree

Given a binary search tree, find the kth smallest element in it. For example, the 5th smallest element in the binary search tree below is 14: listing the nodes in sorted order gives 5, 7, 9, 10, 14, 15, 19, and 14 is the fifth element in that order.

(figure: kth smallest element in a binary search tree)

Kth smallest element in binary search tree: thoughts

As mentioned in earlier posts like delete a binary tree or mirror a binary tree, first try to find the traversal order required to solve the problem. One hint we already have is that we want the nodes of the BST traversed in sorted order. What kind of traversal gives us nodes in sorted order? Of course, it is inorder traversal.

So idea is to do an inorder traversal of the binary search tree and store all the nodes in an array. Once traversal is finished, find the kth smallest element in the sorted array.

This approach, however, scans the entire tree and also has a space complexity of O(n), because we store all the nodes of the tree in an array. Can we avoid scanning the whole tree and storing nodes?

If we keep count of how many nodes have been visited during the inorder traversal, we can stop the traversal as soon as k nodes have been visited. In this case, we do not store nodes, just a counter.

Kth smallest element in a binary search tree: implementation

package com.company.BST;


/**
 * Created by sangar on 9.11.18.
 */
public class FindKthSmallestInBST {
    private int counter;

    public FindKthSmallestInBST(){
        counter = 0;
    }
    public TreeNode findKthSmallest(TreeNode root, int k){
        if(root == null) return root;

        //Traverse left subtree first
        TreeNode left = findKthSmallest(root.getLeft(),k);
        //If we found kth node on left subtree
        if(left != null) return left;
        //If the counter reaches k, the current node is the kth smallest.
        if(++counter == k) return root;

        return findKthSmallest(root.getRight(),k);
    }
}

Test cases

package test;

import com.company.BST.BinarySearchTree;
import com.company.BST.FindKthSmallestInBST;
import com.company.BST.TreeNode;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class KthSmallestElementTest {

    FindKthSmallestInBST tester = new FindKthSmallestInBST();

    @Test
    public void kthSmallestElementTest() {

        BinarySearchTree<Integer> binarySearchTree =
			new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        TreeNode kthNode =
			tester.findKthSmallest(binarySearchTree.getRoot(),1);
        assertEquals(6, kthNode.getValue());
    }

    @Test
    public void kthSmallestElementOnRightSubtreeTest() {

        BinarySearchTree<Integer> binarySearchTree =
				new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        TreeNode kthNode =
			tester.findKthSmallest(binarySearchTree.getRoot(),5);
        assertEquals(12, kthNode.getValue());
    }

    @Test
    public void kthSmallestElementAbsentSubtreeTest() {

        BinarySearchTree<Integer> binarySearchTree =
				new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        TreeNode kthNode =
			tester.findKthSmallest(binarySearchTree.getRoot(),10);
        assertEquals(null, kthNode);
    }

    @Test
    public void kthSmallestElementNulltreeTest() {

        BinarySearchTree<Integer> binarySearchTree =
				new BinarySearchTree<>();
        binarySearchTree.insert(10);
        binarySearchTree.insert(8);
        binarySearchTree.insert(15);
        binarySearchTree.insert(12);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);

        TreeNode kthNode = tester.findKthSmallest(null,10);
        assertEquals(null, kthNode);
    }
}

The complexity of this algorithm to find the kth smallest element is O(k + h), where h is the height of the tree: the traversal first descends to the leftmost node and then stops as soon as k nodes have been visited.

There is a hidden space complexity here. A recursive function consumes call-stack memory, which is limited by the operating system: the deeper the recursion goes, the more stack space is used. If the tree is completely skewed, there is a real chance of stack overflow. Recursive functions are also harder to debug in production environments. Below is a non-recursive solution for the same problem.

Non-recursive way to find kth smallest element in BST

public int kthSmallest(TreeNode root, int k) {
        Stack<TreeNode> s = new Stack<TreeNode>();

        TreeNode current = root;
        int result = 0;

        while(!s.isEmpty() || current!=null){
            if(current!=null){
                // keep going left, saving the path on the stack
                s.push(current);
                current = current.getLeft();
            }else{
                // pop the next node in inorder
                TreeNode temp = s.pop();
                k--;
                if(k==0)
                    return (int)temp.getValue();
                current = temp.getRight();
            }
        }

        return result;
    }

Please share if there is something wrong or missing. If you are preparing for an interview and need help, please reach out to us at communications@algorithmsandme.com

Find Kth smallest element in array

Kth smallest element in array

Given an unsorted array of integers, find the kth smallest element in that array. For example: if the input array is A = [3,5,1,2,6,9,7], the 4th smallest element in A is 5, because the sorted array is A = [1,2,3,5,6,7,9] and its 4th element is 5.

This problem is commonly asked in Microsoft and Amazon interviews, as it has multiple layers and there are many things that can be tested with this one problem.

Kth smallest element : Line of thought

First of all, in any interview, try to come up with a brute force solution. The brute force solution to find the kth smallest element in an array of integers is to sort the array and return A[k-1] (k-1 because the array is zero-indexed).
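As a sketch, the brute force approach looks like this in Java (class and method names are illustrative, not from the post):

```java
import java.util.Arrays;

public class BruteForceKthSmallest {
    // Sort a copy and index into it: O(n log n) time, O(n) extra space.
    public static int kthSmallest(int[] a, int k) {
        if (a == null || k < 1 || k > a.length) return -1;
        int[] copy = Arrays.copyOf(a, a.length);
        Arrays.sort(copy);
        return copy[k - 1];
    }
}
```

For A = [3,5,1,2,6,9,7] and k = 4 this returns 5, matching the example above.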

What is the complexity of the brute force solution? Sorting algorithms like merge sort and heap sort work in O(n log n) time, but merge sort needs O(n) additional space. Quick sort is another option; its drawback is a worst-case complexity of O(n²), which happens when the input is already sorted.
In our case, the input is given as unsorted, so we can expect quick sort to run at its average-case complexity of O(n log n). The advantage of quick sort is that it sorts in place, with no additional space beyond the recursion stack.

Optimising quick sort

Let’s see how quicksort works and see if we can optimize solution further?
Idea behind quicksort is to find the correct place for the selected pivot. Once the pivot is at the correct position, all the elements on the left side of the pivot are smaller and on the right side of the pivot are greater than the pivot. This step is partitioning.

If, after partitioning, the pivot lands at position j, can we say that the pivot is the jth smallest element of the array? What if j equals k? Then the problem is solved: we have found the kth smallest element.

If j is less than k, the left subarray holds fewer than k elements, so the kth smallest element must be somewhere in the right subarray. We have already placed the j smallest elements; all that remains is to find the (k-j)th smallest element of the right subarray.

What if j is greater than k? In this case we can drop the right subarray entirely; the search space becomes the left subarray after the partition.

In the worst case this algorithm is still O(n²), but on average each step discards one partition, giving an expected complexity of O(n): you never need to sort the entire array before finding the kth smallest element.

If you are preparing for a technical interview and need personal coaching along with mock interviews, book a free demo session with us

Algorithm to find K smallest elements in array

  1. Select a pivot and partition the array, with the pivot at its correct position j.
  2. If the pivot position j equals k-1, return A[j].
  3. If j is less than k-1, discard the part from start to j and look for the kth smallest element in the right subarray; go to step 1.
  4. If j is greater than k-1, discard the part from j to end and look for the kth smallest element in the left subarray; go to step 1.

Let’s take an example and see if this algorithm works? A =  [4, 2, 1, 7, 5, 3, 8, 10, 9, 6 ], and we have to find fifth smallest element in array A.

Kth smallest element in array

Start with pivot as first index of array, so pivot = 0, partition the array into two parts around pivot such that all elements on left side of pivot element, i.e. A[pivot] are smaller and all elements on right side are greater than A[pivot].

In our example, array A will look like below after the pivot has found its correct position.

k smallest element
After partition, correct position of pivot is index 3

If pivot == k-1 (the array is zero-indexed), then A[pivot] is the kth smallest element. Since the pivot position (3) is less than k-1 (4), look for the kth smallest element on the right side of the pivot.

Note that k remains measured w.r.t. the entire array: the pivot position returned by partition is an absolute index, not an index relative to the subarray.

In the second iteration, pivot = 4 (the index, not the element). After the second partition, array A looks like:

After partition of right subarray, correct position of pivot is index 4

The pivot position (4) equals k-1 (5-1), so the 5th smallest element in array A is 5.

Kth smallest element : Implementation

package com.company;

/**
	* Created by sangar on 30.9.18.
*/
public class KthSmallest {
	private void swap(int[] a, int i, int j){
		int temp = a[i];
		a[i] = a[j];
		a[j] = temp;
	}
	private int partition(int[] a, int start, int end){
		int pivot = a[start];
		int i  = start+1;
		int j  = end;

		while(i <= j){
			// stop at first element not less than pivot; guard the bound
			while(i <= end && a[i] < pivot) i++;
			// stop at first element not greater than pivot; guard the bound
			while(j > start && a[j] > pivot) j--;

			if(i < j) {
				swap(a, i, j);
				i++;
				j--;
			} else break;
		}
		swap(a, start, j);
		return j;
	}

	public int findKthSmallestElement(int a[], int start,
				int end, int k){
		// guard against null or empty input and out-of-range k
		if(a == null || a.length == 0 || k > a.length) return -1;

		if(start <= end){
			int p = partition(a, start, end);
			if(p == k-1){
				return a[p];
			}
			if(p > k-1)
				return findKthSmallestElement(a, start, p-1, k);
			if(p < k-1)
				return findKthSmallestElement(a, p+1, end, k);
		}
		return -1;
	}
}
package test;

import com.company.KthSmallest;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class KthSmallestTest {

	KthSmallest tester = new KthSmallest();
	private int[] a = {4, 2, 1, 7, 5, 3, 8, 10, 9};
	@Test
	public void kthSmallest() {
		assertEquals(7, tester.findKthSmallestElement(a,0,8,6));
	}

	@Test
	public void firstSmallest() {
		assertEquals(1, tester.findKthSmallestElement(a,0,8,1));
	}

	@Test
	public void lastSmallest() {
		assertEquals(10, tester.findKthSmallestElement(a,0,8,9));
	}

	@Test
	public void kGreaterThanSize() {
		assertEquals(-1, tester.findKthSmallestElement(a,0,8,15));
	}
	@Test
	public void emptyArray() {
		int[] a = {};
		assertEquals(-1, tester.findKthSmallestElement(a,0,0,1));
	}

	@Test
	public void nullArray() {
		assertEquals(-1, tester.findKthSmallestElement(null,0,0,1));
	}
}

With this quickselect approach, the average complexity of finding the kth smallest element is O(n); the worst case, with consistently bad pivots, is still O(n²).

Kth smallest element using heaps

Imagine a case where there are a billion integers in the array and you have to find the 5 smallest elements. Sorting at O(n log n) is far too costly at that scale, and even the quickselect algorithm above does not take the disparity between k and n into consideration.

We want the k smallest elements. How about we choose k elements (say, the first k), call it set A, and then go through the remaining n-k elements, call it set B, checking whether an element from set B should displace an element in set A?

What is the condition for an element from set B to replace an element in set A? If the new element is less than the maximum of set A, then that maximum cannot be among the k smallest elements, so it is replaced by the new element from set B.

Now the problem is how to quickly find the maximum of set A. A heap is the best data structure for this. Which kind: min heap or max heap? A max heap, as it stores the maximum of the set at its root.

Let’s defined concrete steps to find k smallest elements using max heap. 

  1. Create a max heap of size k from the first k elements of the array.
  2. Scan the remaining elements of the array one by one.
    1. If the current element is less than the max on the heap, remove the root and add the current element, then heapify.
    2. If not, move on to the next element.
  3. At the end, the max heap contains the k smallest elements of the array, and its root is the kth smallest element.

Let’s take an example and see if this algorithm works? The input array is shown below and we have to find the 6th smallest element in this array.

kth smallest element using heaps
input array

Step 1 : Create a max heap with first 6 elements of array.

Create a max heap with set A

Step 2: Take the next element from set B and check if it is less than the root of max heap. In this case, yes it is. Remove the root and insert the new element into max heap.

Element from set B removes root from max heap and added to max heap

Step 3: The scan continues to 10; nothing happens, as the new element is greater than the root of the max heap. The same holds for 9. At 6, the root of the max heap is again greater, so remove the root and add 6 to the max heap.

Again, new element from set B is less than root of max heap. Root is removed and new element is added.

The array scan is finished, so just return the root of the max heap, 6, which is the sixth smallest element in the given array.

	public int findKthSmallestElementUsingHeap(int a[], int k){
	//https://stackoverflow.com/questions/11003155/change-priorityqueue-to-max-priorityqueue

		if(a == null || k <= 0 || k > a.length) return -1;

		PriorityQueue<Integer>  maxHeap =
				new PriorityQueue<>(k, Collections.reverseOrder());

		//Create max heap with first k elements
		for(int i=0; i<k; i++){
			maxHeap.add(a[i]);
		}

		/* Keep updating the max heap with each new element:
		if the new element is less than the root,
		remove the root and add the new element
		*/
		for(int i=k; i<a.length; i++){
			if(maxHeap.peek() > a[i]){
				maxHeap.remove();
				maxHeap.add(a[i]);
			}
		}
		return maxHeap.peek();
	}

Can you calculate the complexity of the above algorithm? Each heap insertion or removal costs O(log k) with k elements on the heap. In the worst case we update the heap for every element of the array, so the overall complexity is O(n log k). There is also O(k) additional space to store the heap.
When k is very small compared to n, the running time is effectively linear in the size of the array.

We want the k smallest elements; if we pop k elements from a min heap, will that solve the problem? Yes: build a min heap in place from the given array, then pop k times.
Heap creation takes O(n) (worth reading up on why). Each of the k deletions triggers a heapify, which costs O(log n) on an n-element heap. So the overall complexity is O(n + k log n).
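The min-heap idea above can be sketched with Java's PriorityQueue, whose collection constructor builds the heap via heapify in O(n); the class and method names here are my own illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class KthSmallestMinHeap {
    // Build a min heap from all n elements, then poll k times;
    // overall O(n + k log n).
    public static int findKthSmallest(int[] a, int k) {
        if (a == null || k < 1 || k > a.length) return -1;

        List<Integer> elements = new ArrayList<>();
        for (int v : a) elements.add(v);

        // The collection constructor heapifies in O(n).
        PriorityQueue<Integer> minHeap = new PriorityQueue<>(elements);

        int result = -1;
        for (int i = 0; i < k; i++) {
            result = minHeap.poll(); // the ith poll yields the ith smallest
        }
        return result;
    }
}
```

For the example array [4, 2, 1, 7, 5, 3, 8, 10, 9, 6] and k = 5, this returns 5.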

Depending on what you want to optimize, select the appropriate method to find the kth smallest element in an array.

Please share if there is something wrong or missing. If you are interested in taking coaching sessions from our experienced teachers, please reach out to us at communications@algorithmsandme.com

Quick sort algorithm

Quick sort Algorithm

Quick sort, like merge sort, is a sorting algorithm under the divide and conquer paradigm. The basic idea of the algorithm is to partition the input around a pivot, sort the two smaller parts recursively, and thereby get the original input sorted.

Selection of pivot

The entire idea of quick sort revolves around the pivot. The pivot is an element of the input around which the input is arranged such that all elements on its left are smaller and all elements on its right are greater. The question is how to select a pivot and put it into its correct position.

To keep things simple to start with, let's take the first element of the input as the pivot.

To put this pivot at its correct position, start from the element after the pivot and find the first element greater than the pivot. Let that be the ith position.

At the same time, start from the end of the array and find the first element smaller than the pivot. Let that be the jth position.

If i and j have not crossed each other, i.e. i < j, swap the elements at positions i and j, then continue moving right to find an element greater than the pivot and moving left to find an element smaller than the pivot.
Once i and j cross each other, swap the pivot with the element at position j. After this step the pivot is at its correct position and the array is divided into two parts: all elements on the left are less than the pivot and all elements on the right are greater.

Quick sort partition example

This is a lot to process, I know! Let's take an example and see how it works. We have the following array:

quick sort

Let’s select first element as pivot, pivot = 3.

quick sort pivot selection

Start from the element after the pivot and move right through the array until we see the first element greater than the pivot, i.e. 3.

From the end of the array, move left until you find an element that is less than the pivot.

Now there are two indices, i and j, where A[i] > pivot and A[j] < pivot. Note that i and j have not yet crossed each other, so we swap A[i] with A[j]. The array at the bottom of the picture shows the result after the swap.

quick sort partition

Again, start from i+1 and follow the same rule: stop on finding an element greater than the pivot. In this case, 10 is greater than 3, so we stop.

Similarly, move left from the end again until we find an element less than the pivot. In this case, we end up at index 2, which holds the element 1.

Since i > j, the paths have crossed. This time, instead of swapping the elements at indices i and j, swap the element at index j with the pivot.

After swapping the pivot with index j, the array is divided into two parts with the pivot as the boundary: all elements on the left of the pivot are smaller (though not necessarily sorted) and all elements on the right are greater (again, not necessarily sorted).

quick sort partitions

We apply the same partition process to the left and right subarrays until the base condition is hit: in this case, a subarray with only one element needs no partitioning.

Quick sort algorithm

quickSort(arr, start, end)
1. If the array has more than one element, i.e. (start < end):
   1.1 Find the correct place for the pivot:
       pivot = partition(arr, start, end)
   1.2 Apply the same function recursively to the left of the pivot index:
       quickSort(arr, start, pivot - 1)
       and to the right of the pivot index:
       quickSort(arr, pivot + 1, end)

Quick sort implementation

package AlgorithmsAndMe;

public class QuickSort {

    private void swap(int [] a, int i, int j){
        int temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    private int partition(int [] a, int start, int end){
        int pivot = a[start];
        int i  = start;
        int j  = end;

        while(i < j){
            while(i <= end && a[i] <= pivot) i++;
            while(j >= start && a[j] > pivot) j--;
            
            if(i < j) {
                swap(a, i, j);
            }
        }

        swap(a, start, j);
        return j;
    }

    public void quickSort(int [] a, int start, int end){
        if(start < end){
            int p = partition(a, start, end);
            quickSort(a,start, p-1);
            quickSort(a, p+1, end);
        }
    }
}

There is another implementation based on the Lomuto partition scheme, in which the last element is taken as the pivot. The implementation is more compact, but it typically performs more swaps than the Hoare-style partition above.

#include<stdlib.h>
#include<stdio.h>
 
void swap(int *a, int *b){
    int temp = *a;
    *a = *b;
    *b = temp;
}
 
int partition(int a[], int low, int high)
{
    // set pivot as highest element
    int x  = a[high];
 
    //Current low points to previous of low of this part of array. 
    int i = low - 1;
 
    for (int j = low; j <= high-1; j++)
    {
    	/*Move in the array till current node data is 
        less than the pivot */
        if (a[j] <= x){
            //set the current low appropriately
            i++;
            swap(&a[i], &a[j]);
        }
    }
    //Now swap the next node of current low with pivot
 
    swap(&a[i+1], &a[high]);
 
    //return current low as partitioning point.
    return i+1;
}
 
/* A recursive implementation of quicksort for arrays */
void quickSortUtil(int a[], int low, int high)
{
    if (low < high)
    {
        int p = partition(a,low, high);
        quickSortUtil(a,low, p-1);
        quickSortUtil(a, p+1, high);
    }
}
 
/* Driver program to run above code */
int main(){
 
    int a[] = {5,4,2,7,9,1,6,10,8};
 
    int size = sizeof(a)/sizeof(a[0]);
    quickSortUtil(a, 0, size-1);
 
    for(int i=0; i<size; i++){
    	printf("%d ", a[i]);
    }
    return 0;
}

Complexity analysis of quick sort algorithm

If the pivot splits the original array into two equal parts (which is the intention), the complexity of quick sort is O(n log n). However, the worst case happens when the input array is already sorted in increasing or decreasing order. Then the array is partitioned into two subarrays, one of size 1 and the other of size n-1; the subarray with n-1 elements is again divided into subarrays of sizes 1 and n-2, and so on. To sort the array completely, it splits n-1 times, and each split traverses up to n elements to find the correct position of the pivot. Hence the overall worst-case complexity of quick sort comes out as O(n²).
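The worst-case bound can be checked with a short recurrence: on sorted input, each partition step costs about cn and leaves a subproblem of size n-1.

```latex
T(n) = T(n-1) + cn
     = c\big(n + (n-1) + \dots + 1\big)
     = c\,\frac{n(n+1)}{2}
     = O(n^2)
```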

There is a very interesting question which tests your understanding of system basics: what is the space complexity of this algorithm? No memory is allocated explicitly. However, the recursive implementation internally pushes stack frames for the partition indices, the function return address and so on. In the worst case there can be n stack frames, hence the worst-case space complexity of quick sort is O(n).

How can we reduce that? If the partition with the fewest elements is (recursively) sorted first, it requires at most O(log n) space. The other partition is then sorted using tail recursion or iteration, which does not add to the call stack. This idea was described by R. Sedgewick; it keeps the stack depth bounded by O(log n), and hence the space complexity is O(log n).

Quick sort with tail recursion

Quicksort(A, p, r)
{
 while (p < r)
 {
  q = Partition(A, p, r)
  Quicksort(A, p, q)
  p = q+1
 }
}
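The pseudocode above removes one recursive call but still recurses into the left part unconditionally. To actually guarantee the O(log n) stack bound, recurse into the smaller partition and loop on the larger one. Here is a sketch in Java (the class name and the Lomuto-style partition are illustrative choices, not from the pseudocode above):

```java
public class QuickSortBoundedStack {
    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Lomuto partition with the last element as pivot
    private static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (a[j] <= pivot) swap(a, ++i, j);
        }
        swap(a, i + 1, high);
        return i + 1;
    }

    // Recurse only into the smaller partition and loop on the larger;
    // the recursion depth is then bounded by O(log n).
    public static void quickSort(int[] a, int start, int end) {
        while (start < end) {
            int p = partition(a, start, end);
            if (p - start < end - p) {
                quickSort(a, start, p - 1); // smaller left part
                start = p + 1;              // iterate on larger right part
            } else {
                quickSort(a, p + 1, end);   // smaller right part
                end = p - 1;                // iterate on larger left part
            }
        }
    }
}
```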

Selection of Pivot
If the array is already sorted, the worst-case behavior of quick sort is O(n²). So here comes another problem: how can we select the pivot so that the two subarrays are of almost equal size? There are many proposed solutions.
1. Take the median of the array as the pivot. How to find the median of an unsorted array is a problem we will look into separately, but it does guarantee two halves of the same size.
2. Select the pivot randomly. A random pivot gives an expected O(n log n) running time on any input order, including already-sorted arrays.
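The randomized variant can be sketched as follows, assuming a Lomuto-style partition (class and method names are my own illustration): a uniformly random element is swapped into the pivot slot before partitioning as usual.

```java
import java.util.Random;

public class RandomPivotQuickSort {
    private static final Random RANDOM = new Random();

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Lomuto partition with the last element as pivot
    private static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (a[j] <= pivot) swap(a, ++i, j);
        }
        swap(a, i + 1, high);
        return i + 1;
    }

    // Swap a random element into the pivot slot first, so no fixed
    // input order can force the worst case deterministically.
    private static int randomizedPartition(int[] a, int low, int high) {
        int p = low + RANDOM.nextInt(high - low + 1);
        swap(a, p, high);
        return partition(a, low, high);
    }

    public static void quickSort(int[] a, int start, int end) {
        if (start < end) {
            int p = randomizedPartition(a, start, end);
            quickSort(a, start, p - 1);
            quickSort(a, p + 1, end);
        }
    }
}
```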

Please leave your comment in case you find something wrong or you have some improved version.

Arranging people according to country of origin : Count sort in play

Introduction

Let’s consider a problem where we have nationals of different countries in a room and our task is to arrange these nationals in such a way that all the nationals of same country are clubbed together.
Simple application of count sort isn’t? Count nationals of each country and place them in order as discussed in previous post.

Problem:

Now let’s add some complexity. What if we need all nationals of country we discovered early to be before the nationals of country we encountered later?
So input be
A = [I,I,I, U,U,S, S, E,E,E,M,M,M,I, U,M,E]
Output should be
A = [I,I,I,I, U,U,U,S,S,E,E,E,E, M,M,M]

Solution:

We need to track one additional piece of information: the order in which countries were first encountered while traversing the input. The rest follows count sort.
How do we keep track of that order?
We keep an array of countries and fill it so that a country encountered earlier gets a lower index than one encountered later. We can do this while counting the nationals of each country: if the current person is the first national of their country we have seen, append that country to the country array.

Complete code

void arrange(int a[], int n, int K){
        /*
            n is the number of people,
            K is the number of distinct countries;
            countries are encoded as integers 0..K-1
        */
        int country[K];
        int count[K];
        int i, j;
        int index = 0;
        int next_index = 0;

        for(i=0; i<K; i++){
                count[i] = 0;
        }

        for(i=0; i<n; i++){
                if(count[a[i]] == 0){
                        /* First person of this country:
                           record the country in encounter order */
                        country[index++] = a[i];
                }
                count[a[i]]++;
        }

        /* Fill the array country by country, in encounter order */
        for(i=0; i<index; i++){
                j = next_index;
                next_index += count[country[i]];
                for(; j<next_index; j++){
                        a[j] = country[i];
                }
        }
}

Conclusion

It is fairly easy to apply count sort on any arranging problem where we have a certain attribute we need to arrange input on.

Count sort : Sorting in linear time

Count sort : Sorting in linear time

Are there any sorting algorithms with a worst-case complexity of O(n)? There are a few, such as counting sort and radix sort. In this post we discuss count sort and a couple of problems where the counting sort algorithm can be applied.

Counting sort was invented by Harold H. Seward
To apply counting sort, we need to keep in mind the following assumptions:

  1. The values in the input are non-negative integers.
  2. There are at most K distinct values in the input.
  3. The input data ranges from 0 to K.

Count sort degrades when K is much larger than n (for example, K on the order of n²): the O(n + K) running time is then no longer linear in n. So the above conditions are necessary for this algorithm to work in linear time.

Count Sort: Approach
Let's see how it works. There are three steps involved in the algorithm:

  • Sampling the data: counting the frequency of each element in the input set.
  • Accumulating the frequencies to find the relative position of each element.
  • Distributing each element to its appropriate place.

    Now let's take an example and go through the above three steps. The input is an array A = [1,2,3,2,2,3,3,1,1,1,1,3,2,2], so K = 3. First, count the frequency of each element: we keep a hash and track each element's count. Since values are bounded by K, a hash of size K+1 is enough, initialized to zero. A is the input array and n is the size of the array.

    int count[K+1];
    
    for(i=0; i<=K; i++){
            count[i]=0;
    }
    
    for(i=0; i<n; i++){
            count[a[i]]++;
    }
    

    Count looks like count = [0,5,5,4]: the counts of 1, 2 and 3 are 5, 5 and 4 (index 0 is unused for this input). The complexity of this step is O(n). The second step accumulates the frequencies, adding the frequencies of all previous elements to the current one:

    F(j) = sum of f(i) for all i with 0 <= i <= j
    
    for(i=1; i<=K; i++){
       count[i] += count[i-1];
    }
    

    The second step runs K times, hence has complexity O(K). The third step is distribution. Let's create an output array B of size n. For every element in A, we check the accumulated count and place the element just before that position. So the first 1 encountered is placed at index 4 (there are five 1s in total and the array is zero-indexed), the first 2 at index 9 and the first 3 at index 13. Lower positions get filled as later occurrences are encountered in the input array.

    for(i=0; i<n;i++){
           B[--count[a[i]]] = a[i];
    }
    

    This step has complexity of O(n)

    #include<stdlib.h>
    #include<stdio.h>
    
    void count_sort(int a[], int n, int K){
    
            int count[K+1];
            int i;
            int b[n];
    
            for(i=0; i<=K; i++){
                    count[i]=0;
            }
    
            for(i=0; i<n; i++){
                    count[a[i]]++;
            }
    
            for(i=1; i<=K; i++){
                    count[i] += count[i-1];
            }
    
            for(i=0; i<n; i++){
                    count[a[i]]--;
                    b[count[a[i]]] = a[i];
            }
    
            /* copy the sorted output back into the input array */
            for(i=0; i<n; i++){
                    a[i] = b[i];
            }
    }
    int main(){
            int a[] = {0,1,1,1,0,0,0,3,3,3,4,4};
            int n = sizeof(a)/sizeof(a[0]);
    
            count_sort(a, n, 4);
    
            for(int i=0; i<n; i++){
                    printf("%d ", a[i]);
            }
            return 0;
    }
    
    Iter 0 :0  0  0  0  1  0  0  0  0  0  0  0  0  0  
    Iter 1 :0  0  0  0  1  0  0  0  0  2  0  0  0  0  
    Iter 2 :0  0  0  0  1  0  0  0  0  2  0  0  0  3  
    Iter 3 :0  0  0  0  1  0  0  0  2  2  0  0  0  3  
    Iter 4 :0  0  0  0  1  0  0  2  2  2  0  0  0  3  
    Iter 5 :0  0  0  0  1  0  0  2  2  2  0  0  3  3  
    Iter 6 :0  0  0  0  1  0  0  2  2  2  0  3  3  3  
    Iter 7 :0  0  0  1  1  0  0  2  2  2  0  3  3  3  
    Iter 8 :0  0  1  1  1  0  0  2  2  2  0  3  3  3  
    Iter 9 :0  1  1  1  1  0  0  2  2  2  0  3  3  3  
    Iter 10 :1  1  1  1  1  0  0  2  2  2  0  3  3  3  
    Iter 11 :1  1  1  1  1  0  0  2  2  2  3  3  3  3  
    Iter 12 :1  1  1  1  1  0  2  2  2  2  3  3  3  3  
    Iter 13 :1  1  1  1  1  2  2  2  2  2  3  3  3  3  
    
    Final O/P :1  1  1  1  1  2  2  2  2  2  3  3  3  3
    

    The total complexity of the algorithm is O(K) + O(n) + O(K) + O(n) = O(n + K). As long as K stays in the order of n, this remains linear; if K grows to the order of n², the algorithm takes polynomial rather than linear time.

    There is an immediate application of this algorithm in the following problem:
    Say there is an array containing black, white and red balls, and we need to arrange them so that all black balls come first, then white, then red. Hint: assign black 0, white 1 and red 2 and see. 🙂

    In the next post, we would discuss how extra space can be saved, how initial positions of elements can be maintained. We would go through an interesting problem to discuss the above optimization.

    Why fill each value's span from its end? Because this filling strategy (with the input scanned accordingly) is what makes count sort a stable sort. Here we have seen a sorting algorithm that produces output in linear time, given that the input data meets certain conditions.