Disjoint set data structure


A disjoint set data structure, also known as union-find, maintains a collection S = {S1, S2, ⋯, Sn} of disjoint dynamic sets. Subsets are said to be disjoint if their intersection is empty. For example, sets {1,2,3} and {4,5,6} are disjoint, but {1,2,3} and {1,3,5} are not. Another important property of a disjoint set is that every set is represented by one of its members, called the representative.

Operations on this disjoint set data structure:
1. MakeSet: creates a new set with a single element x. Since the sets are disjoint, we require that x not already be in any of the existing sets.
2. Union: merges the two sets containing x and y, let's say Sx and Sy, into one set and destroys the original sets.
3. Find: returns the representative of the set which the given element belongs to.

Let's take an example and see how disjoint sets can be used to find the connected components of an undirected graph.

To start with, we will make a set for each vertex using the makeSet operation.

for each vertex v in G(V)
    do makeSet(v)

Next, process all the edges (u,v) in the graph and union the set containing u with the set containing v if their representatives are not the same.

for each edge (u,v) in 𝐺(E)
    do if findSet(u) != findSet(v)
        then union(u, v)

Once the above preprocessing steps have run, we can easily answer whether two vertices u and v are part of the same connected component:

boolean isSameComponent(u, v)
 if findSet(u)==findSet(v)
     return True
 else 
     return False

To find how many connected components there are, we can count the number of disjoint sets; that count is the number of connected components in the graph. Let's take an example and see how it works.

(Figure: example undirected graph)

The table below shows the processing of each edge in the graph shown in the figure above.

(Table: state of the disjoint sets after processing each edge)

Now, how can we implement sets and quickly do union and find operations? There are a few ways to do it.

Disjoint set representation using an array

A simple implementation of a disjoint set is an array A which maintains the representative of element i in A[i]. For this implementation to work, all the elements in the sets must be in the range 0 to N-1, where N is the size of the array.

Initially, the makeSet() operation sets A[i] = i for each i between 0 and N-1, creating the initial singleton sets.

(Figure: initial array representation, A[i] = i for every vertex)

for (int i=0; i<N; i++) A[i] = i;

For the union operation on the sets that contain integers u and v, we scan the array A and change all the elements that have the value A[u] to the value A[v]. For example, if we want to add an edge between 1 and 2 in the above set, the union operation will replace A[2] with A[1], as A[2] was the only element with a value equal to A[2].

(Figure: array after union(1, 2))

Now, suppose we want to add an edge between 3 and 1. In this case, u = 3 and v = 1, with A[3] = 3 and A[1] = 1. So, we will replace all the indices of A where A[i] = 3 with the value 1. The final array looks like this.

(Figure: array after union(3, 1))

Similarly, if we want to add an edge from 6 to 7:

(Figure: array after union(6, 7))

//change all elements with value A[u] to the value A[v]
void union(int A[], int u, int v){
    int temp = A[ u ];
    for(int i=0; i<A.length; i++){
        if(A[ i ] == temp)
            A[i] = A[v]; 
    }
}

findSet(v) operation returns the value of A[v].

int findSet(int A[], int v){
    return A[v];
}

The complexity of the makeSet() operation is O(n) as it initializes the entire array. The union operation takes O(n) every time; if we have to connect n nodes, it requires O(n²) operations. The findSet() operation has constant time complexity.
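To tie this back to the connected components example: with this array representation, the number of disjoint sets equals the number of indices that are their own representative, since every union preserves exactly one such index per set. A minimal sketch, assuming the array A is maintained by the operations above:

//Count disjoint sets: an index i with A[i] == i is the representative
//of its set, so counting such indices gives the number of sets, which
//is the number of connected components of the graph.
int countSets(int A[]){
    int count = 0;
    for(int i=0; i<A.length; i++){
        if(A[i] == i) count++;
    }
    return count;
}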

We can represent a disjoint set using a linked list too. In that case, each set will be a linked list, and the head of the linked list will be the representative element. Each node contains two pointers: one to the next element in the set and the other to the representative of the set.

To initialize, each element is put into its own linked list. For union(u, v), we append the linked list which contains u to the end of the linked list which contains v and change the representative pointer of each moved node to point to the representative of the list which contained v.

The complexity of the union operation is again O(n), while the find operation is O(1) as each node stores its representative directly.
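A minimal Java sketch of this linked list representation; the tail reference kept on the head node is an assumption added here so that the append itself is O(1) (the representative pointer updates still make union O(n) in the worst case):

class SetNode {
    int value;
    SetNode next;           //next element in the same set
    SetNode representative; //head of the list, i.e., the set's representative
    SetNode tail;           //maintained only on the head, to append in O(1)

    SetNode(int value) {    //makeSet: a single-node list representing itself
        this.value = value;
        this.representative = this;
        this.tail = this;
    }

    static SetNode findSet(SetNode node) {
        return node.representative; //O(1)
    }

    //Append the list containing u to the end of the list containing v and
    //repoint the representative of every moved node: O(n) worst case.
    static void union(SetNode u, SetNode v) {
        SetNode headU = findSet(u);
        SetNode headV = findSet(v);
        if (headU == headV) return;
        headV.tail.next = headU;
        headV.tail = headU.tail;
        for (SetNode n = headU; n != null; n = n.next) {
            n.representative = headV;
        }
    }
}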

Disjoint set forest

The disjoint set forest data structure is implemented by changing the interpretation of the elements of array A. Now each A[i] represents an element of a set and points to another element of that set. The root element points to itself. In short, A[i] now points to the parent of i.

The makeSet operation does not change, as to start with each element is its own parent.
The union operation changes: if we want to connect u and v with an edge, we update A[root of u] with the root of v. How do we find the root of an element? Since A[i] is the parent of i, we can move up the chain until we find a case where A[i] == i; that i is the root.

//finding root of an element
int root(int A[],int i){
    while(A[i] != i){
        i = A[i];
    }
    return i;
}

/*Changed union function where we connect 
  the elements by changing the root of 
  one of the elements
*/

void union(int A[], int u, int v){
    int rootU = root(A, u);       
    int rootV = root(A, v);  
    A[ rootU ] = rootV ; 
}

This implementation has a worst-case complexity of O(n) for the union function, and it also makes the worst-case complexity of the findSet operation O(n), since the tree can degenerate into a chain.

However, we can do ranking on the size of the trees being connected: we make sure that the root of the smaller tree always points to the root of the bigger tree (union by size).

void union(int[] A, int[] sz, int u, int v){

    //Finding roots: walk up the parent chain until A[i] == i
    int i = u;
    while (i != A[i]) i = A[i];
    int j = v;
    while (j != A[j]) j = A[j];

    if (i == j) return;
    //Compare sizes of the trees to put the smaller tree's root under
    //the bigger tree's root.
    if (sz[i] < sz[j]){
        A[i] = j;
        sz[j] += sz[i];
    }
    else {
        A[j] = i; 
        sz[i] += sz[j];
    }
}

In the next few posts, we will discuss applications of this method to solve different problems on graphs.
Please share if there is something wrong or missing. If you are preparing for an interview, and want coaching sessions to prepare for it, please signup for free demo session.

Connect n ropes with minimum cost


We are given n ropes of different lengths and need to connect them into one rope. The cost to connect two ropes is equal to the sum of their lengths. We need to connect the ropes with minimum total cost.

For example, suppose there are 4 ropes of lengths 5, 2, 3 and 9. We can connect the ropes in the following way: first, connect the ropes of lengths 2 and 3; the cost of this connection is the sum of the lengths, 2 + 3 = 5. We are left with three ropes of lengths 5, 5 and 9. Next, connect the ropes of lengths 5 and 5; the cost of this connection is 10, and the total cost so far is 5 + 10 = 15. We have two ropes left with lengths 10 and 9. Finally, connect the last two ropes; all ropes are now connected, and the total cost is 15 + 19 = 34.

Another way of connecting the ropes would be: connect the ropes with lengths 5 and 9 first at a cost of 14 (we get three ropes of lengths 3, 2 and 14), then connect 14 and 3 at a cost of 17, which gives us two ropes of lengths 17 and 2. Finally, connect 17 and 2 at a cost of 19. The total cost this way is 14 + 17 + 19 = 50, which is much higher than the optimal cost we had earlier.

Minimum cost to connect n ropes: algorithm

When we did the calculations in the examples, did you notice one thing? The lengths of the ropes connected first are added again in all subsequent connections. For example, we connected the ropes with lengths 2 and 3 in the first example; their sum gets added to the next connection as part of the rope with length 5, and again when we connect the ropes with lengths 10 and 9, 2 + 3 is already inside the 10.

Read about Huffman coding to understand how to build a solution from this hint.

All we have to make sure is that the rope whose length is added most often is the smallest, then the second smallest, and so on. This suggests that if we sort the ropes by their lengths, connect the two smallest, and re-sort the array, repeating until there is only one rope left, we always get the optimal cost.

What will be the complexity of this implementation? The complexity is dominated by the sorting algorithm; the best we can achieve is O(n log n) using quicksort or merge sort. Also, after connecting two ropes, we have to sort the array again. So the overall complexity of this method is O(n² log n).

Can we do better than this? Do we need the array sorted at all times? All we need are the two ropes with the least lengths. Which data structure gives us the minimum element in the least time? A min heap does. If we create a min heap with the lengths of the ropes, we can peek at the smallest rope in O(1) and extract it in O(log n).

  1. Create a min heap from the array of rope lengths
  2. Fetch the root, which gives us the smallest rope
  3. Fetch the root again, which gives us the second smallest rope
  4. Connect the two ropes, add the cost to the total, and put the combined rope back into the heap
  5. Repeat from step 2 until only one rope is left in the heap

Minimum cost to connect ropes

package com.company;

import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;
import java.util.stream.Collectors;

/**
 * Created by sangar on 3.1.19.
 */
public class ConnectRopes {

    public int getMinimumCost(int[] ropeLength){

        PriorityQueue<Integer> minHeap = new PriorityQueue<Integer>();

        /*
        There is no shortcut for converting from int[] to List<Integer> as Arrays.asList
        does not deal with boxing and will just create a List<int[]>
        which is not what you want.
         */
        List<Integer> list = Arrays.stream(ropeLength).boxed().collect(Collectors.toList());

        /*
        Javadoc seems to imply that addAll is inherited from AbstractQueue where
        it is implemented as a sequence of adds.
        So complexity of this operation is O(nlogn)
         */
        minHeap.addAll(list);

        int totalLength = 0;

        while(minHeap.size() > 1){
            int len1 = (int)minHeap.remove();
            int len2 = (int)minHeap.remove();

            totalLength+=(len1 + len2);

            minHeap.add(len1+len2);
        }

        return totalLength;
    }
}

Test cases

package test;

import com.company.ConnectRopes;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
/**
 * Created by sangar on 23.9.18.
 */
public class ConnectRopeTest {

    ConnectRopes tester = new ConnectRopes();

    @Test
    public void minimumCostTest() {

        int[] a = {5,2,3,9};

        assertEquals(34, tester.getMinimumCost(a));
    }
    @Test
    public void minimumCostOneRopeTest() {

        int[] a = {5};

        assertEquals(0, tester.getMinimumCost(a));
    }
}

The complexity of this implementation is O(n log n): O(n log n) to create the min heap out of the array (Java's PriorityQueue addAll) plus O(n log n) to repeatedly fetch the two minimums and re-heapify. The initial cost of building a heap from the array can be brought down to O(n) by using our own implementation of a min heap, sketched below.
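For reference, the O(n) construction mentioned above is the standard bottom-up heapify. A minimal sketch over an int array; it is illustrative and not part of the solution class above:

//Bottom-up heap construction in O(n): sift down every internal node,
//starting from the last parent. Most nodes sit near the leaves and
//sift down only a few levels, which is why the total work is linear.
static void buildMinHeap(int[] a) {
    for (int i = a.length / 2 - 1; i >= 0; i--) {
        siftDown(a, i, a.length);
    }
}

static void siftDown(int[] a, int i, int size) {
    while (2 * i + 1 < size) {
        int child = 2 * i + 1;                                    //left child
        if (child + 1 < size && a[child + 1] < a[child]) child++; //smaller child
        if (a[i] <= a[child]) break;                              //heap property holds
        int tmp = a[i]; a[i] = a[child]; a[child] = tmp;          //swap down
        i = child;
    }
}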

Please share if there is something wrong or missing.

Lowest common ancestor(LCA) using Range Minimum Query(RMQ)


We have already discussed the lowest common ancestor and the range minimum query. In this post, we will discuss how to use RMQ to find the lowest common ancestor of two given nodes in a binary tree or binary search tree. The LCA of two nodes u and v is the node which is farthest from the root such that both u and v are its descendants. For example, the LCA of node(5) and node(9) in the tree below is node(2).

(Figure: example binary tree)

In earlier solutions, we scan the whole binary tree every time we have to find the LCA of two nodes, which costs O(n) per query. If queries are fired frequently, this operation may become a bottleneck of the algorithm. One way to avoid processing all nodes on each query is to preprocess the binary tree and store precalculated information so that the LCA of any two nodes can be found in constant time.

This pattern is very similar to a range minimum query. Can we reduce the lowest common ancestor problem to the range minimum query problem?

Reduction of lowest common ancestor problem to RMQ

Let's revise what RMQ is: given an array A of length n, RMQ(i,j) returns the index of the minimum element in the subarray A[i..j].


Let's find the LCA of nodes 5 and 8 manually in the above binary tree. We notice that LCA(u,v) is the shallowest common node (in terms of distance from the root) which is visited when u and v are visited using a depth-first search of the tree. The important thing to note is that we are interested in the shallowest node, i.e., the node with minimum depth, between the visits of u and v. Sounds like RMQ?

Implementation-wise, the tree is traversed as an Euler tour, which means we visit each node of the tree without lifting the pencil; this is very similar to a preorder traversal. There can be at most 2n-1 entries in the Euler tour of a tree with n nodes; we store this tour in an array E[1..2n-1].

As the algorithm requires the shallowest node (closest to the root), we also record the depth of each node while doing the Euler tour, in another array D[1..2n-1].

We should also record the index at which each node is visited for the first time. Why?

E[1..2n-1] – stores the nodes visited in an Euler tour of T; E[i] is the ith node visited in the tour.
D[1..2n-1] – stores the levels of the nodes in the tour; D[i] is the level of the node at E[i] (level is defined to be the distance from the root).
F[1..n] – F[i] holds the index at which node i is first visited in the tour.

For example, for this tree, we start from node(1) and do an Euler tour of the binary tree.

(Figure: Euler tour of the binary tree)

Euler tour would be like

(Array E: nodes in Euler tour order)

Depth array is like

(Array D: depth of each node in the tour)

First visit array looks like

(Array F: index of the first visit to each node)

To compute LCA(u,v): all nodes in the Euler tour between the first visits to u and v are in E[F[u]..F[v]] (assume F[u] ≤ F[v]; otherwise, swap u and v). The shallowest node in this range is at index RMQ(F[u], F[v]) computed over the depth array D, since D[i] stores the depth of the node at E[i].
The RMQ returns the index of the shallowest node between u and v, so the output node is E[RMQ(F[u], F[v])], which is LCA(u,v).

Let's take an example: find the lowest common ancestor of node(5) and node(8).

First of all, find the first visits to node(5) and node(8): F[5] is 2 and F[8] is 7.

Now, all the nodes visited between node(5) and node(8) are in E[2..7]; we have to find the shallowest among these nodes. This can be done by applying RMQ on array D with range 2 to 7.

(Figure: RMQ over D[2..7])

The LCA is E[RMQ(2,7)], where the RMQ is computed over D. In this case, RMQ over D[2..7] returns index 3; E[3] = 2, hence LCA(5,8) is node(2).

Lowest common ancestor using RMQ: Implementation

package com.company.BST;

import java.util.Arrays;

/**
 * Created by sangar on 1.1.19.
 */
public class LowestCommonAncestor {

    private int[] E;
    private int[] D;
    private int[] F;

    int[][] M;

    private int tourCount;

    public LowestCommonAncestor(BinarySearchTree tree){
        //Create Euler tour, Depth array and First Visited array
        E = new int[2*tree.getSize()];
        D = new int[2*tree.getSize()];
        F = new int[tree.getSize() + 1];

        //sparse table; the second dimension could be just log2(2n)+1,
        //allocating 2n columns wastes space but keeps the code simple
        M = new int[2 * tree.getSize()][2 * tree.getSize()];

        Arrays.fill(F, -1);
        getEulerTour(tree.getRoot(), E, D, F, 0);

        preProcess(D);
    }

    public int findLowestCommonAncestor(int u, int v){
        //This means node is not in tree
        if(u >= F.length || v >= F.length || F[u] == -1 || F[v] == -1)
            return -1 ;

        //RMQ expects the smaller index first; swap u and v if needed
        if(F[u] > F[v]) return E[rmq(D, F[v], F[u])];
        return E[rmq(D, F[u], F[v])];
    }

    /* This function does all the preprocessing on the tree and
       creates all required arrays for the algorithm.
    */
    private void getEulerTour(TreeNode node, int[] E, int[] D, int[] F,
                              int level){
        if(node == null) return;

        int val = (int)node.getValue();

        E[tourCount] = val; // add to tour
        D[tourCount] =  level; // store depth

        if(F[val] == -1) {
            F[(int) node.getValue()] = tourCount;
        }
        tourCount++;
        
        if(node.getLeft() != null ) {
            getEulerTour(node.getLeft(), E, D, F, level + 1);

            E[tourCount] = val;
            D[tourCount++] = level;
        }
        if(node.getRight() != null ) {
            getEulerTour(node.getRight(), E, D, F, level + 1);

            E[tourCount] = val;
            D[tourCount++] = level;
        }
    }

    /*
      This function preprocesses the depth array into a sparse table,
      so that the RMQ used to find the shallowest node is fast.
     */
    void preProcess(int[] D) {

        for (int i = 0; i < D.length; i++)
            M[i][0] = i;

        for (int j = 1; 1 << j <D.length ; j++){
            for (int i = 0; i + (1 << j) - 1 < D.length; i++){
                if (D[M[i][j - 1]] < D[M[i + (1 << (j - 1))][j - 1]])
                    M[i][j] = M[i][j - 1];
                else
                    M[i][j] = M[i + (1 << (j - 1))][j - 1];
            }
        }
    }

    private int rmq(int a[], int start, int end){
        //j = floor(log2(end - start + 1)), the largest power of two that fits
        int j = (int)(Math.log(end - start + 1) / Math.log(2));

        if ( a[ M[start][j] ] <= a[M[end-(1<<j)+1][j]] )
            return M[start][j];

        else
            return M[end-(1<<j)+1][j];
    }
}

The beauty of this algorithm is that it can be used to find the LCA of any tree, not just a binary tree or BST. With the sparse table preprocessing used here, the complexity of finding a lowest common ancestor using range minimum query is (O(n log n), O(1)), plus the space for the table; the theoretical bound of (O(n), O(1)) is achievable with more advanced RMQ preprocessing.

Reference
Faster algorithms for finding lowest common ancestors in directed acyclic graphs

Please share if there is something wrong or missing. If you are preparing for an interview, please signup for free demo class to guide you through the process.

Range minimum query (RMQ)


Given an array A[0..n-1], find the index of the element with the minimum value in a given range. This problem is known as the Range Minimum Query, or RMQ.
For example, in the array below, the minimum value between index 2 and 7 is at index 5, which holds the element 1, so the answer to RMQ(2,7) is 5.

(Figure: example array for range minimum query)

Going by brute force, every time a query is fired we scan the range and find the minimum in the given range the same way we would for the entire array. The complexity of answering each query is O(n), where the worst-case range is the entire array.

Can we preprocess our data so that our query operations are less costly? If we do so, there are two parts to the solution: first preprocessing and then the query. Let's say the complexity of each step is f(n) and g(n) respectively; then the complexity of the solution can be denoted as (f(n), g(n)).

What kind of preprocessing can be done? The basic idea is to precalculate the minimum index for every possible range in the array. How many ranges are possible for an array with n elements? There are O(n²) of them, n(n+1)/2 to be precise. Why? Each pair of a start index i and an end index j ≥ i defines a range.

So, to store the index of the minimum value element of each range, O(n²) space is required, and the preprocessing time complexity is O(n³) with the naive scan below. However, the complexity of each query is O(1). So the overall complexity of the solution is (O(n³), O(1)).

#include <stdio.h>

int M[100][100];

int findMinimum(int a[], int start, int end, int size){
	if(start >= size || end >= size) return -1;
	int min = start;
	for(int i=start; i<=end; i++){
		if( a[i] < a[min] ){
			min = i;
		}
	}
	return min;
	
}
void preprocess(int a[], int size ){
    for(int i=0; i<size; i++){
        for(int j=i; j<size; j++){
            //findMinimum already scans the whole range, no extra loop is needed
            M[i][j] = findMinimum(a,i,j,size);
        }
    }
}

int rmq(int start, int end){
	return M[start][end];
}

int main(void) {
	
	int a[] = { 2,3,1,5,9,7,10,5,6,3 };
	int size = sizeof(a)/sizeof(a[0]);
	
	//Preprocessing step
	preprocess(a, size);
	printf("\n Minimum index in range is : %d ", rmq(3,9) );
	printf("\n Minimum index in range is : %d ", rmq(2,7) );
	
	return 0;
}

With the application of dynamic programming, the complexity of the preprocessing step can be reduced to O(n²).

#include <stdio.h>

int M[100][100];

void preprocess(int a[], int size)
{
	int i,j;
	for (i=0; i<size; i++)
		M[i][i] = i;
	
	for (i=0; i<size; i++){
		for (j=i+1; j<size; j++){
			if (a[M[i][j - 1]] < a[j])
				M[i][j] = M[i][j - 1];
			else
				M[i][j] = j;
		}
	}
}

int rmq(int start, int end){
	return M[start][end];
}

int main(void) {
	
	int a[] = { 2,3,1,5,9,7,10,5,6,3 };
	int size = sizeof(a)/sizeof(a[0]);
	
	//Preprocessing step
	preprocess(a, size);
	printf("\n Minimum index in range is : %d ", rmq(3,9) );
	printf("\n Minimum index in range is : %d ", rmq(2,7) );
	
	return 0;
}

Range minimum query with (O(n), O(√n)) complexity solution

Can we do better in the preprocessing step while trading off the query step? What if we divide the array into smaller chunks and store the index of the minimum value element of each chunk? How big should the chunks be? How about we divide the array into √n chunks, each of size √n?

(Figure: RMQ based on square root partitioning)

Now, find the index of the minimum element in each of these chunks and store it. The extra space required is O(√n). Finding the minimum of one chunk costs O(√n), and doing it for all √n chunks costs O(√n * √n) = O(n).

To find the index of the minimum element in a given range, follow these steps (a code sketch follows the example below):
1. Find the index of the element with the minimum value among all chunks lying completely inside the given range (at most √n operations if all chunks fall in the range).
2. Find the minimum index in the chunk where the start of the range lies (at most √n comparisons from the start of the range to the end of that chunk).
3. Find the minimum index in the chunk where the end of the range lies, from the start of that chunk to the end of the range.
4. Compare all these values and return the index of the minimum of them.

No matter how big or small the range is, the worst case is O(√n), as there are only about 3√n operations.

Let's take an example and see how it works: find the minimum in range (2,7).

(Figure: square root partitioning example for range (2,7))

To get RMQ(2,7), which chunks lie completely within the range? There is only one: chunk 1. The minimum index of chunk 1 is M[1], which is 5, so the minimum element of the fully covered chunks is A[5].

Find the index of the minimum value in chunk 0, where the start of the range lies (starting from the start of the range, which is 2). There is only one such element, at index 2, so the element to compare is A[2].

Find the minimum from the start of the chunk where the end of the range lies, up to the end of the range. So we will be comparing A[6] and A[7].

In the end, compare A[5] (minimum of all chunks fully between the start and end of the range), A[2] (minimum in the chunk where the start of the range lies) and A[6], A[7] (candidates in the chunk where the end of the range lies), and we have the answer 5, as A[5] is the minimum of all these values.

Aggregating everything, we found a way to solve the range minimum query with complexity (O(n), O(√n)).
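A minimal Java sketch of this square root decomposition; the class and method names are illustrative, not from an existing library:

import java.util.Arrays;

class SqrtRMQ {
    private final int[] a;
    private final int[] blockMin; //index of the minimum element inside each chunk
    private final int blockSize;

    SqrtRMQ(int[] a) {
        this.a = a;
        this.blockSize = (int) Math.ceil(Math.sqrt(a.length));
        this.blockMin = new int[(a.length + blockSize - 1) / blockSize];
        Arrays.fill(blockMin, -1);
        for (int i = 0; i < a.length; i++) {
            int b = i / blockSize;
            if (blockMin[b] == -1 || a[i] < a[blockMin[b]]) blockMin[b] = i;
        }
    }

    //Returns the index of the minimum element in a[start..end] in O(√n):
    //whole chunks inside the range use their stored minimum; the partial
    //chunks at both ends are scanned element by element.
    int rmq(int start, int end) {
        int min = start;
        int i = start;
        while (i <= end) {
            if (i % blockSize == 0 && i + blockSize - 1 <= end) {
                int b = i / blockSize;
                if (a[blockMin[b]] < a[min]) min = blockMin[b];
                i += blockSize;
            } else {
                if (a[i] < a[min]) min = i;
                i++;
            }
        }
        return min;
    }
}

For the array {2,3,1,5,9,7,10,5,6,3} used in the code samples below, new SqrtRMQ(...).rmq(2, 7) returns 2, the index of the value 1 inside that range.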

RMQ using sparse table

Method 3 uses only O(√n) space; however, its query time complexity is also O(√n). To reduce the query time at the expense of space, there is another method called the sparse table method. This method combines features of method 2 (dynamic programming) and method 3 (minimums of chunks).

In this approach, split the input array into chunks of size 2^j, where j varies from 0 to log n and n is the number of elements in the array. There are O(n log n) such chunks, hence the space complexity is O(n log n).

After splitting, find the index of the minimum element in each chunk and store it in a lookup table. 

M[i][j] stores the index of the minimum in the range starting at i of size 2^j.

(Figure: sparse table for RMQ)

For example, M[0][3] stores the index of the minimum value between 0 and 7 (2^3 = 8 elements).

Now the problem is how to create this lookup table. The table can be built bottom-up using dynamic programming: we find the index of the minimum value in a block of size 2^j by comparing the two minima of its two constituent blocks of size 2^(j-1). More formally,

M[i][j] = M[i][j-1]              if A[M[i][j-1]] <= A[M[i + 2^(j-1)][j-1]]
M[i][j] = M[i + 2^(j-1)][j-1]    otherwise

How do we find the index of the minimum value in a given range? The idea is to find two (possibly overlapping) subranges which together cover the entire range and then take the minimum of their minimums.
For example, to find RMQ(i,j), let 2^k be the size of the largest block that fits into the range from i to j, i.e., k = floor(log2(j - i + 1)).

Now we have two parts to look in: from i to i + 2^k - 1 (already computed as M[i][k]) and from j - 2^k + 1 to j (already computed as M[j - 2^k + 1][k]).

Formally,

    RMQ(i,j) = M[i][k]              if A[M[i][k]] <= A[M[j - 2^k + 1][k]]
    RMQ(i,j) = M[j - 2^k + 1][k]    otherwise

RMQ implementation using sparse table

#include <stdio.h>
#include <math.h>

int M[100][100];

void preprocess(int a[], int size)
{
    int i, j;
	
    for (i = 0; i < size; i++)
        M[i][0] = i;
		
    for (j = 1; 1 << j <size ; j++){
        for (i = 0; i + (1 << j) - 1 < size; i++){
            if (a[M[i][j - 1]] < a[M[i + (1 << (j - 1))][j - 1]])
                M[i][j] = M[i][j - 1];
            else
                M[i][j] = M[i + (1 << (j - 1))][j - 1];
        }
    }
}  
  
int rmq(int a[], int start, int end){
    //j = floor(log2(end - start + 1))
    int j = (int)(log(end-start+1)/log(2));

    if ( a[M[start][j]] <= a[M[end-(1<<j)+1][j]] )
        return M[start][j];
    else 
        return M[end-(1<<j)+1][j];
}

int main(void) {
	
	int a[] = { 2,3,1,5,9,7,10,5,6,3 };
	int size = sizeof(a)/sizeof(a[0]);
	
	//Preprocessing step
	preprocess(a, size);
	printf("\n Minimum index in range is : %d ", rmq(a,3,9) );
	printf("\n Minimum index in range is : %d ", rmq(a,2,7) );
	
	return 0;
}

These two blocks entirely cover the range, and since only one comparison is required, the complexity of the lookup is O(1).

In this post, we discussed various ways to implement range minimum query based on space and time complexity tradeoffs. In future posts, we will discuss related structures and applications such as segment trees and the lowest common ancestor problem.

Please share if something is wrong or missing, we would love to hear from you.

Breadth First traversal


In the last post, we discussed depth first traversal of a graph. Today, we will discuss another way to traverse a graph: breadth first traversal. What is breadth first traversal? Unlike depth-first traversal, where we go deep before visiting neighbors, in breadth-first search we visit all the neighbors of a node before moving a level down. For example, the breadth first traversal of the graph shown below is [1,2,5,3,4,6].

(Figure: example graph)

In breadth first search, we finish visiting all the nodes at a level before going further down the graph. For example, the graph used in the above example can be divided into three levels as shown.

(Figure: the graph divided into three levels)

We start with a node at level 1, which is node(1). Then we visit all the nodes one level below node(1), which are node(2) and node(5). Then we visit all the nodes at level 3, which are node(3), node(4) and node(6).

Breadth First Traversal: algorithm

  1. Start with the given node u: put node u into the queue and mark it visited.
  2. While the queue is not empty, repeat the steps below:
    1. Dequeue a node u from the queue and print it.
    2. For each neighbor v of u:
       if v is not visited already, add v to the queue and mark v as visited.

Let's take an example and see how it works. Below is the graph for which we have to find the BFS.
(Figure: graph for the BFS walk-through)

We start from node(1), and put it in the queue.
(Figure: queue after adding node(1))

While the queue is not empty, we pop from it and print the node. In this case, node(1) is printed. Next, we go through all the neighbors of node(1) and put every unvisited one on the queue: node(2) and node(5) go onto the queue and are marked as visited. Traversal = {1}

(Figure: queue after processing node(1))

Again, we dequeue from the queue and this time we get node(2). We print it, visit its unvisited neighbors node(3) and node(4), put them on the queue and mark them as visited. Traversal = {1,2}

node(5) is dequeued next and printed. Even though node(4) is a neighbor of node(5), it is already visited and hence not put onto the queue again. node(6), however, is not yet visited, so it goes onto the queue. Traversal = {1,2,5}

Now, we pop node(3) and print it; its neighbor node(4) is already visited, so nothing is added to the queue. Traversal = {1,2,5,3}

Next, node(4) is taken out of the queue and printed; nothing goes onto the queue. Traversal = {1,2,5,3,4}

Last, we pop node(6) and print it. Traversal = {1,2,5,3,4,6}.

At this point, the queue is empty and we stop the traversal.

Breadth first traversal: implementation

public ArrayList<Integer> breadthFirstTraversal(){

        boolean[] visited = new boolean[this.G.length];
        ArrayList<Integer> traversal = new ArrayList<>();

        Queue<Integer> q = new LinkedList<>();

        //This is start node
        q.add(1);
        visited[1] = true;

        while(!q.isEmpty()){
            int u = (int)q.remove();
            traversal.add(u);

            for(int i=1; i< this.G[u].length; i++){
                if(this.G[u][i] && !visited[i]){
                    q.add(i);
                    visited[i]= true;
                }
            }
        }
        System.out.println(traversal);
        return traversal;

    }

The complexity of this code is O(V²): every node goes into the queue once, and for each node the inner for loop runs V times.

Implementation of breadth-first search on a graph represented by an adjacency list

  public ArrayList<Integer> breadthFirstTraversal(){

        boolean[] visited = new boolean[this.G.size()];
        ArrayList<Integer> traversal = new ArrayList<>();

        Queue<Integer> q = new LinkedList<>();

        //This is start node
        q.add(1);
        visited[1] = true;

        //This loop will run for V times, once for each node.
        while(!q.isEmpty()){
            int u = (int)q.remove();
            traversal.add(u);

            /*This loop has a worst-case complexity of O(V), where 
               node has an edge to every other node, but 
               the total number of times this loop will run is E times 
               where E number of edges.
             */
            for(int v : this.G.get(u)){
                if(!visited[v]){
                    q.add(v);
                    visited[v]= true;
                }
            }
        }
        System.out.println(traversal);
        return traversal;

    }

The complexity of Breadth First Search is O(V+E) where V is the number of vertices and E is the number of edges in the graph.

The complexity difference between BFS on an adjacency matrix and on adjacency lists comes from the fact that in an adjacency matrix, to tell which nodes are adjacent to a given vertex, we take O(|V|) time, irrespective of edges. In an adjacency list, the neighbors are immediately available; it takes time proportional to the number of adjacent vertices, which summed over all vertices |V| is |E|. So, BFS with an adjacency list gives O(|V| + |E|).

(paraphrased from StackOverflow)

Note that when a graph is dense, i.e., E approaches V², O(V + E) is effectively O(V²).

Applications of Breadth first traversal

  1. To find the shortest path between two nodes u and v in an unweighted graph (a sketch follows this list)
  2. To test bipartite-ness of a graph
  3. To find all nodes within one connected component
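The first application falls out of the traversal almost directly: the first time BFS reaches a node, it has used the fewest possible edges. A minimal sketch, reusing the adjacency list style of the implementation above (the class and method names here are illustrative):

import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

public class BfsShortestPath {
    //Returns the distance (in edges) from source to every node;
    //-1 means a node was not reachable from the source.
    public static int[] shortestDistances(List<List<Integer>> g, int source) {
        int[] dist = new int[g.size()];
        Arrays.fill(dist, -1);
        Queue<Integer> q = new LinkedList<>();
        q.add(source);
        dist[source] = 0;

        while (!q.isEmpty()) {
            int u = q.remove();
            for (int v : g.get(u)) {
                if (dist[v] == -1) {      //first visit is via a shortest path
                    dist[v] = dist[u] + 1;
                    q.add(v);
                }
            }
        }
        return dist;
    }
}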

Please share if there is something wrong or missing. If you are preparing for an interview and want a free coaching session to guide you through it, please reach out to us at communications@algorithmsandme.com

Difference between array and linked list


In the last post, Linked list data structure, we discussed the basics of linked lists, where I promised to go into detail on the difference between arrays and linked lists. Before going into the post, I want to make sure you understand that there is no such thing as one data structure being better than the other. Based on your requirements and use cases, you choose one or the other. It depends on what the most frequent operation your algorithm performs in its lifetime is. That's why interviews have a data structure round: to see whether you can choose the correct one for the problem.

What is an array?
An array is a linear, sequential and contiguous collection of elements which can be addressed using an index.

What is a linked list?
A linked list is a linear, sequential and non-contiguous collection of nodes, where each node stores a reference to the next node. To understand more, please refer to Linked list data structure.

Difference between arrays and linked lists

Static Vs dynamic size

The size of an array is defined statically at compile time, whereas a linked list grows dynamically at run time based on need. Consider a case where you know the maximum number of elements the algorithm would ever have; then you can confidently declare an array. However, if you do not know, the linked list is better. There is a catch: what if there is only a rare chance the number of elements will reach the maximum, and most of the time it will be way less? In that case, an array unnecessarily allocates extra memory which may never be used.

Memory allocation

An array is given contiguous memory in the system. So, if you know the address of any element in the array, you can access other elements based on the position of the element.

(Figure: statically allocated contiguous memory)

Linked lists are not stored contiguously in memory; nodes are scattered around. So, given a node, you may traverse forward in the linked list (using the next node reference), but you cannot access nodes prior to it.

(Figure: dynamically allocated non-contiguous memory)

Contiguous allocation requires sufficient memory up front for an array to be stored; for example, to store 20 integers in an array, we need an 80-byte contiguous chunk of memory (with 4-byte integers). With a linked list, we can start with 8 bytes and request more memory as and when required, wherever it may be. Contiguous allocation also makes it difficult to resize an array: we have to look for a different chunk of memory which fits the new size and move all existing elements there. A linked list, on the other hand, is dynamically sized and can grow without relocating existing elements.

Memory requirement

So non-contiguous memory is better, then? It comes at a cost. Each node of a linked list has to store a reference to the next node, an extra payload of 4 bytes (8 bytes on a 64-bit system) per node. Arrays do not carry this extra payload. You have to trade off the extra space against the advantages you are getting. Also, sometimes spending extra space is better than cumbersome shifting, adding and deleting operations on an array, or the value stored in a node is big enough to make these extra bytes negligible in the analysis.

Operation efficiency

We perform operations on a data structure to get some output. There are four basic operations we should consider: read, search, insert/update and delete.

A read on an array is O(1): you can directly access any element given its index, and the read does not depend on the size of the array.
The time complexity of a read on a linked list is O(n), where n is the number of nodes. So, if you have a problem which requires frequent random reads, an array outweighs a linked list.

Given the contiguous memory allocation and index access of an array, there are optimized search algorithms like binary search, which has O(log n) complexity on a sorted array. Search on a linked list, on the other hand, requires O(n).

Insert on an array is O(1) again, if we are writing within the size of the array. In a linked list, the complexity of insert depends on where the new element goes: if the insert happens at the head, it is O(1); if the insert happens at the end, it is O(n).

(See also: Insert node at start of linked list, Insert node at the tail of linked list.)

By update here, we mean growing the array or linked list by one more element. In an array it is a costly operation, as it requires reallocation of memory and copying all elements over; it does not matter whether you add the element at the end or the start, the complexity remains O(n).
For a linked list, it varies: growing at the end is O(n), while growing at the head is O(1).
In the same vein, delete on an array requires movement of elements (if the first element is deleted, all others shift), hence a complexity of O(n). Delete on a linked list is O(1) if it's the head, and O(n) if it's the tail.
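To make the read-cost difference concrete, here is a small illustrative snippet; Java's built-in LinkedList stands in for a hand-rolled list, and get(2) walks the list node by node:

import java.util.LinkedList;
import java.util.List;

public class AccessDemo {
    public static void main(String[] args) {
        int[] array = {10, 20, 30, 40, 50};
        //Array read: one address computation, O(1) regardless of size.
        int third = array[2];

        List<Integer> list = new LinkedList<>();
        for (int x : array) list.add(x);
        //Linked list read: get(2) traverses from the head, O(n).
        int thirdInList = list.get(2);

        System.out.println(third + " " + thirdInList); //prints: 30 30
    }
}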

To see the difference between O(1) and O(n), the graph below should be useful.

(Figure: complexity analysis graph)

The key differences between arrays and linked lists are as follows:

  Arrays:
  • Really bad at insert and delete operations due to internal reallocation and shifting of memory.
  • Statically sized at compile time.
  • Memory allocation is contiguous, which makes accessing elements easy without any additional pointers; you can jump around the array without accessing all the elements in between.

  Linked lists:
  • Insert and delete at the end have almost the same complexity as arrays; however, no memory shuffling happens.
  • Search is bad: it usually requires a scan with O(n) complexity.
  • Dynamically sized at run time.
  • Memory allocation is non-contiguous; an additional pointer per node is required to store the neighbor reference, and you cannot jump around in the list.

Please share if there is something wrong or missing. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com

Linked list data structure


The linked list is a very important data structure to understand, as a lot of problems based on linked lists are asked in Amazon, Microsoft and Google interviews. Today, we will go through the basics of the linked list data structure and its implementation.

A linked list represents a linear sequence of elements. Each element is connected to the next using a chain of references. Another data structure which stores a linear sequence of items is the array. There are some advantages and use cases where the linked list way of storing a sequence is more efficient than an array; I will cover that in the next post: Arrays Vs Linked lists.

In the last paragraph, I emphasized that a linked list is a linear data structure. In a linear data structure, there is a sequence and order to how elements are inserted, arranged and traversed. In order to get to the tail of a linked list, we have to go through all of the preceding nodes.

(Figure: linear data structure, where elements can be traversed only in one order)


Non-linear data structures are ones where elements are not arranged or traversed in a specific order. One element may be connected to many others, hence we cannot traverse them in the same order every time. Examples of non-linear data structures are maps, dictionaries, trees and graphs.

(Figure: non-linear data structure, where nodes cannot always be traversed in one order)

Linked list implementation

A linked list consists of nodes, any number of them. Each node contains two things: first, the value of the node, which can be of any type, integer, string, or another user-defined type; and second, a reference which points to the next node in the linked list. A node can be declared as follows:

typedef struct Node {
	int data;
	struct Node * next;
} Node;

What happens at the last node in a linked list? The next pointer of the last node points to null. It's very important to understand this bit, as this condition is used in almost every problem you will solve on linked lists.

A linked list is a dynamic data structure: its size is not defined at compile time, but at run time. Every time a new node is added to the linked list, new memory is allocated and the previous node's next pointer is pointed to the new node.

Operations of linked list

  • Adding a node at the end of the list
    There are two cases to handle when adding a node at the end:
  1. Check if there is already a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. scan through the linked list using the next pointer and reach the last node,
    2. then create a new node and point the next pointer of the last node to this new node.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);

	//find the last node
	Node *currentNode = *headRef;
	while(currentNode && currentNode->next != NULL){
		currentNode = currentNode->next;
	}
	if(currentNode){
		currentNode->next = newNode;
	}
	else{
		//Change headRef to point to new head.
		*headRef = newNode;
	}
}

Complexity of adding a node to linked list is O(n). 

  • Insert node at head of list
    In this case too, we allocate a new node; however, this time we do not have to scan the entire list. Note that every time we add a node at the head, the head of the list changes.
  1. Check if there is already a node
    1. If no, then create a new node and return it as head of linked list.
  2. If there is a node,
    1. Create a new node, and point the next pointer of the new node to the current head.
    2. Return the new node as the head pointer.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);
	newNode->next = *headRef;
	*headRef = newNode;
}

Linked list data structure problems

It's very important to understand that a linked list is a recursive data structure. The base case is a linked list with no nodes, represented by a NULL node. Every problem on linked lists can be solved using the template: process one node, and then recursively process the remaining linked list.

In programming terms, a linked list is divided into two parts, head and tail. The node being processed is called the head, and the rest of the linked list is the tail. The tail has exactly the same structure as the original list.

Problems like merging linked lists, reversing a linked list, and finding the length of a linked list can all be solved using the same template of processing one node and then recursively calling the function on the remaining list, as the sketch below shows.
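For illustration, here is a sketch of the find-length problem in Java using this head/tail template; the minimal Node class is assumed here (the C struct shown earlier plays the same role):

class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

class LinkedListLength {
    //Base case: the empty list (null) has length 0.
    //Recursive case: 1 for the head plus the length of the tail.
    static int length(Node head) {
        if (head == null) return 0;
        return 1 + length(head.next);
    }
}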

Types of linked list

There are three types of linked lists :
1. Singly linked list
A singly linked list contains nodes with data and one reference, next, which points to the next node in the sequence. The next pointer of the last node points to null. In a singly linked list you can traverse in only one direction.

(Figure: singly linked list)

2. Doubly linked list
In a doubly linked list, each node contains two links: previous, which points to the node before the current node, and next, which points to the next node. The previous pointer of the first node and the next pointer of the last node point to null. A doubly linked list can be traversed in both directions. The two references add weight, as extra memory is required per node.

(Figure: doubly linked list)

3. Circular linked list
In a circular linked list, the next pointer of the last node points to the first node. A circular linked list can be either singly or doubly linked.

(Figure: circular doubly linked list)

That was all for the basics of linked lists. I know problems on them are hard to solve, but if you look at all the problems, they boil down to one thing: understanding the node and how recursion can be used. In the next posts, we will solve many of these problems and see how we can use these basics.

Please share if there is something wrong or missing. If you are interested in contributing to website and share your knowledge with thousands of users across world, please reach out to us at communications@algorithmsandme.com

Fill 4xN wall with 4×1 and 1×4 bricks


There is a wall with 4 x N dimensions and we have bricks with 4 x 1 dimensions. We have to fill the wall with the given bricks and find out how many ways there are to fill it.

For example, if there is a wall with N = 3, we have only one way to fill the wall: three bricks laid horizontally.

Whereas with N = 4, there are two ways: putting four bricks horizontally, or four bricks vertically.

(Figure: filling a 4 x N wall with bricks)

Actually, the examples themselves give away the answer to our problem. Let's start small and build on top of it. What if N = 1? Then the wall dimensions are 4 x 1, and there is only one way to fill that wall with a 4 x 1 brick, which is to lay the brick horizontally.

What if N = 2, i.e., the wall is 4 x 2? Again, there is only one way possible: put two bricks horizontally. We cannot put bricks vertically. Why?

Take N = 3, i.e., a wall of 4 x 3. The only way we can fill the wall is to put three bricks horizontally; we can't use a vertical brick.

What if N = 4, a wall with 4 x 4 dimensions? In this scenario, we have two options: put four bricks horizontally or four bricks vertically. So there are two ways to fill a 4 x 4 wall with 4 x 1 bricks.

Now, if the number of ways to fill a wall of dimension 4 x N is f(N), then f(N) for values 1, 2 and 3 is as follows:

f(1)=1, f(2)=1, f(3)=1

For a wall of size greater than 4 x 3, we have two choices for placing a brick: keep it vertical or keep it horizontal.

If we keep a brick vertically, it covers four units out of the N units of the wall's length, and we require four vertical bricks stacked to cover the height; that block reduces the problem to the remaining N-4 units.

If we keep a brick horizontally, it covers only 1 unit of the wall's length, hence we still need to cover the remaining N-1 units.
So, for N we have the relation:

f(N) = f(N-1)  + f(N-4)

We have the recurrence relation and the base conditions, let’s implement it.

Fill wall with brick : recursive implementation

int findWays(int n){
        if(n == 0 || n == 1 || n == 2 || n == 3) return 1;
        return findWays(n-1) + findWays(n-4);
}

int main(void) {
	int N = 5;
	int ways = findWays(N);
	printf("%d", ways);
	return 0;
}

Do you think this solution is optimal? If you look closely at the recursion tree of this implementation, you will see the problem: some of the subproblems are solved repeatedly. Can we avoid solving them again and again? Yes, by caching their results; that technique is called memoization, sketched below.
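A memoized sketch of the same recursion, written in Java for illustration; the zero-initialized table array passed in is an assumed helper:

//Top-down memoization: table[i] caches f(i), so each subproblem is
//solved once, bringing the exponential recursion down to O(N).
static int findWays(int n, int[] table) {
    if (n < 4) return 1;                 //base cases: f(0) to f(3) are all 1
    if (table[n] != 0) return table[n];  //already computed, reuse it
    table[n] = findWays(n - 1, table) + findWays(n - 4, table);
    return table[n];
}

//Usage: findWays(5, new int[5 + 1]) returns 3.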

This problem can be solved using dynamic programming because two properties hold: first, optimal solutions to subproblems give the solution to the original problem; second, there are overlapping subproblems.

The dynamic programming approach is to fill a table bottom-up, where table[N] will be the solution: table[0] = table[1] = table[2] = table[3] = 1, as discussed above.

Now from N = 4, we can fill the table bottom up as

table[N] = table[N-1] + table[N-4]

Fill wall with brick : dynamic programming implementation

void find_ways(int n, int table[]){
	int i;
	for(i = 4; i <= n; i++){
		table[i] = table[i-1] + table[i-4];
	}
}

int main(void) {
	int N =5;
	int table[N+1];
	table[0] = 1;
	table[1] = 1;
	table[2] = 1;
	table[3] = 1;
	find_ways(N, table);
	printf("%d", table[N]);
	return 0;
}

The complexity of the dynamic programming approach is O(N), with a space complexity of O(N).

Please share if there is something wrong or missing. If you are willing to share your knowledge and help thousands of learners across the world, please reach out to us on communications@algorithmsandme.com

Scheduling weighted jobs


Suppose we have been given n jobs j1, j2, j3 … jn with start times s1, s2 … sn and finish times f1, f2, f3 … fn. There is a value vi associated with each job. The problem is to schedule the weighted jobs such that all scheduled jobs are compatible and we get the maximum total value. Two jobs are said to be compatible if their execution times do not overlap.

For example, we have four jobs as shown below:

(Figure: four jobs with start times, finish times and values)

In the figure above, the maximum value can be achieved by scheduling job 1 and job 4, which gives a value of 250. Notice that there is one more schedule with compatible jobs (job 1, job 2 and job 3); however, the value we get from that schedule is only 170, which is less than what we got from the earlier schedule.

Scheduling weighted jobs : Line of thoughts

There is a strong urge to use a greedy algorithm here, as the problem is very similar to the interval scheduling problem. However, the greedy algorithm works only when the values of all jobs are equal. Since job values differ here, the greedy algorithm fails.

Let's consider a brute force solution. First of all, sort all jobs by finish time in increasing order. Now, for each job, decide whether including it in the schedule gives the maximum value or excluding it does. When we include a job, we must check that it is compatible with the other jobs already included in the schedule. To determine compatibility quickly, we pre-calculate an array P such that

p(j) = largest index i < j such that job i is compatible with j.

For the jth job to be compatible with the ith job, the start time of the jth job should be greater than or equal to the end time of the ith job.

For example: p(8) = 5, p(7) = 3, p(2) = 0.

(Figure: jobs with their p(j) values)

Now, let OPT(j) represent the maximum value we can gain by scheduling jobs from among 1 to j. As mentioned above, there are two cases:

Case 1: OPT selects job j. In this case we cannot use the incompatible jobs {p(j) + 1, p(j) + 2, …, j - 1} and must include the optimal solution to the problem consisting of the remaining compatible jobs 1, 2, …, p(j).

Case 2: OPT does not select job j. It must include the optimal solution to the problem consisting of jobs 1, 2, …, j-1.

For case 1, we already have P[j] calculated. With P[j] prepared, we know that we don't have to check any job after P[j], as all of them will conflict with the current job. The recursive formula for calculating the maximum value for n jobs is:

OPT(j) = 0                                   if j = 0
OPT(j) = max { vj + OPT(p(j)), OPT(j-1) }    otherwise

Scheduling weighted jobs : Recursive solution

package com.company;

import java.util.Arrays;

/**
 * Created by sangar on 4.5.18.
 */
public class ScheduleWeightedJobs {

    public static int optimalScheduling(Job[] jobs, int[] nonConflictJobs, int j){
        if(j == -1){
            return 0;
        }

        return Integer.max(optimalScheduling(jobs, nonConflictJobs, nonConflictJobs[j]) + jobs[j].getValue(),
                            optimalScheduling(jobs, nonConflictJobs, j-1));
    }

    public static void main(String[] args) {

        Job[] jobs = new Job[4];
        jobs[0] = new Job(1, 3, 50);
        jobs[1] = new Job(3, 5, 20);
        jobs[2] = new Job(6, 9, 100);
        jobs[3] = new Job(3, 12, 200);

        Arrays.sort(jobs, (o1, o2) -> o1.getEndTime() - o2.getEndTime());

        int[] nonConflictingJobs = new int[jobs.length];

        for (int j = 0; j < jobs.length; j++) {
            nonConflictingJobs[j] = -1;
            for(int i = j-1; i >= 0; i--) {
                if(jobs[i].getEndTime() <= jobs[j].getStartTime()) {
                    nonConflictingJobs[j] = i;
                    break;
                }
            } 
        }

        int maxValue = optimalScheduling(jobs,nonConflictingJobs, jobs.length-1);

        System.out.println(maxValue);
    }
}
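The Job class used above is not shown in the post; a minimal version consistent with the calls made (getStartTime, getEndTime, getValue) would look like this:

public class Job {
    private final int startTime;
    private final int endTime;
    private final int value;

    public Job(int startTime, int endTime, int value) {
        this.startTime = startTime;
        this.endTime = endTime;
        this.value = value;
    }

    public int getStartTime() { return startTime; }
    public int getEndTime() { return endTime; }
    public int getValue() { return value; }
}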

This recursive algorithm has exponential complexity, as there are a lot of subproblems which are calculated repeatedly. The recursive execution tree for the above problem makes the repetition visible.

(Figure: recursive execution tree with repeated subproblems)

If we revisit the problem, it has two properties. First, optimal substructure: the optimal solution to a subproblem leads to the optimal solution of the bigger problem. Second, there are overlapping subproblems. From the figure, we can see that some subproblems are being recalculated. The typical way to avoid this repetition is to store the solutions to subproblems; this method is called memoization. It is a kind of cache where the results of subproblems are stored and looked up whenever required.

This is a typical case for applying dynamic programming.

Scheduling weighted jobs : Dynamic programming implementation

package com.company;

import java.util.Arrays;

/**
 * Created by sangar on 4.5.18.
 */
public class ScheduleWeightedJobs {

    public static int optimalSchedulingDP(Job[] jobs, int[] nonConflictJobs){
        int[] optimalValue = new int[jobs.length];

        optimalValue[0] = jobs[0].getValue();

        for(int i = 1; i < jobs.length; i++){
            //if job i has no compatible job before it, nonConflictJobs[i] is -1
            //and only job i's own value counts when we include it
            int valueWithJob = jobs[i].getValue()
                    + (nonConflictJobs[i] != -1 ? optimalValue[nonConflictJobs[i]] : 0);
            optimalValue[i] = Integer.max(valueWithJob, optimalValue[i-1]);
        }
        return optimalValue[jobs.length-1];
    }

    public static void main(String[] args) {

        Job[] jobs = new Job[4];
        jobs[0] = new Job(1, 3, 50);
        jobs[1] = new Job(3, 5, 20);
        jobs[2] = new Job(6, 9, 100);
        jobs[3] = new Job(3, 12, 200);

        Arrays.sort(jobs, (o1, o2) -> o1.getEndTime() - o2.getEndTime());

        int[] nonConflictingJobs = new int[jobs.length];

        for (int j = 0; j < jobs.length; j++) {
            nonConflictingJobs[j] = -1;
            for(int i = j-1; i >= 0; i--) {
                if(jobs[i].getEndTime() <= jobs[j].getStartTime()) {
                    nonConflictingJobs[j] = i;
                    break;
                }
            }
        }

        int maxValue = optimalSchedulingDP(jobs,nonConflictingJobs);

        System.out.println(maxValue);
    }
}

The run time complexity of the dynamic programming approach is O(n²): sorting takes O(n log n), the DP pass itself is O(n), and building the array of non-conflicting jobs with the nested loop takes O(n²) (it could be reduced to O(n log n) with binary search). Note that we also need additional O(n) space for storing the results of subproblems.

What about finding the solution itself, i.e., which jobs actually give us the optimal value? This requires some post-processing. The algorithm is as follows:

Find-Solution(j):
  if j = 0: output nothing
  else if vj + Table[P(j)] > Table[j-1]:
      print j
      Find-Solution(P(j))
  else:
      Find-Solution(j-1)
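A sketch of the same post-processing in Java, assuming access to the optimalValue table and the nonConflictJobs array built in the DP implementation above:

//Walk back through the DP table and print the jobs that were chosen.
static void printSolution(Job[] jobs, int[] nonConflictJobs,
                          int[] optimalValue, int j) {
    if (j < 0) return;
    int valueWithJob = jobs[j].getValue()
            + (nonConflictJobs[j] != -1 ? optimalValue[nonConflictJobs[j]] : 0);
    if (j == 0 || valueWithJob > optimalValue[j - 1]) {
        System.out.println("Job " + j);  //job j is part of the optimal schedule
        printSolution(jobs, nonConflictJobs, optimalValue, nonConflictJobs[j]);
    } else {
        printSolution(jobs, nonConflictJobs, optimalValue, j - 1);
    }
}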

Please share if there is something wrong or missing. If you are interested in contributing to algorithms and me, please drop a mail

Interval partitioning problem


In continuation of greedy algorithm problems (earlier we discussed event scheduling and coin change), we will discuss another problem today. The problem is known as the interval partitioning problem and it goes like this: there are n lectures to be scheduled and there are a certain number of classrooms. Each lecture has a start time si and a finish time fi. The task is to schedule all lectures in the minimum number of classrooms, with no more than one lecture in a classroom at any given point of time. For example, one straightforward schedule of these nine lectures uses 4 classrooms, as shown below.

(Figure: nine lectures scheduled in 4 classrooms)

However, with some tweaks we can manage to schedule the same nine lectures in three classrooms.

So, the second solution optimizes the output.

Another variant of this problem is: you want to schedule jobs on a computer. Requests take the form (si, fi), meaning a job that runs from time si to time fi. You get many such requests, and you want to process as many as possible, but the computer can only work on one job at a time.

Interval partitioning : Line of thought

The first thing to note about the interval partitioning problem is that we have to minimize something, in this case the number of classrooms. Which template does this problem fit into? Greedy, maybe? Yes, it fits the greedy algorithm template, where we take decisions based on a local optimum.

Before discussing the solution, be clear about what the resource is and what needs to be minimized. In this problem, the resource is the classroom, and the total number of classrooms needs to be minimized by arranging lectures in a certain order.

There are a few natural orders in which we can arrange the lectures (or, for generality, tasks): first, by finish time; second, by start time; third, by smallest duration; fourth, by minimum number of conflicting jobs. Which one to choose?
You can come up with counterexamples showing that arranging lectures by finish time, smallest duration or minimum number of conflicting jobs does not lead to the optimal solution. So, let's pick lectures based on earliest start time. At any given point in time, pick the lecture with the least start time not yet scheduled and assign it to the first available classroom. Will it work? Sure it does. When all lectures have been assigned, the total number of classrooms used is the minimum number required.

Interval partitioning algorithm

1. Sort all lectures by start time in ascending order.
2. Number of classrooms = 0
3. While there are lectures to be scheduled:
   3.1 Take the first lecture not yet scheduled.
   3.2 If there is already a classroom free at the lecture's start time,
       assign the lecture to that classroom.
   3.3 If not, allocate a new classroom:
       number of classrooms = number of classrooms + 1
4. Return number of classrooms.

Before jumping into the code, let’s discuss some data structures which we can use to implement this algorithm.

Understand that we have to find a compatible classroom for a lecture. Among the allocated classrooms, we need to check whether the finish time of the last lecture in a classroom is less than or equal to the start time of the new lecture. If yes, the classroom is compatible; if there is no such classroom, allocate a new one. If we store the allocated classrooms in such a way that we always get the classroom with the least finish time of its last scheduled lecture, we can safely say that if this classroom is not compatible, none of the others will be. (Why? Because every other classroom's last lecture finishes even later.) One way is to sort the list of classrooms every time we assign a lecture, so that the first classroom is the one with the least finish time. A sort costs O(n log n), and doing it for all n intervals makes the overall complexity O(n² log n).

We are sorting just to find the minimum end time across all classrooms. This can easily be achieved with a min heap or priority queue keyed on the finish time of the last lecture in each classroom. Every time the finish time of the last lecture of a classroom changes, the heap is readjusted, and the root gives us the classroom with the minimum finish time.

  • To determine whether lecture j is compatible with some classroom, compare its start time sj to the key of the minimum classroom k in the priority queue (a peek operation).
  • When lecture j is added to classroom k, increase the key of classroom k to its finish time fj (a remove followed by an add in the implementations below).

Now that we have the algorithm and the data structure to implement it with, let's code it.

The PriorityQueue implementation is given below:

import heapq
# Min-heap based priority queue; pop returns the item with the least priority
class PriorityQueue:
    def __init__(self):
        self._queue = []
        self._index = 0  # tie-breaker so items with equal priority pop in insertion order

    def push(self, item, priority):
        heapq.heappush(self._queue, (priority, self._index, item))
        self._index += 1

    def pop(self):
        # return None when the queue is empty
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[-1]

Classroom class implementation

class Classroom:
	def __init__(self, number, finish_time):
		self.class_num = number
		self.finish_time = finish_time
	def __repr__(self):
		return 'Classroom({!r})'.format(self.class_num)

Interval partitioning problem : Implementation

from PriorityQueue import PriorityQueue
from Classroom import Classroom

# each job is (id, start_time, finish_time); the list is pre-sorted by start time
jobs = [(1, 930, 1100),
        (2, 930, 1300),
        (3, 930, 1100),
        (5, 1100, 1400),
        (4, 1130, 1300),
        (6, 1330, 1500),
        (7, 1330, 1500),
        (8,1430,1700),
        (9, 1530, 1700),
        (10, 1530, 1700)
]

def find_num_classrooms():
    num_classrooms = 0
    priority_queue = PriorityQueue()

    for job in jobs:
        # pop the classroom whose last lecture finishes earliest
        classroom = priority_queue.pop()
        if classroom is None:
            # no classroom allocated yet, so allocate the first one
            num_classrooms += 1
            priority_queue.push(Classroom(num_classrooms, job[2]), job[2])
        else:
            # check if the last lecture in this classroom finishes
            # before the start time of the current lecture
            if classroom.finish_time <= job[1]:
                classroom.finish_time = job[2]
                priority_queue.push(classroom, job[2])
            else:
                # the earliest-finishing classroom conflicts, so every
                # classroom conflicts; allocate a new one
                num_classrooms += 1
                # push the popped classroom back with its own finish time
                priority_queue.push(classroom, classroom.finish_time)
                # push the new classroom holding the current lecture
                priority_queue.push(Classroom(num_classrooms, job[2]), job[2])

    return num_classrooms

print("Number of classrooms required: " + str(find_num_classrooms()))

Java Implementation

package com.company;

import java.util.*;

/**
 * Created by sangar on 24.4.18.
 */
public class IntervalPartition {

    public static int findIntervalPartitions(ArrayList<Interval> intervals){
        // min heap keyed on the end time of the last lecture in each classroom;
        // every entry in the queue represents one allocated classroom
        PriorityQueue<Interval> queue =
                new PriorityQueue<Interval>(intervals.size(), Comparator.comparing(p -> p.getEndTime()));

        for(Interval currentInterval : intervals) {
            if (queue.isEmpty()) queue.add(currentInterval);
            else {
                if (queue.peek().getEndTime() > currentInterval.getStartTime()) {
                    // even the earliest-finishing classroom conflicts,
                    // so all classrooms conflict; allocate a new one
                    queue.add(currentInterval);
                } else {
                    // reuse the earliest-finishing classroom
                    queue.remove();
                    queue.add(currentInterval);
                }
            }
        }
        // number of entries equals the number of classrooms allocated
        return queue.size();
    }

    public static void main(String args[] ) throws Exception {
        ArrayList<Interval> intervals = new ArrayList<>();

        intervals.add(new Interval(930,1100));
        intervals.add(new Interval(930,1300));
        intervals.add(new Interval(930,1100));
        intervals.add(new Interval(1130,1300));
        intervals.add(new Interval(1100,1400));
        intervals.add(new Interval(1330,1500));
        intervals.add(new Interval(1330,1500));
        intervals.add(new Interval(1430,1700));
        intervals.add(new Interval(1530,1700));

        Collections.sort(intervals, Comparator.comparing(p -> p.getStartTime()));

        int minimumClassRooms = findIntervalPartitions(intervals);
        System.out.println(minimumClassRooms);
    }
}
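For the nine intervals added in main, this prints 3, matching the three-classroom schedule discussed earlier.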

This algorithm takes overall time of O(n log n), dominated by sorting the jobs on start time. The total number of priority queue operations is O(n), as we have only n lectures to schedule and each lecture causes one push and one pop.

There is another method, using binary search, which can be used to solve this problem. As per the problem statement, we have to find the minimum number of classrooms to schedule n lectures. What is the maximum number of classrooms required? It is n, the number of lectures, when all lectures conflict with each other.
The minimum number of classrooms will be 0 when there is no lecture to be scheduled. Now we know the range of candidate answers: (0, n). How can we find the minimum?

The basic idea is that if we can schedule all n lectures in m rooms, then we can definitely schedule them in m+1 or more rooms, so the minimum number of rooms required is m or less. In this case, we can safely discard all candidate solutions from m+1 to n (remember, n is the maximum number of classrooms).
Conversely, if we cannot schedule the lectures in m rooms, there is no way we can schedule them in fewer than m rooms. Hence we can discard all candidate solutions less than or equal to m.

How can we select m? We can select it as the middle of the range (0, n), and try to fit all lectures into m rooms under the condition that no lectures conflict. Keep track of the end time of the last lecture in each classroom. If no classroom has an end time less than or equal to the start time of the new lecture, allocate a new classroom. If the total number of classrooms used is less than or equal to m, discard m+1 to n; if it is more than m, discard 0 to m and search m+1 to n.

package com.company;

import java.util.*;

/**
 * Created by sangar on 24.4.18.
 */
public class IntervalPartition {

    public static boolean predicate(ArrayList<Interval> intervals, long candidateClassRooms){
        // run the same greedy allocation as before and check whether the
        // number of classrooms it needs fits within the candidate count
        PriorityQueue<Interval> queue =
                new PriorityQueue<Interval>(intervals.size(), Comparator.comparing(p -> p.getEndTime()));

        for(Interval currentInterval : intervals){
            if(queue.isEmpty()) queue.add(currentInterval);
            else{
                if(queue.peek().getEndTime() > currentInterval.getStartTime()){
                    queue.add(currentInterval);
                }
                else{
                    queue.remove();
                    queue.add(currentInterval);
                }
            }
        }

        return queue.size() <= candidateClassRooms;
    }

    public static void main(String args[] ) throws Exception {
        ArrayList<Interval> intervals = new ArrayList<>();

        intervals.add(new Interval(930,1100));
        intervals.add(new Interval(930,1300));
        intervals.add(new Interval(930,1100));
        intervals.add(new Interval(1130,1300));
        intervals.add(new Interval(1100,1400));
        intervals.add(new Interval(1330,1500));
        intervals.add(new Interval(1330,1500));
        intervals.add(new Interval(1430,1700));
        intervals.add(new Interval(1530,1700));

        long low = 0;
        long high = intervals.size();

        Collections.sort(intervals, Comparator.comparing(p -> p.getStartTime()));

        while(low < high){
            long mid  = low + ( (high - low) >> 1);

            if(predicate(intervals, mid)){
                high = mid;
            }else{
                low = mid+1;
            }
        }
        System.out.println(low);
    }
}
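On the same nine intervals, the binary search converges to low = 3 and prints 3 as well.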

Each predicate evaluation runs the greedy allocation in O(n log n), and the binary search calls it O(log n) times, so the overall complexity is O(n log² n), with additional space of O(c), where c is the number of classrooms required.

Please share your views and suggestions in the comments, and feel free to share and spread the word. If you are interested in sharing your knowledge with learners across the world, please write to us at communications@algorithmsandme.com