Print paths in a binary tree

We have learned various kinds of traversals of a binary tree, like inorder, preorder and postorder. The paths in a binary tree problem also requires traversal of the binary tree, like every other problem on binary trees. The problem statement is:

Given a binary tree, print all paths in that binary tree

What is a path in a binary tree? A path is the sequence of nodes from the root to any leaf of the tree. By definition, a leaf node is a node which has neither a left nor a right child. For example, one of the paths in the binary tree below is 10, 7, 9.

Paths in a binary tree

Paths in a binary tree: thoughts

It is clear from the problem statement that we have to start with the root and go all the way to the leaf nodes. The question is: do we need to start from the root to access each path in the binary tree? Well, no. Paths have common nodes in them. Once we reach the end of a path (a leaf node), we just move up one node at a time and explore other paths from the parent node. Once all those paths are explored, we go up one more level and explore all paths from there.

This is essentially a postorder processing of the binary tree: we finish the paths in the left subtree of a node before exploring paths in the right subtree, and the node is taken off the path only after both subtrees are done. We check whether the node is a leaf before going into the left or right subtree, and we add the current node to the path built so far. Once we have explored the left and right subtrees, the current node is removed from the path.

Let’s take an example and see how it works. Below is the tree for which we have to print all the paths.

Paths in a binary tree

First of all, our list of paths is empty. We have to build a current path; we start from the root node, which is node(10). Add node(10) to the current path. As node(10) is not a leaf node, we move towards the left subtree.

print paths in a binary tree

node(7) is added to the current path. It is not a leaf node either, so we again go down the left subtree.

paths in binary search tree

node(8) is added to the current path and this time, it is a leaf node. We put the entire path into the list of paths or print the entire path based on how we want the output.

paths in a binary tree

At this point, we take out node(8) from the current path and move up to node(7). As we have traversed the left subtree of node(7), we will now traverse the right subtree of node(7).

paths in binary search tree

node(9) is added now to the current path. It is also a leaf node, so again, put the path in the list of paths. node(9) is moved out of the current path.

Now that the left and right subtrees of node(7) have been traversed, we remove node(7) from the current path too.

At this point, we have only one node in the current path, which is node(10). We have already traversed its left subtree. So, we start traversing the right subtree; next, we visit node(15) and add it to the current path.

node(15) is not a leaf node, so we move down the left subtree. node(18) is added to the current path. node(18) is a leaf node too. So, add the entire path to the list of paths. Remove node(18) from the current path.

We go next to the right subtree of the node(15), which is the node(19). It is added to the current path. node(19) is also a leaf node, so the path is added to the list of paths.

Now that the left and right subtrees of node(15) are traversed, it is removed from the current path, and so is node(10).

Print paths in a binary tree: implementation

package com.company.BST;

import java.util.ArrayList;

/**
 * Created by sangar on 21.10.18.
 */
public class PrintPathInBST {
    public void printPath(BinarySearchTree tree){
        ArrayList<TreeNode> path  = new ArrayList<>();
        this.printPathRecursive(tree.getRoot(), path);
    }

    private void printPathRecursive(TreeNode root,
									ArrayList<TreeNode> path){
        if(root == null) return;

        path.add(root);

        //If node is leaf node
        if(root.getLeft() == null && root.getRight() == null){
            path.forEach(node -> System.out.print(" " 
							+ node.getValue()));
            path.remove(path.size()-1);
            System.out.println();
            return;
        }

        /* Not a leaf node (it was already added to the path above),
		continue traversing the left and right subtrees */
        printPathRecursive(root.getLeft(),path);
        printPathRecursive(root.getRight(), path);

		//Remove the root node from the path
        path.remove(path.size()-1);
    }
}

Test cases

package com.company.BST;
 
/**
 * Created by sangar on 10.5.18.
 */
public class BinarySearchTreeTests {
    public static void main (String[] args){
        BinarySearchTree binarySearchTree = new BinarySearchTree();
 
        binarySearchTree.insert(7);
        binarySearchTree.insert(8);
        binarySearchTree.insert(6);
        binarySearchTree.insert(9);
        binarySearchTree.insert(3);
        binarySearchTree.insert(4);
 
        PrintPathInBST printer = new PrintPathInBST();
        printer.printPath(binarySearchTree);
    }
}

Tree node definition

package com.company.BST;

/**
 * Created by sangar on 21.10.18.
 */
public class TreeNode<T> {
    private T value;
    private TreeNode left;
    private TreeNode right;

    public TreeNode(T value) {
        this.value = value;
        this.left = null;
        this.right = null;
    }

    public T getValue(){
        return this.value;
    }
    public TreeNode getRight(){
        return this.right;
    }
    public TreeNode getLeft(){
        return this.left;
    }

    public void setValue(T value){
        this.value = value;
    }

    public void setRight(TreeNode node){
        this.right = node;
    }

    public void setLeft(TreeNode node){
        this.left = node;
    }
}

The complexity of the above algorithm to find all paths in a binary tree is O(n) for the traversal; printing every path adds the sum of all path lengths on top of that, which can be up to O(n log n) for a balanced tree.

Please share if there is something wrong or missing. If you are preparing for Amazon, Microsoft or Google interviews, please sign up for interview material for free.

First non repeated character in string

Given a string, find the first non-repeated character in the string. For example, if the string is abcbdbdebab, the first non-repeating character would be c. Even though e is also non-repeating in the string, c is the output as it is the first non-repeating character.

Non repeating character : thoughts

What does it mean for a character to be non-repeating? It means the character occurs in the string only once. How about we scan the string and find the count for each character, storing character and count in a map as a key-value pair?
Now that we have a <character, count> key-value pair for all unique characters in the string, how can we find the first non-repeating character? Refer back to the original string: scan the string again and, for each character, check the corresponding count; if it is 1, return that character.

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {

    public char firstNonRepeatingCharacter(String s){
        //Best to discuss it with interviewer, what should we return here?
        if(s == null) return ' ';

        if(s.length() == 0) return ' ';

        //Keep the map local so that repeated calls do not accumulate stale counts
        HashMap<Character, Integer> characterCount = new HashMap<>();

        for (char c: s.toCharArray()){
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        for (char c: s.toCharArray()) {
            if(characterCount.get(c) == 1) return c;
        }

        return ' ';
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingCharExists() {
        String s = "abcbcbcbcbad";
        assertEquals('d', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacter(s));
    }

    @Test
    public void firstNonRepeatingCharWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacter(null));
    }
}

The complexity of this method to find the first non-repeating character in a string is O(n), along with a space complexity of O(1) for the character-to-count map.

There is always some confusion about the space complexity of the above method: since up to 256 characters may be stored, should that not be counted as space? It should, but in asymptotic notation this space is independent of the size of the input, so the space complexity remains O(1).

One more thing: even though the time complexity is O(n), the input string is scanned twice, first to get the count of each character and second to find the first non-repeating character.

Optimization

Consider a case when the string is very large, with millions of characters, most of them repeated; the above solution may become slow in the last step, where we look for a character with count 1 in the map. How can we avoid scanning the string a second time?
How about we store some information with each character in the map along with the count, so that we can figure out whether the character is the first non-repeating one or not?
Or we can keep two maps: one stores the count and the other stores the first index of the character.

Once we have created the two maps as mentioned above, go through the first map and find all characters with count 1. Among these characters, check which one has the minimum index in the second map and return that character.

The complexity of the algorithm remains the same; however, a second scan of the string is no longer required. In other words, the second pass is now independent of the size of the input, as it depends only on the size of the first map, which is bounded by 256, the number of possible unique 8-bit characters.

Find first non repeating character : Implementation

package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 4.10.18.
 */
public class FirstNonRepeatingChar {
    public char firstNonRepeatingCharacterOptimized(String s){
        HashMap<Character, Integer> characterCount = new HashMap<>();
        HashMap<Character, Integer>characterIndex = new HashMap<>();
        //Best to discuss it with interviewer, what should we return here?
        if(s == null) return ' ';

        if(s.length() == 0) return ' ';

        for (int i=0; i<s.length(); i++){
            char c  = s.charAt(i);
            if(!characterCount.containsKey(c)){
                characterCount.put(c,1);
                characterIndex.put(c,i);
            }
            else {
                characterCount.put(c, characterCount.get(c) + 1);
            }
        }
        char nonRepeatedCharacter = ' ';
        int prevIndex = s.length();
        for (char c : characterCount.keySet()) {
            if(characterCount.get(c) == 1 
			&& characterIndex.get(c) < prevIndex){
                prevIndex = characterIndex.get(c);
                nonRepeatedCharacter = c;
            }
        }
        return nonRepeatedCharacter;
    }
}
package test;

import com.company.FirstNonRepeatingChar;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 28.8.18.
 */
public class FirstNonRepeatingCharTest {

    FirstNonRepeatingChar tester = new FirstNonRepeatingChar();

    @Test
    public void firstNonRepeatingOptimizedCharExists() {
        String s = "aebcbcbcbcbad";
        assertEquals('e', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedDoesNotExists() {
        String s = "abcbcbcbcbadd";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithEmptyString() {
        String s = "";
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(s));
    }

    @Test
    public void firstNonRepeatingCharOptimizedWithNull() {
        assertEquals(' ', tester.firstNonRepeatingCharacterOptimized(null));
    }
}

Please share if there is anything wrong or missing. If you are interested in taking personalized coaching sessions from our expert teachers, please sign up on the website and get the first session free.


Difference between array and linked list

In the last post, Linked list data structure, we discussed the basics of linked lists, and I promised to go into detail about the difference between an array and a linked list. Before going into this post, I want to make sure you understand that there is no such thing as one data structure being better than the other. Based on your requirements and use cases, you choose one or the other. It depends on the most frequent operation your algorithm will perform on it during its lifetime. That’s why interviews have a data structure round: to check whether you can choose the correct one for the problem.

What is an array?
An array is a linear, sequential and contiguous collection of elements which can be addressed using an index.

What is a linked list?
A linked list is a linear, sequential and non-contiguous collection of nodes, where each node stores a reference to the next node. To understand more, please refer to Linked list data structure.

Difference between arrays and linked list

Static Vs dynamic size

The size of an array is defined statically at compile time, whereas a linked list grows dynamically at run time based on need. Consider a case where you know the maximum number of elements the algorithm will ever handle; then you can confidently declare it as an array. However, if you do not know, a linked list is better. There is a catch: what if there is only a rare chance that the number of elements will reach the maximum, and most of the time it will be far less than the maximum? In that case, we would unnecessarily allocate extra memory for the array, which may or may not be used.

Memory allocation

An array is given contiguous memory in the system. So, if you know the address of any element of the array, you can access any other element based on its position (index).

Statically allocated contiguous memory
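
As a rough sketch (not from the original post; assuming a C array of int), the address of any element is computed directly from the base address with simple arithmetic, which is why indexed access never has to touch the elements in between:

#include <stdio.h>

int main(){
	int a[5] = {10, 20, 30, 40, 50};
	/* The address of a[i] is the base address plus i * sizeof(int),
	   so no traversal is needed to reach the i-th element. */
	printf("base address = %p, address of a[3] = %p\n", (void *)a, (void *)&a[3]);
	printf("value at index 3 = %d\n", a[3]);
	return 0;
}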

Linked lists are not stored contiguously in memory; the nodes are scattered around. So, given a node, you may traverse forward in the linked list (using the next node reference), but you cannot access the nodes prior to it.

Dynamically allocated non-contiguous memory

Contiguous allocation of memory requires sufficient memory beforehand for the array to be stored; for example, if we want to store 20 integers in an array, we would require an 80-byte contiguous chunk of memory. However, with a linked list we can start with 8 bytes and request more memory as and when required, wherever it may be. Contiguous allocation of memory also makes it difficult to resize an array: we have to look for a different chunk of memory which fits the new size and move all existing elements to that location. Linked lists, on the other hand, are dynamically sized and can grow without relocating the existing elements.
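
To make the resizing cost concrete, here is a minimal C sketch (illustrative only, not from the original post): growing an array means asking for a bigger chunk, and realloc may have to move the whole block and copy every existing element, which is exactly the relocation described above.

#include <stdio.h>
#include <stdlib.h>

int main(){
	int n = 4;
	int *arr = malloc(n * sizeof(int));
	if(!arr) return 1;
	for(int i = 0; i < n; i++) arr[i] = i;

	/* Resize to double the capacity. realloc may allocate a new chunk
	   elsewhere and copy all existing elements into it. */
	int *bigger = realloc(arr, 2 * n * sizeof(int));
	if(bigger) arr = bigger;

	printf("arr[3] after resize = %d\n", arr[3]);
	free(arr);
	return 0;
}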

Memory requirement

It’s good to have non-contiguous memory then? It comes with a cost. Each node of a linked list has to store a reference to the next node in memory. This adds an extra payload of 4 (or 8) bytes to each node. Arrays, on the other hand, do not require this extra payload. You have to trade off the extra space against the advantages you are getting. Also, sometimes spending extra space is better than having cumbersome shifting, adding and deleting operations on an array. Or the value stored in a node is big enough to make these few bytes negligible in the analysis.

Operation efficiency

We perform operations on a data structure to get some output. There are four basic operations we should consider: read, search, insert/update and delete.

Read on an array is O(1), because you can directly access any element of the array given its index. By O(1), we mean that a read on an array does not depend on the size of the array.
Whereas the time complexity of read on a linked list is O(n), where n is the number of nodes. So, if you have a problem which requires many random reads, an array will outweigh a linked list.
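
A minimal sketch of that difference (the Node type mirrors the one used in the linked list post; the helper names are only for illustration): reading the i-th element of an array is a single step, while reading the i-th node of a linked list means following i next references from the head.

typedef struct Node {
	int data;
	struct Node *next;
} Node;

/* O(1): index arithmetic on contiguous memory */
int readArray(int arr[], int i){
	return arr[i];
}

/* O(n): must walk the chain of next pointers from the head */
int readList(Node *head, int i){
	while(i-- > 0) head = head->next;
	return head->data;
}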

Given the contiguous memory allocation of an array, there are optimized algorithms like binary search to search for an element in a sorted array with complexity O(log n). Search on a linked list, on the other hand, requires O(n).

Insert on an array is O(1) again, if we are writing within the size of the array. In a linked list, the complexity of insert depends on where you want to put the new element. If the insert happens at the head, it is O(1); on the other hand, if the insert happens at the end, it’s O(n).

Insert node at start of linked list
Insert node at the tail of linked list

Update here means changing the size of the array or linked list by adding one more element. For an array this is a costly operation, as it requires reallocation of memory and copying all existing elements over. It does not matter whether you add the element at the end or the start; the complexity remains O(n).
For a linked list it varies: to add at the end it’s O(n), to add at the head it’s O(1).
In the same vein, delete on an array requires movement of all elements if the first element is deleted, hence a complexity of O(n). However, delete on a linked list is O(1) if it’s the head, and O(n) if it’s the tail.

To see the difference between O(1) and O(n), below graph should be useful.

Complexity analysis graph

Key differences between arrays and linked lists are as follows:

  • Arrays are really bad at insert and delete operations due to internal reallocation and shifting of memory.
  • Arrays are statically sized at compile time.
  • Memory allocation is contiguous, which makes accessing elements easy without any additional pointers; you can jump around the array without accessing all the elements in between.
  • Linked lists have almost the same complexity when insert and delete happen at the end, however no memory shuffling happens.
  • Search on a linked list is bad; it usually requires a scan with O(n) complexity.
  • Linked lists are dynamically sized at run time.
  • Memory allocation is non-contiguous, and an additional pointer is required in each node to store the reference to the neighbouring node. You cannot jump around in a linked list.

Please share if there is something wrong or missing. If you want to contribute to the website, please reach out to us at communications@algorithmsandme.com

Linked list data structure

A linked list is a very important data structure to understand, as a lot of problems are asked on linked lists in Amazon, Microsoft and Google interviews. Today, we will understand the basics of the linked list data structure and its implementation.

A linked list represents a linear sequence of elements. Each element is connected to the next element using a chain of references. Another data structure which stores a linear sequence of items is the array. There are some advantages and use cases where the linked list way of storing a sequence is more efficient than an array; I will cover that in the next post: Arrays Vs Linked lists.

In the last paragraph, I emphasized that a linked list is a linear data structure. In a linear data structure, there is a sequence and order in which elements are inserted, arranged and traversed. In order to go to the tail of a linked list, we have to go through all of the nodes.

Linear data structure: elements can be traversed only in one order


Non-linear data structures are the ones where elements are not arranged or traversed in a specific order. One element may be connected to many others, hence we cannot traverse them in the same order every time. Examples of non-linear data structures are maps, dictionaries, trees and graphs.

Non-linear data structure: nodes cannot always be traversed in one order

Linked list implementation

A linked list consists of nodes, any number of them. Each node contains two things: first, the value of the node, which can be of any type (integer, string, or another user-defined type); second, a reference which points to the next node in the linked list. A node can be declared as follows:

typedef struct Node {
	int data;
	struct Node * next;
} Node;
Node structure
Linked list

What happens if the node is the last node in the linked list? At the last node, the next pointer of the node points to null. It’s very important to understand this bit, as this condition is used in almost every problem you have to solve on linked lists.

A linked list is a dynamic data structure. By dynamic data structure, we mean that its size is not defined at compile time, but at run time. Every time a new node is added to the linked list, a new memory location is allocated and the previous node’s next pointer is made to point to the new node.

Operations of linked list

  • Adding a node at the end of the list
    There are three basic steps to add a node at the end of a linked list:
  1. Check if there is already a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Scan through the linked list using the next pointer and reach the last node.
    2. Create a new node, and point the next pointer of the last node to this new node.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);

	//find the last node
	Node *currentNode = *headRef;
	while(currentNode && currentNode->next != NULL){
		currentNode = currentNode->next;
	}
	if(currentNode){
		currentNode->next = newNode;
	}
	else{
		//Change headRef to point to new head.
		*headRef = newNode;
	}
}

Complexity of adding a node at the end of a linked list is O(n).

  • Insert node at head of list
    In this case too, we allocate a new node; however, this time we do not have to scan the entire list. Note that every time we add a node at the head, the head of the list changes.
  1. Check if there is already a node.
    1. If not, create a new node and return it as the head of the linked list.
  2. If there is a node,
    1. Create a new node, and point the next pointer of the new node to the current head.
    2. Return the new node as the head pointer.
Node * createNode(int val){
	Node * newNode = (Node *)malloc(sizeof(Node));
	if(newNode){
		newNode->data = val;
		newNode->next = NULL;
	}
	return newNode;
}

void addNode(Node **headRef, int value){
	//create new node
	Node *newNode = createNode(value);
	newNode->next = *headRef;
	*headRef = newNode;
}

Linked list data structure problems

It’s very important to understand that a linked list is a recursive data structure. The base case is a linked list with no node, represented by NULL. Every problem on a linked list can be solved using the template: process one node, and then recursively process the remaining linked list.

In programming terms, a linked list is divided into two parts, head and tail. The node being processed is called the head and the rest of the linked list is the tail. The tail has exactly the same structure as the original list.

Problems like merging linked lists, reversing a linked list, or finding the length of a linked list can all be solved using the same template of processing one node and then recursively calling the function on the remaining list, as the sketch below shows.
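
As a small illustration of this template (a sketch using the Node structure defined above, not code from the original post), the length of a linked list is: 0 for an empty list (NULL), otherwise 1 for the head plus the length of the tail.

/* Base case: an empty list has length 0.
   Recursive case: process one node (count it), then recurse on the tail. */
int length(Node *head){
	if(head == NULL) return 0;
	return 1 + length(head->next);
}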

Types of linked list

There are three types of linked lists:
1. Singly linked list
A singly linked list contains nodes with data and one reference, next, which points to the next node in the sequence of nodes. The next pointer of the last node points to null. In a singly linked list you can traverse in only one direction.

Singly linked list

2. Doubly linked list
In a doubly linked list, each node contains two links: previous, which points to the node before the current node, and next, which points to the next node. The previous pointer of the first node and the next pointer of the last node point to null. In a doubly linked list, you can traverse in both directions. The two references add weight, as extra memory is required.

Doubly linked list
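
A node of a doubly linked list would look something like this in C (a sketch along the lines of the singly linked Node definition above):

typedef struct DNode {
	int data;
	struct DNode *prev;  /* reference to the previous node, NULL at the head */
	struct DNode *next;  /* reference to the next node, NULL at the tail */
} DNode;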

3. Circular linked list
In a circular linked list, the next pointer of the last node points to the first node. A circular linked list can be either a singly or a doubly linked list.

Circular doubly linked list

This was all for the basics of linked lists. I know problems on them are hard to solve, but if you look at all the problems, they boil down to one thing: an understanding of the node and of how recursion can be used. In the next posts, we will solve many of these problems and see how we can use these basics.

Please share if there is something wrong or missing. If you are interested in contributing to the website and sharing your knowledge with thousands of users across the world, please reach out to us at communications@algorithmsandme.com

Inorder predecessor in binary search tree

What is an inorder predecessor in a binary tree? The inorder predecessor is the node which is traversed just before the given node in an inorder traversal of the binary tree. In a binary search tree, it is the largest value smaller than the given node. For example, the inorder predecessor of node(6) in the tree below is 5, and for node(10) it is 6.

inorder predecessor

If the node is the leftmost node in the BST, that is, the least node, then there is no inorder predecessor for that node.

Inorder predecessor : Thoughts

To find the inorder predecessor, the first thing to find is the node itself. As we know, in an inorder traversal the root node is visited after the left subtree. A node can only be the predecessor of a given node that lies on its right side; in other words, the predecessor lies to the left of the given node.

Let’s come up with examples and see what algorithm works. First case: if the given node is the leftmost leaf node of the tree, there is no inorder predecessor, and in that case we return NULL. For example, the predecessor of node(1) is NULL.

predecessor in BST

What if the node has a left subtree? In that case, the maximum value in the left subtree is the predecessor of the given node. We can find that maximum by going deep down the right side of the left subtree until the right child is NULL, and then returning the last node. For example, the predecessor of node(10) is 6.

inorder predecessor

What are the other cases? Another case is that the node does not have a left subtree and it is also not the leftmost leaf node. Then the node where we last moved down into a right subtree is the inorder predecessor. While moving down the tree, keep track of that node, as it may be the solution. The predecessor of node(12) will be 10, as that is where we last moved to a right subtree. Note that we update the predecessor candidate only while moving down a right subtree.

Algorithm to find inorder predecessor

  1. Start with the root: current = root, predecessor = NULL.
  2. If node.value > current.value, then predecessor = current, current = current.right.
  3. If node.value < current.value, then current = current.left.
  4. If node.value == current.value and node.left != null, then predecessor = maximum(current.left).
  5. Return predecessor.

Inorder predecessor: Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


/* This function return the maximum node in tree rooted at node root */
Node *findMaximum(Node *root){
    if(!root)
        return NULL;
 
    while(root->right) root = root->right;
    return root;
}
/* This function implements the logic described in algorithm to find inorder predecessor
of a given node */
Node *inorderPredecessor(Node *root, int K){
 
    Node *predecessor 	= NULL;
    Node *current  		= root;
    
    if(!root)
        return NULL;
 
    while(current && current->value != K){
        /* If node value is greater than the key we are looking for, take the left turn; no need to update the predecessor pointer */
        if(current->value >K){
            current= current->left;
        }
        /* If node value is less than the key we are looking for, then go to the right subtree.
        Also, when we move right, update the predecessor pointer to keep track of the last right turn */
        else{
            predecessor = current;
            current = current->right;
        }
    }
    /* Once we reach the node for which the inorder predecessor is to be found,
    check if it has a left subtree; if yes, find the maximum in that left subtree and return that node.
    Else the node where we last took a right turn is already stored in the predecessor pointer and will be returned */
    if(current && current->left){
        predecessor = findMaximum(current->left);
    }
    return predecessor;
}
Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *predecessor = inorderPredecessor(root, 40);
  printf("\n Inorder successor node is : %d ",predecessor ? predecessor->value: 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder predecessor is O(log n) in an almost balanced binary search tree. If the tree is skewed, we have a worst case complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Inorder successor in binary search tree

What is an inorder successor in a binary tree? The inorder successor is the node which is traversed just after the given node in an inorder traversal of the binary tree. In a binary search tree, it is the next bigger value after the node. For example, the inorder successor of node(6) in the tree below is 10, and for node(12) it is 14.

inorder successor

If the node is the rightmost node, that is, the greatest node in the BST, then there is no inorder successor for that node.

Inorder successor : Thoughts

To find the inorder successor, the first thing to find is the node itself. As we know, in an inorder traversal the root node is visited after the left subtree. A node can only be the successor of a given node that lies on its left side; in other words, the successor lies to the right of the given node.

Let’s come up with examples and see what algorithm works. First case: if the node is the rightmost node of the tree, there is no inorder successor, and in that case we return NULL. For example, the successor of node(16) is NULL.

What if the node has a right subtree? In that case, the minimum value in the right subtree is the successor of the given node. We can find that minimum by going deep down the left side of the right subtree until the left child is NULL, and then returning the last node. For example, the successor of node(5) is 6.

inorder successor

What are the other cases? Another case is that the node does not have a right subtree. Then the node where we last moved down into a left subtree is the inorder successor. While moving down the tree, keep track of that node, as it may be the solution. The successor of node(7) will be 10, as that is where we last moved to a left subtree. Note that we update the successor candidate only while moving down a left subtree.

Algorithm to find inorder successor

  1. Start with root, current = root, successor = NULL.
  2. If node.value < current.value, then successor = current, current = current.left.
  3. If node.value > current.value, current = current.right.
  4. If node.value == current.value and node.right != null, successor = minimum(current.right).
  5. Return successor

Inorder successor : Implementation

#include<stdio.h>
#include<stdlib.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};

typedef struct node Node;


//this function finds the minimum node in given tree rooted at node root
Node * findMinimum(Node *root){
    if(!root)
        return NULL;
   // Minimum node is left most child. hence traverse down till left most node of tree.
    while(root->left) root = root->left;
   // return the left most node
    return root;
}
/* This function implements the logic described in algorithm to find inorder successor
of a given node */
Node *inorderSuccessor(Node *root, Node *node){
 
    Node *successor = NULL;
    Node *current  = root;
    if(!root)
        return NULL;
 
    while(current && current->value != node->value){
        /* If node value is greater than the node we are looking for, then go to the left subtree.
        Also, when we move left, update the successor pointer to keep track of the last left turn */
        
        if(current->value > node->value){
            successor = current;
            current= current->left;
        }
        /* Else take right turn and no need to update successor pointer */
        else
            current = current->right;
    }
    /* Once we reach the node for which the inorder successor is to be found,
    check if it has a right subtree; if yes, find the minimum in that right subtree and return that node.
    Else the node where we last took a left turn is already stored in the successor pointer and will be returned */
    if(current && current->right){
        successor = findMinimum(current->right);
    }
 
    return successor;
}


Node * createNode(int value){
  Node *newNode =  (Node *)malloc(sizeof(Node));
  
  newNode->value = value;
  newNode->right= NULL;
  newNode->left = NULL;
  
  return newNode;
  
}

Node * addNode(Node *node, int value){
  if(node == NULL){
      return createNode(value);
  }
  else{
     if (node->value > value){
        node->left = addNode(node->left, value);
      }
      else{
        node->right = addNode(node->right, value);
      }
  }
  return node;
}
 
/* Driver program for the function written above */
int main(){
  Node *root = NULL;
  int n = 0;
  //Creating a binary tree
  root = addNode(root,30);
  root = addNode(root,20);
  root = addNode(root,15);
  root = addNode(root,25);
  root = addNode(root,40);
  Node *node = root;
  root = addNode(root,37);
  root = addNode(root,45);
  
  Node *successor = inorderSuccessor(root, node);
  printf("\n Inorder successor node is : %d ",successor ? successor->value: 0);
  
  return 0;
}

The complexity of the algorithm to find the inorder successor is O(log n) in an almost balanced binary search tree. If the tree is skewed, we have a worst case complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute to website and share your knowledge with thousands of learners across the world, please reach out to us at communication@algorithmsandme.com

Iterative postorder traversal

In the last two posts, iterative inorder and iterative preorder traversal, we learned how a stack can be used to replace recursion and why a recursive implementation can be dangerous in a production environment. In this post, let’s discuss iterative postorder traversal of a binary tree, which is the most complex of all the traversals. What is postorder traversal? A traversal where the left and right subtrees are visited before the root is processed. For example, the postorder traversal of the tree below would be: [1,6,5,12,16,14,10]

iterative postorder traversal

Iterative postorder traversal  : Thoughts

Let’s look at the recursive implementation of postorder.

    private void postOrder(Node root){
        if(root == null) return;

        postOrder(root.left);
        postOrder(root.right);
        System.out.println(root.value);

    }

In this implementation we go into the left subtree and then into the right subtree, processing the root only at the end. Can you find the similarity of structure between the preorder and postorder implementations? Can we reverse the entire preorder traversal to get postorder traversal? Reversing a normal preorder gives us the right subtree, then the left subtree, and then the root node; however, the order expected is left subtree, right subtree and then the root.
Do you remember that in iterative preorder we pushed the left and right children onto the stack with the right child going in before the left? How about reversing that, so the left child is pushed before the right?

There is one more problem with just reversing the preorder. In preorder, a node is processed as soon as it is popped from the stack; the root node, for example, is the first node to be processed. However, in postorder, the root node is processed last. So we actually need the order of processing to be reversed as well. What better than another stack to store the reversed order of nodes to be processed?
All in all, we will be using two stacks: one to store the left and right children, and a second to store the processing order of the nodes.

  1. Create two stacks, s and out, and push the root node onto s.
  2. While stack s is not empty
    1. Pop from stack s: current = s.pop().
    2. Put current onto stack out.
    3. Push the left and right children of current onto stack s.
  3. Pop everything from the out stack and process it.

Postorder traversal with two stacks : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public void BinarySearchTree(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }

    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }
    private void postOrder(Node root){
       if(root == null) return;

       postOrder(root.left);
       postOrder(root.right);
       System.out.println(root.value);
    }

    public void postOrderTraversal(){
        postOrderIterative(root);
    }

    private void postOrderIterative(Node root){
        Stack<Node> out = new Stack<>();
        Stack<Node> s = new Stack<>();

        if(root == null) return;

        s.push(root);

        while(!s.empty()){
            Node current = s.pop();

            out.push(current);
            if(current.left != null) s.push(current.left);
            if(current.right != null) s.push(current.right);
        }

        while(!out.empty()){
            System.out.println(out.pop().value);
        }
    }
}

Complexity of iterative implementation is O(n) with additional space complexity of O(n).

Can we avoid using two stacks and do it with one stack? The problem with the root in postorder traversal is that it is visited three times: moving down from the parent, coming up from the left child, and coming up from the right child. When should the node be processed? Well, when we are coming up from the right child.

How can we keep track of how the current node was reached? If we keep a previous pointer, there are three cases:

  1. The previous node is the parent of the current node: we reached the current node while moving down from its parent, so keep moving down into its children.
  2. The previous node is the left child of the current node: it means we have visited the left child but not yet the right child, so move to the right child of the current node.
  3. The previous node is the right child of the current node: it means we have visited both the left and right children of the current node, so process the current node.

Let’s formulate  postorder traversal algorithm then.

  1. Push the root node onto stack s, set prev = null.
  2. Repeat the steps below while the stack is not empty (!s.empty()).
  3. current = s.peek(), look at the top of the stack without removing it.
  4. If (prev == null || prev.left == current || prev.right == current), we are moving down, then
    1. If current.left != null, push current.left onto the stack.
    2. Else if current.right != null, push current.right onto the stack.
    3. Else (current is a leaf), process current and pop it from the stack.
  5. If current.left == prev, i.e. moving up from the left child, then
    1. If current.right != null, push current.right onto the stack.
    2. Else process current and pop it from the stack.
  6. If current.right == prev, i.e. moving up from the right child,
    1. Process current and pop it from the stack.
  7. Set prev = current and repeat.

Iterative Postorder traversal : Implementation

package com.company.BST;

import java.util.Stack;

/**
 * Created by sangar on 22.5.18.
 */
public class BinarySearchTreeTraversal {

    private Node root;

    public void BinarySearchTree(){
        root = null;
    }

    public class Node {
        private int value;
        private Node left;
        private Node right;

        public Node(int value) {
            this.value = value;
            this.left = null;
            this.right = null;
        }
    }

    public void insert(int value){
        this.root =  insertNode(this.root, value);
    }
  
    private Node insertNode(Node root, int value){
        if(root == null){
            //if this node is root of tree
            root = new Node(value);
        }
        else{
            if(root.value > value){
                //If root is greater than value, node should be added to left subtree
                root.left = insertNode(root.left, value);
            }
            else{
                //If root is less than value, node should be added to right subtree
                root.right = insertNode(root.right, value);
            }
        }
        return root;
    }


    private void inorder(Node root){
            if(root == null) return;

            if(root.left != null) inorder(root.left);
            System.out.println(root.value);
            if(root.right != null) inorder(root.right);
        }

        private void preOrder(Node root){
            if(root == null) return;

            System.out.println(root.value);
            preOrder(root.left);
            preOrder(root.right);
        }

        private void postOrder(Node root){
            if(root == null) return;

            postOrder(root.left);
            postOrder(root.right);
            System.out.println(root.value);

        }
        public void postOrderTraversal(){
          //  postOrder(root);
            postOrderIterative2(root);
            //postOrderIterative(root);
        }

        private void postOrderIterative2(Node root){
            Node prev = null;
            Stack<Node> s = new Stack<>();

            if(root == null) return;

            s.push(root);

            while(!s.empty()){
                Node current  = s.peek();
                if(prev == null || ( prev.left == current || prev.right == current )){
                    if(current.left != null) s.push(current.left);
                    else if(current.right != null) s.push(current.right);
                }
                else if(prev == current.left){
                    if(current.right != null) s.push(current.right);
                }else{
                    System.out.println(current.value);
                    s.pop();
                }

                prev = current;
            }
    }

}

Complexity of code is O(n) again, with additional space complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute and share your learning with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Iterative preorder traversal

In the last post, Iterative inorder traversal, we learned how to do inorder traversal of a binary tree without recursion, that is, iteratively. Today we will learn how to do iterative preorder traversal of a binary tree. In preorder traversal, the root node is processed before the left and right subtrees. For example, the preorder traversal of the tree below would be [10,5,1,6,14,12,15].

iterative preorder traversal without recursion

We already know how to implement preorder traversal recursively; let’s understand how to implement it in a non-recursive way.

Iterative preorder traversal : Thoughts

If we look at the recursive implementation, we see that we process the root node as soon as we reach it and then start with the left subtree before touching anything in the right subtree.

Once the left subtree is processed, control goes to the first node in the right subtree. To emulate this behaviour in a non-recursive way, it is best to use a stack. What will be pushed onto and popped from the stack, and when?
Start by pushing the root node onto the stack. Traversal continues while there is at least one node on the stack.

Pop the node from the stack, process it and push its right and left children onto the stack. Why the right child before the left child? Because we want to process the left subtree before the right subtree. As we push a node’s children onto the stack at every node, the entire left subtree of the node will be processed before the right child is popped from the stack. The algorithm is very simple and is as follows.

    1. Start with the root node and push it onto stack s.
    2. While the stack is not empty
      1. Pop from the stack, current = s.pop(), and process the node.
      2. Push current.right onto the stack.
      3. Push current.left onto the stack.

Iterative preorder traversal : example

Let’s take an example and see how it works. Given the tree below, do a preorder traversal on it without recursion.

iterative preorder traversal without recursion

Let’s start from the root, node(10), and push it onto the stack. current = node(10).

Here the loop starts, which checks whether there is a node on the stack. If yes, it pops it out. s.pop() will return node(10); we print it and push its right and left children onto the stack. Preorder traversal till now: [10].

Since the stack is not empty, pop from it. current = node(5). Print it and push its right and left children, i.e. node(6) and node(1), onto the stack.

Again, the stack is not empty, so pop from it. current = node(1). Print the node. There are no right and left children for this node, so we do not push anything onto the stack.

The stack is not empty yet, so pop again. current = node(6). Print the node. Similar to node(1), it does not have a right or left subtree either, so nothing gets pushed onto the stack.

However, the stack is not empty yet. Pop. current = node(14). Print the node, and as there are left and right children, push them onto the stack, the right child before the left child.

The stack is not empty, so pop from it, current = node(12). Print it; as there are no children of node(12), push nothing onto the stack.

Pop again from the stack as it is not empty. current = node(15). Print it. No children, so there is no need to push anything.

At this point, the stack becomes empty and we have traversed all nodes of the tree as well.

Iterative preorder traversal : Implementation

#include <stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};
typedef struct node Node;

#define STACK_SIZE 10
 
typedef struct stack{
        int top;
        Node *items[STACK_SIZE];
}stack;
 
void push(stack *ms, Node *item){
   if(ms->top < STACK_SIZE-1){
       ms->items[++(ms->top)] = item;
   }
   else {
       printf("Stack is full\n");
   }
}
 
Node * pop (stack *ms){
   if(ms->top > -1 ){
       return ms->items[(ms->top)--];
   } 
   else{
       printf("Stack is empty\n");
       return NULL;
   }
}
Node * peek(stack ms){
  if(ms.top < 0){
      printf("Stack empty\n");
      return 0;
   }
   return ms.items[ms.top];
}
int isEmpty(stack ms){
   if(ms.top < 0) return 1;
   else return 0;
}
void preorderTraversalWithoutRecursion(Node *root){
	stack ms;
	ms.top = -1;
	
	if(root == NULL) return ;

	Node *currentNode = NULL;
	/* Step 1 : Start with root */
	push(&ms,root);
	
	while(!isEmpty(ms)){
		/* Pop the node from the stack */
		currentNode = pop(&ms);
		/* Process (print) the node */
		printf("%d  ", currentNode->value);
		/* Step 3: Push right child first */
		if(currentNode->right){
			push(&ms, currentNode->right);
		}
		/* Step 4: Push left child */
		if(currentNode->left){
			push(&ms, currentNode->left);
		}
	}
}


void preorder (Node *root){
	if ( !root ) return;
	
 	printf("%d ", root->value );
	preorder(root->left);
	preorder(root->right);
}
 
Node * createNode(int value){
    Node * newNode =  (Node *)malloc(sizeof(Node));
	
    newNode->value = value;
    newNode->right= NULL;
    newNode->left = NULL;
	
    return newNode;
}

Node * addNode(Node *node, int value){
    if(node == NULL){
    	return createNode(value);
    }
    else{
    	if (node->value > value){
    		node->left = addNode(node->left, value);
    	}
    	else{
    		node->right = addNode(node->right, value);
    	}
    }
    return node;
}
 
/* Driver program for the function written above */
int main(){
        Node *root = NULL;
        //Creating a binary tree
        root = addNode(root,30);
        root = addNode(root,20);
        root = addNode(root,15);
        root = addNode(root,25);
        root = addNode(root,40);
        root = addNode(root,37);
        root = addNode(root,45);
        
	preorder(root);
        printf("\n");
	
        preorderTraversalWithoutRecursion(root);
        return 0;
}

The complexity of the iterative implementation of preorder traversal is O(n), as we visit each node exactly once. Also, there is the added space complexity of the stack, which is O(n) in the worst case.

Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com

Iterative inorder traversal

One of the most common things we do on a binary tree is traversal. In Binary search tree traversals we discussed different types of traversals like inorder, preorder and postorder, and we implemented those traversals recursively. In this post, let’s focus on the iterative implementation of inorder traversal, i.e. inorder traversal without recursion.

Before the solution, what is inorder traversal of a binary tree? In inorder traversal, visit the left subtree, then the root and at last the right subtree. For example, for the given tree, the inorder traversal would be: [1,5,6,10,12,14,15]

iterative inorder traversal

Iterative inorder traversal without recursion : Thoughts

As we go into the discussion, one quick question: why is the recursive inorder implementation not that great? We know that recursion implicitly uses the stack to store return addresses and passed parameters. As recursion goes deep, more and more return addresses and parameters are stored on the stack, eventually filling up all the space the system has for the stack. This problem is known as stack overflow.
When the binary tree is skewed, that is, when every node has only one child, the recursive implementation may lead to stack overflow, depending on the size of the tree. In production systems, where we usually do not know the size of the data structures upfront, it is advised to avoid recursive implementations.

What are we essentially doing in the recursive implementation? We check if the node is null, and if so, return. If not, we move down the left subtree. When there is nothing left in the left subtree, we move up to the parent, and then go to the right subtree.

All these steps are easy to translate into an iterative version. One thing that needs some thought is: how do we go back to the parent node? In inorder traversal, the last node visited before the current node is its parent node.
If we keep these nodes in some structure where we can refer back to them, things become easy. As we always refer to the most recently added node first (when finding the parent of a node, we just look at the last visited node), a stack is a great candidate for it, with its last-in-first-out property.

Iterative inorder traversal : algorithm

  1. Start from the root, call it current.
  2. If current is not NULL, push current onto the stack.
  3. Move to the left child of current and go to step 2.
  4. If current == NULL and !stack.empty(), then current = s.pop().
  5. Process current, set current = current.right, and go to step 2.

Let’s take an example and see how this algorithm works.

iterative inorder traversal

We start with node(10), current = node(10). The current node is not null, so put it on the stack.

As there is a left child of node(10), move current = current.left, so current = node(5), which is not null; put the node onto the stack.

Again, move down to the left child of node(5), current = current.left = node(1). Put the node onto the stack.

Again move down to the left child, which in this case is null. What to do now? As the stack is not empty, pop the last node added to it. current = node(1). Process node(1). Traversal = [1]

Move to the right child of node(1), which is null; in that case, pop from the stack and process the node, current = node(5). Traversal = [1,5]

Move to the right child of node(5), i.e. node(6). Push it onto the stack.

Move down to its left subtree, which is null, so pop from the stack. current = node(6), process it. Traversal = [1,5,6]

Move to the right child of node(6), which is null, so pop from the stack, current = node(10). Process the node. Traversal = [1,5,6,10]

Get the right child of node(10), which is node(14); current = node(14). As current is not null, put it onto the stack.

Again move to the left child of the current node(14), which is node(12). current = node(12), which is not null, so put it onto the stack.

inorder traversal with recursion

Get the left child of the current node, which is null. So pop from the stack, current = node(12). Process it. Traversal = [1,5,6,10,12]

current = current.right, which is null, so pop from the stack. current = node(14). Process node(14). Traversal = [1,5,6,10,12,14]

Again current = current.right, which is node(15). Put it onto the stack.

The left child of node(15) is null, so we pop from the stack. current = node(15). Process node(15). Fetch the right child of the current node, which is again null, and this time the stack is empty as well. So stop processing; everything is done. Traversal = [1,5,6,10,12,14,15]

Iterative inorder traversal : Implementation

#include <stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
	int value;
	struct node *left;
	struct node *right;
};
typedef struct node Node;

#define STACK_SIZE 10
 
typedef struct stack{
        int top;
        Node *items[STACK_SIZE];
}stack;
 
void push(stack *ms, Node *item){
   if(ms->top < STACK_SIZE-1){
       ms->items[++(ms->top)] = item;
   }
   else {
       printf("Stack is full\n");
   }
}
 
Node * pop (stack *ms){
   if(ms->top > -1 ){
       return ms->items[(ms->top)--];
   } 
   else{
       printf("Stack is empty\n");
       return NULL;
   }
}
Node * peek(stack ms){
  if(ms.top < 0){
      printf("Stack empty\n");
      return 0;
   }
   return ms.items[ms.top];
}
int isEmpty(stack ms){
   if(ms.top < 0) return 1;
   else return 0;
}

void inorderTraversalWithoutRecursion(Node *root){
	stack ms;
	ms.top = -1;
	Node *currentNode  = root;
	while(!isEmpty(ms) || currentNode ){
		if(currentNode){
			push(&ms, currentNode);
			currentNode = currentNode->left;
		}
		else {
			currentNode = pop(&ms);
			printf("%d  ", currentNode->value);
			currentNode = currentNode->right;
		}
	}
}

void inorder (Node * root){
	if ( !root ) return;
 
	inorder(root->left);
	printf("%d ", root->value );
	inorder(root->right);
}
 
Node * createNode(int value){
    Node * temp =  (Node *)malloc(sizeof(Node));
    temp->value = value;
    temp->right= NULL;
    temp->left = NULL;
    return temp;
}
Node * addNode(Node *node, int value){
    if(node == NULL){
    	return createNode(value);
    }
    else{
    	if (node->value > value){
    		node->left = addNode(node->left, value);
    	}
    	else{
    		node->right = addNode(node->right, value);
    	}
    }
    return node;
}
 
/* Driver program for the function written above */
int main(){
        Node *root = NULL;
        //Creating a binary tree
        root = addNode(root,30);
        root = addNode(root,20);
        root = addNode(root,15);
        root = addNode(root,25);
        root = addNode(root,40);
        root = addNode(root,37);
        root = addNode(root,45);
        inorder(root);
        printf("\n");
        inorderTraversalWithoutRecursion(root);
        return 0;
}

Complexity of iterative implementation of inorder traversal is O(n) with worst case space complexity of O(n).

Please share if there is something wrong or missing. If you want to contribute and share your knowledge with thousands of learners across the world, please reach out to communications@algorithmsandme.com

Identical binary trees

Given two binary trees, check whether the two trees are identical binary trees. The first question that arises: when do you call two binary trees identical? If the roots of the two trees are equal and their left and right subtrees are also identical, then the two trees are called identical binary trees. For example, the two trees below are identical.

identical binary trees

whereas these two trees are not identical, as node 6 and node 8 differ, as do node 14 and node 16.

identical binary trees

Identical binary trees : Thoughts

The solution to the problem lies in the definition of an identical tree itself. We check if the roots are equal; if they are not, there is no point continuing down the tree, just return that the trees are not identical.
If the roots are equal, we have to check whether the subtrees are identical. To do so, take the left subtrees of both original trees and validate that the left subtrees are identical too. If not, return a negative response. If yes, check whether the right subtrees are identical too.

Did you notice two things? First, the solution to the original problem depends on the solutions for the subtrees, and at each node the problem reduces itself to smaller subproblems.
Second, the processing order is preorder: we process the roots of the two trees, then the left subtrees and at last the right subtrees. Whereas Mirror binary search tree and Delete binary search tree are solved using postorder traversal, this problem can be solved using preorder traversal with special processing at the root node.

We could also solve it using postorder traversal; in that case, however, we would unnecessarily scan the subtrees even when the roots themselves are not equal.

Identical binary trees : example

Let’s take an example and see how it works. Start with the root nodes; both roots are equal, so move down to the left subtrees.

The left node in both trees is node(5), which is again equal. Go down the left subtree.

Again, the left node in both subtrees is node(1); move down the left subtrees of both.

By definition, two empty binary trees are identical. So when we move down to the left children of these nodes, they are null, hence identical, and we return true to the parent node(1). The same is true for the right children.

At node(1), the left and right subtrees are identical and the node values are equal, so we return true to the parent node(5).

The left subtrees of node(5) are identical; move to the right subtree, node(6). Similar to node(1), it also returns true to its parent node.

At node(5), the left and right subtrees are identical and the node values are equal, so we return true to node(10).

is identical bst

The left subtrees are identical; now go to the right subtree of node(10) in both trees.

is identical binary tree

node(14) is equal in both trees, so check whether the left subtrees of node(14) in both trees are identical. Both the left and right subtrees of node(14) are identical, just like node(1) and node(6) in the left subtree, so they return true to the parent node(14).

identical binary search tree

Now, at node(14), the left and right subtrees are identical, so return true up to the parent node(10).

Now, at the root node of both trees, the left subtrees and right subtrees are identical, so we return true: these two trees are identical.

Can you draw non-identical binary trees, work through the flow, and determine when they will be called out as non-identical?

Identical binary trees : Implementation

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
 
struct node{
    int value;
    struct node *left;
    struct node *right;
};
typedef struct node Node;

#define true 1
#define false 0

void inorderTraversal(Node * root){
    if(!root) return;
    
    inorderTraversal(root->left);
    printf("%d ", root->value);
    inorderTraversal(root->right);
}

Node *createNode(int value){
    Node * newNode =  (Node *)malloc(sizeof(Node));
    
    newNode->value = value;
    newNode->right= NULL;
    newNode->left = NULL;
    
    return newNode;
}

Node *addNode(Node *node, int value){
    if(!node) return createNode(value);
    
    if (node->value > value)
        node->left = addNode(node->left, value);
    else
        node->right = addNode(node->right, value);
    
    return node;
}

int isIdenticalBST( Node * firstTree, Node *secondTree){
    if( ! (firstTree || secondTree ) ) //both of them are empty
        return true;
    
    if( !( firstTree && secondTree ) ) // one of them is empty
        return false;
    
    return ( firstTree->value == secondTree->value )
        && isIdenticalBST( firstTree->left, secondTree->left )
        && isIdenticalBST( firstTree->right, secondTree->right );
}

/* Driver program for the function written above */
int main(){
    Node *firstRoot = NULL;
	
    //Creating a binary tree
    firstRoot = addNode(firstRoot, 30);
    firstRoot = addNode(firstRoot, 20);
    firstRoot = addNode(firstRoot, 15);
    firstRoot = addNode(firstRoot, 25);
    firstRoot = addNode(firstRoot, 40);
    firstRoot = addNode(firstRoot, 38);
    firstRoot = addNode(firstRoot, 45);
    
    printf("Inorder traversal of tree is : ");
    inorderTraversal(firstRoot);
	
    printf("\n");
    
    Node *secondRoot = NULL;
    
    //Creating a binary tree
    secondRoot = addNode(secondRoot, 30);
    secondRoot = addNode(secondRoot, 20);
    secondRoot = addNode(secondRoot, 15);
    secondRoot = addNode(secondRoot, 25);
    secondRoot = addNode(secondRoot, 40);
    secondRoot = addNode(secondRoot, 38);
    secondRoot = addNode(secondRoot, 45);
    
    printf("Inorder traversal of tree is : ");
    inorderTraversal(secondRoot);
    printf("\n");
    
    printf( "Two trees are identical : %s" , 
           isIdenticalBST( firstRoot, secondRoot ) ? "True" :"false");

    return 0;
}

The complexity of checking whether two trees are identical binary trees is O(n), where n is the number of nodes in the smaller tree.

Please share if there is something wrong or missing. If you are willing to contribute and share your knowledge with thousands of learners across the world, please reach out to us at communications@algorithmsandme.com