# First non repeated character in stream

In the last post, we discussed finding the first non-repeated character in a string, where all characters were available in memory beforehand. Let's take another variation of the problem: given a stream of characters, find the first non-repeated character in the stream at a given point of time.

For example, if the stream so far is a,a,b,c,c,d and we ask for the first non-repeated character at this point, it is b. If the next character in the stream is b, the first non-repeated character at that point becomes d.

## Non repeated character in stream : thoughts

If the set of characters is already present in memory, the problem is simple, as we discussed earlier. The idea is to create a hash map with the character as key and the number of times it occurs in the string as value; increase the count whenever the character is seen while scanning the string.
Once all characters are processed, go through the string again and return the first character which has count 1. However, this approach does not work with a continuous stream of characters.
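As a warm-up, the in-memory version can be sketched as below. The class and method names are mine; I use a `LinkedHashMap` so the second pass runs over distinct characters in first-seen order rather than over the whole string again:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FirstUniqueInString {
    // Returns the first character with count 1, or ' ' if none exists.
    public static char firstNonRepeated(String s) {
        // LinkedHashMap preserves insertion order, so iterating the map
        // visits characters in the order they first appeared.
        Map<Character, Integer> counts = new LinkedHashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        for (Map.Entry<Character, Integer> e : counts.entrySet()) {
            if (e.getValue() == 1) return e.getKey();
        }
        return ' ';
    }
}
```

For the string "aabccd" this returns 'b', matching the hand-trace above.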

Why a queue?

The problem is to return the first non-repeating character at a given point of time. A character which was unique at a certain point of time may not be unique at a later point of time, because that character may have been duplicated in the stream arriving in between. We need to store characters in their order of arrival, but remove them once they occur more than once. At any specific time, we want the first character that arrived up to that point and has only one occurrence. Can you see the pattern?

This is the first in, first out pattern, and queues are the best data structure to implement it.

One thing to note: even though the first in, first out pattern applies, removal of characters is ad-hoc between two queries as the stream flows in. Characters which happen to occur twice or more between queries will be removed from the queue. This matters because it rules out an array-based queue. Which data structure is best for removing a random node? A doubly linked list: given a node, it is an `O(1)` operation to remove it.

Why a hash map?

Another problem is: how do we map a character coming out of the stream to a node in the queue? A naive way is to scan the doubly linked list based queue and delete the node which contains the character. Even though this is an O(1) operation (the queue can grow to at most 256 nodes, why?), it is computationally expensive to do for every character coming from the stream.

What can quickly map a character to a node? A hash map, of course. So a key-value pair is stored in the hash map with the character as key and the node address as value.
The hash map now serves another purpose too, which is to maintain the count of occurrences. If a character is present in the hash map, it has a corresponding node in the queue, which implies the character has already been seen once. So we remove the node from the queue if there is a value for the character in the map.
If there is no value for the character in the hash map, this is the first time we have seen this character and it should be added to the queue.

### First non repeated character in stream : Algorithm

1. Process next character from stream.
2. Check in hash map if there is a corresponding node in queue.
3. If there is a node already in hash map:
   1. Delete the corresponding node from queue.
   2. Go to step 1.
4. If there is no node in hash map for this character:
   1. Create a node and push it at the tail of queue.
   2. Update hash map with (character, node address) as key-value pair.

The algorithm looks fine, but there is an implementation problem here, can you see it? We free the node from the queue while keeping its address in the hash map, to indicate that the character has already been seen.
Ideally, we should not store references to freed memory, which can be accessed accidentally and cause a segmentation fault. The idea is to separate the node-pointer and seen functionalities into two hash maps.

Create an additional hash map with the character as key and a boolean as value, which indicates whether the character has already been seen twice in the stream. With this hash map in place, we can remove the entry from the other hash map which stores the node address of a character, so there are no dangling pointers.

Modified algorithm

1. Process next character from stream.
2. Check in visited hash map if character was ever seen twice before.
3. If true, go to step 1.
4. If false, check the node address hash map. If there is a node address for the character:
   1. Delete the corresponding node from queue and remove the entry from the hash map.
   2. Update visited hash map for this character as true.
   3. Go to step 1.
5. If there is no node in hash map for this character:
   1. Create a node and push it at the tail of queue.
   2. Update hash map with (character, node address) as key-value pair.
```java
package com.company;

import java.util.HashMap;

/**
 * Created by sangar on 8.10.18.
 */
public class NonRepeatedCharInStream {

    private HashMap<Character, Boolean> visited = new HashMap<>();
    private HashMap<Character, ListNode> address = new HashMap<>();
    private Queue<Character> queue = new QueueImplementation<>();

    public char getNonRepeatedCharacter(){
        return queue.isEmpty() ? ' ' : queue.peek().getData();
    }

    public void process(char c){
        // Character already seen at least twice; nothing to do.
        if(visited.containsKey(c)) return;

        if(address.containsKey(c)){
            // Second occurrence: remove the node from queue, mark as visited.
            queue.deleteNode(address.get(c));
            address.remove(c);
            visited.put(c, true);
        }
        else{
            // First occurrence: enqueue and remember the node address.
            address.put(c, queue.enqueue(c));
        }
    }
}
```

Queue implementation using Doubly linked list

```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public class QueueImplementation<E> implements Queue<E>{
    private ListNode<E> head;
    private ListNode<E> tail;

    public QueueImplementation(){
        head = null;
        tail = null;
    }

    @Override
    public ListNode<E> deleteNode(ListNode<E> node){
        if(this.isEmpty()) {
            return null;
        }

        if(this.head == node){
            this.head = node.getNext();
            if(this.head != null) this.head.setPrev(null);
            else this.tail = null;
            return node;
        }

        if(this.tail == node){
            if(this.tail.getPrev() != null) this.tail.getPrev().setNext(null);
            this.tail = this.tail.getPrev();
            return node;
        }
        /*
        We are deleting a node in between. So the following happens:
        1. If node has prev, set node.prev.next = node.next.
        2. If node has next, set node.next.prev = node.prev.
        */
        if(node.getPrev() != null) node.getPrev().setNext(node.getNext());
        if(node.getNext() != null) node.getNext().setPrev(node.getPrev());

        return node;
    }

    @Override
    public ListNode<E> peek() {
        if(this.isEmpty()) {
            return null;
        }
        return this.head;
    }

    @Override
    public ListNode<E> remove() {
        if(this.isEmpty()) {
            return null;
        }
        /*
        We are deleting the node at head. So the following happens:
        1. Point a temporary node to head.
        2. Move head to the next node.
        3. Set prev of the new head to null.
        */
        ListNode<E> tempNode = this.head;
        this.head = this.head.getNext();
        if(this.head != null) this.head.setPrev(null);
        else this.tail = null;

        return tempNode;
    }

    @Override
    public ListNode<E> enqueue(E data) {
        ListNode<E> newNode = new ListNode<E>(data, null, this.tail);
        if(this.isEmpty()) {
            this.head = newNode;
        } else {
            this.tail.setNext(newNode);
        }
        this.tail = newNode;

        return newNode;
    }

    @Override
    public boolean isEmpty() {
        return this.head == null;
    }
}

```
```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public interface Queue<E> {
    public ListNode<E> peek();
    public ListNode<E> remove();
    public ListNode<E> enqueue(E data);
    public ListNode<E> deleteNode(ListNode<E> node);
    public boolean isEmpty();
}
```
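The `ListNode` class used by the interface and the implementation is not shown in the post. A minimal doubly linked node consistent with the calls made here (`getData`, `getNext`/`setNext`, `getPrev`/`setPrev`, and the `new ListNode<>(data, next, prev)` constructor) might look like this:

```java
public class ListNode<E> {
    private E data;
    private ListNode<E> next;
    private ListNode<E> prev;

    // Argument order matches the call site: new ListNode<>(data, next, prev)
    public ListNode(E data, ListNode<E> next, ListNode<E> prev) {
        this.data = data;
        this.next = next;
        this.prev = prev;
    }

    public E getData() { return data; }
    public ListNode<E> getNext() { return next; }
    public ListNode<E> getPrev() { return prev; }
    public void setNext(ListNode<E> next) { this.next = next; }
    public void setPrev(ListNode<E> prev) { this.prev = prev; }
}
```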

Test cases (add more and see if that works)

```java
package test;

import com.company.NonRepeatedCharInStream;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class NonRepeatedCharInStreamTest {

    NonRepeatedCharInStream tester = new NonRepeatedCharInStream();

    @Test
    public void testNormalCase() {
        tester.process('a');
        tester.process('b');
        tester.process('c');

        assertEquals('a', tester.getNonRepeatedCharacter());

        tester.process('a');
        assertEquals('b', tester.getNonRepeatedCharacter());

        tester.process('c');
        tester.process('d');
        tester.process('b');

        assertEquals('d', tester.getNonRepeatedCharacter());
    }

    @Test
    public void testNoUniqueCharCase() {
        tester.process('a');
        tester.process('b');
        tester.process('c');
        tester.process('a');
        tester.process('b');
        tester.process('c');

        assertEquals(' ', tester.getNonRepeatedCharacter());
    }
}

```

Let’s see how this algorithm works with an example. To start with, both hash maps (visited and address) and the queue are empty. The stream starts with character c: a new node is added to the queue, and an entry is put into the address hash map to store the address of the node in the queue.

Next, d comes from the stream. We check if it is in the visited map; it is not, so we check the address map. It is not there either, which means we are seeing d for the first time. We add it to the queue and store a pointer to its node in the address map.

The next character is e; the same process is followed as for d, and it gets added to the queue and the address map.

Imagine that the stream now produces d again. We check the visited map; it is not there. However, it is present in the address map. This is how we know that d has occurred twice. We delete it from the queue and the address map, and add it to the visited map.

Now comes the character f. It is present in neither the visited map nor the address map, so we add it to the queue and store its pointer in the address map.

Next comes the character d again. The visited map has it marked as true, so we do not add it anywhere and skip it.

What if the next character is c? It is present in the address map, so we remove it from the queue and the address map, and add it to the visited map.

The complexity of finding the first non-repeated character in a stream is `O(1)` per character, as the number of characters already processed from the stream does not impact the processing time. Space complexity is also `O(1)`, since the character set is bounded (at most 256 distinct characters), independent of the number of characters processed.

Please share if there is something wrong or missing. Please reach out to us at [email protected] if you have doubt or need any help with your interview preparations.

# Print last n lines of file

A lot of times, when we are debugging production systems, we go through the logs generated by those systems. To see the most recent logs, we commonly use the tail -n functionality of Unix.

Tail -n functionality prints the last n lines of each FILE to standard output

After going through many interview experiences at Microsoft, I found that this question regularly features in the majority of interviews. Let’s take an example and see what to expect out of the functionality.

The first thing we notice about this problem is that we have to print the last n lines. It means we have to maintain some kind of order. If we want the last line first, this is typical `LIFO`, which is implemented using the stack data structure.

However, another constraint is that we have to print at most n lines. So if the number of lines on the stack grows beyond n, we have to remove some lines from it. Which lines should be removed? The lines which came first. Unstack all the lines from the stack, remove the first line, and then put the remaining lines back onto the stack.
To read, we just pop from the top of the stack until the stack is empty, which gives us the last n lines of the file.
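The stack-based approach described above can be sketched as follows. This is a rough illustration with names of my choosing, using `ArrayDeque` as the stack:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LastNLinesWithStack {
    // Keeps at most n lines on the stack. On overflow, digs out the
    // oldest line (at the bottom) via a helper stack, then restores the rest.
    public static void addLine(Deque<String> stack, String line, int n) {
        if (stack.size() == n) {
            Deque<String> helper = new ArrayDeque<>();
            while (!stack.isEmpty()) helper.push(stack.pop());
            helper.pop();                          // drop the oldest line
            while (!helper.isEmpty()) stack.push(helper.pop());
        }
        stack.push(line);
    }

    // Pops everything off the stack: most recent line first, as described.
    public static List<String> readAll(Deque<String> stack) {
        List<String> lines = new ArrayList<>();
        while (!stack.isEmpty()) lines.add(stack.pop());
        return lines;
    }
}
```

This makes the cost asymmetry concrete: `addLine` is O(n) once the stack is full, while reading simply pops.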

Also, the tail functionality of Unix prints the lines in forward order rather than reverse order. If we are implementing true tail functionality, the order will be `FIFO` rather than `LIFO`. Make sure that you clarify this with the interviewer.

The complexity of reading n lines is `O(n)` and putting a new line also takes `O(n)` complexity. If the stack is implemented using linked list, we do not require additional memory.

What if the file is continuously written to, and tail happens only occasionally? As mentioned above, the stack solution has `O(n)` complexity for every line written, which is not ideal here. tail -f actually requires that output grows as lines are appended to the file.

```       -f, --follow[={name|descriptor}]
output appended data as the file grows;
```

What if we optimize the writing part by using a queue to store the last n lines of the file? Imagine the queue holds the last n lines of the file at some point in time. When a new line comes, we add it at the tail of the queue and remove one from the front. If we keep track of the tail of the queue, insertion and removal both become `O(1)`.

To read lines in reverse, we have to traverse the queue in reverse order. This suggests that a doubly linked list should be used to implement the queue: with a tail pointer, we can always traverse the queue backwards. The complexity of reading n lines is still `O(n)`. Real tail does not require this; you can print the entire queue in FIFO order. However, it is good to mention in an interview why you chose a doubly linked list over a singly linked list to implement the queue.

### Print last n lines of a file: Algorithm

For every line being added to the file, do the following:

1. If the size of the queue is less than n, simply enqueue the line.
2. If the size of the queue is already n, dequeue the line at the front and enqueue the new line at the end.

If you are tailing an existing file, then read the whole file line by line and do the last two operations in the algorithm.
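The algorithm above can also be sketched compactly in Java, with `ArrayDeque` standing in for the doubly linked queue (class and method names are mine):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class Tail {
    // Returns the last n lines, oldest to newest, like tail -n.
    public static List<String> lastN(List<String> lines, int n) {
        Deque<String> window = new ArrayDeque<>();
        for (String line : lines) {
            if (window.size() == n) {
                window.pollFirst();   // evict the oldest line from the front
            }
            window.addLast(line);     // newest line goes to the rear
        }
        return new ArrayList<>(window);
    }
}
```

Both enqueue and dequeue here are O(1), so writes stay cheap no matter how large the file grows.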

#### Print last n lines: implementation

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SIZE 500
#define true 1
#define false 0

typedef struct queue_l{
    char data[MAX_SIZE];
    struct queue_l *next;
    struct queue_l *prev;
}Queue;

typedef struct dummyNode{
    int size;
    struct queue_l *front;
    struct queue_l *tail;
}dummyNode;

/* Below are the routine functions for init queue, enqueue,
   dequeue, queue_empty etc. */
void initializeQueue(dummyNode **q){
    *q = (dummyNode *)malloc(sizeof(dummyNode));
    if(*q){
        (*q)->front = NULL;
        (*q)->tail = NULL;
        (*q)->size = 0;
    }
}

int isEmpty(dummyNode *q){
    if(!(q->size))
        return true;

    return false;
}

Queue * enqueue(dummyNode *q, char *elem){
    Queue *newNode = (Queue *)malloc(sizeof(Queue));
    if(newNode){
        strcpy(newNode->data, elem);
        newNode->next = NULL;
        newNode->prev = q->tail;

        if(q->tail){
            q->tail->next = newNode;
        }
        q->tail = newNode;
        if(!q->front)
            q->front = newNode;
        q->size++;
    }
    return newNode;
}

/* Removes and frees the front node. Returns nothing: returning a pointer
   into the freed node, as one might be tempted to, would be a
   use-after-free. */
void dequeue(dummyNode *d){
    if(isEmpty(d)){
        printf("\n Queue is empty");
        return;
    }

    Queue *q = d->front;
    d->front = q->next;

    if(q->next)
        q->next->prev = NULL;
    else
        d->tail = NULL;

    free(q);
    d->size--;
}

void update_lines(dummyNode *d, char *s, int n){
    if(d->size < n){
        enqueue(d, s);
    }
    else{
        /* Queue is full: drop the oldest line, add the newest. */
        dequeue(d);
        enqueue(d, s);
    }
}

/* Prints the queued lines from oldest to newest, like tail -n. */
void print_queue(dummyNode *d){
    Queue *current = d->front;
    while(current){
        printf("%s", current->data);
        current = current->next;
    }
}

int main(){
    dummyNode *d = NULL;
    int n = 10;

    initializeQueue(&d);

    char line[MAX_SIZE];
    FILE *stream;

    /* Open the file */
    stream = fopen("problems.txt","rb");
    if(!stream){
        printf("Cannot open file\n");
        return 1;
    }

    /* Read lines one by one */
    while(fgets(line, MAX_SIZE, stream) != NULL){
        update_lines(d, line, n);
    }

    fclose(stream);
    print_queue(d);

    return 0;
}
```

Please share if there is something wrong or missing. If you are preparing for an interview and want coaching session to prepare you fast, please book a free session with us.

## LRU cache implementation

This is a commonly asked question in interviews, especially Microsoft and Amazon interviews. The problem statement is very simple:

Implement LRU cache or Least Recently Used cache

Before going further into the solution, let's first understand what a cache is. In computer architecture terms, a cache is a small buffer of pages the OS maintains in order to avoid more expensive main memory accesses.

Cache accesses are usually faster than main memory accesses, improving overall performance. Whenever a process needs content from a specific memory location, it searches for that page in the cache first. If it finds the page in the cache, it uses it and does not access memory. This is called a cache hit.

However, since caches are very small compared to main memory, there is a probability that the page requested by the process is not present in the cache. In that case, the new page is moved into the cache and one of the existing pages is swapped out. This is called a cache miss.

Caching happens at the application layer too, for example, caching visited pages in a browser, or caching frequently accessed data from a database in in-memory caches like Redis.

Based on how a cache miss is handled, there are different types of caches: first in first out cache, least recently used cache, and least frequently used cache.

In a first in first out cache, on a cache miss, the entity which came into the cache first is evicted first; whereas in a least recently used cache, the page which was used least recently gets evicted. In the same vein, in a least frequently used cache, the page which is used least frequently among all the pages in the cache gets evicted.

## LRU cache implementation

Consider that you have a cache with space for an additional page. If cache miss happens, we bring a page from memory and put it in the cache for future access.

Now, if the cache is full and a cache miss happens, we have to bring in a new page and evict a page from the cache. In an LRU cache, this will be the page which was accessed the longest time ago.

What if a page was accessed at the start, and then again just before the cache miss? Is it the least recently used page? On the contrary, it is the most recently used page and hence should be the last to evict. So which page should we evict? In this case, the page which came in after the first page goes out.

A page brought into the cache is the first candidate for eviction if it is not accessed again before a cache miss happens.

Why queue?

In principle, an LRU cache is a first in first out cache with one special case: if a page is accessed again, it moves to the end of the eviction order. Which data structure is best to implement the FIFO pattern? Of course, it's a queue. So our LRU cache will be a queue where each node stores a page. This queue has a specific size, as the cache has a limited size. Whenever a new page is brought in, it is added at the rear of the queue. Eviction happens from the front of the queue.

Why hash?

There is one requirement of an LRU cache which does not map directly to the queue data structure: moving the node corresponding to a recently accessed page to the end of the queue. This poses two problems: first, how do we find the node in the queue corresponding to the page id being accessed? Second, how do we move it to the end as efficiently as possible? Both problems are very similar to what we solved for the first non-repeated character in a stream.

We will use a hash map which stores, for each page id, the address of its node in the queue. This gives us immediate access to the node to be reshuffled.

The problem remains of moving a node without shifting all the other elements of the queue. Which data structure removes an element in O(1), given the element? A doubly linked list. If we implement the queue as a doubly linked list, removing and adding pages are O(1) operations.

### LRU cache algorithm

1. If the page is present in the hash map, it's a cache hit; if not, it's a cache miss.
2. On a cache miss:
   1. If the cache has a free slot, enqueue the page.
   2. If the cache is full, remove the page at the front of the queue and add the new page at the end.
3. On a cache hit:
   1. Delete the node from its current location in the queue.
   2. Enqueue the page at the end of the queue.
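As an aside, the JDK's `LinkedHashMap` implements exactly this eviction order out of the box; the hand-rolled queue-plus-hash version below is what an interviewer usually expects, but a production shortcut (and a useful cross-check) could look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public SimpleLRUCache(int capacity) {
        // accessOrder=true makes get() move an entry to the end of the
        // iteration order, which is exactly the "cache hit" reshuffle.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded.
        return size() > capacity;
    }
}
```

With capacity 2, putting 1 and 2, touching 1, then putting 3 evicts 2, since 2 is now the least recently used entry.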

Queue interface

```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public interface Queue<E> {
    public ListNode<E> peek();
    public ListNode<E> remove();
    public ListNode<E> enqueue(E data);
    public ListNode<E> deleteNode(ListNode<E> node);
    public boolean isEmpty();
    public int size();
}

```

Queue implementation

```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public class QueueImplementation<E> implements Queue<E>{
    private ListNode<E> head;
    private ListNode<E> tail;
    private int size;

    public QueueImplementation(){
        head = null;
        tail = null;
        this.size = 0;
    }

    @Override
    public ListNode<E> deleteNode(ListNode<E> node){
        if(this.isEmpty()) {
            return null;
        }

        if(this.head == node){
            this.head = node.getNext();
            if(this.head != null) this.head.setPrev(null);
            else this.tail = null;
            this.size--;
            return node;
        }

        if(this.tail == node){
            if(this.tail.getPrev() != null) this.tail.getPrev().setNext(null);
            this.tail = this.tail.getPrev();
            this.size--;
            return node;
        }
        /*
        We are deleting a node in between. So the following happens:
        1. If node has prev, set node.prev.next = node.next.
        2. If node has next, set node.next.prev = node.prev.
        */
        if(node.getPrev() != null) node.getPrev().setNext(node.getNext());
        if(node.getNext() != null) node.getNext().setPrev(node.getPrev());

        this.size--;
        return node;
    }

    @Override
    public ListNode<E> peek() {
        if(this.isEmpty()) {
            return null;
        }
        return this.head;
    }

    @Override
    public ListNode<E> remove() {
        if(this.isEmpty()) {
            return null;
        }
        /*
        We are deleting the node at head. So the following happens:
        1. Point a temporary node to head.
        2. Move head to the next node.
        3. Set prev of the new head to null.
        */
        ListNode<E> tempNode = this.head;
        this.head = this.head.getNext();
        if(this.head != null) this.head.setPrev(null);
        else this.tail = null;

        this.size--;
        return tempNode;
    }

    @Override
    public ListNode<E> enqueue(E data) {
        ListNode<E> newNode = new ListNode<E>(data, null, this.tail);
        if(this.isEmpty()) {
            this.head = newNode;
        } else {
            this.tail.setNext(newNode);
        }
        this.tail = newNode;

        this.size++;
        return newNode;
    }

    @Override
    public boolean isEmpty() {
        return this.head == null;
    }

    @Override
    public int size() {
        return this.size;
    }
}

```

LRU cache implementation in java

```java
package com.company;

import java.util.ArrayList;
import java.util.HashMap;

/**
 * Created by sangar on 9.10.18.
 */
public class LRUCache {
    private Queue<Integer> queue;
    private HashMap<Integer, ListNode> pages;
    private int cacheSize;

    public LRUCache(int cacheSize){
        this.cacheSize = cacheSize;
        queue = new QueueImplementation<>();
        pages = new HashMap<>();
    }

    /* This function implements the LRU cache.
       (The method signature was missing in the original listing;
       the name accessPage is reconstructed.) */
    public void accessPage(int pageId){
        /* Cache miss and we can add the page to the cache */
        if (!pages.containsKey(pageId) && queue.size() < cacheSize) {
            pages.put(pageId, queue.enqueue(pageId));
            return;
        }

        /* Cache miss and we cannot add a new page to the cache,
           remove the LRU page */
        if (!pages.containsKey(pageId) && queue.size() >= cacheSize) {
            //First remove the LRU page from the queue.
            ListNode evictedPage = queue.remove();
            //Remove node from hash table
            pages.remove(evictedPage.getData());
            //Enqueue new node and add it at tail
            pages.put(pageId, queue.enqueue(pageId));
            return;
        }

        /* Cache hit */
        if (pages.containsKey(pageId)) {
            updateCache(pageId);
        }
    }

    /* This function modifies the queue when there is a cache hit */
    public void updateCache(int pageId){
        /* Case where queue may be empty - defensive programming */
        if(queue.isEmpty() && queue.size() < cacheSize){
            pages.put(pageId, queue.enqueue(pageId));
        }
        /* Move the node to the tail; the node address changes,
           so the hash map entry must be refreshed */
        else{
            ListNode node = queue.deleteNode(pages.get(pageId));
            if(node != null){
                pages.put(pageId, queue.enqueue((Integer)node.getData()));
            }
        }
    }

    public ArrayList<Integer> cacheState(){
        ListNode current = queue.peek();
        ArrayList<Integer> cacheState = new ArrayList<>();
        while(current != null){
            cacheState.add((Integer)current.getData());
            current = current.getNext();
        }
        return cacheState;
    }
}
```

#### Test case for LRU cache implementation

```java
package test;

import com.company.LRUCache;
import org.junit.jupiter.api.Test;

import java.util.ArrayList;
import java.util.Arrays;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class LRUCacheTest {

    LRUCache tester = new LRUCache(5);

    /* The page access sequences below were missing in the original
       listing; they are reconstructed from the expected cache states. */

    @Test
    public void testCacheInsertion() {
        accessPages(1, 2, 3, 4, 5);

        ArrayList<Integer> cacheState = new ArrayList<>(Arrays.asList(1,2,3,4,5));
        assertEquals(cacheState, tester.cacheState());
    }

    @Test
    public void testCacheHit() {
        accessPages(1, 2, 3, 4, 5);
        // Hits on 2 and 3 move them to the end of the eviction order.
        accessPages(2, 3);

        ArrayList<Integer> cacheState = new ArrayList<>(Arrays.asList(1,4,5,2,3));
        assertEquals(cacheState, tester.cacheState());
    }

    @Test
    public void testCacheMiss() {
        accessPages(1, 2, 3, 4, 5);
        // Miss on 6 evicts the least recently used page, 1.
        accessPages(6);

        ArrayList<Integer> cacheState = new ArrayList<>(Arrays.asList(2,3,4,5,6));
        assertEquals(cacheState, tester.cacheState());
    }

    @Test
    public void testEvictionAndInsertion() {
        accessPages(1, 2, 3, 4, 5, 6);
        // 1 was evicted by the miss on 6; accessing it again is a miss
        // which evicts 2 and re-inserts 1 at the end.
        accessPages(1);

        ArrayList<Integer> cacheState = new ArrayList<>(Arrays.asList(3,4,5,6,1));
        assertEquals(cacheState, tester.cacheState());
    }

    @Test
    public void testEmptyCache() {
        ArrayList<Integer> cacheState = new ArrayList<>();
        assertEquals(cacheState, tester.cacheState());
    }

    private void accessPages(int... pageIds){
        for(int pageId : pageIds) tester.accessPage(pageId);
    }
}
```

Let’s take an example and see how the whole thing works. Say we have a cache of size 5, which is empty to start with. The application accesses pages 1, 2, 3, 4 and 5. They are all cache misses, and since there is space in the cache, all these pages are brought in.

Now the application accesses page 6. We have a cache miss. What happens? As the cache is full, we evict the page at the front of the queue (page 1) and add page 6 at the rear.

The application accesses page 2 again, which is already present in the cache: it’s a cache hit. As page 2 is now the most recently used page, it goes to the end of the queue.

Let’s say the next page accessed is page 7; it is a cache miss. We evict the page at the front of the queue, which is page 3.

Insertion and removal from the queue are `O(1)` operations.

# Linked list-based implementation of queue

A queue is an abstract data structure which follows the First In, First Out (FIFO) principle: the element which came into the queue first leaves the queue first. This ordering is very helpful in a lot of solutions. Two things are important for a queue: the front and the rear (or tail).

A new element is added into the queue at the rear or tail; this is called the enqueue operation.

The front is the point from where an element of the queue is taken out; this is called the dequeue operation.

The interface for a queue is shown below. An abstract data structure can be implemented using different concrete data structures; for example, a queue can be implemented using arrays or linked lists. The limitation of an array-based implementation is that we have to allocate the array size beforehand, which restricts the number of elements that can be accommodated. Another issue is correctly telling whether the queue is empty or full; we have to maintain an extra counter for that purpose.

```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public interface Queue<E> {
    public ListNode<E> peek();
    public ListNode<E> remove();
    public ListNode<E> enqueue(E data);
    public boolean isEmpty();
    public int size();
}
```

Let’s discuss how to implement a queue using linked lists.

## Single linked list based implementation of queue

A linked list is a collection of nodes where each node has two components: the first stores the data for the node and the second points to the next node in the linked list. In the last node, the second component points to NULL, which signifies the end of the linked list.

If we use a linked list, we solve the first problem of statically allocating memory beforehand for the queue. The linked list is a dynamic data structure and we can allocate memory at runtime based on our requirements. Also, to find whether a queue is empty, we check if the linked list is empty, which is as simple as checking whether the head of the linked list is NULL.

The time complexity to remove an element out of the singly linked list based queue is `O(1)`: remove the head and make the next node of the head the new head. However, to add an element into the singly linked list, we have to go to the end of the list, which has a time complexity of `O(n)`.

This problem can easily be solved by keeping a tail pointer, which points to the last node of the linked list. When we have to enqueue an element, we update the next of the tail to point to the new node and make the new node the tail of the queue. The complexity of the enqueue operation is then also `O(1)`.

The singly linked list seems to be working for the queue implementation, with dynamic size, dequeue and enqueue operation with `O(1)` complexity.

One more operation is performed on queues to solve certain problems like the LRU cache, non-repeated characters in a stream, etc.: deleting a random node in the queue. Given a random node in the queue, remove that node from the queue.

This problem is tricky in a singly linked list. The brute force solution is to traverse the linked list up to the previous node, do the pointer rearrangement, and then free the memory of the given node. This operation requires `O(n)` operations in the worst case. There is a trick: copy the data of the next node of the given node into the given node and then delete the next node. There is a caveat to this trick, which I have discussed in delete a node from linked list.
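The trick can be shown in a few lines; this is an illustrative sketch with a hypothetical `Node` class, and note that it fails when the given node is the last node of the list:

```java
public class SinglyLinkedDelete {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    // Deletes 'node' from a singly linked list without access to its
    // predecessor: copy the next node's data into 'node', then unlink
    // the next node instead. Must not be called on the last node.
    public static void deleteWithoutPrev(Node node) {
        Node next = node.next;
        node.data = next.data;
        node.next = next.next;
    }
}
```

On the list 1 → 2 → 3, deleting the middle node this way leaves 1 → 3, without ever touching the head.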

To delete a node from a linked list, two pointers are required: the previous node and the next node of the given node. All we do is make the next pointer of the previous node point to the next node of the given node, and free the given node.

## Doubly linked list based implementation of queues

From the above discussion, it is easy to guess which type of linked list gives us the previous and next node of a given node without traversing the list: the doubly linked list.

All the operations remain the same, with the same time complexity. With the doubly linked list, the delete operation also becomes `O(1)`. So, whenever you have a use case where you may have to delete a random node from the queue, go for the doubly linked list based implementation. The only overhead is that we have to store double the number of pointers compared with a singly linked list.

```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public interface Queue<E> {
    public ListNode<E> peek();
    public ListNode<E> remove();
    public ListNode<E> enqueue(E data);
    public ListNode<E> deleteNode(ListNode<E> node);
    public boolean isEmpty();
    public int size();
}
```
```java
package com.company;

/**
 * Created by sangar on 8.10.18.
 */
public class QueueImplementation<E> implements Queue<E>{
    private ListNode<E> head;
    private ListNode<E> tail;
    private int size;

    public QueueImplementation(){
        head = null;
        tail = null;
        this.size = 0;
    }

    @Override
    public ListNode<E> deleteNode(ListNode<E> node){
        if(this.isEmpty()) {
            return null;
        }

        if(this.head == node){
            this.head = node.getNext();
            if(this.head != null) this.head.setPrev(null);
            else this.tail = null;
            this.size--;
            return node;
        }

        if(this.tail == node){
            if(this.tail.getPrev() != null)
                this.tail.getPrev().setNext(null);
            this.tail = this.tail.getPrev();
            this.size--;
            return node;
        }
        /*
        We are deleting a node in between. So the following happens:
        1. If node has prev, set node.prev.next = node.next.
        2. If node has next, set node.next.prev = node.prev.
        */
        if(node.getPrev() != null) node.getPrev().setNext(node.getNext());
        if(node.getNext() != null) node.getNext().setPrev(node.getPrev());

        this.size--;
        return node;
    }

    @Override
    public ListNode<E> peek() {
        if(this.isEmpty()) {
            return null;
        }
        return this.head;
    }

    @Override
    public ListNode<E> remove() {
        if(this.isEmpty()) {
            return null;
        }
        /*
        We are deleting the node at head. So the following happens:
        1. Point a temporary node to head.
        2. Move head to the next node.
        3. Set prev of the new head to null.
        */
        ListNode<E> tempNode = this.head;
        this.head = this.head.getNext();
        if(this.head != null) this.head.setPrev(null);
        else this.tail = null;

        this.size--;
        return tempNode;
    }

    @Override
    public ListNode<E> enqueue(E data) {
        ListNode<E> newNode = new ListNode<E>(data, null, this.tail);
        if(this.isEmpty()) {
            this.head = newNode;
        } else {
            this.tail.setNext(newNode);
        }
        this.tail = newNode;

        this.size++;
        return newNode;
    }

    @Override
    public boolean isEmpty() {
        return this.head == null;
    }

    @Override
    public int size() {
        return this.size;
    }
}
```

### Circular linked list base implementation of queue

Sometimes, the interviewer asks you to solve a trick question like this: implement a queue using only one pointer, either front or rear.

The correct answer is to use a circular linked list, where the last node points back to the head or front node. In that case, we use only the rear pointer.

Enqueue operation:
We create a new node, point the next of the new node to the next of the tail node, make it the next of the tail node, and the new node becomes the tail node. This whole operation runs in constant time, hence the complexity of this operation is `O(1)`.

```java
newNode.next = tail.next;
tail.next = newNode;
tail = newNode;
```

Dequeue operation:

```java
node = tail.next;      // node to be removed (the front)
tail.next = node.next; // point tail to the next of the front node
```

We learned different ways to implement a queue using linked lists. Based on the requirements and constraints of the problem, we choose one of the given implementations. To understand more about how queues are implemented in Java, please read Queue Implementations.

Please share if there is something wrong or missing. If you are preparing for an interview and need personalized coaching to help you with preparation, please book a free session with us.

# Implement queue using stack

In the last post, we learned about the stack data structure; in this post, we will discuss another data structure called a queue. However, the problem at hand is to implement a queue using stacks. Implement the following functions on a queue using stacks:
1. push() : inserts an element at the back of the queue.
2. pop() : removes an element from the front of the queue.
3. peek() : returns the element at the front of the queue.
4. empty() : returns true if there is no element in the queue.

Keep in mind that you can only use the standard stack operations: push(), pop(), peek() and empty().

A stack is a data structure where the element which entered last at the top is taken out first from the top; this is called the LIFO (last in, first out) pattern. Opposed to that, a queue is a FIFO (first in, first out) data structure, where elements are entered at the rear and taken out from the front. So effectively, we have to implement a FIFO data structure using a LIFO data structure.

## Implement queue using stack : Line of thoughts

To implement a FIFO using LIFO data structure, we need two stacks.

Push()
When an element is inserted, i.e. the `push()` operation, the new element has to be pushed down to the bottom of the stack, as it should be the last element to be popped. So, to push an element into the queue, we take out all existing elements from stack s1 and put them into stack s2, then push the new element onto stack s2. At last, we pop all elements from stack s2 back to stack s1. The below picture shows what happens behind the scenes using stacks when we push 3 to the queue.

The complexity of the push operation with this method is `O(n)`. If there are already n elements in the queue, inserting a new element requires n pops from s1, n+1 pushes to s2 (including the new element), then n+1 pops from s2 and n+1 pushes back to s1.

Pop()
If we follow the push operation described above, the pop operation is nothing but returning the top of s1, which is a constant time operation with complexity `O(1)`.

The peek and empty functions also always run on stack s1: for peek, return `s1.peek()`, and for empty, return `s1.empty()`.

### Queue with stack : Push O(n), pop O(1) implementation

```
package com.company;

import java.util.Stack;

/**
 * Created by sangar on 23.9.18.
 */
public class QueueWithStack {
    private Stack<Integer> s1;
    private Stack<Integer> s2;

    public QueueWithStack(){
        s1 = new Stack<>();
        s2 = new Stack<>();
    }

    public void push(int x){
        if(s1.empty()) s1.push(x);
        else{
            while(!s1.empty()){
                s2.push(s1.pop());
            }
            s2.push(x);
            while(!s2.empty()){
                s1.push(s2.pop());
            }
        }
    }

    public int pop(){
        if(s1.empty()) return -1;
        return s1.pop();
    }

    public boolean isEmpty(){
        return s1.empty();
    }

    public int peek(){
        return s1.peek();
    }
}
```

A test class for the above implementation would be:

```
package test;

import com.company.QueueWithStack;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

/**
 * Created by sangar on 23.9.18.
 */
public class QueueWithStackTest {

    QueueWithStack tester = new QueueWithStack();

    @Test
    public void queueTest() {

        tester.push(2);
        tester.push(3);
        tester.push(5);
        tester.push(1);

        assertEquals(2, tester.pop());
        assertEquals(3, tester.pop());
        assertEquals(5, tester.peek());
        assertEquals(false, tester.isEmpty());
    }
}
```

Can we do better than `O(n)` while pushing an element into the queue?

### Queue with stack : Push O(1), pop amortized complexity O(1) implementation

Push()
What if we push onto s1 as it is? What does it change? It makes the push operation on the queue `O(1)`.

Pop()
How does it impact the pop operation? If we pop all elements from s1 and push them onto s2, the element at the top of s2 is actually the element we need. Also, due to this pop and push operation, s2 now contains all the elements in the correct pop order for the queue.
So the idea is to always push onto s1 as it is; however, when popping, check whether s2 is empty or not. If it is not empty, pop from s2 and return; if it is empty, pop all elements from s1, push them all onto s2, and return the top of s2.

How does it impact the performance? It is true that when there is no element in s2, we have to pop all n elements from s1 and push them onto s2, which is an `O(n)` operation. However, all subsequent pop operations are `O(1)` until s2 is empty again; since each element is moved between the stacks at most once, the cost averaged over all operations is constant. This is called an amortized complexity of `O(1)`.

Empty()
For the queue to be empty, there should not be any element in either s1 or s2.

Peek()
If s2 is not empty, the front of the queue is at the top of s2, so return `s2.peek()`. If s2 is empty, the front of the queue is the oldest element in s1, which we remember in a separate variable at push time.

```
package com.company;

import java.util.Stack;

/**
 * Created by sangar on 23.9.18.
 */
public class QueueWithStackOptimized {
    private Stack<Integer> s1;
    private Stack<Integer> s2;
    private int front;

    public QueueWithStackOptimized(){
        s1 = new Stack<>();
        s2 = new Stack<>();
    }

    public void push(int x){
        if(s1.empty()) front = x;
        s1.push(x);
    }

    public int pop(){
        if(!s2.empty()) return s2.pop();
        if(s1.empty()) return -1; // queue is empty

        while(!s1.empty()){
            s2.push(s1.pop());
        }
        return s2.pop();
    }

    public boolean isEmpty(){
        return s1.empty() && s2.empty();
    }

    public int peek(){
        if(!s2.empty()) return s2.peek();

        return front;
    }
}
```

The complexity of the peek function is `O(1)` as well, thanks to the cached front element. Can you write test cases for the implemented queue?
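As a sketch of what such tests could look like, here is a condensed, self-contained copy of the optimized queue driven from a `main` method. The `TwoStackQueue` name is illustrative; note the empty-queue guard in pop is written as `if (s1.empty()) return -1;`:

```java
import java.util.Stack;

// Condensed copy of the optimized two-stack queue, for a self-contained test.
class TwoStackQueue {
    private final Stack<Integer> s1 = new Stack<>();
    private final Stack<Integer> s2 = new Stack<>();
    private int front; // oldest element of s1, cached at push time

    public void push(int x) {
        if (s1.empty()) front = x;
        s1.push(x);
    }

    public int pop() {
        if (!s2.empty()) return s2.pop();
        if (s1.empty()) return -1; // queue is empty
        while (!s1.empty()) s2.push(s1.pop());
        return s2.pop();
    }

    public int peek() {
        if (!s2.empty()) return s2.peek();
        return front;
    }

    public boolean isEmpty() {
        return s1.empty() && s2.empty();
    }
}

public class QueueWithStackOptimizedTest {
    public static void main(String[] args) {
        TwoStackQueue q = new TwoStackQueue();
        q.push(2);
        q.push(3);
        q.push(5);

        System.out.println(q.peek());    // 2: served from the cached front, s2 untouched
        System.out.println(q.pop());     // 2: first pop drains s1 into s2
        System.out.println(q.pop());     // 3: served directly from s2
        q.push(1);                       // goes to s1; 5 is still in s2
        System.out.println(q.pop());     // 5
        System.out.println(q.pop());     // 1: s2 empty again, drains s1
        System.out.println(q.isEmpty()); // true
    }
}
```

The interleaved push and pop calls are deliberate: they exercise both the cached-front path of peek and the drain of s1 into s2.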

Reference : Leetcode

Please share if there is something wrong or missing. If you want to have personal coaching from our experienced coaches, please reach out to us at [email protected]