Queues and stacks

Queue

Quick reference
Worst Case
space O(n)
enqueue O(1)
dequeue O(1)
peek O(1)

A queue stores items in a first-in, first-out (FIFO) order.

Picture a queue like the line outside a busy restaurant. First come, first served.

Strengths:
  • Fast operations. All queue operations take O(1) time.
Uses:
  • Breadth-first search uses a queue to keep track of the nodes to visit next.
  • Printers use queues to manage jobs—jobs get printed in the order they’re submitted.
  • Web servers use queues to manage requests—page requests get fulfilled in the order they’re received.
  • Processes wait in the CPU scheduler’s queue for their turn to run.
Implementation

Queues are easy to implement with linked lists:

  • To enqueue, insert at the tail of the linked list.
  • To dequeue, remove at the head of the linked list.

You could implement a queue with an array or dynamic array, but it would get kinda messy. Try drawing it out. You’ll notice that you’d need to build out a “scoot over” or “re-center” operation that automatically fires when your queue items hit the bottom edge of the array.
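
Here's a minimal sketch of the linked-list approach. (The QueueNode and LinkedListQueue names are just placeholders for this sketch, not anything standard.)

class QueueNode {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

class LinkedListQueue {
  constructor() {
    this.head = null;  // next item to dequeue
    this.tail = null;  // most recently enqueued item
  }

  // Enqueue: insert at the tail, O(1)
  enqueue(value) {
    const node = new QueueNode(value);
    if (this.tail) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
  }

  // Dequeue: remove at the head, O(1)
  // Returns null if the queue is empty
  dequeue() {
    if (!this.head) {
      return null;
    }
    const value = this.head.value;
    this.head = this.head.next;
    if (!this.head) {
      this.tail = null;
    }
    return value;
  }
}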

Stack
Quick reference
Worst Case
space O(n)
push O(1)
pop O(1)
peek O(1)

A stack stores items in a last-in, first-out (LIFO) order.

Picture a pile of dirty plates in your sink. As you add more plates, you bury the old ones further down. When you take a plate off the top to wash it, you’re taking the last plate you put in. “Last in, first out.”

Strengths:
  • Fast operations. All stack operations take O(1) time.
Uses:
  • The call stack is a stack that tracks function calls in a program. When a function returns, which function do we “pop” back to? The last one that “pushed” a function call.
  • Depth-first search uses a stack (sometimes the call stack) to keep track of which nodes to visit next.
  • String parsing—stacks turn out to be useful for several types of string parsing.
Implementation

You can implement a stack with either a linked list or a dynamic array—they both work pretty well:

                  Stack Push          Stack Pop
Linked Lists      insert at head      remove at head
Dynamic Arrays    append              remove last element
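
Here's a rough sketch of the linked-list flavor, for comparison. (The dynamic-array flavor shows up in the practice questions below; the names here are just illustrative.)

class LinkedListStack {
  constructor() {
    this.head = null;  // top of the stack
  }

  // Push: insert at the head, O(1)
  push(value) {
    this.head = { value, next: this.head };
  }

  // Pop: remove at the head, O(1)
  // Returns null if the stack is empty
  pop() {
    if (!this.head) {
      return null;
    }
    const value = this.head.value;
    this.head = this.head.next;
    return value;
  }
}
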
Practice
Question 1

You want to be able to access the largest element in a stack. 

You’ve already implemented this Stack class:

class Stack {
  constructor() {

    // Initialize an empty stack
    this.items = [];
  }

  // Push a new item onto the stack
  push(item) {
    this.items.push(item);
  }

  // Remove and return the last item
  pop() {

    // If the stack is empty, return null
    // (It would also be reasonable to throw an exception)
    if (!this.items.length) {
      return null;
    }
    return this.items.pop();
  }

  // Returns the last item without removing it
  peek() {
    if (!this.items.length) {
      return null;
    }
    return this.items[this.items.length - 1];
  }
}

Use your Stack class to implement a new class MaxStack with a method getMax() that returns the largest element in the stack. getMax() should not remove the item.

Your stacks will contain only integers.

Do you have an answer? No?

Here is a hint to get you started:

Initial Breakdown

A just-in-time approach is to have getMax() simply walk through the stack (popping all the elements off and then pushing them back on) to find the max element. This takes O(n) time for each call to getMax(). But we can do better.
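
Here's a rough sketch of that just-in-time idea, assuming the Stack class above (the temporary array is just one way to hold items while we walk through them):

function naiveGetMax(stack) {
  const poppedItems = [];
  let max = null;

  // Pop everything off, tracking the max as we go
  let item = stack.pop();
  while (item !== null) {
    if (max === null || item > max) {
      max = item;
    }
    poppedItems.push(item);
    item = stack.pop();
  }

  // Push the items back, restoring the original order
  while (poppedItems.length) {
    stack.push(poppedItems.pop());
  }

  return max;
}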

Think you have an answer now? Yes?

Then check your answer against these gotchas one at a time:

Gotchas

What if we push several items in increasing numeric order (like 1, 2, 3, 4…), so that there is a new max after each push()? What if we then pop() each of these items off, so that there is a new max after each pop()? Your algorithm shouldn’t pay a steep cost in these edge cases.

You should be able to get a runtime of O(1) for push(), pop(), and getMax().

Full Breakdown

A just-in-time approach is to have getMax() simply walk through the stack (popping all the elements off and then pushing them back on) to find the max element. This takes O(n) time for each call to getMax(). But we can do better.

To get O(1) time for getMax(), we could store the max integer as a member variable (call it max). But how would we keep it up to date?

For every push(), we can check to see if the item being pushed is larger than the current max, assigning it as our new max if so. But what happens when we pop() the current max? We could re-compute the current max by walking through our stack in O(n) time. So our worst-case runtime for pop() would be O(n). We can do better.

What if when we find a new current max (newMax), instead of overwriting the old one (oldMax) we held onto it, so that once newMax was popped off our stack we would know that our max was back to oldMax?

What data structure should we store our set of maxes in? We want something where the last item we put in is the first item we get out (“last in, first out”).

We can store our maxes in another stack!

Solution

We define two new stacks within our MaxStack class: stack holds all of our integers, and maxesStack holds our “maxima.” We use maxesStack to keep our max up to date in constant time as we push() and pop():

  1. Whenever we push() a new item, we check to see if it’s greater than or equal to the current max, which is at the top of maxesStack. If it is, we also push() it onto maxesStack.
  2. Whenever we pop(), we also pop() from the top of maxesStack if the item equals the top item in maxesStack.
class MaxStack {
  constructor() {
    this.stack = new Stack();
    this.maxesStack = new Stack();
  }

  // Add a new item to the top of our stack. If the item is greater
  // than or equal to the last item in maxesStack, it's
  // the new max! So we'll add it to maxesStack.
  push(item) {
    this.stack.push(item);
    if (this.maxesStack.peek() === null || item >= this.maxesStack.peek()) {
      this.maxesStack.push(item);
    }
  }

  // Remove and return the top item from our stack. If it equals
  // the top item in maxesStack, they must have been pushed in together.
  // So we'll pop it out of maxesStack too.
  pop() {
    const item = this.stack.pop();
    if (item === this.maxesStack.peek()) {
      this.maxesStack.pop();
    }
    return item;
  }

  // The last item in maxesStack is the max item in our stack.
  getMax() {
    return this.maxesStack.peek();
  }
}
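
For example, a quick walkthrough:

const maxStack = new MaxStack();
maxStack.push(5);
maxStack.push(2);
maxStack.push(7);
maxStack.getMax();  // 7
maxStack.pop();     // 7 (and the top of maxesStack comes off with it)
maxStack.getMax();  // 5
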
Complexity

O(1) time for push(), pop(), and getMax(). O(m) additional space, where m is the number of operations performed on the stack.

Bonus

Our solution requires O(m) additional space for the second stack. But do we really need it?

Can you come up with a solution that requires O(1) additional space? (It’s tricky!)

What We Learned

Notice how in the solution we’re spending time on push() and pop() so we can save time on getMax(). That’s because we chose to optimize for the time cost of calls to getMax().

But we could’ve chosen to optimize for something else. For example, if we expected we’d be running push() and pop() frequently and running getMax() rarely, we could have optimized for faster push() and pop() methods.

Sometimes the first step in algorithm design is deciding what we’re optimizing for. Start by considering the expected characteristics of the input.

Question 2

Implement a queue with 2 stacks. Your queue should have an enqueue and a dequeue method and it should be “first in first out” (FIFO).

Optimize for the time cost of m calls on your queue. These can be any mix of enqueue and dequeue calls.

Assume you already have a stack implementation and it gives O(1) time push and pop.

Do you have an answer? No?

Here is a hint to get you started:

Initial Breakdown

Let’s call our stacks stack1 and stack2.

To start, we could just push items onto stack1 as they are enqueued. So if our first 3 calls are enqueues of a, b, and c (in that order) we push them onto stack1 as they come in.

But recall that stacks are last in, first out. If our next call was a dequeue() we would need to return a, but it would be on the bottom of the stack.

Think you have an answer now? Yes?

Then check your answer against these gotchas one at a time.

Gotchas

We can get O(m) runtime for m calls. Crazy, right?

Breakdown

Let’s call our stacks stack1 and stack2.

To start, we could just push items onto stack1 as they are enqueued. So if our first 3 calls are enqueues of a, b, and c (in that order) we push them onto stack1 as they come in.

But recall that stacks are last in, first out. If our next call was a dequeue() we would need to return a, but it would be on the bottom of the stack.

Look at what happens when we pop c, b, and a one-by-one from stack1 to stack2.
Notice how their order is reversed.

We can pop each item 1-by-1 from stack1 to stack2 until we get to a.

We could return a immediately, but what if our next operation was to enqueue a new item d? Where would we put d? d should get dequeued after c, so it makes sense to put them next to each other . . . but c is at the bottom of stack2.

Let’s try moving the other items back onto stack1 before returning. This will restore the ordering from before the dequeue, with a now gone. So if we enqueue d next, it ends up on top of c, which seems right.

So we’re basically storing everything in stack1, using stack2 only for temporarily “flipping” all of our items during a dequeue to get the bottom (oldest) element.

This is a complete solution. But we can do better.

What’s our time complexity for m operations? At any given point we have O(m) items inside our data structure, and if we dequeue we have to move all of them from stack1 to stack2 and back again. One dequeue operation thus costs O(m). The number of dequeues is O(m), so our worst-case runtime for these m operations is O(m^2).

Not convinced we can have O(m) dequeues and also have each one deal with O(m) items in the data structure? What if our first .5m operations are enqueues, and the second .5m are alternating enqueues and dequeues? For each of our .25m dequeues, we have .5m items in the data structure.

We can do better than this O(m^2) runtime.

What if we didn’t move things back to stack1 after putting them on stack2?

Solution

Let’s call our stacks inStack and outStack.

For enqueue, we simply push the enqueued item onto inStack.

For dequeue on an empty outStack, the oldest item is at the bottom of inStack. So we dig to the bottom of inStack by pushing each item one-by-one onto outStack until we reach the bottom item, which we return.

After moving everything from inStack to outStack, the item that was enqueued the 2nd longest ago (after the item we just returned) is at the top of outStack, the item enqueued 3rd longest ago is just below it, etc. So to dequeue on a non-empty outStack, we simply return the top item from outStack.

With that description in mind, let’s write some code!

class QueueTwoStacks {
  constructor() {
    this.inStack = new Stack();
    this.outStack = new Stack();
  }

  enqueue(item) {
    this.inStack.push(item);
  }

  dequeue() {
    if (this.outStack.peek() === null) {

      // Move items from inStack to outStack, reversing order
      while (this.inStack.peek() !== null) {
        const newestInStackItem = this.inStack.pop();
        this.outStack.push(newestInStackItem);
      }

      // If outStack is still empty, raise an error
      if (this.outStack.peek() === null) {
        throw new Error("Can't dequeue from empty queue!");
      }
    }
    return this.outStack.pop();
  }
}
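
For instance, assuming the Stack class from Question 1 (which returns null when empty):

const queue = new QueueTwoStacks();
queue.enqueue('a');
queue.enqueue('b');
queue.dequeue();  // 'a' (everything in inStack gets flipped onto outStack)
queue.enqueue('c');
queue.dequeue();  // 'b' (already sitting on top of outStack)
queue.dequeue();  // 'c'
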
Complexity

Each enqueue is clearly O(1) time, and so is each dequeue when outStack has items. Dequeue on an empty outStack is order of the number of items in inStack at that moment, which can vary significantly.

Notice that the more expensive a dequeue on an empty outStack is (that is, the more items we have to move from inStack to outStack), the more O(1)-time dequeues off of a non-empty outStack it wins us in the future. Once items are moved from inStack to outStack they just sit there, ready to be dequeued in O(1) time. An item never moves “backwards” in our data structure.

We might guess that this “averages out” so that in a set of m enqueues and dequeues the total cost of all dequeues is actually just O(m). To check this rigorously, we can use the accounting method, counting the time cost per item instead of per enqueue or dequeue.

So let’s look at the worst case for a single item, which is the case where it is enqueued and then later dequeued. In this case, the item enters inStack (costing 1 push), then later moves to outStack (costing 1 pop and 1 push), then later comes off outStack to get returned (costing 1 pop).

Each of these 4 pushes and pops is O(1) time. So our total cost per item is O(1). Our m enqueue and dequeue operations put m or fewer items into the system, giving a total runtime of O(m).

What We Learned

People often struggle with the runtime analysis for this one. The trick is to think of the cost per item passing through our queue, rather than the cost per enqueue() and dequeue().

This trick generally comes in handy when you’re looking at the time cost of not just one call, but “m” calls.

Question 3

I like parentheticals (a lot).

“Sometimes (when I nest them (my parentheticals) too much (like this (and this))) they get confusing.”

Write a function that, given a sentence like the one above, along with the position of an opening parenthesis, finds the corresponding closing parenthesis.

Example: if the example string above is input with the number 10 (position of the first parenthesis), the output should be 79 (position of the last parenthesis).

Do you think you have an answer? No?

Here is a hint to get you started:

Initial Breakdown

How would you solve this problem by hand with an example input?

Think you have a solution now? Yes?

Then check your solution against these gotchas one by one:

Gotchas

We can do this in O(n) time.

We can do this in O(1) additional space.

Solution

We simply walk through the string, starting at our input opening parenthesis position. As we iterate, we keep a count of how many additional “(” we find as openNestedParens. When we find a “)” we decrement openNestedParens. If we find a “)” and openNestedParens is 0, we know that “)” closes our initial “(“, so we return its position.

function getClosingParen(sentence, openingParenIndex) {
  let openNestedParens = 0;

  for (let position = openingParenIndex + 1; position < sentence.length; position++) {
    const char = sentence[position];

    if (char === '(') {
      openNestedParens += 1;
    } else if (char === ')') {
      if (openNestedParens === 0) {
        return position;
      }
      openNestedParens -= 1;
    }
  }

  throw new Error('No closing parenthesis :(');
}
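
Trying it on the example sentence from the prompt:

const sentence = 'Sometimes (when I nest them (my parentheticals) too much (like this (and this))) they get confusing.';
getClosingParen(sentence, 10);  // 79, the position of the last ')'
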
Complexity

O(n) time, where n is the number of chars in the string. O(1) space.

What We Learned

The trick to many “parsing” questions like this is using a stack to track which brackets/phrases/etc are “open” as you go.

So next time you get a parsing question, one of your first thoughts should be “use a stack!”

In this problem, we can realize our stack would only hold ‘(‘ characters. So instead of storing each of those characters in a stack, we can store the number of items our stack would be holding.

That gets us from O(n) space to O(1) space.

It’s pretty cool when you can replace a whole data structure with a single integer 🙂

Question 4

You’re working with an intern that keeps coming to you with JavaScript code that won’t run because the braces, brackets, and parentheses are off. To save you both some time, you decide to write a braces/brackets/parentheses validator.

Let’s say:

  • ‘(‘, ‘{‘, ‘[‘ are called “openers.”
  • ‘)’, ‘}’, ‘]’ are called “closers.”

Write an efficient function that tells us whether or not an input string’s openers and closers are properly nested.

Examples:

  • “{ [ ] ( ) }” should return true
  • “{ [ ( ] ) }” should return false
  • “{ [ }” should return false

Do you have an answer? No?

Here is a hint to get you started:

Initial Breakdown

We can use a greedy approach to walk through our string character by character, making sure the string validates “so far” until we reach the end.

What do we do when we find an opener or closer?

Think you have an answer now? Yes?

Then check your answer against these gotchas one at a time:

Gotchas

Simply making sure each opener has a corresponding closer is not enough—we must also confirm that they are correctly ordered.

For example, “{ [ ( ] ) }” should return false, even though each opener can be matched to a closer.

We can do this in O(n) time and space. One iteration is all we need!

Full Breakdown

We can use a greedy approach to walk through our string character by character, making sure the string validates “so far” until we reach the end.

What do we do when we find an opener or closer?

Well, we’ll need to keep track of our openers so that we can confirm they get closed properly. What data structure should we use to store them? When choosing a data structure, we should start by deciding on the properties we want. In this case, we should figure out how we will want to retrieve our openers from the data structure! So next we need to know: what will we do when we find a closer?

Suppose we’re in the middle of walking through our string, and we find our first closer:

[ { ( ) ] . . . .
      ^

How do we know whether or not that closer in that position is valid?

A closer is valid if and only if it’s the closer for the most recently seen, unclosed opener. In this case, ‘(‘ was seen most recently, so we know our closing ‘)’ is valid.

So we want to store our openers in such a way that we can get the most recently added one quickly, and we can remove the most recently added one quickly (when it gets closed). Does this sound familiar?

What we need is a stack! 

Solution

We iterate through our string, making sure that:

  1. each closer corresponds to the most recently seen, unclosed opener
  2. every opener and closer is in a pair

We use a stack to keep track of the most recently seen, unclosed opener. And if the stack is ever empty when we come to a closer, we know that closer doesn’t have an opener.

So as we iterate:

  • If we see an opener, we push it onto the stack.
  • If we see a closer, we check to see if it is the closer for the opener at the top of the stack. If it is, we pop from the stack. If it isn’t, or if the stack is empty, we return false.

If we finish iterating and our stack is empty, we know every opener was properly closed.

function isValid(code) {

  const openersToClosers = {
    '(': ')',
    '[': ']',
    '{': '}',
  };

  const openers = new Set(['(', '[', '{']);
  const closers = new Set([')', ']', '}']);

  const openersStack = [];

  for (let i = 0; i < code.length; i++) {
    const char = code.charAt(i);

    if (openers.has(char)) {
      openersStack.push(char);
    } else if (closers.has(char)) {
      if (!openersStack.length) {
        return false;
      }
      const lastUnclosedOpener = openersStack.pop();

      // If this closer doesn't correspond to the most recently
      // seen unclosed opener, short-circuit, returning false
      if (openersToClosers[lastUnclosedOpener] !== char) {
        return false;
      }
    }
  }
  return openersStack.length === 0;
}
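
Running it against the examples above:

isValid('{ [ ] ( ) }');  // true
isValid('{ [ ( ] ) }');  // false
isValid('{ [ }');        // false
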
Complexity

O(n) time (one iteration through the string), and O(n) space (in the worst case, all of our characters are openers, so we push them all onto the stack).

Bonus

In Ruby, sometimes expressions are surrounded by vertical bars, “|like this|”. Extend your validator to validate vertical bars. Careful: there’s no difference between the “opener” and “closer” in this case—they’re the same character!

What We Learned

The trick was to use a stack. 

It might have been difficult to have that insight, because you might not use stacks that much.

Two common uses for stacks are:

  1. parsing (like in this problem)
  2. tree or graph traversal (like depth-first traversal)

So remember, if you’re doing either of those things, try using a stack!