# Sorting, searching, and logarithms

##### Binary Search Algorithm

A binary search algorithm finds an item in a sorted array in O(lg(n)) time.

A brute force search would walk through the whole array, taking O(n) time in the worst case.

Let’s say we have a sorted array of numbers. To find a number with a binary search, we:

1. Start with the middle number: is it bigger or smaller than our target number? Since the array is sorted, this tells us if the target would be in the left half or the right half of our array.
2. We’ve effectively divided the problem in half. We can “rule out” the whole half of the array that we know doesn’t contain the target number.
3. Repeat the same approach (of starting in the middle) on the new half-size problem. Then do it again and again, until we either find the number or “rule out” the whole set.

We can do this recursively, or iteratively. Here’s an iterative version:

```javascript
function binarySearch(target, nums) {
  // See if target appears in nums

  // We think of floorIndex and ceilingIndex as "walls" around
  // the possible positions of our target, so by -1 below we mean
  // to start our wall "to the left" of the 0th index
  // (we *don't* mean "the last index")
  let floorIndex = -1;
  let ceilingIndex = nums.length;

  // If there isn't at least 1 index between floor and ceiling,
  // we've run out of guesses and the number must not be present
  while (floorIndex + 1 < ceilingIndex) {

    // Find the index ~halfway between the floor and ceiling
    // We have to round down, to avoid getting a "half index"
    const distance = ceilingIndex - floorIndex;
    const halfDistance = Math.floor(distance / 2);
    const guessIndex = floorIndex + halfDistance;

    const guessValue = nums[guessIndex];

    if (guessValue === target) {
      return true;
    }

    if (guessValue > target) {
      // Target is to the left, so move ceiling to the left
      ceilingIndex = guessIndex;
    } else {
      // Target is to the right, so move floor to the right
      floorIndex = guessIndex;
    }
  }

  return false;
}
```
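The write-up above mentions that binary search can also be done recursively. Here is a sketch of what that might look like (our own illustrative version, not from the original; note that the recursion uses O(lg(n)) call-stack space, where the iterative version uses O(1)):

```javascript
// Recursive sketch of the same "walls" idea. floorIndex and
// ceilingIndex default to walls just outside the array.
function binarySearchRecursive(target, nums, floorIndex = -1, ceilingIndex = nums.length) {
  // Base case: no indices left between the walls
  if (floorIndex + 1 >= ceilingIndex) {
    return false;
  }

  const guessIndex = floorIndex + Math.floor((ceilingIndex - floorIndex) / 2);
  const guessValue = nums[guessIndex];

  if (guessValue === target) {
    return true;
  }

  if (guessValue > target) {
    // Rule out the right half
    return binarySearchRecursive(target, nums, floorIndex, guessIndex);
  }

  // Rule out the left half
  return binarySearchRecursive(target, nums, guessIndex, ceilingIndex);
}
```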

How did we know the time cost of binary search was O(lg(n))? The only non-constant part of our time cost is the number of times our while loop runs. Each step of our while loop cuts the range (dictated by floorIndex and ceilingIndex) in half, until our range has just one element left.

So the question is, “how many times must we divide our original array size (n) in half until we get down to 1?”

n * 1/2 * 1/2 * 1/2 * 1/2 * … = 1

How many 1/2’s are there? We don’t know yet, but we can call that number x:

n * (1/2)^x = 1

Now we solve for x:

n * (1^x / 2^x) = 1

n * (1 / 2^x) = 1

n / 2^x = 1

n = 2^x

Now to get the x out of the exponent. How do we do that? Logarithms.

Recall that log_10(100) means, “what power must we raise 10 to, to get 100?” The answer is 2.

So in this case, if we take the log_2 of both sides…

log_2(n) = log_2(2^x)

The right hand side asks, “what power must we raise 2 to, to get 2^x?” Well, that’s just x.

log_2(n) = x

So there it is. The number of times we must divide n in half to get down to 1 is log_2(n). So our total time cost is O(lg(n)).
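As a quick sanity check (our own, not part of the original derivation), we can count the halving steps directly and compare against log_2(n):

```javascript
// Count how many times the "cut the range in half" loop runs
// for a range of size n, in the worst case (target not present).
function countHalvings(n) {
  let steps = 0;
  let rangeSize = n;
  while (rangeSize > 1) {
    rangeSize = Math.ceil(rangeSize / 2);  // worst case: the bigger half survives
    steps++;
  }
  return steps;
}

// countHalvings(1024) gives 10, and log_2(1024) = 10
```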

Careful: we can only use binary search if the input array is already sorted.

##### Question 1

I want to learn some big words so people think I’m smart.

I opened up a dictionary to a page in the middle and started flipping through, looking for words I didn’t know. I put each word I didn’t know at increasing indices in a huge array I created in memory. When I reached the end of the dictionary, I started from the beginning and did the same thing until I reached the page I started at.

Now I have an array of words that are mostly alphabetical, except they start somewhere in the middle of the alphabet, reach the end, and then start from the beginning of the alphabet. In other words, this is an alphabetically ordered array that has been “rotated.” For example:

```javascript
const words = [
  'ptolemaic',
  'supplant',
  'undulate',
  'xenoepist',
  'asymptote',  // <-- rotates here!
  'babka',
  'banoffee',
  'engender',
  'karpatka',
  'othellolagkage',
];
```

Write a function for finding the index of the “rotation point,” which is where I started working from the beginning of the dictionary. This array is huge (there are lots of words I don’t know) so we want to be efficient here.

To keep things simple, you can assume all words are lowercase.


##### Gotchas

We can get O(lg(n)) time.


##### Breakdown

The array is mostly ordered. We should exploit that fact.

What’s a common algorithm that takes advantage of the fact that an array is sorted to find an item efficiently?

Binary search! We can write an adapted version of binary search for this.

In each iteration of our binary search, how do we know if the rotation point is to our left or to our right?

Try drawing out an example array!

```
words = ['k', 'v', 'a', 'b', 'c', 'd', 'e', 'g', 'i'];
                              ^
```

If our “current guess” is the middle item, which is ‘c’ in this case, is the rotation point to the left or to the right? How do we know?

Notice that every item to the right of our rotation point is always alphabetically before the first item in the array.

So the rotation point is to our left if the current item is less than the first item. Else it’s to our right.

##### Solution

This is a modified version of binary search. At each iteration, we go right if the item we’re looking at is greater than or equal to the first item, and we go left if it’s less than the first item.

We keep track of the lower and upper bounds on the rotation point, calling them floorIndex and ceilingIndex (initially we called them “floor” and “ceiling,” but because we didn’t imply the type in the name we got confused and created bugs). When floorIndex and ceilingIndex are directly next to each other, we know the floor is the last item we added before starting from the beginning of the dictionary, and the ceiling is the first item we added after.

```javascript
function findRotationPoint(words) {
  const firstWord = words[0];

  let floorIndex = 0;
  let ceilingIndex = words.length - 1;

  while (floorIndex < ceilingIndex) {

    // Guess a point halfway between floor and ceiling
    const guessIndex = Math.floor(floorIndex + ((ceilingIndex - floorIndex) / 2));

    // If guess comes after first word or is the first word
    if (words[guessIndex] >= firstWord) {
      // Go right
      floorIndex = guessIndex;
    } else {
      // Go left
      ceilingIndex = guessIndex;
    }

    // If floor and ceiling have converged
    if (floorIndex + 1 === ceilingIndex) {
      // Between floor and ceiling is where we flipped to the beginning
      // so ceiling is alphabetically first
      break;
    }
  }

  return ceilingIndex;
}
```
##### Complexity

Each time we go through the while loop, we cut our range of indices in half, just like binary search. So we have O(lg(n)) loop iterations.

In each loop iteration, we do some arithmetic and a string comparison. The arithmetic is constant time, but the string comparison requires looking at characters in both words—every character in the worst case. Here, we’ll assume our word lengths are bounded by some constant so we’ll say the string comparison takes constant time.

The longest word in English is pneumonoultramicroscopicsilicovolcanoconiosis, a medical term. It’s 45 letters long.

Putting everything together, we do O(lg(n)) iterations, and each iteration is O(1) time. So our time complexity is O(lg(n)).

Some languages—like German, Russian, and Dutch—can have arbitrarily long words, so we might want to factor the length of the words into our runtime. We could say the length of the words is k, each string comparison takes O(k) time, and the whole algorithm takes O(k * lg(n)) time.

We use O(1) space to store the first word and the floor and ceiling indices.

##### Bonus

This function assumes that the array is rotated. If it isn’t, what index will it return? How can we fix our function to return 0 for an unrotated array?
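For the curious, here is one possible sketch of a fix (our own; `findRotationPointSafe` is an illustrative name). In an unrotated array every word is greater than or equal to the first word, so the loop keeps moving floorIndex right and the function ends up returning the last index. But a sorted, unrotated array has its first word before its last word, which we can check up front:

```javascript
function findRotationPointSafe(words) {
  // In a sorted, unrotated array the first word comes before the
  // last word, so we can answer 0 without any searching
  if (words.length < 2 || words[0] < words[words.length - 1]) {
    return 0;
  }

  const firstWord = words[0];
  let floorIndex = 0;
  let ceilingIndex = words.length - 1;

  while (floorIndex + 1 < ceilingIndex) {
    const guessIndex = Math.floor(floorIndex + ((ceilingIndex - floorIndex) / 2));
    if (words[guessIndex] >= firstWord) {
      floorIndex = guessIndex;  // rotation point is to the right
    } else {
      ceilingIndex = guessIndex;  // rotation point is at or left of guess
    }
  }

  return ceilingIndex;
}
```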

##### What We Learned

The answer was a modified version of binary search.

This is a great example of the difference between “knowing” something and knowing something. You might have seen binary search before, but that doesn’t help you much unless you’ve learned the lessons of binary search.

Binary search teaches us that when an array is sorted or mostly sorted:

1. The value at a given index tells us a lot about what’s to the left and what’s to the right.
2. We don’t have to look at every item in the array. By inspecting the middle item, we can “rule out” half of the array.
3. We can use this approach over and over, cutting the problem in half until we have the answer. This is sometimes called “divide and conquer.”

So whenever you know an array is sorted or almost sorted, think about these lessons from binary search and see if they apply.

##### Question 2

Find a duplicate, Space Edition™.

We have an array of integers, where:

1. The integers are in the range 1..n
2. The array has a length of n+1

It follows that our array has at least one integer which appears at least twice. But it may have several duplicates, and each duplicate may appear more than twice.

Write a function which finds an integer that appears more than once in our array. Don’t modify the input! (If there are multiple duplicates, you only need to find one of them.)

We’re going to run this function on our new, super-hip MacBook Pro With Retina Display™. Thing is, the damn thing came with the RAM soldered right to the motherboard, so we can’t upgrade our RAM. So we need to optimize for space!


##### Gotchas

We can do this in O(1) space.

We can do this in less than O(n^2) time while keeping O(1) space.

We can do this in O(n lg(n)) time and O(1) space.

We can do this without modifying the input.

Most O(n lg(n)) algorithms double something or cut something in half. How can we rule out half of the numbers each time we iterate through the array?


##### Breakdown

This one’s a classic! We just do one walk through the array, using a set to keep track of which items we’ve seen!

```javascript
function findRepeat(numbers) {
  const numbersSeen = new Set();
  for (let i = 0; i < numbers.length; i++) {
    const number = numbers[i];
    if (numbersSeen.has(number)) {
      return number;
    }
    numbersSeen.add(number);
  }

  // Whoops--no duplicate
  throw new Error('no duplicate!');
}
```

Bam. O(n) time and… O(n) space…
Right, we’re supposed to optimize for space. O(n) is actually kinda high space-wise. Hm. We can probably get O(1)…

We can “brute force” this by taking each number in the range 1..n and, for each, walking through the array to see if it appears twice.

```javascript
function findRepeat(numbers) {
  for (let needle = 1; needle < numbers.length; needle++) {
    let hasBeenSeen = false;
    for (let i = 0; i < numbers.length; i++) {
      const number = numbers[i];
      if (number === needle) {
        if (hasBeenSeen) {
          return number;
        } else {
          hasBeenSeen = true;
        }
      }
    }
  }

  // Whoops--no duplicate
  throw new Error('no duplicate!');
}
```

This is O(1) space and O(n^2) time.

That space complexity can’t be beat, but the time cost seems a bit high. Can we do better?

One way to beat O(n^2) time is to get O(n lg(n)) time. Sorting takes O(n lg(n)) time. And if we sorted the array, any duplicates would be right next to each other!

But if we start off by sorting our array we’ll need to take O(n) space to store the sorted array

…unless we sort the input array in place!

Okay, so this’ll work:

1. Do an in-place sort of the array (for example an in-place merge sort).
2. Walk through the now-sorted array from left to right.
3. Return as soon as we find two adjacent numbers which are the same.

This’ll keep us at O(1) space and bring us down to O(n lg(n)) time.
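The steps above can be sketched as follows (our own illustration; note that JavaScript’s built-in sort isn’t guaranteed by the spec to be in-place with O(1) extra space, so the space claim depends on the sort you use):

```javascript
// Sketch of the sort-first approach. Note it modifies the input!
// The O(1) space claim assumes an in-place, O(n lg(n)) sort; the
// built-in sort used here makes no such guarantee.
function findRepeatBySorting(numbers) {
  numbers.sort((a, b) => a - b);

  // After sorting, any duplicates are adjacent
  for (let i = 1; i < numbers.length; i++) {
    if (numbers[i] === numbers[i - 1]) {
      return numbers[i];
    }
  }

  // Whoops--no duplicate
  throw new Error('no duplicate!');
}
```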

But modifying the input is kind of a drag—it might cause problems elsewhere in our code. Can we maintain this time and space cost without modifying the input?

Let’s take a step back. How can we break this problem down into subproblems?

If we’re going to do O(n lg(n)) time, we’ll probably be iteratively doubling something or iteratively cutting something in half. That’s how we usually get a “lg(n)”. So what if we could cut the problem in half somehow?

Well, binary search works by cutting the problem in half after figuring out which half of our input array holds the answer.

But in a binary search, the reason we can confidently say which half has the answer is because the array is sorted. For this problem, when we cut our unsorted array in half we can’t really make any strong statements about which elements are in the left half and which are in the right half.

What if we could cut the problem in half a different way, other than cutting the array in half?

With this problem, we’re looking for a needle (a repeated number) in a haystack (array). What if instead of cutting the haystack in half, we cut the set of possibilities for the needle in half?

The full range of possibilities for our needle is 1..n. How could we test whether the actual needle is in the first half of that range (1..n/2) or the second half (n/2+1..n)?

A quick note about how we’re defining our ranges: when we take n/2 we’re doing integer division, so we throw away the remainder. To see what’s going on, we should look at what happens when n is even and when n is odd:

• If n is 6 (an even number), we have n/2=3 and n/2+1=4, so our ranges are 1..3 and 4..6.
• If n is 5 (an odd number), n/2=2 (we throw out the remainder) and n/2+1=3, so our ranges are 1..2 and 3..5.

So we can notice a few properties about our ranges:

1. They aren’t necessarily the same size.
2. They don’t overlap.
3. Taken together, they represent the original input array’s range of 1..n. In math terminology, we could say their union is 1..n.

So, how do we know if the needle is in 1..n/2 or (n/2+1)..n?

Think about the original problem statement. We know that we have at least one repeat because there are n+1 items and they are all in the range 1..n, which contains only n distinct integers.

This notion of “we have more items than we have possibilities, so we must have at least one repeat” is pretty powerful. It’s sometimes called the pigeonhole principle. Can we exploit the pigeonhole principle to see which half of our range contains a repeat?

Imagine that we separated the input array into two subarrays—one containing the items in the range 1..n/2 and the other containing the items in the range (n/2+1)..n.

Each subarray has a number of elements as well as a number of possible distinct integers (that is, the length of the range of possible integers it holds).

Given what we know about the number of elements vs the number of possible distinct integers in the original input array, what can we say about the number of elements vs the number of distinct possible integers in these subarrays?

The sum of the subarrays’ numbers of elements is n+1 (the number of elements in the original input array) and the sum of the subarrays’ numbers of possible distinct integers is n (the number of possible distinct integers in the original input array).

Since the sum of the subarrays’ numbers of elements is 1 greater than the sum of the subarrays’ numbers of possible distinct integers, one of the subarrays must have at least one more element than it has possible distinct integers.

Not convinced? We can prove this by contradiction. Suppose neither array had more elements than it had possible distinct integers. In other words, both arrays have at most the same number of items as they have distinct possibilities. The sum of their numbers of items would then be at most the total number of possibilities across each of them, which is n. This is a contradiction—we know that our total number of items from the original input array is n+1, which is greater than n.

Now that we know one of our subarrays has 1 or more items more than it has distinct possibilities, we know that subarray must have at least one duplicate, by the same pigeonhole argument that we use to know that the original input array has at least one duplicate.

So once we know which subarray has the count higher than its number of distinct possibilities, we can use this same approach recursively, cutting that subarray into two halves, etc, until we have just 1 item left in our range.

Of course, we don’t need to actually separate our array into subarrays. All we care about is how long each subarray would be. So we can simply do one walk through the input array, counting the number of items that would be in each subarray.

Can you formalize this in code?

Careful—if we do this recursively, we’ll incur a space cost in the call stack! Do it iteratively instead.

##### Solution

Our approach is similar to a binary search, except we divide the range of possible answers in half at each step, rather than dividing the array in half.

1. Find the number of integers in our input array which lie within the range 1..n/2.
2. Compare that to the number of possible unique integers in the same range.
3. If the number of actual integers is greater than the number of possible integers, we know there’s a duplicate in the range 1..n/2, so we iteratively use the same approach on that range.
4. If the number of actual integers is not greater than the number of possible integers, we know there must be a duplicate in the range n/2+1..n, so we iteratively use the same approach on that range.
5. At some point, our range will contain just 1 integer, which will be our answer.
```javascript
function findRepeat(numbers) {

  let floor = 1;
  let ceiling = numbers.length - 1;

  while (floor < ceiling) {

    // Divide our range 1..n into an upper range and lower range
    // (such that they don't overlap)
    // lower range is floor..midpoint
    // upper range is midpoint+1..ceiling
    const midpoint = Math.floor(floor + ((ceiling - floor) / 2));
    const lowerRangeFloor = floor;
    const lowerRangeCeiling = midpoint;
    const upperRangeFloor = midpoint + 1;
    const upperRangeCeiling = ceiling;

    const distinctPossibleIntegersInLowerRange = lowerRangeCeiling - lowerRangeFloor + 1;

    // Count number of items in lower range
    let itemsInLowerRange = 0;
    numbers.forEach(item => {
      // Is it in the lower range?
      if (item >= lowerRangeFloor && item <= lowerRangeCeiling) {
        itemsInLowerRange += 1;
      }
    });

    if (itemsInLowerRange > distinctPossibleIntegersInLowerRange) {
      // There must be a duplicate in the lower range
      // so use the same approach iteratively on that range
      floor = lowerRangeFloor;
      ceiling = lowerRangeCeiling;
    } else {
      // There must be a duplicate in the upper range
      // so use the same approach iteratively on that range
      floor = upperRangeFloor;
      ceiling = upperRangeCeiling;
    }
  }

  // Floor and ceiling have converged
  // We found a number that repeats!
  return floor;
}
```
##### Complexity

O(1) space and O(n lg(n)) time.

Tricky as this solution is, we can actually do even better, getting our runtime down to O(n) while keeping our space cost at O(1). The solution is NUTS; it’s probably outside the scope of what most interviewers would expect. But for the curious…here it is!
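That advanced solution isn’t reproduced here, but one well-known technique that achieves O(n) time and O(1) space for this exact setup is Floyd’s cycle detection (the “tortoise and hare”): treat each value as a pointer from its index to another index, and find the entrance of the resulting cycle. A sketch, with our own illustrative name:

```javascript
// Treat numbers[i] as a pointer from node i to node numbers[i].
// Values are in 1..n, so nothing points to node 0: the walk starting
// at node 0 must enter a cycle, and the cycle's entrance is a node
// pointed to by two different nodes--i.e., the duplicate value.
function findRepeatFloyd(numbers) {
  // Phase 1: slow moves one step, fast moves two, until they meet
  // somewhere inside the cycle
  let slow = 0;
  let fast = 0;
  do {
    slow = numbers[slow];
    fast = numbers[numbers[fast]];
  } while (slow !== fast);

  // Phase 2: restart one pointer at node 0; moving both one step at
  // a time, they meet at the cycle's entrance--the duplicate
  slow = 0;
  while (slow !== fast) {
    slow = numbers[slow];
    fast = numbers[fast];
  }
  return slow;
}
```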

##### Bonus

This function always returns one duplicate, but there may be several duplicates. Write a function that returns all duplicates.

##### What We Learned

Our answer was a modified binary search. We got there by reasoning about the expected runtime:

1. We started with an O(n^2) “brute force” solution and wondered if we could do better.
2. We knew to beat O(n^2) we’d probably do O(n) or O(n lg(n)), so we started thinking of ways we might get an O(n lg(n)) runtime.
3. lg(n) usually comes from iteratively cutting stuff in half, so we arrived at the final algorithm by exploring that idea.

Starting with a target runtime and working backward from there can be a powerful strategy for all kinds of coding interview questions.

##### Question 3

You created a game that is more popular than Angry Birds.

Each round, players receive a score between 0 and 100, which you use to rank them from highest to lowest. So far you’re using an algorithm that sorts in O(n lg(n)) time, but players are complaining that their rankings aren’t updated fast enough. You need a faster sorting algorithm.

Write a function that takes:

1. an array of unsortedScores
2. the highestPossibleScore in the game

and returns a sorted array of scores in less than O(n lg(n)) time.

For example:

```javascript
const unsortedScores = [37, 89, 41, 65, 91, 53];
const HIGHEST_POSSIBLE_SCORE = 100;

sortScores(unsortedScores, HIGHEST_POSSIBLE_SCORE);
// returns [91, 89, 65, 53, 41, 37]
```

We’re defining n as the number of unsortedScores because we’re expecting the number of players to keep climbing.

And, we’ll treat highestPossibleScore as a constant instead of factoring it into our big O time and space costs because the highest possible score isn’t going to change. Even if we do redesign the game a little, the scores will stay around the same order of magnitude.


##### Gotchas

Multiple players can have the same score! If 10 people got a score of 90, the number 90 should appear 10 times in our output array.

We can do this in O(n) time and space.


##### Breakdown

O(n lg(n)) is the time to beat. Even if our array of scores were already sorted, we’d have to do a full walk through the array to confirm that it was in fact fully sorted. So we have to spend at least O(n) time on our sorting function. If we’re going to do better than O(n lg(n)), we’re probably going to do exactly O(n).

What are some common ways to get O(n) runtime?

One common way to get O(n) runtime is to use a greedy algorithm. But in this case we’re not looking to just grab a specific value from our input set (e.g. the “largest” or the “greatest difference”)—we’re looking to reorder the whole set. That doesn’t lend itself as well to a greedy approach.

Another common way to get O(n) runtime is to use counting. We can build an array scoreCounts where the indices represent scores and the values represent how many times the score appears. Once we have that, can we generate a sorted array of scores?

What if we did an in-order walk through scoreCounts? Each index represents a score, and its value represents the count of appearances. So we can simply add each score to a new array sortedScores as many times as it appears.

##### Solution

We use counting sort.

```javascript
function sortScores(unorderedScores, highestPossibleScore) {

  // Array of 0s at indices 0..highestPossibleScore
  const scoreCounts = new Array(highestPossibleScore + 1).fill(0);

  // Populate scoreCounts
  unorderedScores.forEach(score => {
    scoreCounts[score]++;
  });

  // Populate the final sorted array
  const sortedScores = [];

  // For each item in scoreCounts
  for (let score = highestPossibleScore; score >= 0; score--) {
    const count = scoreCounts[score];

    // For the number of times the item occurs
    for (let time = 0; time < count; time++) {
      sortedScores.push(score);
    }
  }

  return sortedScores;
}
```
##### Complexity

O(n) time and O(n) space, where n is the number of scores.

Wait, aren’t we nesting two loops towards the bottom? So shouldn’t it be O(n^2) time? Notice what those loops iterate over. The outer loop runs once for each unique number in the array. The inner loop runs once for each time that number occurred.

So in essence we’re just looping through the n numbers from our input array, except we’re splitting it into two steps: (1) each unique number, and (2) each time that number appeared.

Here’s another way to think about it: in each iteration of our two nested loops, we append one item to sortedScores. How many numbers end up in sortedScores in the end? Exactly as many as were in our input array: n.

If we didn’t treat highestPossibleScore as a constant, we could call it k and say we have O(n+k) time and O(n+k) space.

##### Bonus

Note that by optimizing for time we ended up incurring some space cost! What if we were optimizing for space?

We chose to generate and return a separate, sorted array. Could we instead sort the array in place? Does this change the time complexity? The space complexity?
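As a sketch of one direction the in-place version could take (our own illustration, assuming we’re allowed to overwrite the input): keep the counting array, but write the sorted scores back into the input instead of allocating a new output array. Time stays O(n); the O(n) output array goes away, leaving only the counting array, which is constant-sized under this problem’s assumptions.

```javascript
// Same counting idea, but overwrite the input array instead of
// allocating a new one. Under the problem's assumption that
// highestPossibleScore is a constant, this drops space to O(1).
function sortScoresInPlace(unsortedScores, highestPossibleScore) {
  const scoreCounts = new Array(highestPossibleScore + 1).fill(0);

  unsortedScores.forEach(score => {
    scoreCounts[score]++;
  });

  // Write each score back into the input, highest first
  let writeIndex = 0;
  for (let score = highestPossibleScore; score >= 0; score--) {
    for (let time = 0; time < scoreCounts[score]; time++) {
      unsortedScores[writeIndex] = score;
      writeIndex++;
    }
  }

  return unsortedScores;
}
```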