# Misc Divide and Conquer Algs

| Tags |
| --- |
| Algorithms, CS161 |

# Recognizing D&C algs

Divide and conquer algorithms arise when you can make a measurement and split things into multiple self-similar segments. Often, knowing this requires you to figure out some sort of fundamental truth about the system. The $a_i = i$ finder is one example.

Other times, when you see “larger or smaller”, think pivot partitioning. This is the idea behind quicksort, but it also applies to questions like the lightbulb problem (see below).

# General strategies

- identify natural subproblems: smaller arrays, things smaller and larger than a pivot, etc. Problems often have self-symmetry

- imagine you have the magical ability to solve these subproblems: what would you do? Could you then solve the larger problem?

In general, you should start with smaller examples, look at familiar algorithms, and bring in analysis tools. This can give you a hint on how many operations you can do for each step.

And more philosophically, think positively! If your limitation is $O(n)$, think “oh good! I can solve the problem after this many steps!” instead of treating it as some sort of draconian restriction.

# $a_i = i$ finder

In a sorted list of distinct integers $a_1 < a_2 < \cdots < a_n$, you can determine whether some $a_i = i$ in $O(\log n)$ time. The key fact: if $a_k > k$, then $a_{k+1} \geq a_k + 1 > k + 1$, so no later index can work; symmetrically, if $a_k < k$, no earlier index can work. Therefore, look at the middle element, see which case it is in, and recurse on the side that could still contain a fixed point.
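The recursion above is just a binary search. A minimal sketch (the name `fixed_point` is chosen here, not from the notes; indices are 1-based to match the problem statement):

```python
def fixed_point(a):
    """Return an index i (1-based) with a[i] == i in a strictly
    increasing list of distinct integers, or None if none exists."""
    lo, hi = 1, len(a)
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid - 1] == mid:    # Python lists are 0-indexed, so shift by one
            return mid
        elif a[mid - 1] > mid:   # every later index is also too big
            hi = mid - 1
        else:                    # every earlier index is also too small
            lo = mid + 1
    return None
```

For example, `fixed_point([-3, 0, 3, 5])` returns 3, since the third element is 3.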

# Smallest missing element

In a sorted (0-indexed) array of distinct non-negative integers, you can find the smallest missing element using the exact same technique as above, with a little twist. Distinctness forces $a_i \geq i$, so $a_i < i$ is never possible. If the middle element satisfies $a_i > i$, then the missing element happened at or before the middle; if $a_i = i$, everything up through the middle is present, so it happened after the middle.
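A sketch of that twist, again as a binary search (the name `smallest_missing` is an assumption of this example):

```python
def smallest_missing(a):
    """Smallest non-negative integer absent from a sorted list of
    distinct non-negative integers. Invariant: a[i] >= i, so if
    a[mid] > mid the gap is at or before mid, else it is after."""
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] > mid:   # a gap occurs somewhere in a[lo..mid]
            hi = mid
        else:              # a[mid] == mid: 0..mid are all present
            lo = mid + 1
    return lo              # 0..lo-1 are present, lo is not
```

For example, `smallest_missing([0, 1, 2, 4, 5])` returns 3.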

# Maximum sum subarray

Given an array of integers $A$, find a contiguous subarray with the maximum possible sum.

The brute force solution is $O(n^2)$: try every contiguous window, computing each sum incrementally as you slide across the array.

Here’s the critical insight. If we split the array in half, the maximum sum subarray lies entirely in the left half, entirely in the right half, or crosses the midpoint. The first two cases are handled by recursing on each half.

Then, for the crossing case, there’s a neat trick. Because a crossing subarray must include the boundary, you can find two max-sums independently: center-left and center-right. For center-left, start at the center and move left, keeping track of the current running sum and the maximum sum seen so far. Do the same moving right. The best crossing sum is the sum of these two, and computing it takes $O(n)$ time.

Taken together, this approach gives the recurrence $T(n) = 2T(n/2) + O(n)$, which yields $O(n \log n)$.
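The three cases can be sketched as follows (the function name and half-open index convention are choices of this example):

```python
def max_subarray(a, lo=0, hi=None):
    """Maximum subarray sum of a[lo:hi] by divide and conquer,
    O(n log n). Assumes the array is non-empty."""
    if hi is None:
        hi = len(a)
    if hi - lo == 1:
        return a[lo]
    mid = (lo + hi) // 2
    best_left = max_subarray(a, lo, mid)    # entirely in the left half
    best_right = max_subarray(a, mid, hi)   # entirely in the right half

    # Crossing case: best sum extending left from the boundary...
    total, left_part = 0, float("-inf")
    for i in range(mid - 1, lo - 1, -1):
        total += a[i]
        left_part = max(left_part, total)
    # ...plus best sum extending right from the boundary.
    total, right_part = 0, float("-inf")
    for i in range(mid, hi):
        total += a[i]
        right_part = max(right_part, total)

    return max(best_left, best_right, left_part + right_part)
```

For example, `max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4])` returns 6, from the subarray `[4, -1, 2, 1]`.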

There exists an $O(n)$ solution out there: https://en.wikipedia.org/wiki/Maximum_subarray_problem#Kadane's_algorithm.
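For reference, the linked $O(n)$ solution (Kadane’s algorithm) is a short scan: at each element, either extend the best run ending at the previous element or start fresh. A minimal sketch:

```python
def kadane(a):
    """O(n) maximum subarray sum. Assumes a is non-empty."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)    # extend the current run or restart at x
        best = max(best, cur)
    return best
```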

# The lightbulb problem

Setup: you have $n$ lightbulbs and $n$ sockets, each bulb fitting exactly one socket, and you can only test a bulb against a socket (never a bulb against a bulb, or a socket against a socket). The solution is very simple: it’s literally quicksort with a little twist. Pick a lightbulb at random and find the socket it matches. Then partition the sockets into those larger ($S_L$) and smaller ($S_S$) than this matching socket. Likewise, partition the lightbulbs into those larger ($L_L$) and smaller ($L_S$) than the matching lightbulb. Then, call the same algorithm again on $S_L, L_L$ and on $S_S, L_S$.

This works because at every step you split the sockets and the lightbulbs into two groups such that matching lights and sockets always land in the same group, while the group sizes shrink until you reach the base case of one light and one socket.

Partitioning is done in $O(n)$ time, so the whole thing is $O(n \log n)$ in expectation. In the worst case it’s $O(n^2)$, when the random pivots happen to split badly every time.
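The recursion can be sketched by modeling bulbs and sockets as lists of distinct sizes (the name `match` and this numeric modeling are assumptions of the example; the key constraint is that bulbs are only ever compared against sockets):

```python
import random

def match(bulbs, sockets):
    """Pair each bulb with its socket using only bulb-vs-socket
    comparisons. Returns (bulb, socket) pairs in increasing order.
    Expected O(n log n) like randomized quicksort."""
    if not bulbs:
        return []
    b = random.choice(bulbs)
    s = next(s for s in sockets if s == b)      # test b against each socket
    small_s = [x for x in sockets if x < s]     # S_S
    large_s = [x for x in sockets if x > s]     # S_L
    small_b = [x for x in bulbs if x < s]       # L_S: compare bulbs against s
    large_b = [x for x in bulbs if x > s]       # L_L
    return match(small_b, small_s) + [(b, s)] + match(large_b, large_s)
```

For example, `match([3, 1, 2], [2, 3, 1])` returns `[(1, 1), (2, 2), (3, 3)]` regardless of the random pivot choices.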