# Intro

We have already seen divide-and-conquer in merge sort: a recursive function breaks the problem into subproblems, and once a subproblem is small enough we solve it directly. Sometimes the subproblems must be solved differently from the original problem, and that extra cost is charged to the combine step.

## Recurrence

A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs.

We have three ways to solve a recurrence.

• Substitution method: guess a bound for the recurrence, then prove it by induction.
• Recursion-tree method: convert the recurrence into a tree whose nodes represent the cost of each subproblem, then sum the costs.
• Master method: read the bound off directly for recurrences of the form $T(n) = aT(n/b) + f(n)$.

## Some technicalities

We usually ignore floors and ceilings, since they rarely make any difference (they never change the bound by more than a constant factor).

# Maximum-subarray problem

We are trying to find a subarray whose sum is the largest among all possible subarrays.

## Brute Force

It is straightforward: enumerate every possible subarray, keeping track of the maximum sum and the endpoints of the subarray that achieves it.

Bruteforce(A)
    n = A.length
    max = -infty        // -infty, not 0, so all-negative arrays work
    max_left = 0
    max_right = 0
    for i from 0 to n - 1
        subarray_sum = 0
        for j from i to n - 1
            subarray_sum = subarray_sum + A[j] // running sum, so no recomputation
            if subarray_sum > max
                max = subarray_sum
                max_left = i
                max_right = j
    return (max_left, max_right, max)



To find the running time, there is an intuitive way: we are choosing 2 endpoints out of $n$ positions, and ${n \choose 2} \in \Theta(n^2)$.

It is not fast enough.
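The brute-force pseudocode translates directly into Python (the function name is my own):

```python
def brute_force_max_subarray(A):
    """Check every (i, j) pair; O(n^2) because the running sum
    avoids recomputing each subarray from scratch."""
    best = float("-inf")          # handles all-negative arrays
    best_left = best_right = 0
    for i in range(len(A)):
        subarray_sum = 0
        for j in range(i, len(A)):
            subarray_sum += A[j]  # extend A[i..j-1] by one element
            if subarray_sum > best:
                best = subarray_sum
                best_left, best_right = i, j
    return best_left, best_right, best
```

On the example array from CLRS, `brute_force_max_subarray([13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7])` returns `(7, 10, 43)`.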

## Divide and conquer

There are only three places the maximum subarray can exist. Let the left end be l, the right end be r, and the midpoint be m. The maximum subarray must be

1. Entirely in A[l..m]
2. Crossing the midpoint m
3. Entirely in A[m + 1..r]

If we are in situation 1 or 3, we are in luck, since those are just smaller instances of the original problem.

### In the middle situation

If the maximum subarray crosses the middle, it is not a smaller instance of the original problem, because it is constrained to cross the midpoint. But any maximum subarray in this situation must be the concatenation of some A[i..m] and A[m + 1..j], so we can find the best subarray of each of these two forms independently and join them.

find_max_crossing_array(A, low, mid, high)
    left_sum = -infty
    sum = 0
    for i = mid downto low
        sum = sum + A[i]
        if sum > left_sum
            left_sum = sum
            max_left = i

    right_sum = -infty
    sum = 0
    for j = mid + 1 upto high      // start at mid + 1 so A[mid] is not counted twice
        sum = sum + A[j]
        if sum > right_sum
            right_sum = sum
            max_right = j

    return (max_left, max_right, left_sum + right_sum)


Now, with the crossing procedure at hand, we can solve the max_subarray problem.

find_max_subarray(A, low, high)
    if high == low
        return (low, high, A[low]) // base case: one element
    else
        mid = floor((low + high) / 2)

        (left_low, left_high, left_sum) = find_max_subarray(A, low, mid)
        (right_low, right_high, right_sum) = find_max_subarray(A, mid + 1, high)
        (cross_low, cross_high, cross_sum) = find_max_crossing_array(A, low, mid, high)

        if left_sum >= right_sum and left_sum >= cross_sum
            return (left_low, left_high, left_sum)
        else if right_sum >= left_sum and right_sum >= cross_sum
            return (right_low, right_high, right_sum)
        else
            return (cross_low, cross_high, cross_sum)


## Analysis

At each level, we divide the problem size by 2 and solve two such subproblems: $T(n) = 2T(\dfrac{n}{2}) + \Theta(n)$, where the linear term is find_max_crossing_array's cost.

This is the same recurrence as merge sort, so the running time is $\Theta(n \log n)$.
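The two pseudocode procedures above can be sketched in Python (names and the `key`-based comparison are my own choices):

```python
def find_max_crossing_subarray(A, low, mid, high):
    """Best subarray of the form A[i..mid] concatenated with A[mid+1..j]."""
    left_sum, s, max_left = float("-inf"), 0, mid
    for i in range(mid, low - 1, -1):      # mid down to low
        s += A[i]
        if s > left_sum:
            left_sum, max_left = s, i
    right_sum, s, max_right = float("-inf"), 0, mid + 1
    for j in range(mid + 1, high + 1):     # mid + 1 up to high
        s += A[j]
        if s > right_sum:
            right_sum, max_right = s, j
    return max_left, max_right, left_sum + right_sum

def find_max_subarray(A, low, high):
    if low == high:
        return low, high, A[low]           # base case: one element
    mid = (low + high) // 2
    left = find_max_subarray(A, low, mid)
    right = find_max_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])  # pick the largest sum
```

On the CLRS example array, `find_max_subarray(A, 0, len(A) - 1)` agrees with the brute force and returns `(7, 10, 43)`.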

## Another method in linear time

There is a method called Kadane's algorithm that runs in linear time. It starts with a question: if we know the maximum subarray ending at index $i$ has sum $B_i$, what is the maximum subarray ending at $i + 1$? It either contains the maximum subarray ending at $i$ as a prefix, or consists of $A_{i+1}$ alone, whichever is bigger. In other words, $B_{i+1} = \max(B_i + A_{i + 1}, A_{i + 1})$. The only case where we discard the prefix is when $B_i$ is negative.

kadane(A)
    max_ending_here = A[0]
    max_so_far = A[0]
    for i from 1 to A.length - 1
        max_ending_here = max(A[i], max_ending_here + A[i])
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
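A Python sketch of Kadane's algorithm (this version tracks only the maximum sum, not its endpoints, and assumes a non-empty array):

```python
def kadane(A):
    """Linear-time maximum subarray sum."""
    max_ending_here = max_so_far = A[0]
    for x in A[1:]:
        # Either extend the best subarray ending at the previous
        # element, or start fresh at x (drop a negative prefix).
        max_ending_here = max(x, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
```

One pass, constant extra space, so $\Theta(n)$ time overall.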


# Strassen matrix multiplication

The usual triple-loop method yields an $O(n^3)$ running time.

## Naive divide and conquer

We know that block matrix multiplication behaves the same as ordinary matrix multiplication. So we can partition each matrix evenly into four blocks and multiply the blocks in the usual way.

$C_{11} = A_{11} * B_{11} + A_{12} * B_{21}$ and so on.

So the divide-and-conquer algorithm for square matrix multiplication computes each of the four blocks of $C$ this way, recursing on the block products.

There are 8 subproblems per recursive call, each of half the original side length (we are counting side length here, not entries), and the matrix additions take $O(n^2)$ time. So

$T(n) = 8T(n/2) + O(n^2)$

We can use indexing to describe the blocks instead of copying entries, so partitioning incurs no cost. By the master theorem, since $\log_2 8 = 3$, the running time is still $O(n^3)$ — no better than the naive method.
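A sketch of the naive divide-and-conquer multiply for n-by-n matrices with n a power of two (names are mine; the slicing here copies entries for clarity, whereas the index-based version the text mentions would avoid that):

```python
def mat_add(X, Y):
    """Entrywise sum of two equal-size matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    """Partition M into four n/2-by-n/2 blocks: M11, M12, M21, M22."""
    n = len(M) // 2
    return ([row[:n] for row in M[:n]], [row[n:] for row in M[:n]],
            [row[:n] for row in M[n:]], [row[n:] for row in M[n:]])

def dc_multiply(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]      # base case: 1x1 product
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # Eight recursive products, combined with Theta(n^2) additions.
    C11 = mat_add(dc_multiply(A11, B11), dc_multiply(A12, B21))
    C12 = mat_add(dc_multiply(A11, B12), dc_multiply(A12, B22))
    C21 = mat_add(dc_multiply(A21, B11), dc_multiply(A22, B21))
    C22 = mat_add(dc_multiply(A21, B12), dc_multiply(A22, B22))
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]   # glue blocks side by side
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

For example, `dc_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`.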

## Strassen’s method

The key is to perform more additions and subtractions in exchange for one fewer multiplication — in other words, to make the recursion tree less bushy. We compute 7 intermediate matrices, each the result of one recursive matrix multiplication, then combine them with additions and subtractions to obtain the four blocks of the result. The recurrence becomes $T(n) = 7T(n/2) + \Theta(n^2)$, which by the master theorem gives $\Theta(n^{\log_2 7}) \approx \Theta(n^{2.81})$.
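For the 2-by-2 case, Strassen's seven products look like this (entries are scalars here; in the full algorithm each product would be a recursive call on an n/2-by-n/2 block):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven multiplications (each uses sums/differences of blocks).
    p1 = a11 * (b12 - b22)
    p2 = (a11 + a12) * b22
    p3 = (a21 + a22) * b11
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a22) * (b11 + b22)
    p6 = (a12 - a22) * (b21 + b22)
    p7 = (a11 - a21) * (b11 + b12)
    # Combine with additions and subtractions only.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p5 + p1 - p3 - p7]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, matching the ordinary product.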

# The substitution method for solving recurrences

1. Guess the form of the solution
2. Use induction to find the constants and show the solution works.

Sometimes the boundary condition of our guess doesn't hold for small $n$, but since asymptotic notation only requires the bound to hold for all $n \ge n_0$, we are free to extend the base cases (and enlarge the constant) until it does.
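A standard worked example of the substitution method: guess $T(n) = O(n \lg n)$ for $T(n) = 2T(\lfloor n/2 \rfloor) + n$ and verify by induction.

$$
\begin{aligned}
T(n) &\le 2\left(c \lfloor n/2 \rfloor \lg \lfloor n/2 \rfloor\right) + n \\
     &\le c\, n \lg (n/2) + n \\
     &= c\, n \lg n - c\, n \lg 2 + n \\
     &= c\, n \lg n - c\, n + n \\
     &\le c\, n \lg n \quad \text{for any } c \ge 1.
\end{aligned}
$$

The inductive step holds, so $T(n) \le c\, n \lg n$ and therefore $T(n) = O(n \lg n)$.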