# Intro to heapsort

In sorting, data are rarely isolated. Each data element is called a record, which contains a *key*, the value to be sorted, and *satellite data*, the actual content of the record.

## Why sorting:

- Some problems are inherently sorting problems.
- Sorting is a key subroutine in many other algorithms.
- We can prove a non-trivial lower bound for sorting, and our best upper bounds match the lower bound asymptotically, so we know those sorting algorithms are asymptotically optimal. **We can also use the lower bound to prove lower bounds for other problems.**

## Comparison sorts

Sorts that determine the sorted order based only on comparisons between elements are called comparison sorts. Insertion sort, quicksort, merge sort, and heapsort are all comparison sorts.

But if we can gather information about the sorted order of the input by means *other than comparing elements*, we can beat comparison sorts. Bucket sort and radix sort are examples.

## Order statistics:

The ith order statistic of a set of n numbers is the ith smallest number in the set. With a randomized algorithm, we can find the ith order statistic in O(n) expected time.

## Overview of heap sort

Heapsort runs in O(nlgn) time and sorts in place (unlike merge sort). The term “heap” was coined in the context of heapsort, but the word is now also used for the memory region managed by garbage collection.

# Heaps

A (binary) heap is a data structure stored in an array that we can view as a nearly complete binary tree: every level is completely filled except possibly the lowest, which is filled from the left up to some point.

An array A that represents a heap has two attributes: A.length, the length of the array, and A.heapsize, the number of elements in the heap. Note these two are different, and only A[1..A.heapsize] are valid members of the heap.

The root of the tree is A[1], and given an index i, we can compute the indices of its parent, left child, and right child:

- parent = floor(i / 2)
- left_child = 2i
- right_child = 2i + 1

We can also use bit operations: parent = i >> 1, left_child = i << 1, right_child = (i << 1) + 1. (The parentheses matter: in most languages + binds tighter than <<.)
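As a sanity check, the index arithmetic can be written directly in Python for the 1-based layout described above (the function names here are illustrative, not from any library):

```python
# Index arithmetic for a 1-based binary heap stored in an array.

def parent(i):
    return i >> 1            # floor(i / 2)

def left_child(i):
    return i << 1            # 2i

def right_child(i):
    return (i << 1) + 1      # 2i + 1; parentheses needed, since + binds tighter than <<
```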

There are two kinds of heap: max-heap and min-heap. A max-heap maintains the max-heap property, which keeps every parent at least as large as its children: A[parent(i)] >= A[i]. The largest element is the root A[1]. A min-heap is the opposite: parents are smaller than or equal to their children.

A heap is a binary tree, and the height of the tree, the number of edges on the longest simple downward path from the root to a leaf, is Θ(lgn), since the tree is nearly complete.

# Maintain the heap

To do so, we use the max-heapify function, which takes an array A and an index i. It assumes the left and right subtrees of A[i] are max-heaps, but A[i] itself might violate the heap property (it may be smaller than its children). max-heapify **floats down** A[i] so that the subtree rooted at position i regains the max-heap property.

```
max_heapify(A, i)
  l = 2i
  r = 2i + 1
  max_pos = i
  if l <= A.heapsize and A[l] > A[i]
    max_pos = l
  if r <= A.heapsize and A[r] > A[max_pos]
    max_pos = r
  if max_pos != i
    swap A[i] with A[max_pos]
    max_heapify(A, max_pos)
```
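A runnable version of the same routine, sketched in Python with 0-based indexing (so the children of i sit at 2i + 1 and 2i + 2 instead of 2i and 2i + 1):

```python
def max_heapify(a, heap_size, i):
    """Float a[i] down until the subtree rooted at i is a max-heap.

    Assumes the subtrees rooted at i's children are already max-heaps.
    0-based indexing: children of i are at 2i + 1 and 2i + 2.
    """
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]   # swap the violator down one level
        max_heapify(a, heap_size, largest)    # and keep floating it down
```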

The fix-up among a node and its two children takes Θ(1) time.

A child’s subtree can have size at most 2n/3, which happens when the bottom level of the tree is exactly half full. The running time is therefore T(n) <= T(2n/3) + Θ(1).

By the master theorem (case 2), the running time is O(lgn). Alternatively, we can state the bound in terms of the height instead of n: the running time is O(h).

# Building a heap

We can use max-heapify in a bottom-up manner. The leaves are already trivial max-heaps, so we go through the remaining (internal) nodes, from the last one up to the root, running max-heapify on each.

```
build_max_heap(A)
  A.heapsize = A.length
  for i = floor(A.length / 2) downto 1
    max_heapify(A, i)
```
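The same bottom-up construction in Python (0-based, so the internal nodes are indices n // 2 − 1 down to 0; max_heapify is repeated so the sketch is self-contained):

```python
def max_heapify(a, heap_size, i):
    # Float a[i] down; assumes both subtrees of i are already max-heaps.
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, heap_size, largest)

def build_max_heap(a):
    # Leaves (indices n // 2 .. n - 1) are already trivial max-heaps;
    # heapify the internal nodes from the last one up to the root.
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, n, i)
```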

We use a loop invariant to show it maintains the heap property. The invariant is:

at the start of each iteration of the for loop, each node i + 1, i + 2, …, n is the root of a max-heap.

- Initialization: at the beginning, i = floor(n/2), and nodes floor(n/2) + 1, …, n are all leaves, so each is trivially the root of a max-heap.
- Maintenance: on the iteration for node i, both children of i have indices greater than i, so by the invariant they are roots of max-heaps. Therefore max_heapify(A, i) makes node i the root of a max-heap, and decrementing i restores the invariant.
- Termination: the loop terminates when i = 0. By the invariant, nodes 1 through n are all roots of max-heaps; in particular, node 1 is, so the whole array is a max-heap.

To find a tighter upper bound, we need the fact that an n-element heap has at most ceiling(n / 2^(h+1)) nodes at height h, for any h.

We know max-heapify run on a node of height h takes O(h), so the total cost of build_max_heap is

sum over h = 0 to floor(lgn) of ceiling(n / 2^(h+1)) · O(h) = O(n · sum over h >= 0 of h / 2^h) = O(n),

since the series sum of h / 2^h converges to a constant.

# Heap sort algorithm

We first run BUILD_MAX_HEAP, giving a heap of size n. Since it is a max-heap, the maximum element of the array is A[1]. We swap A[1] with A[n] and reduce the size of the heap to n − 1. Then we heapify the root. Now A[1] holds the second-largest element; swap it with A[n − 1] and repeat. We stop when the heap size reaches 1.

```
HEAPSORT(A)
  BUILD_MAX_HEAP(A)
  for i = A.length downto 2
    swap A[1] with A[i]
    A.heapsize -= 1
    MAX_HEAPIFY(A, 1)
```

BUILD_MAX_HEAP takes O(n) time, each MAX_HEAPIFY call takes O(lgn) time, and we call it n − 1 times. So the running time is O(nlgn).
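Putting the pieces together, heapsort can be sketched in Python (0-based; the helper is repeated so the block runs on its own):

```python
def max_heapify(a, heap_size, i):
    # Float a[i] down within a[0:heap_size].
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, heap_size, largest)

def heapsort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build_max_heap
        max_heapify(a, n, i)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]       # move the current max to its final slot
        max_heapify(a, end, 0)            # the heap now occupies a[0:end]
```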

# Priority Queue

It is the most popular application of the heap. A priority queue is a data structure for maintaining a set S of elements, each with an associated value called a key. A max-priority queue supports:

- Insert(S, x) inserts element x into S.
- Maximum(S) returns the element with the largest key.
- Extract_max(S) removes and returns the element with the largest key.
- Increase_key(S, x, k) increases element x’s key to k (k must be at least as large as x’s current key).

Max-priority queues are used, for example, as **job schedulers** on time-shared computers.

Alternatively, we have the min-priority queue. It is used in event-driven simulation, where an event’s occurrence time is the key: we always want to run the event closest in time next.

We will write some pseudo code for the functions

```
Extract_max(A)
  if A.heapsize < 1
    error "heap underflow"
  max = A[1]
  A[1] = A[A.heapsize] // we will reduce the heapsize by 1, so move the last element to position 1
  A.heapsize -= 1
  Max_heapify(A, 1)
  return max
```

The running time is O(lgn), since max_heapify takes O(lgn) time and everything else is constant.

```
Increase_key(A, i, key)
  // increase the priority of the element at A[i] to key
  if key < A[i]
    error "new key smaller than current key"
  A[i] = key
  while i > 1 and A[parent(i)] < A[i]
    swap A[i] with A[parent(i)]
    i = parent(i)
```

Since the new key can only be larger than the old key, the subtree rooted at i is still a max-heap. But the ancestors of i might no longer satisfy the heap property (if the new key is very large, it might belong at the root). So we need to traverse the key at position i up the tree until it is smaller than its parent or becomes the root.

Since the path up to the root has length O(lgn) (the height), the running time of Increase_key is O(lgn).

```
Insert(A, key)
  A.heapsize += 1
  A[A.heapsize] = -infty
  Increase_key(A, A.heapsize, key)
```

We first make one additional slot for the new key and assign that slot -infty, so that no matter what the key value is, increasing A[A.heapsize] to it is a valid key increase.
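The three operations can be sketched together in Python (0-based; the list’s length plays the role of A.heapsize, so insert appends a slot and extract pops one; the names are my own, not a library API):

```python
import math

def _parent(i):
    return (i - 1) // 2

def _max_heapify(a, i):
    # Float a[i] down (0-based children at 2i + 1 and 2i + 2).
    n = len(a)
    l, r = 2 * i + 1, 2 * i + 2
    largest = i
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        _max_heapify(a, largest)

def heap_extract_max(a):
    if not a:
        raise IndexError("heap underflow")
    top = a[0]
    a[0] = a[-1]          # move the last element into the root's slot...
    a.pop()               # ...then shrink the heap by one
    if a:
        _max_heapify(a, 0)
    return top

def heap_increase_key(a, i, key):
    if key < a[i]:
        raise ValueError("new key smaller than current key")
    a[i] = key
    while i > 0 and a[_parent(i)] < a[i]:   # float the enlarged key up
        a[i], a[_parent(i)] = a[_parent(i)], a[i]
        i = _parent(i)

def max_heap_insert(a, key):
    a.append(-math.inf)                     # a slot any real key can exceed
    heap_increase_key(a, len(a) - 1, key)
```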

# Exercise

## 6.1

### Min and max number of elements in a heap of height h

Since a heap is a nearly complete binary tree, the number of nodes from the root down to the second-to-last level is 2^h − 1. The least the last level can have is 1 node, so the minimum is 2^h − 1 + 1 = 2^h. The most the tree can have is a full binary tree with 2^(h+1) − 1 nodes. (Bit trick: 2^(h+1) − 1 is h + 1 ones in binary, e.g. h = 2 gives 111 = 7.)

### An n-element heap has height floor(lgn)

We can write n = 2^m + k, where m is as large as possible and 0 <= k < 2^m, i.e. m = floor(lgn). The heap is a full binary tree of 2^m − 1 nodes plus additional leaves. The full binary tree has height m − 1, and the remaining k + 1 >= 1 nodes on the last level add one more to the height. So h = m = floor(lgn).

### Show that, with the array representation for storing an n-elem heap, the leaves are the nodes indexed by floor(n/2) + 1, floor(n/2) + 2, …, n

We know the left-child formula is 2i, where i is the parent index. For node i to be a leaf, 2i must be an invalid index, i.e. 2i > n, which holds exactly when i >= floor(n/2) + 1. So floor(n/2) + 1, floor(n/2) + 2, …, n are the leaf indices.