
Recitation 08: Sorting

Selection Sort

The main idea behind selection sort is to repeatedly scan the unsorted portion of the array for its smallest element and use it to build up a sorted prefix. So, on the first iteration, we scan all $n$ elements to find the smallest one. Then, we swap this element with the one at index 0. We now know that the subarray consisting of just the first element is sorted. On the second iteration, we look at the remaining $n - 1$ elements to find the minimum and swap it into index 1. Now we have a sorted subarray of size 2. This continues $n$ times, until our sorted "subarray" is simply the entire array. As an independent exercise, you should be able to prove this using the contracts.
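
To make this concrete, here is a rough sketch of what the routine might look like in C0. The helper names (swap, min_index) follow the description above; the full, contract-checked versions live in the linked implementation below, and their exact signatures may differ.

    // Rough sketch only -- see selectsort.c0 / sortutil.c0 for the real code.
    void swap(int[] A, int i, int j)
    //@requires 0 <= i && i < \length(A) && 0 <= j && j < \length(A);
    {
      int tmp = A[i];
      A[i] = A[j];
      A[j] = tmp;
    }

    int min_index(int[] A, int lo, int hi)
    //@requires 0 <= lo && lo < hi && hi <= \length(A);
    //@ensures lo <= \result && \result < hi;
    {
      int m = lo;
      for (int i = lo + 1; i < hi; i++) {
        if (A[i] < A[m]) { m = i; }
      }
      return m;
    }

    void sort(int[] A, int n)
    //@requires 0 <= n && n <= \length(A);
    {
      for (int i = 0; i < n; i++) {
        // A[0, i) is sorted and no larger than anything in A[i, n)
        int m = min_index(A, i, n);   // smallest remaining element
        swap(A, i, m);                // extend the sorted prefix by one
      }
    }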

Implementation: [selectsort.c0] [sortutil.c0]

After what we covered last time, a logical question to ask is what our runtime is. However, the answer to this question may not be immediately obvious. We have a for loop over $[0, n)$ that calls the linear function min_index. However, the number of indices looked at by min_index changes each time! The first time, there are $n$ comparisons made, $n - 1$ the second time, down to just 1 the final time. So, how do we make any sense of this? Trying to add the total number of indices examined, we get a sum with a well-known closed form: $$1 + 2 + \cdots + (n - 1) + n = \sum_{i = 1}^n i = \frac{n(n+1)}{2} = \frac{n^2 + n}{2}$$ Formal definition aside, we can now clearly see that this function has a quadratic runtime. As an exercise for yourself, show this with a formal proof.
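
If you want a hint for that proof: for all $n \ge 1$, $$\frac{n^2 + n}{2} \le \frac{n^2 + n^2}{2} = n^2,$$ so the constants $c = 1$ and $n_0 = 1$ witness that the number of indices examined is in $O(n^2)$.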

Quicksort

So, the obvious question at this point is: can we do better? The answer: of course. The quicksort algorithm has three major steps:

  1. Choose an arbitrary (random) element of the array. We will refer to this as the "pivot."
  2. Go through the remaining elements and sort them into two categories: those that are smaller than the pivot and those that are greater. This is called the partitioning phase.
  3. Recursively sort these two subarrays (see the sketch after this list).
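
Putting those three steps together, the top level of the algorithm might look roughly like this in C0. The partition function is the one described in detail below; the exact signatures in qsort-fp.c0 may differ.

    // Declared here, sketched below: partitions A[lower, upper) and
    // returns the final index of the pivot.
    int partition(int[] A, int lower, int upper);

    // Sketch of the recursion over the range [lower, upper).
    void sort(int[] A, int lower, int upper)
    //@requires 0 <= lower && lower <= upper && upper <= \length(A);
    {
      if (upper - lower <= 1) return;        // 0 or 1 elements: already sorted
      int mid = partition(A, lower, upper);  // steps 1 and 2: pick a pivot, partition
      sort(A, lower, mid);                   // step 3: sort the "<= pivot" part
      sort(A, mid + 1, upper);               //         sort the "> pivot" part
    }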

I haven't yet mentioned what the actual runtime of this algorithm is. For this, we introduce the idea of an average runtime. So far, when we've discussed big-O, we've been concerned with the worst case runtime. We will not rigorously prove this, but I claim that quicksort has an average runtime of $O(n\log n)$ and a worst case runtime of $O(n^2)$. A rough justification relies on how the sizes of the two partitions compare. In the average case, we expect the two partitions to be roughly equal in size. The partition phase is linear, and halving the problem each time gives a logarithmic number of levels of recursion, hence $O(n \log n)$ overall. On the other hand, in the worst case, one partition will contain all of the remaining elements and the other none; then we make a linear number of recursive calls, each with its own linear partition phase, which is quadratic overall. The reason that we choose a pivot randomly is that, for any systematic method of choosing pivots, I could devise a particular list that exhibits this worst case behavior. For example, suppose I always choose the first element as the pivot. In this case, a list that is already sorted triggers the worst case. It turns out that randomly choosing pivots gives an $O(n\log n)$ runtime on average. An alternative method is to choose three random elements and take the median of the three as the pivot.
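
As an illustration of that last idea, a median-of-three helper might look like the following. This is a hypothetical helper written for this note, not something taken from qsort-fp.c0.

    // Hypothetical helper: given three candidate indices, return the one whose
    // value is the median of the three. Quicksort would then use that element
    // as the pivot.
    int median_of_three(int[] A, int i, int j, int k)
    //@requires 0 <= i && i < \length(A);
    //@requires 0 <= j && j < \length(A);
    //@requires 0 <= k && k < \length(A);
    {
      int a = A[i]; int b = A[j]; int c = A[k];
      if ((a <= b && b <= c) || (c <= b && b <= a)) { return j; }
      if ((b <= a && a <= c) || (c <= a && a <= b)) { return i; }
      return k;
    }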

Implementation: [qsort-fp.c0]

As we can see in the implementation, the bulk of the work of quicksort is actually done in the partition function. We also use a few tricks to write a good implementation. Initially, the pivot is just some element somewhere in the middle of the array. As is, there isn't really an easy way to partition the remaining elements around it, since we don't know how big each partition will be, and we don't want to have to allocate any extra memory to do this job. The trick here is that we swap the pivot with the last element in the array. This means that all of the elements we still need to partition are together in one contiguous block.

But how do we actually create the partitions? We know that, for each of the remaining elements, it is either less than or equal to the pivot, or greater than it. This may seem trivial, but it's actually really important. Suppose that the pivot has index $i$ in the final, sorted list. Therefore, we know that every element greater than the pivot must be at an index greater than $i$. The reverse argument is true for the elements less than or equal to the pivot.

So, we begin by letting left = lower and right = upper - 2 (the -2 is because A[upper - 1] is holding the pivot). Essentially, starting at left = lower, we inspect each element in turn. If A[left] <= pivot, then it belongs in the left-hand partition, so we leave it there and move on to the next element. On the other hand, if A[left] > pivot, then it belongs in the right-hand partition. We deal with this by swapping it as far to the right-hand side of the array as possible, just to the left of the last element we've placed in the right-hand partition. The element we swap with is the one at index right. Initially, this is just the last element to be partitioned. Afterwards, it is the highest index at which we have not yet explicitly placed an element. In other words, we build the 'less than or equal to' partition up from the smallest index and the 'greater than' partition down from the largest index.

You may wonder why our loop condition is left <= right and not just left < right. If you carefully consider the boundary case, quitting when they are equal would mean that one element has not been examined yet; we still need to decide which partition that last element belongs in. A subtle but important point.

After we've finished partitioning, we do one last swap between the pivot and A[left]. At this point, left actually points to the first element of the right-hand partition. So, what does this accomplish? It puts the pivot right in between the two partitions. More importantly, since everything to its left is less than or equal to the pivot and everything to its right is greater, we can say that the pivot's current index is its index in the final sorted array! We then return this index so we know where to divide the array for the recursive calls. We have two recursive calls: one on the resulting left-hand partition and the other on the right-hand partition. Eventually, we hit the base case: a single-element or empty array is already sorted.
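
Putting the whole partition phase together, a sketch in the spirit of the above might look like this. Again, the real qsort-fp.c0 may choose the pivot differently and will carry full contracts and loop invariants.

    // Partition A[lower, upper): choose a pivot, stash it at A[upper - 1],
    // partition the rest, and return the pivot's final (sorted) index.
    // This sketch just grabs the middle element; the real code picks randomly.
    int partition(int[] A, int lower, int upper)
    //@requires 0 <= lower && lower < upper && upper <= \length(A);
    //@ensures lower <= \result && \result < upper;
    {
      int pi = lower + (upper - lower) / 2;   // placeholder pivot choice
      swap(A, pi, upper - 1);                 // move the pivot to the end
      int pivot = A[upper - 1];

      int left = lower;
      int right = upper - 2;                  // A[upper - 1] holds the pivot
      while (left <= right) {
        if (A[left] <= pivot) {
          left++;                             // stays in the left-hand partition
        } else {
          swap(A, left, right);               // push it into the right-hand partition
          right--;
        }
      }

      swap(A, left, upper - 1);               // drop the pivot between the partitions
      return left;                            // its index in the final sorted array
    }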

For example (29 is the randomly chosen pivot):

96 25 18 31 29 30 17 12 38 60   initial array; the pivot is 29
96 25 18 31 60 30 17 12 38 29   swap pivot (29) with 60; initially left=0, right=8
38 25 18 31 60 30 17 12 96 29   Swap 96 and 38; left=0, right=7
12 25 18 31 60 30 17 38 96 29   Swap 12 and 38; left=0, right=6
12 25 18 31 60 30 17 38 96 29   Keep 12; left=1, right=6
12 25 18 31 60 30 17 38 96 29   Keep 25; left=2, right=6
12 25 18 31 60 30 17 38 96 29   Keep 18; left=3, right=6
12 25 18 17 60 30 31 38 96 29   Swap 31 and 17; left=3, right=5
12 25 18 17 60 30 31 38 96 29   Keep 17; left=4, right=5
12 25 18 17 30 60 31 38 96 29   Swap 60 and 30; left=4, right=4
12 25 18 17 30 60 31 38 96 29   Swap 30 with itself; left=4, right=3
12 25 18 17 29 60 31 38 96 30   Replace pivot: swap 30 and 29

After many recursive calls, you will have the sorted array.

Aside: Mergesort

You may have heard of another sorting algorithm before called mergesort (if you haven't, or need a refresher: wikipedia). Mergesort actually has an average and worst case runtime of $O(n \log n)$. However, in practice, mergesort generally runs slower than quicksort. This seems a little disappointing, since big-O would tell us that mergesort is probably the better choice. In addition to time complexity, there's also something called space complexity: how much extra memory a program uses. Mergesort works by copying the elements into new memory at each step. As we noted earlier, we implement quicksort cleverly so that we don't need to allocate any extra memory; this is called an "in place" algorithm. It turns out that copying data in memory is relatively expensive, so the in-place algorithm is faster for this reason.
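
To see where the extra memory comes from, here is a sketch of mergesort in the same style. This is illustrative only, not code from the course handouts.

    // Merge the sorted ranges A[lower, mid) and A[mid, upper). The temporary
    // array is the "extra memory": every element is copied out and back in.
    void merge(int[] A, int lower, int mid, int upper)
    //@requires 0 <= lower && lower <= mid && mid <= upper && upper <= \length(A);
    {
      int[] tmp = alloc_array(int, upper - lower);
      int i = lower; int j = mid; int k = 0;
      while (i < mid && j < upper) {
        if (A[i] <= A[j]) { tmp[k] = A[i]; i++; }
        else              { tmp[k] = A[j]; j++; }
        k++;
      }
      while (i < mid)   { tmp[k] = A[i]; i++; k++; }
      while (j < upper) { tmp[k] = A[j]; j++; k++; }
      for (int m = 0; m < upper - lower; m++) {
        A[lower + m] = tmp[m];
      }
    }

    void mergesort(int[] A, int lower, int upper)
    //@requires 0 <= lower && lower <= upper && upper <= \length(A);
    {
      if (upper - lower <= 1) return;       // base case: already sorted
      int mid = lower + (upper - lower) / 2;
      mergesort(A, lower, mid);             // sort the two halves...
      mergesort(A, mid, upper);
      merge(A, lower, mid, upper);          // ...then merge them
    }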

If you have any comments or see any errors, please let me know.