You should make sure to review the sorting algorithms we covered in class and be ready to discuss them.
What is the worst-case time complexity of Selection Sort? QuickSort? MergeSort?
Selection Sort and QuickSort both have worst-case complexity of \(O(n^2)\). MergeSort has a worst-case complexity of \(O(n \log n)\).
Although QuickSort’s worst-case time complexity is worse than MergeSort’s, we use QuickSort as a default in practice. Why?
Randomized QuickSort’s expected running time is \(O(n \log n)\) on every input; the \(O(n^2)\) worst case requires consistently bad pivot choices, which randomization makes extremely unlikely. Also, QuickSort is an in-place sort, meaning that it performs less copying than MergeSort.
What is an in-place sorting algorithm?
An in-place sorting algorithm rearranges the elements within the original array itself, using at most a constant amount of extra memory, rather than copying them into an auxiliary array.
What is wrong with the following QuickSort pseudocode? What would happen if we used this pseudocode to sort an array? How could we fix it?
Function brokenQuickSort(array, left, right)
    pivot <- partition(array, left, right)
    brokenQuickSort(array, left, pivot-1)
    brokenQuickSort(array, pivot+1, right)
EndFunction
There is no base case in the provided pseudocode. If this code were executed, it would break the array down into partitions of size 0 and then keep recursing on those empty partitions indefinitely. We need to add a condition that skips sorting when the range contains fewer than two elements.
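A sketch of the repaired algorithm in Python, pairing the guarded recursion with the Lomuto partition given below (the function names are illustrative):

```python
def partition(array, start, end):
    """Lomuto partition: pivot is array[end]; returns the pivot's final index."""
    pivot = array[end]
    fence = start  # boundary of the "low section"
    for frontier in range(start, end):
        if array[frontier] < pivot:
            array[frontier], array[fence] = array[fence], array[frontier]
            fence += 1
    array[end], array[fence] = array[fence], array[end]
    return fence

def quick_sort(array, left, right):
    # Base case: ranges of size 0 or 1 are already sorted.
    if left >= right:
        return
    pivot = partition(array, left, right)
    quick_sort(array, left, pivot - 1)
    quick_sort(array, pivot + 1, right)
```

Calling `quick_sort(data, 0, len(data) - 1)` sorts `data` in place; the `left >= right` guard is exactly the missing base case.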
Write pseudocode for a partition function which can be used in the QuickSort algorithm.
Any partition function which correctly organizes the array into a "low section" and a "high section", returning the index of the pivot between them, is acceptable. Here’s the Lomuto algorithm:
Function partition(array, start, end)
    pivot <- array[end]
    frontier <- start
    fence <- start
    While frontier < end
        If array[frontier] < pivot
            swap array[frontier] with array[fence]
            fence <- fence + 1
        EndIf
        frontier <- frontier + 1
    EndWhile
    swap array[end] with array[fence]
    Return fence
EndFunction
Here’s the algorithm we covered in class:
Function partition(array, left, right)
    pivot <- right
    right <- right - 1
    While left < right
        If array[left] <= array[pivot]
            left <- left + 1
        ElseIf array[right] >= array[pivot]
            right <- right - 1
        Else
            swap array[left] with array[right]
        EndIf
    EndWhile
    If array[left] < array[pivot]
        left <- left + 1
    EndIf
    swap array[left] with array[pivot]
    Return left
EndFunction
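A Python sketch of this two-pointer scheme, useful for testing it on concrete arrays. Note the adjustment after the scan: the element where the two pointers meet has not been classified yet, and if it is smaller than the pivot, the pivot must land one position to its right.

```python
def partition(array, left, right):
    """Two-pointer partition: pivot is array[right]; returns its final index."""
    pivot = right
    right -= 1
    while left < right:
        if array[left] <= array[pivot]:
            left += 1
        elif array[right] >= array[pivot]:
            right -= 1
        else:
            array[left], array[right] = array[right], array[left]
    # The meeting element is unclassified: if it is smaller than the
    # pivot, the pivot belongs just after it.
    if array[left] < array[pivot]:
        left += 1
    array[left], array[pivot] = array[pivot], array[left]
    return left
```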
Write pseudocode for the merge function of the MergeSort algorithm.
Function merge(a1, a2, am)
    i <- 0
    j <- 0
    k <- 0
    While i < length(a1) And j < length(a2)
        If a1[i] < a2[j]
            am[k] <- a1[i]
            i <- i + 1
            k <- k + 1
        Else
            am[k] <- a2[j]
            j <- j + 1
            k <- k + 1
        EndIf
    EndWhile
    While i < length(a1)
        am[k] <- a1[i]
        i <- i + 1
        k <- k + 1
    EndWhile
    While j < length(a2)
        am[k] <- a2[j]
        j <- j + 1
        k <- k + 1
    EndWhile
EndFunction
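The same merge, sketched in Python with lists standing in for the arrays; `am` is assumed to already have a slot for every element of `a1` and `a2`:

```python
def merge(a1, a2, am):
    """Merge sorted lists a1 and a2 into am, which has len(a1)+len(a2) slots."""
    i = j = k = 0
    # Take the smaller front element of the two lists until one is exhausted.
    while i < len(a1) and j < len(a2):
        if a1[i] < a2[j]:
            am[k] = a1[i]
            i += 1
        else:
            am[k] = a2[j]
            j += 1
        k += 1
    # Copy whatever remains of the list that was not exhausted.
    while i < len(a1):
        am[k] = a1[i]
        i += 1
        k += 1
    while j < len(a2):
        am[k] = a2[j]
        j += 1
        k += 1
```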
Recall the definition of big-O notation we used in class: \(f(n)\) is \(O(g(n))\) if there are some positive \(c\) and \(n_0\) such that, for every \(n\geq n_0\), \(f(n) \leq c \cdot g(n)\). You should be prepared, given this definition, to prove complexity bounds on functions.
Prove that \(f(n) = \frac{n^2+1}{n}\) is \(O(n)\).
We want to show that \(\frac{n^2+1}{n}\) is \(O(n)\). Let \(n_0=1\) and let \(c=2\). For all \(n\geq n_0\), we have
\(\frac{n^2+1}{n} = n + \frac{1}{n}\) (because \(n \neq 0\))
\(\leq n + n\) (because \(\frac{1}{n} \leq n\) when \(n \geq 1\))
\(= 2n\)
\(= c \cdot n\)
This shows that, with \(c=2\) and \(n_0=1\), we have \(f(n) \leq c \cdot n\) for every \(n \geq n_0\), so \(f(n)\) is \(O(n)\).
Prove that \(f(n) = 3n^3 + 5n^2 + 8\) is \(O(n^3)\).
We want to show that \(3n^3 + 5n^2 + 8\) is \(O(n^3)\). Let \(n_0=1\) and let \(c=16\). For all \(n\geq n_0\), we have
\(3n^3 + 5n^2 + 8 \leq 3n^3 + 5n^3 + 8\) (because \(5n^2 \leq 5n^3\) when \(n\geq1\))
\(\leq 3n^3 + 5n^3 + 8n^3\) (because \(8 \leq 8n^3\) when \(n\geq1\))
\(= 16n^3\)
\(= c \cdot n^3\)
This shows that, with \(c=16\) and \(n_0=1\), we have \(f(n) \leq c \cdot n^3\) for every \(n \geq n_0\), so \(f(n)\) is \(O(n^3)\).
Prove that \(f(n) = (2n+4)(n-1)\) is \(O(n^2)\).
We want to show that \((2n+4)(n-1)\) is \(O(n^2)\). Let \(n_0=2\) and let \(c=3\). For all \(n\geq n_0\), we have
\((2n+4)(n-1) = 2n^2+2n-4\) (by distribution)
\(< 2n^2+2n\) (because \(-4 < 0\))
\(\leq 3n^2\) (because \(2n \leq n^2\) when \(n\geq2\))
\(= c \cdot n^2\)
This shows that, with \(c=3\) and \(n_0=2\), we have \(f(n) \leq c \cdot n^2\) for every \(n \geq n_0\), so \(f(n)\) is \(O(n^2)\).
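These bounds are easy to sanity-check numerically. A quick Python spot check of all three proofs (not a proof itself, just a confidence check on the chosen \(c\) and \(n_0\)):

```python
# Spot-check each proved inequality f(n) <= c * g(n) for n >= n_0.
# Tuples are (f, g, c, n_0), matching the three proofs above.
checks = [
    (lambda n: (n**2 + 1) / n,      lambda n: n,     2,  1),
    (lambda n: 3*n**3 + 5*n**2 + 8, lambda n: n**3, 16,  1),
    (lambda n: (2*n + 4) * (n - 1), lambda n: n**2,  3,  2),
]
for f, g, c, n0 in checks:
    assert all(f(n) <= c * g(n) for n in range(n0, 1000))
```

A check like this cannot replace the algebra, but a failing assertion would immediately reveal a bad choice of constants.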
What is the worst-case big-O runtime of the following pseudocode
function secretFunction?
Give a brief explanation.
function secretFunction(n) {
    for i = 1 ... n {
        for j = 1 ... n/2 {
            for k = 1 ... n {
                for m = 1 ... n {
                    print "hello!"
                }
            }
        }
    }
}
This function is \(O(n^4)\): it has 4 nested loops, each doing linearly many iterations. Even though the j loop only does \(n/2\) iterations, that count still grows linearly as \(n\) grows. If we counted up the number of times "hello!" is printed, it would be \(n \cdot (n/2) \cdot n \cdot n = n^4/2\), which is upper-bounded by a constant times \(n^4\).
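To see the count concretely, here is a short Python version that tallies the iterations instead of printing (assuming the loop bounds are inclusive, as in the pseudocode, and that `n/2` rounds down):

```python
def secret_function_count(n):
    """Count how many times the innermost body of secretFunction runs."""
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n // 2 + 1):
            for k in range(1, n + 1):
                for m in range(1, n + 1):
                    count += 1
    return count
```

For \(n = 10\) this returns \(10 \cdot 5 \cdot 10 \cdot 10 = 5000\), matching the \(n \cdot (n/2) \cdot n \cdot n\) count.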
What is the worst-case big-O runtime of the following pseudocode
function obscureFunction?
Give a brief explanation.
function helper(n) {
    x = 0
    for i = 1 ... n {
        for j = 1 ... n {
            x = x + i*j
        }
    }
    return x
}
function obscureFunction(n) {
    for i = 1 ... 10 {
        print helper(n)
    }
}
The function helper is \(O(n^2)\) because it has two nested loops, each doing linearly many iterations. The function obscureFunction calls helper 10 times; since 10 is a constant factor, the total work is \(10 \cdot O(n^2)\), which is still \(O(n^2)\).
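A quick way to convince yourself is to count helper's operations directly: helper(n) performs exactly \(n^2\) additions, one per \((i, j)\) pair, so ten calls cost \(10n^2\) operations. A Python sketch of that count (the counter is illustrative, not part of the original code):

```python
def helper_ops(n):
    """Count the additions helper(n) performs: one per (i, j) pair."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            ops += 1  # stands in for x = x + i*j
    return ops
```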
What is the worst-case big-O runtime of the following pseudocode
function arcaneFunction(n)?
Give a brief explanation.
function arcaneFunction(n) {
    if (n = 1) {
        print 1
    }
    else {
        print n
        arcaneFunction(n/2)
    }
}
If we trace through this recursive function, we can see that it prints the sequence \(n, n/2, n/4, \ldots, 1\), halving the argument at each step. About \(\log_2(n)\) numbers get printed, so if "print" takes 1 unit of work, the total work will be \(O(\log n)\).
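Counting the prints confirms the bound. A Python sketch, with integer division standing in for the pseudocode's \(n/2\):

```python
def arcane_prints(n):
    """Return how many numbers arcaneFunction(n) prints."""
    if n == 1:
        return 1          # base case prints once
    return 1 + arcane_prints(n // 2)  # one print, then recurse on n/2
```

For \(n = 16\) this returns 5 (the prints are 16, 8, 4, 2, 1), which is \(\log_2(16) + 1\), so the count grows logarithmically.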