CS35: Test 2 study guide

The tests in this course may contain any material covered in lecture up until this point, although much stronger emphasis will be placed on material for which you have received graded feedback. This page contains a collection of exercises that you may wish to complete to prepare yourself. We also encourage you to review the lecture notes from topics presented since the previous test.

Sorting

Make sure to review the sorting algorithms we covered in class and be ready to discuss them.

  1. What is the worst-case time complexity of SelectionSort? QuickSort? MergeSort?

Solution:

   SelectionSort and QuickSort both have worst-case complexity of O(n^2).
   MergeSort has a worst-case complexity of O(n log n).

  2. Although QuickSort’s worst-case time complexity is worse than MergeSort’s, we use QuickSort as a default in practice. Why?

Solution:

  Randomized QuickSort's *expected* running time is O(n log n), even on
  worst-case inputs.  Also, QuickSort is an *in-place* sort, so it avoids
  allocating the auxiliary array (and performing the copying into it) that
  MergeSort requires.

  3. What is an in-place sorting algorithm?

Solution:

  An *in-place* sorting algorithm is one that rearranges the elements within
  the original array itself, using only a constant amount of extra memory,
  rather than copying them into a separate data structure.

  4. What is wrong with the following QuickSort pseudocode? What would happen if we used this pseudocode to sort an array? How could we fix it?

    Function brokenQuickSort(array, left, right)
        pivot <- partition(array, left, right)
        brokenQuickSort(array, left, pivot-1)
        brokenQuickSort(array, pivot+1, right)
    EndFunction

Solution:

   There is no recursive base case in the provided pseudocode.  If this code
   were executed, it would break the array down into partitions of size 0 and
   keep trying to split those size 0 partitions indefinitely, eventually
   overflowing the call stack.  We need to add a condition that does not try
   to sort if the array has fewer than two elements; a repaired version
   appears in the C++ sketch after the partition exercise below.

  5. Write pseudocode for a partition function which can be used in the QuickSort algorithm.

Solution:

  Any partition function which correctly organizes the array into a "low
  section" and a "high section", returning the index of the pivot between them,
  is acceptable.  Here's the algorithm which was discussed in class:

  Function partition(A, start, end)
    pivot <- A[end]
    left <- start
    right <- end-1
    While left <= right
      If A[left] <= pivot
        left <- left + 1
      ElseIf A[right] > pivot
        right <- right - 1
      Else
        swap A[left] and A[right]
      EndIf
    EndWhile
    swap A[left] and A[end]
    Return left
  EndFunction
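
  For concreteness, here is a minimal runnable C++ sketch combining this
  partition routine with the repaired QuickSort from the previous exercise.
  The use of std::vector and the function names are illustrative choices,
  not the exact code from class.

    #include <iostream>
    #include <utility>   // std::swap
    #include <vector>

    // Partition A[start..end] around the pivot A[end]; returns the
    // pivot's final index.  Mirrors the pseudocode above.
    int partition(std::vector<int>& A, int start, int end) {
        int pivot = A[end];
        int left = start;
        int right = end - 1;
        while (left <= right) {
            if (A[left] <= pivot) {
                left++;                       // A[left] belongs in the low section
            } else if (A[right] > pivot) {
                right--;                      // A[right] belongs in the high section
            } else {
                std::swap(A[left], A[right]); // both out of place: swap them
            }
        }
        std::swap(A[left], A[end]);           // put the pivot between the sections
        return left;
    }

    // The repaired QuickSort: the base case stops the recursion once a
    // partition has fewer than two elements.
    void quickSort(std::vector<int>& A, int left, int right) {
        if (left >= right) return;            // base case: size 0 or 1
        int pivot = partition(A, left, right);
        quickSort(A, left, pivot - 1);
        quickSort(A, pivot + 1, right);
    }

    int main() {
        std::vector<int> data = {5, 2, 8, 1, 9, 3};
        quickSort(data, 0, static_cast<int>(data.size()) - 1);
        for (int x : data) std::cout << x << ' ';   // prints: 1 2 3 5 8 9
        std::cout << '\n';
    }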

  6. Write pseudocode for the merge function of the MergeSort algorithm.

Solution:

  Function merge(a1, s1, a2, s2, result)
    Input: sorted arrays a1 of size s1, a2 of size s2
           array result
    Output: none, but sorted combination stored in result

    i <- 0
    j <- 0
    k <- 0
    While i < s1 And j < s2
      If a1[i] <= a2[j]
        result[k] <- a1[i]
        i <- i + 1
        k <- k + 1
      Else
        result[k] <- a2[j]
        j <- j + 1
        k <- k + 1
      EndIf
    EndWhile
    While i < s1
      result[k] <- a1[i]
      i <- i + 1
      k <- k + 1
    EndWhile
    While j < s2
      result[k] <- a2[j]
      j <- j + 1
      k <- k + 1
    EndWhile
  EndFunction
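
  The same idea in runnable form: a hedged C++ translation of the merge
  pseudocode.  Since std::vector knows its own size, the s1 and s2 parameters
  disappear; the vector type is an illustrative choice, not the exact code
  from class.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Merge the sorted vectors a1 and a2 into result, as in the pseudocode
    // above.  result must already have size a1.size() + a2.size().
    void merge(const std::vector<int>& a1, const std::vector<int>& a2,
               std::vector<int>& result) {
        std::size_t i = 0, j = 0, k = 0;
        // Take the smaller front element until one input is exhausted.
        while (i < a1.size() && j < a2.size()) {
            if (a1[i] <= a2[j]) {
                result[k++] = a1[i++];
            } else {
                result[k++] = a2[j++];
            }
        }
        while (i < a1.size()) result[k++] = a1[i++];  // drain what's left of a1
        while (j < a2.size()) result[k++] = a2[j++];  // drain what's left of a2
    }

    int main() {
        std::vector<int> a1 = {1, 4, 7};
        std::vector<int> a2 = {2, 3, 9};
        std::vector<int> result(a1.size() + a2.size());
        merge(a1, a2, result);
        for (int x : result) std::cout << x << ' ';   // prints: 1 2 3 4 7 9
        std::cout << '\n';
    }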

Big-O Proofs

Recall the definition of big-\(O\) notation we used in class: \(f(n)\) is \(O(g(n))\) if there exist \(c>0\) and \(n_0\geq 1\) such that for all \(n \geq n_0\), we have \(f(n) \leq c\cdot g(n)\). You should be prepared, given this definition, to prove complexity bounds on functions.

  1. Prove that \(f(n) = 3n^3 + 5n^2 + 8\) is \(O(n^3)\).

Solution:

Note that for all integers \(n\geq 1\), it holds that \(8\leq 8n^3\) since we multiply 8 by a number that is at least 1. Similarly when we multiply \(5n^2\) by \(n \geq 1\), we get \(5n^2 \leq 5n^3\). So, set \(n_0 := 1\) and \(c := 16\). Then, we have \begin{align*} f(n) &= 3n^3 + 5n^2 + 8 \\ &\leq 3n^3 + 5n^3 + 8n^3 \\ &= 16n^3 \\ &= c\cdot n^3\ . \end{align*} Hence, \(f(n) = O(n^3)\).

  2. Prove that \(f(n) = n^2 - 2n +2\) is \(O(n^2)\).

Solution:

Set \(c := 3\) and \(n_0 := 1\). Note that for all \(n\geq n_0\), \(n\) is positive, hence \(-2n < 0\). Also note that for all \(n \geq n_0\), we have \(2 \leq 2n\) since we're multiplying 2 by a number that is at least 1, thus making the number bigger. Therefore, we have \begin{align*} f(n) &= n^2 -2n + 2 \\ &\leq n^2 + 2 \\ &\leq n^2 + 2n^2 \\ &= 3n^2 \\ &= c\cdot n^2\ . \end{align*} Hence, \(f(n) = O(n^2)\).

  3. Prove that \(f(n) = \frac{n^2+1}{n}\) is \(O(n)\).

Solution:

  It's helpful to first simplify \(f(n)\).  We have
  \begin{align*}
    f(n) &= \frac{n^2+1}{n} = \frac{n^2}{n} + \frac{1}{n} = n+\frac{1}{n}\ .
  \end{align*}

  Note that for any \(n\geq 1\), it holds that \(\frac{1}{n} \leq 1 \leq n\).  Choose \(c := 2\) and \(n_0 := 1\).  Then for all \(n \geq n_0\), it holds that
  \begin{align*}
    f(n) &= \frac{n^2+1}{n} \\
    &= n + \frac{1}{n} \\
    &\leq n+1\\
    &\leq n + n \\
    &=2 \cdot n \\
    &= c \cdot n\ .
  \end{align*}

  Hence, \(f(n) = O(n)\).

Data Structures and Invariants

You should be prepared to discuss the data structures we have created in class, the fields in their implementations, and the invariants we rely upon in order to write algorithms for those data structures.

  1. What is an ADT? What is the difference between an ADT and a data structure? Give two examples of each.

Solution:

An *ADT* is a description of how data can be accessed and manipulated.  In
other words, an ADT is an interface allowing the caller to store and retrieve
data.  A *data structure* is a concrete realization of an ADT; data structures
give specific algorithms to implement the interface that the ADT describes.

Two examples of ADTs are Lists and Stacks.  Lists and Stacks are both defined
by a collection of methods that allow a user to store and manipulate data
(e.g. `insertFirst`, `getLast`, etc. for List; `pop`, `push`, etc. for Stack).
Two examples of data structures are linked lists and array lists.  Both are
concrete implementations of the List ADT: each supplies its own algorithms
for the interface that the List ADT describes.
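
One way to see the distinction in code: a hedged C++ sketch in which the ADT
is an abstract interface and the data structure is a concrete class that
implements it.  The class and method names below are illustrative, not the
exact declarations used in this course.

    #include <iostream>

    // The ADT: an interface describing *what* a stack can do.
    template <typename T>
    class Stack {
    public:
        virtual ~Stack() {}
        virtual void push(const T& value) = 0;
        virtual T pop() = 0;
        virtual bool isEmpty() const = 0;
    };

    // A data structure: one concrete realization of the Stack ADT,
    // here a singly linked list of nodes.
    template <typename T>
    class LinkedStack : public Stack<T> {
        struct Node {
            T value;
            Node* next;
        };
        Node* head = nullptr;
    public:
        ~LinkedStack() {
            while (!isEmpty()) pop();
        }
        void push(const T& value) override {
            head = new Node{value, head};   // O(1): new node becomes the head
        }
        T pop() override {                  // precondition: stack is not empty
            Node* old = head;
            T value = old->value;
            head = old->next;
            delete old;
            return value;
        }
        bool isEmpty() const override { return head == nullptr; }
    };

    int main() {
        Stack<int>* stack = new LinkedStack<int>();  // code against the ADT
        stack->push(4);
        stack->push(5);
        // pops in LIFO order: 5, then 4
        while (!stack->isEmpty()) std::cout << stack->pop() << ' ';
        std::cout << '\n';
        delete stack;
    }

Note that main is written entirely against the `Stack<T>` interface (the same
shape as the code in the final exercise of this guide), so it would work with
any data structure implementing that ADT.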

  2. What is the difference between a List and a Stack? When might we use a Stack instead of a List?

Solution:

`List` and `Stack` are two different ADTs.  The `List` ADT supports a variety
of operations (such as `get` by index) that a `Stack` does not.  A `Stack` may
be implemented using a `List` but is not a kind of `List`.

We might use a `Stack` instead of a `List` when the operations provided by
`Stack` are enough for our algorithm.  By choosing the "weaker" ADT, we make
the algorithm that uses it easier to verify as correct, since there are fewer
ways for the data structure to behave.
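
As a concrete illustration of choosing the weaker ADT, the balanced-brackets
check below needs only push, pop, top, and empty, so a Stack suffices; a full
List would expose more operations than the algorithm needs.  This sketch uses
std::stack from the C++ standard library rather than the course's LinkedStack.

    #include <iostream>
    #include <stack>
    #include <string>

    // Returns true if every bracket in s is properly matched and nested.
    bool balanced(const std::string& s) {
        std::stack<char> opens;   // invariant: holds the currently unmatched openers
        for (char c : s) {
            if (c == '(' || c == '[') {
                opens.push(c);
            } else if (c == ')' || c == ']') {
                if (opens.empty()) return false;              // closer with no opener
                char open = opens.top();
                opens.pop();
                if ((c == ')' && open != '(') ||
                    (c == ']' && open != '[')) return false;  // mismatched pair
            }
        }
        return opens.empty();   // every opener must have been closed
    }

    int main() {
        std::cout << std::boolalpha
                  << balanced("(a[b]c)") << '\n'   // true
                  << balanced("([)]") << '\n';     // false
    }

Because the algorithm only ever touches the top of the stack, its invariant
(the stack holds exactly the currently unmatched openers) is easy to state
and check.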

  3. What are the big-\(O\) complexities of the following operations? Answer for each of (a) LinkedList, and (b) ArrayList. Briefly explain each answer.

    1. Insert to the front of the list.

    2. Insert to the back of the list.

    3. Retrieve the first element of the list.

    4. Retrieve the middle element of the list.

    5. Remove an element from the back of the list.

Solution:

| Operation   | ArrayList      | LinkedList |
|-------------|----------------|------------|
| insertFront | O(n)           | O(1)       |
| insertLast  | O(1) amortized | O(1)       |
| getFirst    | O(1)           | O(1)       |
| get(i)      | O(1)           | O(n)       |
| removeLast  | O(1)           | O(n)       |

Briefly: inserting at the front of an ArrayList is O(n) because every element
must shift one slot to the right, while a LinkedList only updates its head
pointer.  Inserting at the back of an ArrayList is O(1) except when
expandCapacity must copy the array, making the cost O(1) amortized; a
LinkedList updates its tail pointer in O(1).  Both structures reach the first
element directly.  get(i) is O(1) for an ArrayList because it indexes
directly into the array, but O(n) for a LinkedList, which must traverse from
the head.  removeLast is O(1) for an ArrayList (decrement the size), but O(n)
for a singly linked LinkedList, which must walk the list to find the node
before the last one.

  4. Draw a stack diagram of a stack frame in which the following code has been run. Assume the LinkedStack was implemented using a LinkedList.

    Stack<int>* stack = new LinkedStack<int>();
    stack->push(4);
    stack->push(5);

Solution:

See the board outside SCI 260 for a solution.
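
If you can't get to the board, the picture should look roughly like the
sketch below.  This assumes the LinkedStack pushes each new value at the head
of its underlying linked list (the usual O(1) implementation), so the most
recently pushed value, 5, is the first node:

    call stack                        heap
    +------------+     +--------------+     +------------+     +-------------+
    | stack : *--+---->| LinkedStack  |     | value: 5   |     | value: 4    |
    +------------+     |  head : *----+---->| next : *---+---->| next : null |
                       +--------------+     +------------+     +-------------+

The local variable `stack` lives in the current stack frame and holds a
pointer to a LinkedStack object on the heap; that object's head pointer leads
to the node containing 5 (the top of the stack), which links to the node
containing 4.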