Quiz 3 Study Guide

Amortized big-O analysis

  1. What is amortized complexity?

    Amortized complexity is the average cost per operation over a sequence of operations. For instance, the amortized worst-case complexity of ArrayList's insertLast is \(O(1)\) even though its worst-case complexity is \(O(n)\): any single call to insertLast may take \(O(n)\) time (when the array must grow). However, a sequence of \(n\) calls to insertLast on the same list has total time complexity \(O(n)\), so on average each call in the sequence costs \(O(1)\).
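    The ArrayList claim can be made concrete by counting how many element copies a grow-by-doubling array performs over \(n\) appends; a minimal sketch (assuming the usual capacity-doubling strategy):

    ```cpp
    #include <cassert>
    #include <cstddef>

    int main() {
        std::size_t capacity = 1;
        std::size_t size = 0;
        long long totalCopies = 0;  // work beyond the O(1) write itself

        const std::size_t n = 1000;
        for (std::size_t i = 0; i < n; ++i) {
            if (size == capacity) {  // grow: copy every existing element over
                totalCopies += size;
                capacity *= 2;
            }
            ++size;                  // the O(1) append itself
        }
        // Copies happen at sizes 1, 2, 4, ..., 512, totalling 1023 < 2n,
        // so the average (amortized) cost per append is O(1).
        assert(totalCopies < 2 * static_cast<long long>(n));
        return 0;
    }
    ```

    Doubling the capacity is what makes the copies a geometric series; growing by a fixed amount instead would make the total \(O(n^2)\).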

  2. A magical dragon loves collecting things for their horde. Every so often, they will return to their lair and turn every collected item into gold. We might simulate the dragon using the C++ code below.

    class MagicDragon {
    private:
        LinkedQueue<string> myStuff;
        int maxHorde;
    public:
        MagicDragon();
        void collect(string item);
    };

    MagicDragon::MagicDragon() {
        this->maxHorde = 1;
    }

    void MagicDragon::collect(string item) {
        this->myStuff.enqueue(item);
        cout << "Collecting " << item << "." << endl;
        if (this->myStuff.size() > this->maxHorde) {
            this->maxHorde = this->myStuff.size();
            // turn it to gold!
            for (int i = 0; i < this->maxHorde; i++) {
                string current = this->myStuff.dequeue();
                cout << "Turning " << current << " into gold!" << endl;
            }
        }
    }

    Suppose we create a MagicDragon and call collect("pebble") six times in a row. Show what each of these six calls to collect would print, and the contents of this->myStuff after each call.

    After the first call, this->myStuff contains ["pebble"].

    printed:

    "Collecting pebble."

    After the second call, this->myStuff contains nothing.

    printed:

    Collecting pebble.

    Turning pebble into gold!

    Turning pebble into gold!

    After the third call, this->myStuff contains ["pebble"].

    printed:

    Collecting pebble.

    After the fourth call, this->myStuff contains ["pebble", "pebble"].

    printed:

    Collecting pebble.

    After the fifth call, this->myStuff contains nothing.

    printed:

    Collecting pebble.

    Turning pebble into gold!

    Turning pebble into gold!

    Turning pebble into gold!

    After the sixth call, this->myStuff contains ["pebble"].

    printed:

    Collecting pebble.

    Suppose that n is equal to maxHorde of the dragon. What is the worst-case big-O runtime of a call to collect in terms of n?

    The worst-case runtime is \(O(n)\): in the worst case, a call to collect must dequeue everything in myStuff, which contains \(n+1\) items at that point.

    Suppose that n is equal to maxHorde of the dragon. What is the amortized big-O runtime of a call to collect in terms of n?

    The amortized runtime is \(O(1)\): in a sequence of \(n\) consecutive calls to collect, every call takes constant time except the last, which takes linear time. The whole sequence therefore takes \(O(n)\) time, which amortizes to \(O(1)\) per call.
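    The trace above can be checked by running the dragon. A sketch using std::queue as a stand-in for the course's LinkedQueue (collect is modified to return how many items were turned to gold, so the bursts are easy to assert on):

    ```cpp
    #include <cassert>
    #include <queue>
    #include <string>

    // Same logic as MagicDragon::collect above, with std::queue in
    // place of LinkedQueue.
    class MagicDragon {
        std::queue<std::string> myStuff;
        int maxHorde = 1;
    public:
        int collect(const std::string& item) {  // returns # items turned to gold
            myStuff.push(item);
            int gold = 0;
            if (static_cast<int>(myStuff.size()) > maxHorde) {
                maxHorde = static_cast<int>(myStuff.size());
                while (!myStuff.empty()) {  // dequeues exactly maxHorde items
                    myStuff.pop();
                    ++gold;
                }
            }
            return gold;
        }
        std::size_t hordeSize() const { return myStuff.size(); }
    };

    int main() {
        MagicDragon d;
        int gold[6];
        for (int i = 0; i < 6; ++i) gold[i] = d.collect("pebble");
        // Matches the trace: gold bursts on calls 2 (two items) and 5 (three).
        assert(gold[0] == 0 && gold[1] == 2 && gold[2] == 0);
        assert(gold[3] == 0 && gold[4] == 3 && gold[5] == 0);
        assert(d.hordeSize() == 1);  // one pebble left after the sixth call
        return 0;
    }
    ```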

Lists, stacks, and queues

  1. What is the difference between a List and a Queue? When might we use a Queue instead of a List?

    List and Queue are two different ADTs. The List ADT supports a variety of operations (such as get by index) that a Queue does not. A Queue may be implemented using a List but is not a kind of List.

    We might use a Queue instead of a List if the operations provided by Queue are enough for our algorithm. By using a "weaker" ADT, we make the algorithm easier to verify as correct, since there are fewer ways the data structure can be used and its behavior is easier to reason about.
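    For example, a function that takes a queue rather than a list both documents and enforces first-in, first-out processing; a hypothetical print-queue sketch:

    ```cpp
    #include <iostream>
    #include <queue>
    #include <string>

    // Accepting a std::queue guarantees jobs are handled strictly in
    // arrival order; callers cannot index into or reorder pending work,
    // which they could with a list.
    void processJobs(std::queue<std::string>& jobs) {
        while (!jobs.empty()) {
            std::cout << "Processing " << jobs.front() << std::endl;
            jobs.pop();
        }
    }

    int main() {
        std::queue<std::string> jobs;
        jobs.push("report.pdf");
        jobs.push("photo.png");
        processJobs(jobs);  // prints report.pdf first, then photo.png
        return 0;
    }
    ```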

  2. What are the big-O complexities of the following operations? Answer for each of (a) LinkedList with a tail pointer, (b) LinkedList without a tail pointer, and (c) ArrayList. Briefly explain each answer.

    1. Insert to the front of the list.

    2. Insert to the back of the list.

    3. Retrieve the first element of the list.

    4. Retrieve the middle element of the list.

    5. Remove an element from the back of the list.

    | Operation | LinkedList with tail | LinkedList without tail | ArrayList |
    |---|---|---|---|
    | insertFirst | \(O(1)\). A new head is created and linked to the old head. | \(O(1)\). Identical to LinkedList with tail. | \(O(n)\). Every element must shift down one space. |
    | insertLast | \(O(1)\). A new tail is created and linked to the old tail. | \(O(n)\). Without a tail pointer, we must walk to the end of the list to find the tail. | Amortized \(O(1)\). We add to the end of the array; this may require us to grow the array. |
    | getFirst | \(O(1)\). The first node of the list is immediately available. | \(O(1)\). Identical to LinkedList with tail. | \(O(1)\). Accessing an array is constant time. |
    | get | \(O(n)\). We have to walk the list to find the appropriate node. | \(O(n)\). Identical to LinkedList with tail. | \(O(1)\). Accessing an array is constant time. |
    | removeLast | \(O(n)\). The tail pointer lets us find the last element, but we need to find the penultimate (second-to-last) element so we can make it the new last element. This requires us to walk the list. | \(O(n)\). We must walk to the end of the list to find the last two nodes. | \(O(1)\). We can simply decrease the size of the ArrayList by one; the array doesn't need to change. |
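    The \(O(n)\) removeLast entries come from having to find the node *before* the last one; a minimal singly-linked sketch (the Node type and names are illustrative, not course code):

    ```cpp
    #include <cassert>

    struct Node {
        int data;
        Node* next;
    };

    // Remove the last node of a non-empty singly linked list.  Even with
    // a tail pointer we would still need this walk to locate the new
    // tail, so removeLast is O(n) for both linked-list variants.
    int removeLast(Node*& head) {
        if (head->next == nullptr) {  // single node: list becomes empty
            int value = head->data;
            delete head;
            head = nullptr;
            return value;
        }
        Node* walk = head;
        while (walk->next->next != nullptr)  // stop at the penultimate node
            walk = walk->next;
        int value = walk->next->data;
        delete walk->next;
        walk->next = nullptr;  // the penultimate node is the new tail
        return value;
    }

    int main() {
        Node* head = new Node{1, new Node{2, new Node{3, nullptr}}};
        assert(removeLast(head) == 3);
        assert(removeLast(head) == 2);
        assert(removeLast(head) == 1);
        assert(head == nullptr);
        return 0;
    }
    ```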

    Consider the following pseudocode function, which is passed a queue and removes the "middle" element from the queue. For example, if the queue contains three items [A, B, C], then the middle item is "B", and after calling this function the queue should contain [A, C]. If the queue contains [D, E, F, G], then the middle item is "E", and the modified queue should be [D, F, G].

    Function removeMiddle(Q) {
        mid = ceil(Q.size / 2)
        end = Q.size
        for i = 1 ... mid-1 {
            Q.enqueue(Q.dequeue)
        }
        x = Q.dequeue
        for i = mid+1 ... end {
            Q.enqueue(Q.dequeue)
        }
        return x
    }

    What is the worst-case runtime complexity of removeMiddle in terms of the size \(n\) of the input queue? Briefly explain.

    removeMiddle is \(O(n)\) since it removes (and re-inserts) \(n/2\) elements from Q, removes the middle element, and removes (and re-inserts) \(n/2\) more elements from Q. If each remove/insert takes constant time (for example, using a LinkedQueue), there will be \(O(n)\) operations total.
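    A runnable version of the pseudocode, using std::queue and mid = ceil(size/2) so that it removes "B" from [A, B, C] and "E" from [D, E, F, G], matching the worked examples:

    ```cpp
    #include <cassert>
    #include <initializer_list>
    #include <queue>
    #include <string>

    // Rotate the elements before the middle to the back, pop the middle,
    // then rotate the rest so the original order is restored.  Each of
    // the n-1 surviving elements is dequeued and re-enqueued exactly
    // once, so the whole function is O(n).
    std::string removeMiddle(std::queue<std::string>& Q) {
        int end = static_cast<int>(Q.size());
        int mid = (end + 1) / 2;  // ceil(size / 2)
        for (int i = 1; i <= mid - 1; ++i) {  // rotate the front past the middle
            Q.push(Q.front());
            Q.pop();
        }
        std::string x = Q.front();  // the middle element
        Q.pop();
        for (int i = mid + 1; i <= end; ++i) {  // rotate the rest back into place
            Q.push(Q.front());
            Q.pop();
        }
        return x;
    }

    int main() {
        std::queue<std::string> q;
        for (std::string s : {"A", "B", "C"}) q.push(s);
        assert(removeMiddle(q) == "B");
        assert(q.front() == "A"); q.pop();
        assert(q.front() == "C"); q.pop();
        assert(q.empty());

        for (std::string s : {"D", "E", "F", "G"}) q.push(s);
        assert(removeMiddle(q) == "E");
        for (std::string expect : {"D", "F", "G"}) {  // order is preserved
            assert(q.front() == expect);
            q.pop();
        }
        return 0;
    }
    ```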

Pseudocode analysis

  1. The ninjas would like to keep track of which students are waiting to ask questions, so that they can be answered in order. Consider the following pseudocode.

    Function ninjaSession(n) {
        LinkedList waitingStudents;
        // add the waiting students
        for i = 1 ... n {
            if (student i has a question) {
                waitingStudents.insertFirst(student i)
            }
        }
        // answer the questions in order
        while (waitingStudents is not empty) {
            currentStudent = waitingStudents.removeLast()
            help(currentStudent)
        }
    }

    What is the worst-case big-O runtime of ninjaSession in terms of the number \(n\) of students?

    In the worst case, every single student has a question, so the function does \(n\) insertFirst operations (which are each constant time for a LinkedList) and then \(n\) removeLast operations (which each take linear time for a LinkedList). This gives a total runtime of \(O(n^2)\).

    What changes could be made to the code so that the behavior of the ninjaSession function is the same, but the worst-case runtime is improved?

    By changing insertFirst to insertLast and also changing removeLast to removeFirst, we can improve the runtime while keeping the order of student questions the same. Both insertLast and removeFirst operations are constant time for LinkedList objects, so this change means that the total runtime for ninjaSession will be improved to \(O(n)\).
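    The improved version is exactly a FIFO pattern, so a queue captures it directly; a sketch (hasQuestion and the printed message stand in for "student i has a question" and help):

    ```cpp
    #include <iostream>
    #include <queue>
    #include <vector>

    // Enqueue at the back (insertLast) and dequeue at the front
    // (removeFirst).  Both operations are O(1), so a session with n
    // students is O(n) overall.
    void ninjaSession(const std::vector<bool>& hasQuestion) {
        std::queue<int> waitingStudents;
        for (int i = 0; i < static_cast<int>(hasQuestion.size()); ++i) {
            if (hasQuestion[i])
                waitingStudents.push(i);  // insertLast: O(1)
        }
        while (!waitingStudents.empty()) {
            int current = waitingStudents.front();  // removeFirst: O(1)
            waitingStudents.pop();
            std::cout << "Helping student " << current << std::endl;
        }
    }

    int main() {
        // Students 0, 2, and 3 have questions and are helped in order.
        ninjaSession({true, false, true, true});
        return 0;
    }
    ```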