Problem 1

For each n, I exhibit a sequence of Fibonacci heap operations on n items that produces a heap-ordered tree of depth $\geq n - 3 = \Omega(n)$.

For $n \leq 3$ the claim is trivial, since the required lower bound on the depth is $\leq 0$.

For $n \geq 4$, the following sequence of operations on n items with keys 1 to n (each item is denoted by its key) produces a heap-ordered tree of depth n-3, namely the tree 3 -> 4 if n = 4 and, if n > 4, the chain 2 -> 3 -> ... -> (n-3) -> (n-1) -> n, i.e. the tree rooted at item 2 in which every larger item except item n-2 appears as the only child of the next smaller item present.

    insert(n), insert(n-1), insert(n-2), delete-min
    k = 4
    while n > k:
        insert(n-2), insert(n-k+1), insert(n-k)
        delete-min
        delete(n-2)
        k = k + 1

Each iteration's delete-min removes item n-k and consolidates the three remaining roots into a single tree whose root is the newly inserted item n-k+1, with the auxiliary item n-2 and the previous chain as its children; the subsequent delete(n-2) then leaves a chain that is one item longer than before.

Problem 2

I show that modifying Fibonacci heaps so that a node is cut only after losing k children improves the amortized cost of decrease-key (to a smaller constant) at the price of a worse bound for delete-min (by a constant factor).

decrease-key

I analyze the amortized cost of decrease-key when a node is cut only after losing k children (i.e. a node is marked on each of its first k-1 child losses and cut on the k-th). I choose the potential function

    \Phi = (2/(k-1)) * (number of marks) + (number of roots)

so that the amortized cost of a cascading cut is 0:

      1                  (for cutting the node)
    + 1                  (for adding a root)
    - (k-1) * 2/(k-1)    (for removing the k-1 marks on the cut node)
    --------------------
      0

Hence the amortized cost of decrease-key is 2 + 2/(k-1):

      1           (for cutting the node)
    + 1           (for adding a root)
    + 2/(k-1)     (for marking the parent)
    + 0           (for the cascading cuts)
    --------------------
      2 + 2/(k-1)

Since 2 + 2/(k-1) < 4 for k > 2, the constant in the amortized cost of decrease-key decreases compared with the standard k = 2 analysis.

delete-min

For delete-min, I show that every tree still has size exponential in the degree of its root, so that adding the children of the minimum as new roots results in only O(log n) of them.

Consider any node x and its current children in the order in which they were linked to x by the consolidation procedure. When the i-th child was linked, x already had at least i-1 children, and consolidation links only roots of equal degree, so the i-th child had degree at least i-1 at that moment. Since a node now loses at most k-1 children before being cut, the i-th child still has degree at least i-k.

Now let S_d be the minimum number of nodes in a subtree whose root has degree d. Counting the root, its first child, and the subtrees of the children linked after the k-th one gives

    S_0 = 1,   S_1 = 2,   S_d \geq \sum_{i=0}^{d-k} S_i   for d \geq k.

The corresponding equality recurrence T_d = \sum_{i=0}^{d-k} T_i satisfies T_d = T_{d-1} + T_{d-k}, so S_d \geq T_d = \Omega(C^d), where C > 1 is the positive root of x^k = x^{k-1} + 1.

Hence a tree on at most n nodes has root degree O(log_C n), and delete-min adds only O(log_C n) new roots. When k = 2, C = \Phi (the golden ratio). For k > 2 we have C < \Phi (the root of x^k = x^{k-1} + 1 decreases toward 1 as k grows), so log_C n > log_\Phi n. Therefore, for k > 2, the bound for delete-min is worse by a constant factor, precisely log_C \Phi, since log_C n = log_C \Phi \cdot log_\Phi n.
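As a numerical check of this conclusion, here is a short Python sketch (not part of the solution; the helper name growth_constant, the bisection search, and the range of k are my own choices). It computes C, the positive root of x^k = x^{k-1} + 1, for several k, together with the slowdown factor log_C \Phi of delete-min relative to the standard k = 2 case.

    from math import log, sqrt

    def growth_constant(k, tol=1e-12):
        # Positive root of x^k = x^{k-1} + 1, found by bisection on [1, 2],
        # where x^k - x^(k-1) - 1 is increasing and changes sign.
        lo, hi = 1.0, 2.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mid ** k - mid ** (k - 1) - 1 > 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    phi = (1 + sqrt(5)) / 2
    for k in range(2, 7):
        C = growth_constant(k)
        # log_C(phi): by how much the O(log n) bound for delete-min degrades
        print(k, round(C, 4), round(log(phi) / log(C), 4))

For k = 2 this recovers C = \Phi and a factor of 1; for larger k the factor grows, but it stays a constant independent of n, matching the trade-off claimed above.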
Problem 3

a) I show how to use a priority queue P that performs insert, delete-min, and merge in O(log n) time and make-heap in O(n) time to construct a priority queue Q that performs insert in O(1) amortized time while still performing delete-min and merge in O(log n) amortized time.

For the new priority queue Q, I maintain two structures: a list to-merge and an instance p of the priority queue P. On insert, I simply append the new element to to-merge. On delete-min and merge, I first build a priority queue out of to-merge (and, for merge, out of the other queue's to-merge list as well) and merge it into p, before calling the delete-min or merge procedure of P.

    define Q:make-heap(lst):
        to-merge = {}
        p = P:make-heap(lst)

    define Q:insert(el):
        to-merge.add(el)

    define Q:delete-min():
        tmp-p = P:make-heap(to-merge)
        p = p.merge(tmp-p)
        to-merge = {}
        return p.delete-min()

    define Q:merge(q2):
        tmp-p = P:make-heap(to-merge + q2.to-merge)
        p = p.merge(tmp-p)
        to-merge = {}
        p = p.merge(q2.p)

amortized analysis

Let \Phi(Q) = number of elements in to-merge. Then the amortized cost of insert is O(1):

      real cost:        1
      \Delta\Phi:       +1
      --------------------
      amortized cost:   2

The amortized cost of delete-min is O(log n): building tmp-p costs O(\Phi), the merge and the delete-min of P each cost O(log n), and the potential drops back to 0:

      real cost:        \Phi + O(log n)
      \Delta\Phi:       -\Phi
      ----------------------------------
      amortized cost:   O(log n)

Similarly, the amortized cost of merge is O(log n) (here \Phi counts the elements in both to-merge lists):

      real cost:        \Phi + O(log n)
      \Delta\Phi:       -\Phi
      ----------------------------------
      amortized cost:   O(log n)

b) I show how even binary heaps can be modified to support insert in O(1) amortized time while maintaining an O(log n) time bound for delete-min.

As in part (a), on insert I append the new element to a list to-merge, which is turned into a binary heap during delete-min. However, since binary heaps do not support fast merge, instead of merging I add the new heap to a heap of heaps, a "super-heap". The new priority queue Q therefore maintains two structures: a list to-merge and a binary heap of binary heaps super-heap, keyed by the smallest element of each inner heap.

On delete-min, I build a new inner heap out of to-merge and insert it into the super-heap. The minimum inner heap sits at the root of the super-heap; I perform delete-min on it and then sift it down (min-heapify) in the super-heap, since its key may have increased (removing it if it became empty). A runnable sketch of this construction follows the analysis below.

    define Q:make-heap(lst):
        to-merge = {}
        super-heap = make-heap({ make-heap(lst) })

    define Q:insert(el):
        to-merge.add(el)

    define Q:delete-min():
        if to-merge is not empty:
            super-heap.insert(make-heap(to-merge))
            to-merge = {}
        min = super-heap.min.delete-min()
        super-heap.min-heapify()    (sift the root down; drop the inner heap if it is now empty)
        return min

amortized analysis

The amortized analysis is similar to part (a). In delete-min, building the new inner heap costs O(\Phi); deleting the minimum from the minimum inner heap and restoring the heap order of the super-heap each take O(log n), since an inner heap has at most n elements and the super-heap contains at most n inner heaps (in the worst case, one element each). Therefore, as in part (a), the amortized cost of delete-min is O(log n):

      real cost:        \Phi + O(log n)
      \Delta\Phi:       -\Phi
      ----------------------------------
      amortized cost:   O(log n)
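To make part (b) concrete, here is a minimal runnable sketch in Python. It is an illustration under assumptions of my own, not part of the solution above: the standard heapq module plays the role of the binary heap (heapify is the linear-time make-heap, heappush and heappop cost O(log n)), a running counter serves as a tie-breaker so that inner heaps are never compared directly, and instead of sifting the minimum inner heap down in place, the sketch pops it from the super-heap and reinserts it if it is still non-empty, which also costs O(log n). The class name LazyBinaryHeap is hypothetical.

    import heapq
    from itertools import count

    class LazyBinaryHeap:
        # Binary heap with O(1) amortized insert: new elements go into the lazy
        # buffer to_merge; delete-min flushes the buffer into a fresh inner heap
        # and works on a heap of heaps (super_heap) keyed by each inner minimum.

        def __init__(self, items=()):
            self._tie = count()          # tie-breaker so tuples never compare the heaps themselves
            self.super_heap = []
            self.to_merge = []
            inner = list(items)
            if inner:
                heapq.heapify(inner)     # O(n) make-heap
                heapq.heappush(self.super_heap, (inner[0], next(self._tie), inner))

        def insert(self, el):
            self.to_merge.append(el)     # O(1) real cost; pays +1 into the potential

        def delete_min(self):
            if self.to_merge:            # flush the buffer into a new inner heap
                inner = self.to_merge
                heapq.heapify(inner)     # O(|to_merge|), paid for by the potential drop
                heapq.heappush(self.super_heap, (inner[0], next(self._tie), inner))
                self.to_merge = []
            _, _, inner = heapq.heappop(self.super_heap)   # minimum inner heap
            m = heapq.heappop(inner)                       # delete-min on it
            if inner:                                      # reinsert under its new minimum
                heapq.heappush(self.super_heap, (inner[0], next(self._tie), inner))
            return m

    q = LazyBinaryHeap([5, 3, 8])
    q.insert(1)
    q.insert(7)
    print(q.delete_min(), q.delete_min())   # prints: 1 3

Popping and reinserting the minimum inner heap is simply the heapq-friendly substitute for the in-place min-heapify step in the pseudocode above; both touch O(log n) entries of the super-heap.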