HW3. Binary search; Master theorem (due 24th of Sep, 23:59)
EX1. Implement the binary search algorithm and describe it briefly. Assess its speed by measuring how many searches you can perform in one minute for various array sizes. As always, describe the setup of the experiment, plot the results, and comment on them. (Don't forget that the array needs to be sorted, but since sorting is not part of the binary search algorithm, do not include it in the measurements.) For extra karma points, compare the recursive and iterative versions of binary search.
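For illustration, a minimal sketch of one possible setup (Python is an assumption, since the exercise does not fix a language; the function names, the key range, and the 60-second budget are our own choices, not requirements):

import random
import time

def binary_search_iterative(a, key):
    """Return an index of `key` in the sorted list `a`, or -1 if it is absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == key:
            return mid
        if a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def binary_search_recursive(a, key, lo=0, hi=None):
    """The same search, written recursively."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return -1
    mid = (lo + hi) // 2
    if a[mid] == key:
        return mid
    if a[mid] < key:
        return binary_search_recursive(a, key, mid + 1, hi)
    return binary_search_recursive(a, key, lo, mid - 1)

def searches_per_minute(n, seconds=60):
    """Count how many searches finish within `seconds` on a sorted array of size n.
    The array is generated and sorted outside the timed loop."""
    a = sorted(random.randrange(10 * n) for _ in range(n))
    deadline = time.perf_counter() + seconds
    count = 0
    while time.perf_counter() < deadline:
        binary_search_iterative(a, random.randrange(10 * n))
        count += 1
    return count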
EX2. Solve the recurrence T(n) = 7T(n/3) + n^2.
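For reference in EX2-EX4, the Master theorem in its standard CLRS-style form (the lecture slides' formulation may differ in minor details), for T(n) = a T(n/b) + f(n) with a >= 1 and b > 1:

Case 1: if f(n) = O(n^(log_b(a) - ε)) for some ε > 0, then T(n) = Θ(n^(log_b(a))).
Case 2: if f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) * log(n)).
Case 3: if f(n) = Ω(n^(log_b(a) + ε)) for some ε > 0, and a*f(n/b) <= c*f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).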
EX3. Find the complexity of binary search using the Master theorem. Determine the constants a and b and the function f(n) for binary search. Work out which case (1, 2, or 3) this leads to and what complexity the Master theorem assigns.
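As a worked illustration on a different recurrence (merge sort's, so that neither EX2 nor EX3 is given away): for T(n) = 2T(n/2) + n we have a = 2, b = 2, and n^(log_b(a)) = n^(log_2(2)) = n. Since f(n) = n = Θ(n^(log_b(a))), case 2 applies and T(n) = Θ(n log(n)).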
EX4. Write a simple simulator for plotting the complexity of a given recurrence T(n) = a T(n/b) + f(n). Plot the simulated growth functions for each of the three Master theorem cases, as identified on the lecture slides. Also print out the overall work "accounted for" at each depth of the recurrence tree. In addition, you can plot a recurrence for which the Master method does not hold. Hint: execute the recursion tree and simply calculate the values (assume that any operation on n <= 2 has cost 1).
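A minimal sketch of one way such a simulator could look (Python; it assumes an integer b >= 2, rounds subproblem sizes up, and charges cost 1 for n <= 2, per the hint; the parameter choices in the comments are illustrative, not prescribed by the exercise):

import math

def simulate(n, a, b, f):
    """Expand the recursion tree of T(n) = a*T(n/b) + f(n) level by level.
    Returns the total work and a list with the work accounted for at each depth."""
    per_level = []
    count, size = 1, n                 # `count` nodes, each of size `size`, at the current depth
    while size > 2:
        per_level.append(count * f(size))
        count *= a
        size = math.ceil(size / b)     # all children at a given depth share the same size
    per_level.append(count * 1)        # base case: each node with n <= 2 costs 1
    return sum(per_level), per_level

# Illustrative parameter choices for the three cases (a, b, f):
#   case 1: simulate(n, 8, 2, lambda m: m)       -> leaves dominate, grows like n^3
#   case 2: simulate(n, 2, 2, lambda m: m)       -> every level contributes equally, grows like n*log(n)
#   case 3: simulate(n, 2, 2, lambda m: m * m)   -> the root dominates, grows like n^2

Plotting simulate(n, ...)[0] against n for a range of n, and printing per_level, covers both parts of the exercise.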
EX5. Implement Dual-Pivot Quicksort (Quicksort with two pivots) and evaluate its performance (you may compare it to a regular Quicksort implemented by you or to the built-in sort). One of the important steps of the algorithm is to split the array into roughly equal parts, and this can be crucial for performance: if the splits are very uneven, overall performance suffers. Measure how often such uneven splits ("bad splits") occur with Dual-Pivot Quicksort and explain why they occur (or do not occur). Describe how you might try to avoid these "bad splits" to make your code more robust (ideally, demonstrate it by implementing the change and comparing it to your original implementation).
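A minimal sketch of the kind of instrumentation meant here (Python; this uses a simplified two-pivot three-way partition rather than the exact Yaroslavskiy scheme used in the JDK, and the "less than 10% of the elements" threshold for a bad split is an arbitrary choice of ours):

def dual_pivot_quicksort(a, lo=0, hi=None, bad_splits=None):
    """Sort a[lo..hi] in place with two pivots; optionally count 'bad splits'
    (partitions where the smallest of the three parts gets under 10% of the elements)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:                      # make a[lo] <= a[hi]; they become pivots p <= q
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]
    lt, i, gt = lo + 1, lo + 1, hi - 1
    while i <= gt:                         # three-way partition around p and q
        if a[i] < p:
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1                        # re-examine the element swapped into position i
        else:
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]            # move the pivots into their final positions
    a[hi], a[gt] = a[gt], a[hi]
    n = hi - lo + 1
    sizes = (lt - lo, gt - lt - 1, hi - gt)
    if bad_splits is not None and n >= 10 and min(sizes) < n // 10:
        bad_splits[0] += 1
    dual_pivot_quicksort(a, lo, lt - 1, bad_splits)
    dual_pivot_quicksort(a, lt + 1, gt - 1, bad_splits)
    dual_pivot_quicksort(a, gt + 1, hi, bad_splits)

Called as counter = [0]; dual_pivot_quicksort(data, bad_splits=counter), it leaves the number of bad splits in counter[0].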
EX6. Bonus (2p). Merge sort does not normally work 'in place' within the original space limits. The simple method uses double the memory by copying values to a helper array and back. There are some rather complicated constant-memory-overhead schemes that convert merge sort to work in place while still ensuring O(n log(n)) worst case (google them). Your task is to experiment in practice with some simple, possibly non-optimal solution. Modify merge sort to work in place, possibly doing more than O(n) work for one merge operation. Measure the actual time on randomly generated data and compare it to the more standard merge sort. Then try to break your own code by generating "worst possible" instances of data.
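One simple, deliberately non-optimal direction (a Python sketch under our own assumptions, using an insertion-style in-place merge that can cost O(n^2) per merge in the worst case):

def merge_in_place(a, lo, mid, hi):
    """Merge the sorted runs a[lo:mid] and a[mid:hi] in place by repeatedly
    inserting the head of the right run in front of the remaining left run."""
    i, j = lo, mid
    while i < j < hi:
        if a[i] <= a[j]:
            i += 1
        else:
            val = a[j]
            a[i + 1:j + 1] = a[i:j]    # shift the remaining left run right by one
            a[i] = val
            i += 1
            j += 1

def merge_sort_in_place(a, lo=0, hi=None):
    """Standard top-down merge sort, but using the in-place merge above."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort_in_place(a, lo, mid)
    merge_sort_in_place(a, mid, hi)
    merge_in_place(a, lo, mid, hi)

Note that the slice assignment in merge_in_place still allocates a temporary copy of the shifted run in CPython, so this is "in place" only in the sense of not keeping a second full-size helper array; an element-by-element shift loop avoids even that at the cost of more Python-level work.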