HW3. Master theorem and Quicksort (20th of Sep, 23.59)
EX1. Master theorem
Let’s study the Master theorem. Give a new example of a function for each of the three cases of the theorem.
Solve the recurrences you gave using the Master theorem (write down a, b, log_b(a), f(n)) and show how you decided which case applies. Make sure to check the regularity condition if needed!
Plot your chosen functions. You can write a basic program as done previously, e.g. define a function to calculate T(n) = a T(n/b) + f(n). As in the previous week, you can assume that T(1) = 1.
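For example, a minimal starting sketch in Python could look like the following (matplotlib assumed for plotting; the choice a=2, b=2, f(n)=n is only an illustration, substitute your own examples):

    import math
    import matplotlib.pyplot as plt

    def T(n, a, b, f):
        # T(n) = a*T(n/b) + f(n), with T(1) = 1; integer division keeps n whole.
        if n <= 1:
            return 1
        return a * T(n // b, a, b, f) + f(n)

    # Illustration: a=2, b=2, f(n)=n, so log_b(a) = 1 and f(n) = Theta(n^1),
    # i.e. Case 2, giving T(n) = Theta(n log n).
    ns = range(1, 2001)
    plt.plot(ns, [T(n, 2, 2, lambda m: m) for n in ns], label="T(n) = 2T(n/2) + n")
    plt.plot(ns, [n * math.log2(n) if n > 1 else 1 for n in ns], label="n log2 n")
    plt.legend()
    plt.show()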
EX2. The Master theorem is not almighty
Sometimes the recurrence is not so simple. Consider the function T(n) = T(n/2) + n(sin(n - pi/2) + 2), where pi = 3.141592… and sin(x) takes radians as input.
Can the Master theorem be applied here? If not, then why?
Try to plot the function and provide some upper and lower bounds for its asymptotic complexity; feel free to use any method available to you (e.g. plotting, proofs, ...).
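For example, one could evaluate the recurrence numerically and plot it against candidate linear bounds; in the sketch below the constants 1 and 6 are only illustrative guesses to be checked, not a claim:

    import math
    import matplotlib.pyplot as plt

    def T(n):
        # T(n) = T(n/2) + n*(sin(n - pi/2) + 2); note the added term oscillates
        # between roughly n and 3n, since sin(...) stays within [-1, 1].
        if n <= 1:
            return 1
        return T(n // 2) + n * (math.sin(n - math.pi / 2) + 2)

    ns = range(1, 5001)
    plt.plot(ns, [T(n) for n in ns], label="T(n)")
    plt.plot(ns, [1 * n for n in ns], label="candidate lower bound 1*n")
    plt.plot(ns, [6 * n for n in ns], label="candidate upper bound 6*n")
    plt.legend()
    plt.show()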
EX3 Virtual simulation of a complex recurrence
The Master theorem can be applied to relatively simple, straightforward recurrences. When it is not directly applicable, one can still evaluate the recurrence numerically through a straightforward recursive implementation of the very recurrence.
Recurrence:
T(n) = n                                         if n < 15
T(n) = 2*L(n/3) + 3*R(n/4) + T((2/3)*n) + 18n    otherwise

L(n) = 5                                         if n < 10
L(n) = T(n/2) + 10n                              otherwise

R(n) = 20                                        if n < 20
R(n) = L((3/4)*n) + T(n/2) + 15                  otherwise
Hint: n has to be an integer at all times (use integer division).
a) Implement the above recurrence (a sketch is given after this list) and calculate the value of T(n) for n=100, n=100,000 and n=100,000,000.
b) Plot the growth of T(n) between n=100,000 and n=100,000,000.
c) Based on the plot and data, can you "guess" T(n) for n=1,000,000,000?
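A minimal implementation sketch in Python; the lru_cache memoisation is only a speed-up (T, L and R are pure functions of n, so caching does not change the computed values):

    import sys
    from functools import lru_cache

    sys.setrecursionlimit(100_000)   # the mutual recursion can get fairly deep

    @lru_cache(maxsize=None)
    def T(n):
        if n < 15:
            return n
        return 2 * L(n // 3) + 3 * R(n // 4) + T(2 * n // 3) + 18 * n

    @lru_cache(maxsize=None)
    def L(n):
        if n < 10:
            return 5
        return T(n // 2) + 10 * n

    @lru_cache(maxsize=None)
    def R(n):
        if n < 20:
            return 20
        return L(3 * n // 4) + T(n // 2) + 15

    for n in (100, 100_000, 100_000_000):
        print(n, T(n))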
EX4 How big an input (n) would lead to X (= T(n)) calculation steps?
The recurrence T(n) lets us count how many calculation steps (X) are expected for an input of size n. We want to reverse the question and ask how big an input we can solve in a given time (a given number of calculation steps X).
Let’s say we have a function T(n) that grows monotonically (or at least appears to). How can we search for the n for which T(n) is closest to X? In other words, find n for which T(n) <= X <= T(n+1). Report n, T(n), n+1 and T(n+1).
Use the function for T(n) and apply a binary search strategy to identify the values n and n+1 for which T(n) <= 100,000,000,000 <= T(n+1).
In order to use binary search, you probably need to set some lower and upper bounds. This is left up to you. One way is to increase n exponentially, e.g. by doubling it, as in the sketch below.
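A minimal sketch of that strategy, assuming the (monotone) T(n) from EX3; wrap T in a small counting function if you want to track how many times it gets called:

    def find_bracket(T, X):
        # Double hi until T(hi) >= X; afterwards T(lo) <= X <= T(hi).
        lo, hi = 1, 2
        while T(hi) < X:
            lo, hi = hi, hi * 2
        return lo, hi

    def binary_search(T, X, lo, hi):
        # Shrink the bracket until hi == lo + 1, keeping T(lo) <= X <= T(hi).
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if T(mid) <= X:
                lo = mid
            else:
                hi = mid
        return lo, hi

    X = 100_000_000_000
    n, n1 = binary_search(T, X, *find_bracket(T, X))
    print(n, T(n), n1, T(n1))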
How many times did you need to call T(n) in the whole process?
Why is it possible to be clever and use binary search to solve this task? What criteria must T() fulfil?
How would you solve this task if the target value (in this case 100,000,000,000) were not fixed but given as an arbitrary input each time?
EX5 Quicksort with 1,2,3 pivots
Generate random input data as an array of 10 million 40-bit unsigned integers (hint: use 64-bit integers but mask them to the least significant 40 bits).
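For example, using Python's standard random module (numpy would work just as well):

    import random

    MASK = (1 << 40) - 1                 # keep the least significant 40 bits
    data = [random.getrandbits(64) & MASK for _ in range(10_000_000)]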
Standard quicksort uses a single pivot to divide data into “smaller-or-equal” and “larger” groups. Your task is to implement quicksort with one, two and three pivots, i.e. come up with a selection of pivots and implement the split of the data into two, three or four ranges, respectively. Resolve regions of 10 or fewer values with a simpler sort, e.g. bubble or insertion sort.
In sorting algorithms, and in running code in general, what matters is how many operations are performed in total, as well as how fast the computer is, i.e. how many operations it performs per time unit. In this task, explicitly count the number of comparisons, the number of swaps, the number of recursive quicksort calls and the number of small-region sorting calls needed by the three different splitting approaches. For small-region sorting, count the exact number of times that ranges of size 2, 3, ..., 10 required further sorting.
The key to the task is to come up with the splitting functions and to count the actual number of operations.
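As a starting point, here is a sketch of the one-pivot variant instrumented with counters (the two- and three-pivot partitioners are left for you to design; counter names are illustrative):

    import random
    import sys
    from collections import Counter

    sys.setrecursionlimit(100_000)
    comparisons = swaps = qs_calls = 0
    small_sizes = Counter()              # how often each range size 2..10 occurred

    def insertion_sort(a, lo, hi):
        global comparisons, swaps
        small_sizes[hi - lo + 1] += 1
        for i in range(lo + 1, hi + 1):
            j = i
            while j > lo:
                comparisons += 1
                if a[j - 1] <= a[j]:
                    break
                a[j - 1], a[j] = a[j], a[j - 1]
                swaps += 1
                j -= 1

    def quicksort1(a, lo, hi):
        global comparisons, swaps, qs_calls
        qs_calls += 1
        if hi - lo + 1 <= 10:
            if lo < hi:
                insertion_sort(a, lo, hi)
            return
        p = random.randint(lo, hi)       # random pivot, moved to the end
        a[p], a[hi] = a[hi], a[p]
        swaps += 1
        pivot, i = a[hi], lo
        for j in range(lo, hi):          # Lomuto-style "<= pivot | > pivot" split
            comparisons += 1
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                swaps += 1
                i += 1
        a[i], a[hi] = a[hi], a[i]
        swaps += 1
        quicksort1(a, lo, i - 1)
        quicksort1(a, i + 1, hi)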
Hint: start working with much smaller input data (e.g. 100 or so) and only when your code works correctly, see if you can run it on the input of size 10 million (if not, then use some smaller size, like 1M, for example).
EX6 Continue with Quicksort task 5 (Bonus, 2 p)
Experiment further with task 5 and come up with strategies to minimise the number of comparisons, swaps and recursive function calls needed. Note that these operations may have different “costs” in actual computation. Propose some coefficients for those cost ratios and use them in your analysis.
Which strategy for the two-, three- or four-way split in quicksort seems best? What is the best cutoff point for falling back to a simpler method on small array ranges, and which simpler method exactly (e.g. bubble or insertion sort)?
Now make another version of the same code and remove the counting altogether. Measure the actual speed of the (hopefully) best methods. Do the “counting” and “time measurement” methods agree with each other? If not, can you explain the main source of the difference?
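For the time measurements, a simple pattern could be the following (assuming quicksort1 and friends from EX5 with the counters stripped out):

    import time

    def timed(sort_fn, data):
        a = data.copy()                  # fresh copy so every run sorts the same input
        t0 = time.perf_counter()
        sort_fn(a, 0, len(a) - 1)
        return time.perf_counter() - t0

    print("1-pivot:", timed(quicksort1, data))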
EX7 - Parallel execution (Extra bonus 1p)
Quicksort is nice, as it allows for a parallel implementation. The initially split ranges are fully independent and can be sorted further in parallel. Can you utilise parallelism (multi-processor and multi-core machines) to gain further execution speed over your EX6 measurements? How big is the speedup? Show the timings/plots.
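One possible skeleton using process-based parallelism; here the built-in sorted() only stands in for your own quicksort, and note that inter-process data transfer can eat into the speedup, so measure carefully:

    import random
    from bisect import bisect_left
    from concurrent.futures import ProcessPoolExecutor

    def parallel_sort(data, workers=4):
        # Split the data once into `workers` independent ranges around sampled
        # pivots, sort each range in its own process, then concatenate.
        pivots = sorted(random.sample(data, workers - 1))
        buckets = [[] for _ in range(workers)]
        for x in data:
            buckets[bisect_left(pivots, x)].append(x)
        with ProcessPoolExecutor(max_workers=workers) as ex:
            parts = ex.map(sorted, buckets)   # replace sorted with your quicksort
        return [x for part in parts for x in part]

    if __name__ == "__main__":               # required for process pools on some OSes
        print(parallel_sort([random.getrandbits(40) for _ in range(10_000)])[:5])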
Exercises 6 and 7 are bonus tasks. In order to get full points for a bonus task, we expect a more thorough analysis: write up how you plan to solve the task, create nice plots and write an interpretation of the results.