HW7. Hashing, Bloom Filter (due 23rd of Oct, 23:59)
EX1. (1p) Use the integers provided here. Experiment with the universal hashing scheme as presented in the lecture. Assess the goodness of the universal scheme: hash the given integers into hash tables of 1000, 10,000, 100,000, and 1,000,000 slots. For each table size, experiment with p, a, and b selected randomly. Report (ideally visualize) the total number of collisions for the different table sizes, over many trials with random p, a, and b.
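A minimal sketch of one such experiment (assumptions: `keys` stands in for the provided integers, since the file is not reproduced here, and p is fixed to a Mersenne prime larger than every key):

    import random
    from collections import Counter

    P = (1 << 31) - 1  # a prime assumed to be larger than every key

    def count_collisions(keys, m, a, b):
        # h(x) = ((a*x + b) mod p) mod m; a slot holding c keys adds c-1 collisions
        slots = Counter(((a * x + b) % P) % m for x in keys)
        return sum(c - 1 for c in slots.values() if c > 1)

    keys = [random.randrange(10**9) for _ in range(50_000)]  # stand-in for the given data
    for m in (1000, 10_000, 100_000, 1_000_000):
        trials = [count_collisions(keys, m, random.randrange(1, P), random.randrange(P))
                  for _ in range(20)]
        print(m, "avg collisions over 20 trials:", sum(trials) / len(trials))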
EX2. (1p) Find some p, m, a, and b (for a universal family as described in the lectures) for which the integers given in EX1 are especially non-uniformly distributed (many collisions in total, or too many collisions in one specific slot). How easy is it to find such bad cases? That is, how many different hash functions did you have to evaluate? Describe how ignoring the rules for selecting p, m, a, or b can lead to a bad case (many collisions). Attempt to illustrate how the number of collisions is distributed over a large set of random hash functions.
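A sketch of how breaking the selection rules hurts, reusing the h(x) = ((a*x + b) mod p) mod m notation and the stand-in `keys` from the EX1 sketch (the concrete "bad" parameters below are only illustrative):

    import random
    from collections import Counter

    def worst_slot(keys, p, m, a, b):
        # occupancy of the fullest slot under h(x) = ((a*x + b) mod p) mod m
        return max(Counter(((a * x + b) % p) % m for x in keys).values())

    keys = [random.randrange(10**9) for _ in range(50_000)]
    print(worst_slot(keys, (1 << 31) - 1, 1000, 0, 42))  # a = 0: every key lands in one slot
    print(worst_slot(keys, 4096, 1000, 8, 0))            # p not prime, gcd(a, p) > 1: few slots used
    print(worst_slot(keys, 1009, 1000, 3, 7))            # p barely above m: slots 0..8 get double load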
EX3. (1p) So far we have not asked what to do when the hash table grows full and/or we do not know the final number of elements to be hashed in advance. One answer to this is so-called "extendible hashing", also known as "dynamic hashing". Please study online materials, e.g. https://loonytek.com/2016/05/17/extendible-hashing/. Summarise the key points of extendible hashing, also providing one illustration (it can be from the internet; properly cited, of course). Also report the most useful resource(s).
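To make the moving parts concrete, here is a compressed toy sketch of the idea (not code from the cited article; the bucket size and the use of the low bits of Python's hash() are arbitrary choices): a directory of 2^global_depth pointers into buckets, where a full bucket splits on one more bit, and the directory doubles only when the splitting bucket already uses all global_depth bits.

    BUCKET_SIZE = 4

    class Bucket:
        def __init__(self, depth):
            self.depth = depth          # local depth: how many bits this bucket "owns"
            self.items = {}

    class ExtendibleHash:
        def __init__(self):
            self.global_depth = 1
            self.dir = [Bucket(1), Bucket(1)]

        def _bucket(self, key):
            return self.dir[hash(key) & ((1 << self.global_depth) - 1)]

        def insert(self, key, value):
            b = self._bucket(key)
            if len(b.items) < BUCKET_SIZE or key in b.items:
                b.items[key] = value
                return
            if b.depth == self.global_depth:   # directory must double first
                self.dir = self.dir + self.dir
                self.global_depth += 1
            # split b into two buckets distinguished by one more bit
            b0, b1 = Bucket(b.depth + 1), Bucket(b.depth + 1)
            for i, slot in enumerate(self.dir):
                if slot is b:
                    self.dir[i] = b1 if (i >> b.depth) & 1 else b0
            for k, v in b.items.items():       # redistribute the old bucket's items
                self._bucket(k).items[k] = v
            self.insert(key, value)            # retry; may trigger another split

        def lookup(self, key):
            return self._bucket(key).items.get(key)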
EX4. (1p) Hash functions are usually presented as integer -> integer mappings. Yet keys in dictionaries are usually words, IDs, etc. The purpose of this task is to try to map every English word (e.g. https://raw.githubusercontent.com/dwyl/english-words/master/words.txt) into an integer (before even applying a hash function). Since there are many more potential words than 32-bit or 64-bit integers, one may ask how many collisions would be generated simply by converting words into integers. Try out some simple ideas, like adding all character values ord(c) together, and count the number of collisions. Then try to find and implement a better mapping, and report how many collisions remain when it is applied to the above very long list of English words. You can measure two things: the total number of colliding words, and the "most occupied" integer value; do this for both 32-bit and 64-bit integers.
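A sketch comparing the ord-sum baseline against a polynomial rolling code (assuming words.txt has been downloaded from the URL above; the base 131 is an arbitrary choice):

    from collections import Counter

    def ord_sum(word):
        # naive baseline: order-insensitive, so all anagrams collide
        return sum(ord(c) for c in word)

    def poly_hash(word, bits, base=131):
        # polynomial rolling code truncated to `bits` bits: order-sensitive
        mask = (1 << bits) - 1
        h = 0
        for c in word:
            h = (h * base + ord(c)) & mask
        return h

    with open("words.txt", encoding="utf-8") as f:
        words = [w.strip() for w in f if w.strip()]

    def report(name, codes):
        counts = Counter(codes)
        colliding = sum(c for c in counts.values() if c > 1)
        print(f"{name}: {colliding} words share a code, "
              f"busiest integer holds {max(counts.values())} words")

    report("ord-sum", (ord_sum(w) for w in words))
    for bits in (32, 64):
        report(f"poly-{bits}", (poly_hash(w, bits) for w in words))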
EX5. (1p) Watch this presentation by Titus Brown (2011): http://pyvideo.org/video/402/pycon-2011--handling-ridiculous-amounts-of-data-w . Explain very briefly the concept of Bloom filters and why Titus and his colleagues use them in this case.
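As a reminder of the mechanics: a Bloom filter is a bit array of m bits with k hash functions; adding an item sets its k bit positions, and a membership query checks them, so it can return false positives but never false negatives. A minimal sketch (deriving the k positions from slices of one SHA-256 digest is just one convenient choice; it supports k <= 8 here):

    import hashlib

    class BloomFilter:
        def __init__(self, m_bits, n_hashes):
            self.m = m_bits
            self.k = n_hashes                       # k <= 8 with a 32-byte digest
            self.bits = bytearray((m_bits + 7) // 8)

        def _positions(self, item):
            # derive k positions from one digest (an assumption, not the only way)
            h = hashlib.sha256(item.encode()).digest()
            for i in range(self.k):
                yield int.from_bytes(h[4 * i:4 * i + 4], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            return all(self.bits[pos // 8] >> (pos % 8) & 1
                       for pos in self._positions(item))

    bf = BloomFilter(8 * 1024 * 1024, 4)
    bf.add("GATTACA")
    print("GATTACA" in bf, "TACCAT" in bf)  # True, and almost surely False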
EX6. (Bonus 2p) Hidden messages using Bloom filters. Experiment with the De Bruijn graph approach discussed by Titus Brown in the lecture (see EX5) for "storing text" using Bloom filters. Grab a dictionary like this one: http://www.gutenberg.org/cache/epub/972/pg972.txt and insert word definitions as individual "messages".
For example, for the sentence: EXILE, n. One who serves his country by residing abroad, yet is not an ambassador. Store: "EXILE,", "XILE, ", "ILE, n", "LE, n.", ... (You can use padding in front, e.g. '..........EXILE,' to create keys of length 16 (10+5+1), then '.........EXILE, ', etc.)
Hints:
- Create a hash function for text by first converting the text into a large integer.
- Use one large hash-table bitmap (a Bloom filter) and apply several different hash functions on the same array.
- To "retrieve", query the Bloom filter with "EXILE," and re-create the entry by trying out all possible characters (do you improve results by allowing English characters and punctuation only?).
- Experiment with changing the size of the filter and with what proportion of bits are set to 1.
- Experiment with the number (3, 4, ..., X) of different hash functions and with different keyword lengths. (You can assume all possible characters from byte code 0 to 255.)
Measure how many word entries would be retrieved correctly and how many would "get broken" under different parameter settings; a minimal storage/retrieval sketch follows below.
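One possible shape for the storage and retrieval loop (a sketch reusing the BloomFilter class from the EX5 sketch above; KEY_LEN = 16 and a printable-ASCII alphabet are starting points to vary, and the greedy walk simply stops at forks instead of backtracking):

    KEY_LEN = 16
    ALPHABET = [chr(i) for i in range(32, 127)]  # printable ASCII; the task allows 0..255

    def store(bf, entry):
        # pad in front so that every prefix of the entry ends some full-length key
        padded = "." * (KEY_LEN - 1) + entry
        for i in range(len(padded) - KEY_LEN + 1):
            bf.add(padded[i:i + KEY_LEN])

    def retrieve(bf, headword, max_len=200):
        # walk the implicit De Bruijn graph: extend the current (KEY_LEN-1)-char
        # suffix by each character the filter claims to have seen
        text = "." * (KEY_LEN - 1) + headword
        while len(text) < max_len + KEY_LEN:
            suffix = text[-(KEY_LEN - 1):]
            nexts = [c for c in ALPHABET if suffix + c in bf]
            if len(nexts) != 1:  # dead end, or a fork (often a false positive)
                break
            text += nexts[0]
        return text.lstrip(".")

    bf = BloomFilter(50_000_000, 4)  # filter size and hash count to experiment with
    store(bf, "EXILE, n. One who serves his country by residing abroad, "
              "yet is not an ambassador.")
    print(retrieve(bf, "EXILE,"))

Counting how often retrieve() reproduces the stored entry verbatim as the filter fills up gives exactly the measurements this task asks for.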
EX7. (Bonus 2p) Stormen, Ceiserson, Livest, and Rein are four academics who wrote a very popular textbook in computer science, affectionately known as SCLR. They just found k first editions in their offices and want to auction them off online for charity. Each bidder in the auction has a unique integer bidder ID and can bid some positive integer amount for a single copy (but may increase or decrease their bid while the auction is live). Describe a database supporting the following operations, assuming n is the number of bidders in the database at the time of the operation; one possible shape is sketched after the list. For each operation, state whether your running time is worst-case, expected, and/or amortized.
- record(d, b): record a new bidder ID d with bid b in O(log n) time
- update(d, b): update the bid of existing bidder ID d to bid b in O(log n) time
- top_k_revenue(): return the revenue from selling to the current k highest bidders in O(1) time
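A sketch of one such database (assumptions: the third-party sortedcontainers package stands in for a balanced BST, so its per-operation cost is amortized roughly logarithmic rather than a worst-case O(log n) guarantee; the O(1) query works by maintaining the top-k sum incrementally):

    from sortedcontainers import SortedList  # pip install sortedcontainers

    class AuctionDB:
        def __init__(self, k):
            self.k = k                 # number of copies for sale
            self.bid_of = {}           # bidder ID -> current bid
            self.bids = SortedList()   # multiset of all current bids
            self.top_k_sum = 0         # maintained sum of the k highest bids

        def _add(self, b):
            self.bids.add(b)
            if len(self.bids) <= self.k:
                self.top_k_sum += b
            else:
                evicted = self.bids[-(self.k + 1)]  # the old k-th highest bid
                if b > evicted:                     # b displaced it from the top k
                    self.top_k_sum += b - evicted

        def _remove(self, b):
            in_top_k = self.bids.index(b) >= len(self.bids) - self.k
            self.bids.remove(b)
            if in_top_k:
                self.top_k_sum -= b
                if len(self.bids) >= self.k:        # promote the new k-th highest
                    self.top_k_sum += self.bids[-self.k]

        def record(self, d, b):        # new bidder
            self.bid_of[d] = b
            self._add(b)

        def update(self, d, b):        # existing bidder changes their bid
            self._remove(self.bid_of[d])
            self.bid_of[d] = b
            self._add(b)

        def top_k_revenue(self):       # O(1): the sum is kept up to date
            return self.top_k_sum

The invariant is that top_k_sum always equals the sum of the k largest bids, so each insertion or removal only has to account for the single bid crossing the top-k boundary.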