This course covers a concept which is of fundamental importance in Computer Science: the interplay between randomness and algorithms.
It is aimed at Master's students, and is credited toward either the specialization module 2.4 "Cryptography" or the module 4 "Electives".
No knowledge of probability theory is required beyond the basics: events (mutually exclusive events, independence, conditional probability), discrete random variables (expectation, independence, distribution). Beyond that, everything will be developed in the course (and in the textbook). One could say that MTAT.05.117 subsumes the (non-existent) course Probability for Computer Scientists.
In Computer Science, randomness has two important applications.
- Determining the success probability or running time of an algorithm on random input
- Randomized algorithms
(1) Suppose that a business decision has to be made about which algorithmic approach to develop for a specific problem. In other words: which algorithm is to be implemented? Usually, several choices are available, but implementing and testing is costly. You would like to focus on the most promising algorithm.
Let's say the computational task is time sensitive (as usual, your client/boss wants the computer to solve a crazy hard problem in the blink of an eye), so you compare running times. Worst-case running times are easy enough to obtain, but they don't give the whole story: The phenomenon that algorithms outperform their worst-case running times on "typical" input data is ubiquitous.
The answer is to analyze and compare the performance of the algorithms on typical, i.e., random, input data. You'd like to make sure that, say, for random input of the type we are interested in, there's a 99% probability that the algorithm completes its task very quickly.
Also very often, some resource constraints are so tight that the use of an algorithm which succeeds on every input is not feasible. You are forced to use an algorithm which might fail to produce the correct output. You'd then like to make sure that, say, for random input of the type we are interested in, there's a 99.9% probability that the algorithm succeeds.
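Probability statements like these can also be approximated empirically, by running the algorithm on many random inputs and counting successes. Here is a minimal sketch; `toy_algorithm` and `toy_input` are hypothetical stand-ins, not anything from the course:

```python
import random

def estimate_success_probability(algorithm, random_input, trials=10_000):
    """Estimate P(algorithm succeeds on a random input) by sampling.
    `algorithm` returns True on success; `random_input` draws one input.
    Both are placeholders for whatever you are actually analyzing."""
    successes = sum(algorithm(random_input()) for _ in range(trials))
    return successes / trials

# Toy stand-in: an "algorithm" that succeeds unless its input list
# happens to start with its maximum (success probability 1 - 1/n
# when the input is a uniformly random ordering of n distinct values).
def toy_algorithm(xs):
    return xs[0] != max(xs)

def toy_input(n=10):
    xs = list(range(n))
    random.shuffle(xs)
    return xs

print(estimate_success_probability(toy_algorithm, toy_input))  # ≈ 0.9
```

Simulation only estimates the probability, of course; the point of the course is to *prove* such bounds, which is both stronger and often cheaper than massive testing.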
In this course, you'll learn how to obtain results like these.
(2) A randomized algorithm is one which uses a random number generator in an essential way. The basic example is quicksort with a random pivot. For such an algorithm, you would like to estimate the probability that (a) it succeeds, or that (b) it succeeds quickly. In the case of quicksort, you prove that, in expectation, it takes O(n log n) comparisons. (You know that it succeeds even with the worst possible pivots, so there's no need to consider (a).)
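To make the quicksort example concrete, here is a sketch of quicksort with a uniformly random pivot, instrumented to count comparisons; it is only an illustration, not the course's reference implementation:

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a uniformly random pivot.
    Returns the number of element comparisons performed."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return 0
    p = random.randint(lo, hi)     # the random choice: a uniform pivot index
    a[p], a[hi] = a[hi], a[p]      # move pivot out of the way
    pivot, i = a[hi], lo
    for j in range(lo, hi):        # hi - lo comparisons against the pivot
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # pivot into its final position i
    return (hi - lo) + quicksort(a, lo, i - 1) + quicksort(a, i + 1, hi)
```

Running this on random permutations, the returned count concentrates around the O(n log n) expectation, even though a single unlucky run can use on the order of n^2 comparisons.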
Very often, randomized algorithms are simultaneously simpler and more efficient than deterministic ones. For some problems, the best known algorithms are randomized, and no deterministic algorithms with even remotely similar performance exist.
In this course, you'll learn how to actually do (a) and (b).
The textbook for the course is
Probability and Computing ---
Randomized Algorithms and Probabilistic Analysis
M. Mitzenmacher & E. Upfal.
You will be required to read sections of that book in "individual study". (That is a bad choice of words, because reading it in a small group is highly recommended, more fun, and far easier.) Moreover, in class, you are expected to have already read the parts of the textbook which are discussed.
The focus of the course will be on training skills (the ability to solve problems using the techniques covered) rather than teaching facts (theorems, definitions, ...). As such, the most difficult and time consuming part of the course will be solving challenging homework exercises. Keep in mind that the course will be a waste of time unless you work hard on at least 50% of the homework problems. The solutions to the homeworks will be discussed in the practice sessions.
The exam will consist of problems similar to the homework exercises. (Another reason to solve many of those.)
The course relies on undergraduate-level discrete probability. I recommend that you read the first chapter of the textbook before the first class meeting. Ideally, you should feel comfortable with everything in Section 1.2 (except, possibly, Lemma 1.3), and with Theorem 1.6. Theorem 1.7 should not be completely new to you. Ideal or not, in any case, as a brain teaser, I recommend that you solve the following exercises from the end of Chapter 1:
We will not repeat basic probability in class. If you didn't get enough out of the introductory probability course you attended, check out video lectures 18-23 of this undergraduate course on Mathematics for Computer Science at MIT.