Lectures

There will be four sets of lectures by distinguished lecturers. The lecture abstracts are given below.

The traditional approach to designing data structures that are to be accessed concurrently by several processes is to use exclusive-access locks. This approach has a number of disadvantages, including a lack of fault-tolerance, priority inversion, the possibility of deadlock or livelock, and inefficiency when slow processes hold a lock while fast processes wait for that lock. An alternate approach is to design lock-free implementations which coordinate different processes' accesses to the data structure without using locks. We will survey some of the most important ideas used over the past fifteen years in the development of lock-free data structure implementations, including some that are now included in the Java standard libraries. The survey will include some current research on the development of lock-free tree data structures.
Handouts - Lock-Free Shared Data Structures
Further reading - Lock-Free Shared Data Structures
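
To give a concrete flavour of the compare-and-swap (CAS) technique that underlies many lock-free designs, the following is a minimal sketch of a Treiber-style lock-free stack in Java, built on java.util.concurrent.atomic.AtomicReference. It is an illustrative example, not part of the lecture material.

import java.util.concurrent.atomic.AtomicReference;

// Minimal Treiber-style lock-free stack: threads never block on a lock;
// instead, each push/pop retries a compare-and-set (CAS) until it succeeds.
public class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldTop, newTop;
        do {
            oldTop = top.get();
            newTop = new Node<>(value, oldTop);
            // CAS succeeds only if no other thread changed 'top' in the meantime.
        } while (!top.compareAndSet(oldTop, newTop));
    }

    public T pop() {
        Node<T> oldTop;
        do {
            oldTop = top.get();
            if (oldTop == null) return null;   // empty stack
        } while (!top.compareAndSet(oldTop, oldTop.next));
        return oldTop.value;
    }
}

Because at least one CAS always succeeds, a slow or stopped thread can never prevent the others from making progress, which is exactly the guarantee that lock-based designs fail to provide.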
Many practical applications in statistics and econometrics are hampered by numerical challenges: model selection, parameter estimation for robust statistics, and parametric regression are just a few. Quite often, the trouble arises because of the required optimization to maximize a likelihood function, minimize an information criterion, etc. As it turns out, many of these optimization problems are far from well behaved: from an optimization point of view, the objective functions are often non-convex and may have multiple optima; there might be frictions in the search space; there might be constraints on valid solutions, etc. Unfortunately, standard, “out-of-the-box” optimization techniques fail to cope with these issues. It is therefore not unusual that, even for relatively simple problems, different software packages produce different outcomes because they rely on different built-in optimizers, each with its own limitations.
Heuristic optimization methods are a new class of general-purpose techniques that can deal with such ill-behaved problems. The lectures will present some of these techniques and demonstrate how they can be used in general, and for problems in statistics, econometrics, and economics in particular.
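
As a rough illustration of the kind of technique the lectures cover, the sketch below implements threshold accepting, a simple heuristic that occasionally accepts worse candidate solutions and can therefore escape local minima of a non-convex objective. The objective function and all parameter values are made up for the example.

import java.util.Random;

// Toy threshold-accepting heuristic minimizing a deliberately multimodal
// one-dimensional objective. Unlike a gradient-based optimizer, the search
// accepts moderately worse solutions, which lets it escape local minima.
public class ThresholdAccepting {
    static double objective(double x) {
        // Non-convex test function with many local minima (illustrative).
        return Math.sin(3 * x) + 0.1 * x * x;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double x = 5.0, best = x;
        double threshold = 1.0;               // how much worsening is tolerated
        for (int round = 0; round < 50; round++) {
            for (int step = 0; step < 200; step++) {
                double candidate = x + (rng.nextDouble() - 0.5);   // local move
                double delta = objective(candidate) - objective(x);
                if (delta < threshold) x = candidate;   // accept if not too much worse
                if (objective(x) < objective(best)) best = x;
            }
            threshold *= 0.85;                // tighten the threshold over time
        }
        System.out.printf("best x = %.4f, f(x) = %.4f%n", best, objective(best));
    }
}

In practice such heuristics are typically restarted several times and their parameters tuned to the problem at hand; the point is that they make no smoothness or convexity assumptions about the objective.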
Democracy is a defining attribute of civilization, but it is fragile and vulnerable. From the dawn of democracy to the present day, people have sought to undermine the process of democracy by manipulating the outcome of elections. Over time, various procedures and technologies have evolved to try to guarantee accuracy and secrecy. However, the tension between the requirements of (universal) verifiability and ballot secrecy makes this very challenging.
In the “modern” era, various technological approaches to securing elections have been deployed, in particular in the US: lever machines came into use at the end of the 19th century, followed by punch cards, optical scanners and touch screens. To date all such technologies have proved problematic and vulnerable to large-scale error or corruption; witness the controversies over the US 2000 and 2004 presidential elections.
Creating an election system that is demonstrably immune from error or corruption while at the same time guaranteeing ballot secrecy poses a unique and formidable challenge. Furthermore, such systems must not only be technically trustworthy, they must also be widely trusted and extremely easy to use. Over the last few decades, cryptographers and information security researchers have made significant progress on these challenges. In these lectures I will outline some developments in the search for demonstrably accurate and secret elections. In particular, I will describe the Prêt à Voter and Pretty Good Democracy verifiable voting schemes.
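
To convey the basic idea behind Prêt à Voter in a very simplified form: each printed ballot lists the candidates in a random order, the voter marks a position and destroys the half showing the order, and the retained receipt carries only the marked position together with an encrypted record of the permutation. The toy Java sketch below replaces the real scheme's onion encryption, mixnets, and distributed tellers with a single in-memory secret, so it illustrates the information flow only, not the security argument; all names are illustrative.

import java.util.*;

// Toy illustration of the core Prêt à Voter idea: the candidate order on each
// ballot is randomly permuted, the voter's receipt reveals only the marked
// position plus an identifier for the permutation, and only the tallying
// authority can map the position back to a candidate. The real scheme uses
// onion encryption, mixnets, and distributed tellers; this sketch replaces all
// of that with a secret table held by a trusted "teller".
public class PretAVoterSketch {
    static final List<String> CANDIDATES = List.of("Alice", "Bob", "Carol");

    record Ballot(List<String> shuffledOrder, long permutationId) {}
    record Receipt(int markedPosition, long permutationId) {}   // what the voter keeps

    // The teller remembers, under each id, how that ballot was permuted
    // (standing in for decrypting the "onion" printed on the ballot).
    static final Map<Long, List<String>> tellerSecrets = new HashMap<>();

    static Ballot printBallot(Random rng) {
        List<String> order = new ArrayList<>(CANDIDATES);
        Collections.shuffle(order, rng);
        long id = rng.nextLong();
        tellerSecrets.put(id, order);
        return new Ballot(order, id);
    }

    static Receipt vote(Ballot ballot, String choice) {
        // The voter marks the row holding their candidate, then destroys the
        // candidate-order half of the ballot; the receipt alone reveals nothing.
        return new Receipt(ballot.shuffledOrder().indexOf(choice), ballot.permutationId());
    }

    static String tally(Receipt receipt) {
        return tellerSecrets.get(receipt.permutationId()).get(receipt.markedPosition());
    }

    public static void main(String[] args) {
        Ballot b = printBallot(new Random());
        Receipt r = vote(b, "Bob");
        System.out.println("Receipt: position " + r.markedPosition());   // publishable
        System.out.println("Decrypted vote: " + tally(r));               // "Bob"
    }
}

The receipt can be published and later checked by the voter without revealing the vote, since the marked position is meaningless without the permutation.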
Composition of systems out of autonomous subsystems pivots on coordination of concurrency. The most challenging aspect of concurrency involves the study of interaction and its properties. Interaction refers to what transpires among two or more active entities whose (communication) actions mutually affect each other. In spite of the long-standing recognition of the significance of interaction, classical models of concurrency resort to peculiarly indirect means to express interaction and study its properties. Formalisms such as process algebras/calculi, concurrent objects, actors, agents, shared memory, message passing, etc., all are primarily action-based models that provide constructs for the direct specification of things that interact, rather than a direct specification of interaction (protocols). Consequently, these formalisms turn interaction into a derived or secondary concept whose properties can be studied only indirectly, as the side-effects of the (intended or coincidental) couplings or clashes of the actions whose compositions comprise a model.
Alternatively, we can view interaction as an explicit first-class concept, complete with its own composition operators that allow the specification of more complex interaction protocols by combining simpler, and eventually primitive, protocols. The coordination language Reo serves as a premier example of such an interaction-based model of concurrency. In these lectures, we describe Reo and its support tools. We show how exogenous coordination in Reo reflects an interaction-centric model of concurrency where an interaction (protocol) consists of nothing but a relational constraint on communication actions. In this setting, interaction protocols become explicit, concrete, tangible (software) constructs that can be specified, verified, composed, and reused, independently of the actors that they may engage in disparate applications.
Handouts - Coordination, Reo, and software composition
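
The following toy Java sketch is not the Reo toolset, but it hints at the interaction-centric view: the protocol connecting a producer and a consumer is an explicit, composable object built from primitive one-place buffers (akin to Reo's FIFO1 channels), and neither party refers to the other.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy illustration of interaction as a first-class construct: the "protocol"
// between a producer and a consumer is built by composing primitive channels,
// and the communicating parties know nothing about each other. This is only a
// sketch of the idea using Java threads, not the Reo tool API.
public class ProtocolAsFirstClass {
    // A FIFO1 channel: a one-place buffer, the simplest stateful primitive.
    static BlockingQueue<String> fifo1() { return new ArrayBlockingQueue<>(1); }

    // Composing two FIFO1 channels end-to-end yields a new protocol (a
    // two-place buffer) without touching producer or consumer code.
    static void connect(BlockingQueue<String> from, BlockingQueue<String> to) {
        Thread pump = new Thread(() -> {
            try {
                while (true) to.put(from.take());
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        pump.setDaemon(true);
        pump.start();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> a = fifo1(), b = fifo1();
        connect(a, b);                        // the composed protocol: FIFO1 ; FIFO1

        // The producer only sees its end of the protocol.
        new Thread(() -> {
            try { for (int i = 0; i < 3; i++) a.put("item-" + i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();

        // The consumer only sees the other end.
        for (int i = 0; i < 3; i++) System.out.println("consumed " + b.take());
    }
}

Replacing or recomposing the circuit changes the protocol without touching producer or consumer code, which is the essence of exogenous coordination.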