There will be four sets of lectures by the following distinguished lecturers.
From Software Engineering to Evidence Engineering
The focus of software engineering over the last half century has shifted from squeezing the most from every (then very expensive) compute cycle, to improving developer productivity, and, most recently, to engineering user behaviors. Software systems now collect massive amounts of operational data on users' individual and social activities and rely on it to create experiences that achieve desired outcomes, e.g., increased sales revenue or higher software quality (if the user is a software developer). Novel approaches to designing, implementing, testing, and operating such systems are needed to transform this vast operational data into accurate and actionable information (evidence), either automatically or with social support. With operation and measurement becoming an integral part of software development, the separation between software tools and end-user software is increasingly blurred. The tools of software construction, development, build, delivery, and operation are both a means of building software systems and, at the same time, an integral part of those systems. The core questions of software engineering therefore need to address the engineering principles that allow these systems not simply to store or push this massive data around, but to reliably produce compelling evidence for users and developers alike: to refocus on evidence engineering.
Audris Mockus worked at AT&T, then Lucent Bell Labs and Avaya Labs, for 21 years. He is now the Ericsson-Harlan D. Mills Chair Professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee.
He specializes in the recovery, documentation, and analysis of digital remains left as traces of collective and individual activity. He aims to reconstruct and improve reality from these projections via methods that contextualize, correct, and augment digital traces; modeling techniques that present and affect the behavior of teams and individuals; and statistical models and optimization techniques that help understand the nature of individual and collective behavior. His work has improved the understanding of how teams of software engineers interact and how to measure their productivity.
Dr. Mockus received a B.S. and an M.S. in Applied Mathematics from the Moscow Institute of Physics and Technology in 1988. He received an M.S. in 1991 and a Ph.D. in 1994, both in Statistics, from Carnegie Mellon University.
Machine Learning: beliefs, models and inference
Machine learning is a quickly growing field that has generated significant media interest in the last couple of years. Data-driven learning has led to rapid development of applications across many different fields. Instead of relying on explicit models, machine learning focuses on how models or tasks can be learned directly from data. In this series of lectures we will start at the very beginning and try to understand the founding principles that allow us, and machines, to learn. We will then see how these relatively simple ideas can be formalised, and show what underpins current machine learning.
In the first lecture we will discuss the history and the fundamental principles of learning: what it means to learn, how we can learn, and which big historical inventions have allowed us to build machines that learn from data. We will then proceed to discuss the process of modelling: how can we build models that explain data, that are interpretable, and that allow us to introduce our beliefs in a principled manner? In particular, we will look at non-parametric constructions, which allow us to formulate models with adaptable complexity and to work with infinite objects. In the final lecture we will look at how we can learn and fit models to data. Inference is often intractable, which means we will focus on methods for approximate inference.
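As a small, purely illustrative aside on the approximate-inference topic mentioned above (not part of the lecture materials): when a posterior density is known only up to a normalizing constant, a random-walk Metropolis sampler can draw approximate samples from it. The target below is a hypothetical unnormalized density chosen for illustration.

```python
import math
import random

def metropolis(log_unnorm, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: approximate samples from a density
    known only through the log of its unnormalized form."""
    rng = random.Random(seed)
    x = x0
    log_p = log_unnorm(x)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_p_new = log_unnorm(proposal)
        # Accept with probability min(1, p_new / p_old), computed in log space.
        if math.log(rng.random()) < log_p_new - log_p:
            x, log_p = proposal, log_p_new
        samples.append(x)
    return samples

# Example target: a standard normal, known only up to a constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)  # should be close to 0
```

Even this simple scheme illustrates the key point of approximate inference: we never need the intractable normalizing constant, only ratios of the unnormalized density.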
Dr. Carl Henrik Ek is a lecturer at the University of Bristol. His research focuses on developing computational models that allow machines to learn from data. In particular, he is interested in Bayesian non-parametric models, which allow for principled quantification of uncertainty, easy interpretability, and adaptable complexity. He has worked extensively on models based on Gaussian process priors, with applications in robotics and computer vision.
Prior to moving to the University of Bristol he was an assistant professor at the Royal Institute of Technology in Stockholm, Sweden, where he also holds an MEng degree in Vehicle Engineering and is a docent in Machine Learning. He did his PhD at Oxford Brookes University and his post-doc at the University of California, Berkeley.
Digital Innovation Management: Reinventing Innovation Management Research in a Digital World (1st lecture)
Rapid and pervasive digitization of innovation processes and outcomes has
upended extant theories on innovation management by calling into question
fundamental assumptions about the definitional boundaries for innovation,
agency for innovation, and the relationship between innovation processes and
outcomes. There is a critical need for novel theorizing on digital
innovation management that does not rely on such assumptions and draws on
the rich and rapidly emerging research on digital technologies. We offer
suggestions for such theorizing in the form of four new theorizing logics,
or elements, that are likely to be valuable in constructing more accurate
explanations of innovation processes and outcomes in an increasingly digital
world. These logics can open new avenues for researchers to contribute to
this important area. Our suggestions in this paper, coupled with the six
research notes included in the special issue on digital innovation
management, seek to offer a broader foundation for reinventing innovation
management research in a digital world.
Metahuman Systems: A New Socio-Technical Challenge (2nd lecture)
Machine-based reinforcement learning and deep learning are currently giving
rise to a new type of socio-technical system we call metahuman systems.
Metahuman systems extend the concept of traditional socio-technical systems
in that they contain autonomous learning machines – machines that can learn
and act on their own initiative. This change in machine capabilities creates
the need to formulate new ideas about the composition, function, and
properties of socio-technical systems viewed as metahuman systems. It also
calls for an expansion of past socio-technical theory. This paper advances a
detailed assessment, based on a review of recent technology uses and trials,
of what effects metahuman systems will have on the people, tasks, and
structures of socio-technical systems and their emergent properties of
control. The analysis anticipates the need for an extended and revitalized
discourse on the long-term effects of the new classes of information
technologies that are now penetrating contemporary work systems. In
addition, the paper suggests how organization scholars, in collaboration
with other disciplinary fields, might develop novel generalizable, impactful
knowledge about metahuman systems that will inform future organization
design.
"Computing" Requirements for Open Source Software: A Distributed Cognitive Approach (3rd lecture)
Most requirements engineering (RE) research has been conducted in the
context of structured and agile software development. Software, however, is
increasingly developed in open source software (OSS) forms which have
several unique characteristics. In this study, we approach OSS RE as a
sociotechnical, distributed cognitive process where distributed actors
“compute” requirements – i.e., transform requirements-related knowledge into
forms that foster a shared understanding of what the software is going to do
and how it can be implemented. Such computation takes place through social
sharing of knowledge and the use of heterogeneous artifacts. To illustrate
the value of this approach, we conduct a case study of a popular OSS
project, Rubinius – a runtime environment for the Ruby programming
language – and identify ways in which cognitive workload associated with RE
becomes distributed socially, structurally, and temporally across actors and
artifacts. We generalize our observations into an analytic framework of OSS
RE, which delineates three stages of requirements computation: excavation,
instantiation, and testing-in-the-wild. We show how the distributed,
dynamic, and heterogeneous computational structure underlying OSS
development builds an effective mechanism for managing requirements. Our
study contributes to sorely needed theorizing of appropriate RE processes
within highly distributed environments, as it identifies and articulates
several novel mechanisms that undergird cognitive processes associated with
distributed forms of RE.
Kalle Lyytinen is the Iris S. Wolstein Professor of Information Systems and Management Design and Head of the Design & Innovation Department at the Weatherhead School of Management, Case Western Reserve University. He received his PhD in Computer Science from the University of Jyvaskyla, was appointed professor in information systems in 1987 (at the age of 34), and was the first dean of its IT faculty from 1998 until he moved to the US in 2001. Professor Lyytinen currently directs the Doctor of Management Programs for executives at the Weatherhead School of Management, a program currently regarded as the best of its kind globally. He has consulted for numerous government research agencies, including the Academy of Finland; the Swedish, British, Norwegian, Danish, Dutch, German, Swiss, Hong Kong, and Spanish research councils; several EU directorates; and the U.S. National Science Foundation (NSF). He is a Knight, First Class, of the Order of the White Rose of Finland. In 2013 he received the Association for Information Systems' (AIS) highest recognition, the LEO Award for Lifetime Exceptional Achievement in information systems. He has authored or co-authored more than 160 journal articles, over 20 books and special issues, and several hundred conference papers. In the information systems field he is among the three most productive scholars of the last two decades and among the five most cited scholars based on his h-index. Globally he is the 254th most cited scholar in computing and electronics research based on his h-index, and the third within Scandinavia (highest in Finland). He is also the most connected author in the information systems field, based on co-authorship relationships; a new measure, the Lyytinen number, was recently proposed to reflect connectedness in scholarly publishing within the IS field.
He has won several prestigious awards from IFIP, AIS, and AoM for his groundbreaking research, and he has guided or examined more than 100 PhD theses across the globe. His former PhD students currently hold significant faculty positions, including as department heads, in computing and IT management departments on four continents.
Kalle Lyytinen’s dominant research focus during the last decade has been on
digital innovation and its specific characteristics and modes of operation.
His research has significantly improved understanding of how digital
innovations shape organizations, their products and services and change
associated innovation processes. His research has helped organizations and
industries to understand how to more effectively identify, absorb, manage,
implement -- and be transformed by digital innovations.
Machine learning: predictive analytics in health and environment
Predictive analytics is one of the most popular areas in machine learning and data mining. I will start the lectures by reviewing some fundamentals of data science and then focus on time series analysis and prediction. More specifically, I will cover some fundamentals of predictive analytics, including reducing the dimensionality of the data space by feature selection, stream processing, and learning interpretable models. I will also speak about patterns of missing data and their implications for predictive analytics in stream processing, where no missing-data imputation is possible. The solutions will be demonstrated in the application areas of environmental informatics, medical science, and transportation and mobility.
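To give a flavour of the topics mentioned above, here is a purely illustrative sketch (synthetic data and helper names are my own, not taken from the lectures) of a minimal time series prediction pipeline: building lagged features from a univariate series, then selecting the most informative lags by their absolute correlation with the target.

```python
import math
import random

def make_lagged(series, max_lag):
    """Turn a univariate series into (features, target) rows of lagged values."""
    X, y = [], []
    for t in range(max_lag, len(series)):
        X.append([series[t - k] for k in range(1, max_lag + 1)])
        y.append(series[t])
    return X, y

def select_by_correlation(X, y, k):
    """Simple filter-style feature selection: keep the k columns (lags)
    with the largest absolute correlation with the target."""
    def abs_corr(col):
        xs = [row[col] for row in X]
        mx, my = sum(xs) / len(xs), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in xs))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return abs(cov / (sx * sy)) if sx and sy else 0.0
    ranked = sorted(range(len(X[0])), key=abs_corr, reverse=True)
    return ranked[:k]

# Synthetic AR(1)-like series for illustration only.
rng = random.Random(1)
s = [0.0]
for _ in range(1000):
    s.append(0.8 * s[-1] + rng.gauss(0.0, 0.1))

X, y = make_lagged(s, max_lag=5)
keep = select_by_correlation(X, y, k=2)  # lag 1 should rank highest here
```

In a streaming setting the same correlation statistics would have to be maintained incrementally, which is one reason the lectures treat stream processing and missing data as first-class concerns rather than preprocessing details.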
Jaakko Hollmén (b. 1970) received the degrees of M.Sc. (Tech.) in 1996, Lic.Sc. (Tech.) in 1999, and D.Sc. (Tech.) in 2000, all from the Department of Computer Science and Engineering at the Helsinki University of Technology in Finland. Currently, he is a faculty member at Aalto University in Espoo, Finland.
Jaakko Hollmén's research interests include the theory and practice of machine learning and data mining, especially their applications in environmental informatics, time series analysis, and medicine.
He has organized various conferences in his research area. In 2017 he is co-chair of the Program Committee of ECML-PKDD 2017, the premier machine learning and data mining venue in Europe. It will be held in Skopje, Macedonia in September; more information is available at http://ecmlpkdd2017.ijs.si