Dates Time Speakers/Topic Location
October 25, 2019 10:00 AM Ying Hung: Integration of Models and Data for Inference about Humans and Machines (I); Konstantin Mischaikow: Analyzing Imprecise Dynamics; Jingjin Yu: Toward Scalable and Optimal Autonomy COR-433
November 1, 2019 10:00 AM Rong Chen: Dynamic Systems and Sequential Monte Carlo; Jason Klusowski: Integration of Models and Data for Inference about Humans and Machines (II); Cun-Hui Zhang: Statistical Inference with High-Dimensional Data COR-433
November 8, 2019 10:00 AM Kostas Bekris: Generating Motion for Adaptive Robots; Fred Roberts: Meaningless Statements in Performance Measurement for Intelligent Machines COR-433
November 15, 2019 10:00 AM Fioralba Cakoni: Inside-Out, Seen and Unseen; Matthew Stone: Colors in Context: Inference challenges in Bayesian cognitive science; Wujun Zhang: Numerical approximation of the optimal transport problem COR-433
February 7, 2020 10:00 AM Patrick Shafto (Mathematics and Computer Science; Rutgers University)

Title: Cooperation in Humans and Machines

Abstract: Cooperation, specifically cooperative information sharing, is a basic principle of human intelligence. Machine learning, in contrast, focuses on learning from randomly sampled data, which neither leverages others’ cooperation nor prioritizes the ability to communicate what has been learned. I will discuss ways in which our understanding of human learning may be leveraged to develop new machine learning, and form a foundation for improved integration of machine learning into human society.

May 8, 2020 10:00 AM Rene Vidal (Biomedical Engineering; Johns Hopkins University)

Title: From Optimization Algorithms to Dynamical Systems and Back

Abstract: Recent work has shown that tools from dynamical systems can be used to analyze accelerated optimization algorithms. For example, it has been shown that the continuous limit of Nesterov’s accelerated gradient (NAG) gives an ODE whose convergence rate matches that of NAG for convex, unconstrained, and smooth problems. Conversely, it has been shown that NAG can be obtained as the discretization of an ODE; however, since different discretizations lead to different algorithms, the choice of discretization becomes important. The first part of this talk will extend this type of analysis to convex, constrained, and non-smooth problems by using Lyapunov stability theory to analyze continuous limits of the Alternating Direction Method of Multipliers (ADMM). The second part of this talk will show that many existing and new optimization algorithms can be obtained by suitably discretizing a dissipative Hamiltonian. As an example, we will present a new method called Relativistic Gradient Descent (RGD), which empirically outperforms momentum, RMSprop, Adam and AdaGrad on several non-convex problems. This is joint work with Guilherme Franca, Daniel Robinson and Jeremias Sulam.
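
A toy illustration of the discretization viewpoint (a generic sketch, not the ADMM or RGD analysis from the talk): plain gradient descent is the explicit Euler discretization of the gradient flow x'(t) = -∇f(x), and momentum-type methods arise from discretizing a damped second-order ODE. The heavy-ball ODE below is simpler than Nesterov's ODE (its damping coefficient is constant rather than 3/t), but shows the same idea.

```python
def grad_f(x):
    # Gradient of the convex quadratic f(x) = 0.5 * x**2
    return x

# Explicit Euler on the gradient flow x'(t) = -grad_f(x(t)) with step h
# recovers gradient descent: x_{k+1} = x_k - h * grad_f(x_k).
h, x = 0.1, 5.0
for _ in range(100):
    x = x - h * grad_f(x)
print(abs(x) < 1e-3)  # the iterate has converged toward the minimizer x* = 0

# A discretization of the damped (heavy-ball) ODE y'' + b*y' + grad_f(y) = 0,
# updating the velocity first (semi-implicit Euler), gives a momentum method.
b, v, y = 1.0, 0.0, 5.0
for _ in range(100):
    v = v + h * (-b * v - grad_f(y))
    y = y + h * v
print(abs(y) < 0.5)  # the momentum iterate also converges, oscillating on the way
```

Different discretizations of the same ODE (explicit vs. semi-implicit, velocity-first vs. position-first) yield different algorithms with different stability regions, which is the point the abstract makes about the choice of discretization.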

June 5, 2020 12:00 PM Lydia Chilton (Computer Science; Columbia University)

Title: AI Tools for Creative Work

June 16, 2020 12:00 PM Mykhaylo Tyomkyn (Applied Mathematics; Charles University)

Title: Many Disjoint Triangles in Co-triangle-free Graphs

June 22, 2020 12:00 PM Lenka Zdeborova (Institute of Theoretical Physics; French National Centre for Scientific Research)

Title: Understanding Machine Learning with Statistical Physics

June 30, 2020 12:00 PM Rebecca Wright (Computational Science Center; Barnard College)

Title: Privacy in Today’s World

July 7, 2020 12:00 PM Vivek Singh (Behavioral Informatics Lab; Rutgers University)

Title: Algorithmic Fairness

Abstract: Today Artificial Intelligence (AI) algorithms are used to make multiple decisions affecting human lives, and many such algorithms have been reported to be biased. This includes parole decisions, search results, and product recommendations, among others. Using multiple examples of recent efforts from my lab, I will discuss how such bias can be systematically measured and how the underlying algorithms can be made less biased. More details available at:

July 17, 2020 10:00 AM Cynthia Rudin (Prediction Analysis Lab; Duke University)

Title: Interpretability vs. Explainability in Machine Learning

Abstract: With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable, and can be misleading. If we use interpretable machine learning models, they come with their own explanations, which are faithful to what the model actually computes.

In this talk, I will discuss some of the reasons that black boxes with explanations can go wrong, whereas using inherently interpretable models would not have these same problems. I will give an example of where an explanation of a black box model went wrong, namely, I will discuss ProPublica’s analysis of the COMPAS model used in the criminal justice system: ProPublica’s explanation of the black box model COMPAS was flawed because it relied on wrong assumptions to identify the race variable as being important. Luckily, in recidivism prediction applications, black box models are not needed because inherently interpretable models exist that are just as accurate as COMPAS.

I will also give examples of interpretable models in healthcare. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high.

Finally, I will discuss two long-term projects my lab is working on, namely optimal sparse decision trees and interpretable neural networks.
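
For flavor on what "inherently interpretable" can mean, scoring systems such as 2HELPS2B are tiny point-based models: a handful of binary features with small integer weights summed into a risk score, so the prediction can be audited by hand. A minimal sketch, with invented feature names and point values (not the real 2HELPS2B features or weights):

```python
# Hypothetical point-based risk score in the style of 2HELPS2B.
# The feature names and point values are invented for illustration only.
FEATURES = {
    "feature_a": 1,  # e.g. a binary clinical or EEG finding
    "feature_b": 1,
    "feature_c": 2,
}

def risk_score(patient):
    """Sum the points of the features present; a higher score means higher risk."""
    return sum(points for name, points in FEATURES.items() if patient.get(name))

print(risk_score({"feature_a": True, "feature_c": True}))  # 1 + 2 = 3
```

The model's entire logic fits in the table of points, which is what makes its explanation faithful: the explanation is the model.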

July 21, 2020 12:00 PM Peter Winkler (Math and Computer Science; Dartmouth)

Title: Cooperative Puzzles

September 11, 2020 10:00 AM Mauro Maggioni (Data Intensive Computation; Johns Hopkins)

Title: Learning Interaction Laws in Particle- and Agent-Based Systems

Abstract: Interacting agent-based systems are ubiquitous in science, from modeling of particles in physics to prey-predator and colony models in biology, to opinion dynamics in economics and the social sciences. Oftentimes the laws of interaction between the agents are quite simple: for example, they depend only on pairwise interactions, and only on pairwise distance in each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents in the system, can we learn what the laws of interaction are? We would like to do this without assuming any particular form for the interaction laws, i.e., they might be “any” function of pairwise distances. We consider this problem both in the mean-field limit (i.e., the number of particles going to infinity) and in the case of a finite number of agents with an increasing number of observations, albeit in this talk we will mostly focus on the latter case. We cast this as an inverse problem, and study it in the case where the interaction is governed by an (unknown) function of pairwise distances. We discuss when this problem is well-posed, and we construct estimators for the interaction kernels with provably good statistical and computational properties. We measure their performance on various examples, which include extensions to agent systems with different types of agents, second-order systems, and families of systems with parametric interaction kernels. We also conduct numerical experiments to test the large-time behavior of these systems, especially in the cases where they exhibit emergent behavior. This is joint work with F. Lu, J. Miller, S. Tang and M. Zhong.
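
The estimation step can be sketched in a toy 1D setting: simulate velocities from a known pairwise kernel, then recover the kernel by least squares over a small basis. This is an illustrative reconstruction under simplifying assumptions (noiseless first-order dynamics, kernel lying exactly in the span of the basis), not the estimators from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_phi(r):
    # Ground-truth pairwise interaction kernel (chosen for this illustration).
    return 1.0 + 0.5 * r

N, T = 8, 50
snapshots = rng.normal(size=(T, N))  # 1D positions of N particles at T times

rows, targets = [], []
for x in snapshots:
    d = x[None, :] - x[:, None]                   # d[i, j] = x_j - x_i
    v = (true_phi(np.abs(d)) * d).mean(axis=1)    # "observed" velocities:
    # v_i = (1/N) sum_j phi(|x_j - x_i|) (x_j - x_i)
    # Regress v on the basis {1, r} for the unknown kernel phi(r):
    # v_i = c0 * mean_j d_ij + c1 * mean_j |d_ij| d_ij
    rows.append(np.stack([d.mean(axis=1), (np.abs(d) * d).mean(axis=1)], axis=1))
    targets.append(v)

A = np.concatenate(rows)
b = np.concatenate(targets)
c, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(c, 3))  # recovers the true coefficients [1.0, 0.5]
```

The interesting statistical questions (well-posedness, noise, choosing the basis adaptively, mean-field scaling) begin where this noiseless sketch ends.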

October 23, 2020 10:00 AM Jason Hartline (Computer Science; Northwestern University)

Title: Mechanism Design and Data Science

Abstract: Computer systems have become the primary mediator of social and economic interactions. A defining aspect of such systems is that the participants have preferences over system outcomes and will manipulate their behavior to obtain outcomes they prefer. Such manipulation interferes with data-driven methods for designing and testing system improvements. A standard approach to resolve this interference is to infer preferences from behavioral data and employ the inferred preferences to evaluate novel system designs.

In this talk Prof. Hartline will describe a method for estimating and comparing the performance of novel systems directly from behavioral data from the original system. This approach skips the step of estimating preferences and is more accurate. Estimation accuracy can be further improved by augmenting the original system; its accuracy then compares favorably with ideal controlled experiments, a.k.a., A/B testing, which are often infeasible. A motivating example will be the paradigmatic problem of designing an auction for the sale of advertisements on an Internet search engine.
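
A toy version of evaluating a counterfactual design directly from behavioral data (a hypothetical sketch, not Prof. Hartline's estimator): in a second-price auction truthful bidding is a dominant strategy, so recorded bids can simply be replayed under a different reserve price to compare expected revenue, with no preference-inference step.

```python
import random

random.seed(0)

# Synthetic bid logs from a second-price auction (truthful, so bids = values).
auctions = [[random.random() for _ in range(3)] for _ in range(10000)]

def revenue(bids, reserve):
    """Second-price revenue with a reserve: the winner pays max(2nd bid, reserve),
    and nothing sells if even the top bid is below the reserve."""
    top, second = sorted(bids, reverse=True)[:2]
    if top < reserve:
        return 0.0
    return max(second, reserve)

for r in (0.0, 0.5):
    avg = sum(revenue(bids, r) for bids in auctions) / len(auctions)
    print(f"reserve={r}: average revenue {avg:.3f}")
```

For uniform [0, 1] values a reserve of 0.5 is revenue-optimal, so the replayed logs should show the higher reserve earning slightly more on average; the hard part in practice, which the talk addresses, is doing this when bids are strategic responses to the original mechanism rather than truthful values.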

October 27, 2020 10:00 AM Woojin Jung (School of Social Science; Rutgers University)

Title: Using satellite imagery and deep learning to target aid in data-sparse contexts

Abstract: Aid policy has the potential to alleviate global poverty by targeting areas of concentrated need. A critical question remains, however, over whether aid is reaching the areas of most need. Often little ground-truth poverty data is available at the granular level (e.g., village) where aid interventions take place. This research explores remote sensing techniques to measure poverty and target aid in data-sparse contexts. Our study of Myanmar examines i) the performance of different methods of poverty estimation and ii) the extent to which poverty and other development characteristics explain community aid distribution. This study draws from the following sources of data: georeferenced community-driven development projects (n=12,504), daytime and nighttime satellite imagery, the Demographic and Health Survey (DHS), and conflict data. We first compare the accuracy of four poverty measures in predicting ground-truth survey data. Using the best poverty estimate from the first step, we investigate the association between village characteristics and aid per capita per village. Our results show that daytime features perform the best in predicting poverty as compared to the analysis of RGB color distribution, Kriging, and nighttime-based measures. We use a Convolutional Neural Network, pre-trained on ImageNet, to extract features from the satellite images in our best model. These features are then trained on the DHS wealth data to predict the DHS wealth index/poverty for villages receiving aid. The linear and non-linear estimators indicate that development assistance flows to low-asset villages, but only marginally. Aid is more likely to be disbursed to villages that are less populous and farther away from fatal conflicts. Our study concludes that the nuances captured in satellite-based models can be used to target aid to impoverished communities.
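
The second stage of the pipeline described above (regressing a wealth index on CNN-extracted image features) can be sketched with ridge regression. The features below are synthetic stand-ins for pretrained-CNN activations, so the numbers are illustrative only, not results from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for features a pretrained CNN would extract from daytime imagery.
n_villages, n_features = 200, 32
features = rng.normal(size=(n_villages, n_features))
true_w = rng.normal(size=n_features)
wealth = features @ true_w + 0.1 * rng.normal(size=n_villages)  # DHS-style index

# Ridge regression: w_hat = (X^T X + lam * I)^{-1} X^T y
lam = 1.0
XtX = features.T @ features + lam * np.eye(n_features)
w_hat = np.linalg.solve(XtX, features.T @ wealth)

pred = features @ w_hat
r2 = 1 - np.sum((wealth - pred) ** 2) / np.sum((wealth - wealth.mean()) ** 2)
print(r2 > 0.95)  # in-sample fit is strong when the features are informative
```

In the real pipeline the left-hand side comes from DHS survey clusters and the design matrix from a CNN pre-trained on ImageNet; the regression step itself stays this simple.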

November 13, 2020 10:00 AM Vivek Singh (Behavioral Informatics Lab; Rutgers University)

Title: Auditing and Controlling Algorithmic Bias

Abstract: Today Artificial Intelligence algorithms are used to make multiple decisions affecting human lives, and many such algorithms, such as those used in parole decisions, have been reported to be biased. In this talk, I will share some recent work from our lab on auditing algorithms for bias, designing ways to reduce bias, and expanding the definition of bias. This includes applications such as image search, health information dissemination, and cyberbullying detection. The results will cover a range of data modalities (e.g., visual, textual, and social), as well as techniques such as fair adversarial networks, flexible fair regression, and fairness-aware fusion.
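
As a minimal example of auditing a classifier for bias, one common (and deliberately limited) metric is the demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels below are synthetic, and this is only one of many fairness definitions, not necessarily the ones used in the lab's work.

```python
# Minimal bias audit: demographic parity difference between groups "a" and "b".
# A value of 0 means both groups receive positive predictions at the same rate.
def demographic_parity_diff(preds, groups):
    """Difference in positive-prediction rates between groups 'a' and 'b'."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return rate("a") - rate("b")

preds  = [1, 0, 1, 1, 0, 0, 1, 0]   # binary classifier outputs
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_diff(preds, groups))  # 0.75 - 0.25 = 0.5
```

Auditing reports such gaps across modalities and tasks; the mitigation techniques the abstract mentions (fair adversarial networks, flexible fair regression) then try to shrink them without destroying accuracy.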

December 4, 2020 10:00 AM Magnus Egerstedt (Electrical and Computer Engineering; Georgia Institute of Technology)

Title: Long Duration Autonomy With Applications to Persistent Environmental Monitoring

Abstract: When robots are to be deployed over long time scales, optimality should take a backseat to “survivability”, i.e., it is more important that the robots do not break or completely deplete their energy sources than that they perform certain tasks as effectively as possible. For example, in the context of multi-agent robotics, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives, such as assembling shapes or covering areas. But, what happens when these geometric objectives no longer matter all that much? In this talk, we consider this question of long duration autonomy for teams of robots that are deployed in an environment over a sustained period of time and that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding tasks and safety constraints through the development of non-smooth barrier functions, as well as a detour into ecology as a way of understanding how persistent environmental monitoring can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.

Bio: Magnus Egerstedt is a Professor and School Chair in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, where he also holds secondary faculty appointments in Mechanical Engineering, Aerospace Engineering, and Interactive Computing. Prior to becoming School Chair, he served as the director for Georgia Tech’s multidisciplinary Institute for Robotics and Intelligent Machines. A native of Sweden, Dr. Egerstedt was born, raised, and educated in Stockholm. He received a B.A. degree in Philosophy from Stockholm University, and M.S. and Ph.D. degrees in Engineering Physics and Applied Mathematics, respectively, from the Royal Institute of Technology. He subsequently was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. He is a Fellow of both the IEEE and IFAC, and is a foreign member of the Royal Swedish Academy of Engineering Sciences. He has received a number of teaching and research awards for his work, including the John R. Ragazzini Award from the American Automatic Control Council, the O. Hugo Schuck Best Paper Award from the American Control Conference, and the Best Multi-Robot Paper Award from the IEEE International Conference on Robotics and Automation.

December 18, 2020 10:00 AM Tanya Berger-Wolf (Computer Science and Engineering; Ohio State University)

Title: TBA

February 19, 2021 10:00 AM Dan Halperin (Computer Science; Tel Aviv University)

Title: TBA

TBD 10:00 AM YingLi Tian (Electrical Engineering; The City College of New York)

Title: TBA

TBD 10:00 AM Dana Randall (Computer Science; Georgia Institute of Technology)

Title: TBA