Dates | Time | Speakers/Topic | Location
---|---|---|---
October 25, 2019 | 10:00 AM | Ying Hung: Integration of Models and Data for Inference about Humans and Machines (I)<br>Konstantin Mischaikow: Analyzing Imprecise Dynamics<br>Jingjin Yu: Toward Scalable and Optimal Autonomy | COR-433
November 1, 2019 | 10:00 AM | Rong Chen: Dynamic Systems and Sequential Monte Carlo<br>Jason Klusowski: Integration of Models and Data for Inference about Humans and Machines (II)<br>Cun-Hui Zhang: Statistical Inference with High-Dimensional Data | COR-433
November 8, 2019 | 10:00 AM | Kostas Bekris: Generating Motion for Adaptive Robots<br>Fred Roberts: Meaningless Statements in Performance Measurement for Intelligent Machines | COR-433
November 15, 2019 | 10:00 AM | Fioralba Cakoni: Inside-Out, Seen and Unseen<br>Matthew Stone: Colors in Context: Inference Challenges in Bayesian Cognitive Science<br>Wujun Zhang: Numerical Approximation of Optimal Transport Problems | COR-433
February 7, 2020 | 10:00 AM | Patrick Shafto (Mathematics and Computer Science; Rutgers University)<br>Abstract: Cooperation, specifically cooperative information sharing, is a basic principle of human intelligence. Machine learning, in contrast, focuses on learning from randomly sampled data, which neither leverages others’ cooperation nor prioritizes the ability to communicate what has been learned. I will discuss ways in which our understanding of human learning may be leveraged to develop new machine learning, and form a foundation for improved integration of machine learning into human society. | COR-431
May 8, 2020 | 10:00 AM | Rene Vidal (Biomedical Engineering; Johns Hopkins University)<br>Abstract: Recent work has shown that tools from dynamical systems can be used to analyze accelerated optimization algorithms. For example, it has been shown that the continuous limit of Nesterov’s accelerated gradient (NAG) gives an ODE whose convergence rate matches that of NAG for convex, unconstrained, and smooth problems. Conversely, it has been shown that NAG can be obtained as the discretization of an ODE; however, since different discretizations lead to different algorithms, the choice of discretization becomes important. The first part of this talk will extend this type of analysis to convex, constrained, and non-smooth problems by using Lyapunov stability theory to analyze continuous limits of the Alternating Direction Method of Multipliers (ADMM). The second part of this talk will show that many existing and new optimization algorithms can be obtained by suitably discretizing a dissipative Hamiltonian. As an example, we will present a new method called Relativistic Gradient Descent (RGD), which empirically outperforms momentum, RMSprop, Adam, and AdaGrad on several non-convex problems. This is joint work with Guilherme Franca, Daniel Robinson, and Jeremias Sulam. | Online
June 5, 2020 | 12:00 PM | Lydia Chilton (Computer Science; Columbia University) | Online
June 16, 2020 | 12:00 PM | Mykhaylo Tyomkyn (Applied Mathematics; Charles University) | Online
June 22, 2020 | 12:00 PM | Lenka Zdeborova (Institute of Theoretical Physics; French National Centre for Scientific Research) | Online
June 30, 2020 | 12:00 PM | Rebecca Wright (Computational Science Center; Barnard College) | Online
July 7, 2020 | 12:00 PM | Vivek Singh (Behavioral Informatics Lab; Rutgers University)<br>Abstract: Today, Artificial Intelligence (AI) algorithms are used to make multiple decisions affecting human lives, and many such algorithms have been reported to be biased. These include parole decisions, search results, and product recommendations, among others. Using multiple examples of recent efforts from my lab, I will discuss how such bias can be systematically measured and how the underlying algorithms can be made less biased. More details are available at: http://wp.comminfo.rutgers.edu/vsingh/algorithmic-bias/ | Online
July 17, 2020 | 10:00 AM | Cynthia Rudin (Prediction Analysis Lab; Duke University)<br>Abstract: With widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable, and can be misleading. If we use interpretable machine learning models, they come with their own explanations, which are faithful to what the model actually computes. In this talk, I will discuss some of the reasons that black boxes with explanations can go wrong, whereas using inherently interpretable models would not have these same problems. I will give an example of where an explanation of a black box model went wrong, namely, I will discuss ProPublica’s analysis of the COMPAS model used in the criminal justice system: ProPublica’s explanation of the black box model COMPAS was flawed because it relied on wrong assumptions to identify the race variable as being important. Luckily, in recidivism prediction applications, black box models are not needed because inherently interpretable models exist that are just as accurate as COMPAS. I will also give examples of interpretable models in healthcare. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high. Finally, I will discuss two long-term projects my lab is working on, namely optimal sparse decision trees and interpretable neural networks. | Online
July 21, 2020 | 12:00 PM | Peter Winkler (Math and Computer Science; Dartmouth) | Online
September 11, 2020 | 10:00 AM | Mauro Maggioni (Data Intensive Computation; Johns Hopkins)<br>Abstract: Interacting agent-based systems are ubiquitous in science, from modeling of particles in physics, to prey-predator and colony models in biology, to opinion dynamics in economics and social sciences. Oftentimes the laws of interaction between the agents are quite simple; for example, they depend only on pairwise interactions, and only on pairwise distance in each interaction. We consider the following inference problem for a system of interacting particles or agents: given only observed trajectories of the agents in the system, can we learn what the laws of interaction are? We would like to do this without assuming any particular form for the interaction laws, i.e., they might be “any” function of pairwise distances. We consider this problem both in the mean-field limit (i.e., the number of particles going to infinity) and in the case of a finite number of agents with an increasing number of observations, albeit in this talk we will mostly focus on the latter case. We cast this as an inverse problem, and study it in the case where the interaction is governed by an (unknown) function of pairwise distances. We discuss when this problem is well-posed, and we construct estimators for the interaction kernels with provably good statistical and computational properties. We measure their performance on various examples, which include extensions to agent systems with different types of agents, second-order systems, and families of systems with parametric interaction kernels. We also conduct numerical experiments to test the large-time behavior of these systems, especially in the cases where they exhibit emergent behavior. This is joint work with F. Lu, J. Miller, S. Tang, and M. Zhong. | Online
October 23, 2020 | 10:00 AM | Jason Hartline (Computer Science; Northwestern University) | TBD
November 20, 2020 | 10:00 AM | Tanya Berger-Wolf (Computer Science and Engineering; Ohio State University) | TBD
TBD | 10:00 AM | YingLi Tian (Electrical Engineering; The City College of New York) | TBD
TBD | 10:00 AM | Dana Randall (Computer Science; Georgia Institute of Technology) | TBD