
Shlomo Zilberstein and Multi-Institutional Colleagues Receive a $5M NSF Grant to Improve Causal Decision Making

Shlomo Zilberstein

Manning College of Information and Computer Sciences (CICS) Professor Shlomo Zilberstein is one of five co-principal investigators for a $5 million, multi-institutional National Science Foundation (NSF) grant to improve AI-based causal decision making. 

The four-year grant will support the project, “Causal Foundations of Decision Making and Learning,” a collaborative effort led by Elias Bareinboim of Columbia University and co-principal investigators Rina Dechter of the University of California, Irvine, Sven Koenig of the University of Southern California, and Jin Tian of Iowa State University, in partnership with ACM A.M. Turing Award winner Judea Pearl of the University of California, Los Angeles.

As AI becomes increasingly integrated into daily life, automated systems are entrusted with some of the decisions traditionally left in the hands of humans. Much like a detective analyzing clues to solve a case, the current crop of AI systems relies heavily on data, using statistical methods to make sense of it; relying solely on statistical associations, however, has limitations. Current methods in AI decision-making, such as model-based planning and model-free reinforcement learning, fall short of explicitly addressing causal mechanisms, such as changes in the environment. While these methods excel at recognizing patterns and statistical associations, they lack the depth needed to understand the intricate cause-and-effect relationships of an ever-changing world.

“Many of the hardest challenges for AI systems could be better addressed with causal reasoning. For instance, a medical diagnostic system could establish a causal relationship between a proposed treatment and an outcome, whereas today’s methods can only determine a correlation,” says Zilberstein. “Causality offers a far better foundation for verification and explanation of automated decisions.” 
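To make the contrast concrete, here is a minimal, purely illustrative simulation (the setting, numbers, and variable names are hypothetical, not drawn from the project). A hidden confounder, disease severity, drives both who receives a treatment and who recovers, so the raw observational association between treatment and recovery points the wrong way; adjusting for the confounder, as causal reasoning prescribes, recovers the true positive effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structural causal model: severity -> treatment,
# severity -> recovery, treatment -> recovery. Sicker patients are
# more likely to be treated, confounding the observed association.
severity = rng.binomial(1, 0.5, n)                 # hidden confounder
treatment = rng.binomial(1, 0.2 + 0.6 * severity)  # treatment depends on severity
recovery = rng.binomial(1, 0.8 - 0.4 * severity + 0.1 * treatment)

# Naive observational comparison: treated patients appear to recover
# LESS often, because treatment concentrates among the severely ill.
obs = recovery[treatment == 1].mean() - recovery[treatment == 0].mean()

# Backdoor adjustment over the confounder recovers the causal effect
# of do(treatment), which is +0.10 in this simulated world.
adj = sum(
    (recovery[(treatment == 1) & (severity == s)].mean()
     - recovery[(treatment == 0) & (severity == s)].mean())
    * (severity == s).mean()
    for s in (0, 1)
)
print(f"observational difference: {obs:+.3f}")  # about -0.14 (misleading)
print(f"adjusted causal effect:   {adj:+.3f}")  # about +0.10
```

A system reasoning only from the observational association would reject a treatment that actually helps; the causal adjustment, which requires knowing the causal graph, gets it right.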

The project will combine the framework of structural causal models (SCMs) with the leading approaches for decision-making in AI, including model-based planning, reinforcement learning, and graphical models such as influence diagrams, to create a new perspective that envisions an AI system as operating within a world that is itself an SCM. The team also plans to design an architecture that clearly separates the system model from the world model. This separation would allow them to articulate precisely what the system knows, such as a causal graph, and what it does not know, such as confounding factors or unknown causal mechanisms. The goal is to create new foundations, including principles, methods, and tools, for systems that make decisions based on causal relationships.
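As background, the standard SCM formalism (due to Pearl; the notation below is textbook material, not taken from the grant itself) makes this distinction precise:

```latex
% A structural causal model:
\[
  M = \langle U, V, F, P(U) \rangle,
  \qquad
  V_i := f_i(\mathit{PA}_i, U_i) \quad \text{for each } V_i \in V,
\]
% where each endogenous variable $V_i$ is determined by a mechanism $f_i$
% of its causal parents $\mathit{PA}_i$ and exogenous noise $U_i$.
% An intervention $\mathrm{do}(X = x)$ replaces the mechanism $f_X$ with
% the constant $x$, so that in general
\[
  P(y \mid \mathrm{do}(x)) \neq P(y \mid x)
\]
% whenever $X$ and $Y$ share an unobserved confounder.
```

Treating the world itself as an SCM means the agent’s model can be an explicit, possibly partial, approximation of these mechanisms, which is what makes the known/unknown distinction above expressible.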

To this end, the project is structured around three main research thrusts. In thrust one, the researchers will study core aspects of causal decision-making, including online and offline policy learning, interventional planning, imitation learning, curriculum learning, knowledge transfer, and adaptation, to ensure that decisions made by autonomous agents and decision-support systems are not only robust but also efficient and precise. Thrust two concentrates on aspects of causal decision-making in scenarios where humans are involved, including using causality to generate explanations, deciding when to involve humans, and giving AI systems competence awareness and the ability to make fair decisions aligned with user values.

Zilberstein’s efforts will be primarily dedicated to thrust three, which focuses on the scalability and efficiency of the tools developed. This includes addressing tradeoffs among multiple objectives, balancing explainability against decision quality, and tackling the challenge of learning causal models of the world.

“Optimizing decisions remains difficult because there is never complete and perfect information to inform these decisions. Managing the tradeoffs that arise is key to success and scalability. My lab is developing new methods that allow users to better understand and control these tradeoffs and provide better input to guide the AI system,” he says. 

The combination of these three thrusts is expected to contribute to the development of a new generation of powerful AI tools, specifically geared towards creating autonomous agents and decision-support systems that are not only highly capable but also adaptable to various scenarios. 

“I am excited to be part of this effort that promises to give users better tools to indicate priorities, obtain explanations of the consequences, and reap more benefits from AI technologies,” says Zilberstein. 

Zilberstein joined the CICS faculty in 1993 and currently serves as the director of the Resource-Bounded Reasoning Lab and as the lead for smart transportation in the Center for Smart and Sustainable Society. His research in artificial intelligence focuses on reasoning techniques that allow intelligent systems to operate autonomously. In collaboration with industry, he has applied his research to developing service robots and self-driving cars.