ICAPS 2024 Keynotes

Talk: TBD


Short Bio
Julie Shah is the H.N. Slater Professor of Aeronautics and Astronautics, faculty director of MIT's Industrial Performance Center, and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She is expanding the use of human cognitive models for artificial intelligence and has translated her work to manufacturing assembly lines, healthcare applications, transportation, and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. She was also the recipient of the 2018 IEEE RAS Academic Early Career Award for contributions to human-robot collaboration and the transition of results to real-world applications. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT and is co-author of the book What to Expect When You're Expecting Robots: The Future of Human-Robot Collaboration (Basic Books, 2020).

Learning Representations to Act and Plan

Recent progress in deep learning and deep reinforcement learning (DRL) has been truly remarkable, yet two important problems remain: structural policy generalization and policy reuse. The first is about obtaining policies that generalize in a reliable way; the second is about obtaining policies that can be reused and combined in a flexible, goal-oriented manner. The two problems are studied in DRL but only experimentally, and the results are not clear and crisp. In our work, we have tackled these problems in a slightly different manner, developing languages for expressing general policies and methods for learning them using combinatorial and DRL approaches. We have also developed languages for expressing and learning lifted action models, general subgoal structures (sketches), and hierarchical policies. In the talk, I'll present the main ideas, results, and open challenges. This is joint work with Blai Bonet, Simon Stahlberg, Dominik Drexler, and other members of the RLeap team at RWTH and LiU.

Short Bio
Hector Geffner is an Alexander von Humboldt Professor at RWTH Aachen University, Germany, and a Guest Wallenberg Professor at Linköping University, Sweden. Before joining RWTH, he was an ICREA Research Professor at the Universitat Pompeu Fabra, Barcelona, Spain. Hector obtained a Ph.D. in Computer Science at UCLA in 1989 and then worked at the IBM T.J. Watson Research Center in New York and at the Universidad Simón Bolívar in Caracas. Distinctions for his work and the work of his team include the 1990 ACM Dissertation Award and three ICAPS Influential Paper Awards. Hector currently leads a project on representation learning for acting and planning (RLeap) funded by an ERC grant.

Short Bio
Dale Schuurmans is a Research Director at Google DeepMind, Professor of Computing Science at the University of Alberta, a Canada CIFAR AI Chair, and a Fellow of AAAI. He has served as an Associate Editor-in-Chief for IEEE TPAMI, an Associate Editor for JMLR, AIJ, JAIR, and MLJ, and as a Program Co-chair for AAAI-2016, NeurIPS-2008, and ICML-2004. He has published over 250 papers in machine learning and artificial intelligence, and has received paper awards at NeurIPS, ICML, IJCAI, and AAAI.