Dr Vaishak Belle

Belle Lab

TL;DR: Here’s a one-slide summary of our recent work:


The lab carries out research in artificial intelligence by unifying learning and reasoning, with a recent emphasis on neuro-symbolic AI, causality, explainability, and ethics.

We are motivated by the need to augment learning and perception with high-level, structured, commonsense knowledge, enabling systems to learn faster and to build more accurate models of the world. We are interested in developing computational frameworks that can explain their decisions and that are modular, reusable, and robust to variations in problem description. Here is a slightly more granular view of our research, where neuro-symbolic AI sits at the intersection of learning and reasoning, together with our emphasis on expressive languages.

A non-exhaustive list of topics includes:

  • probabilistic and statistical knowledge bases
  • ethics and explainability in AI
  • exact and approximate probabilistic inference
  • statistical relational learning and causality
  • unifying deep learning and probabilistic learning methods
  • probabilistic programming
  • numerical optimization
  • automated planning and high-level programming
  • reinforcement learning and learning for automated planning
  • cognitive robotics
  • automated reasoning
  • modal logics (knowledge, action, belief)
  • multi-agent systems and epistemic planning
  • integrating causality and learning
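To give a concrete flavour of what "unifying learning and reasoning" can mean, here is a minimal, purely illustrative sketch (not drawn from any lab publication) of exact probabilistic inference over a tiny logical knowledge base, in the style of weighted model counting: worlds that violate the logical constraint get weight zero, and queries are answered by summing the weights of the surviving models.

```python
from itertools import product

# Independent priors over the "root" variables of a toy knowledge base.
priors = {"rain": 0.2, "sprinkler": 0.5}

def consistent(rain, sprinkler, wet):
    # Logical constraint: the grass is wet iff it rained or the sprinkler ran.
    return wet == (rain or sprinkler)

def weight(rain, sprinkler):
    # Weight of a world: product of the priors of its root-variable assignments.
    w = priors["rain"] if rain else 1 - priors["rain"]
    w *= priors["sprinkler"] if sprinkler else 1 - priors["sprinkler"]
    return w

def query(target, evidence):
    # P(target | evidence), computed by enumerating all models of the theory
    # (exact weighted model counting; feasible only for tiny examples).
    num = den = 0.0
    for rain, sprinkler, wet in product([True, False], repeat=3):
        world = {"rain": rain, "sprinkler": sprinkler, "wet": wet}
        if not consistent(rain, sprinkler, wet):
            continue  # weight zero: world violates the logic
        if any(world[k] != v for k, v in evidence.items()):
            continue  # incompatible with the observed evidence
        w = weight(rain, sprinkler)
        den += w
        if world[target]:
            num += w
    return num / den

print(query("rain", {"wet": True}))  # P(rain | wet) = 0.2 / 0.6 = 1/3
```

Real systems replace the brute-force enumeration with compiled circuits or approximate inference, and learn the weights from data; the sketch only shows how a logical theory and probabilities compose in one query.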

For example, our recent work has touched upon:

Faculty: Vaishak Belle

Postdoctoral fellows and PhD students:

  • Miguel Mendez Lucero, interested in neuro-symbolic AI
  • Daxin Liu (Postdoctoral fellow with Royal Society), interested in probabilistic modal logics
  • Xue Li (Postdoctoral fellow with Bjorn Ross), interested in counterfactuals and language models
  • Antonio Miceli (Postdoctoral fellow with Cisco), interested in large language models, logic and vision
  • Jessica Ciupa (Masters by Research), interested in RL and ethics
  • Nijesh Upreti, interested in abstraction and logic-based learning
  • Ruta Tang, interested in large language models and logic

Alumni:

  • Benedicte Legastelois (Postdoctoral fellow with TAS project collaborators), interested in explainability
  • Jonathan Feldstein (PhD 2024), interested in neuro-symbolic AI
  • Andreas Bueff (PhD 2023), interested in tractable learning and reinforcement learning
  • Giannis Papantonis (PhD 2023), interested in causality
  • Ionela-Georgiana Mocanu (PhD 2023), interested in PAC learning
  • Paulius Dilkas (PhD 2022), interested in model counting
  • Xin Du (Postdoctoral fellow with TAS project collaborators), interested in explainability
  • Amélie Levray (Postdoctoral fellow 2018-2019), interested in tractable learning with credal networks
  • Eleanor Platt (research associate), interested in explainability
  • Amit Parag (MScR 2019), interested in tractable models and cosmological simulations
  • Rafael Karampatsis (Postdoctoral fellow 2019-2021), interested in ML interpretability

Associates:

  • Sandor Bartha (with James Cheney, PhD 2023), interested in program induction
  • Gary Smith (with Ron Petrick), interested in epistemic planning
  • Xue Li (Postdoctoral fellow with ELIAI), interested in misinformation
  • Eddie Ungless (with Bjorn Ross), interested in NLP and bias
  • Samuel Kolb (PhD 2019, KU Leuven, with Luc De Raedt), interested in inference for hybrid domains
  • Davide Nitti (PhD 2016, KU Leuven, with Luc De Raedt), interested in machine learning for hybrid domains

Visitors:

  • Esra Erdem, Sabanci University
  • Yoram Moses, Technion
  • Brendan Juba, Washington University in St. Louis
  • Loizos Michael (via the Alan Turing Institute), Open University of Cyprus
  • Till Hofmann, RWTH Aachen University
  • Xenia Heilmann, Mainz University
  • Chiara Manganini, University of Milan

MSc Students:

If you are an Informatics MSc student at the University of Edinburgh, I supervise a number of theses on the above topics. See, for example, the publications of previous MSc students, including Stefanie Speichert (ILP in hybrid domains), Laszlo Treszkai (generalized planning), Lewis Hammond (moral responsibility), and Michael Varley (fairness).