

Applied Artificial Intelligence (AAI) research group
Research theme
Our research concerns artificial intelligence (AI),
particularly conversational AI and natural language processing
(NLP), mobile robots (including humanoid robots), automated
transportation (road, sea, and rail), and computational
finance.
Interpretable AI: Background
A central theme in our work is interpretable artificial intelligence (IAI), i.e. models and systems whose components (and ideally the system as a whole) are human-interpretable. Interpretability is a prerequisite for safe and accountable AI, especially in applications involving high-stakes decisions (for example, in medical applications).
Currently, much work in AI is focused on black box models, especially deep neural networks (DNNs). Such models have been very successful in many applications, for example image processing and speech processing. However, black box models, including DNNs, also have several drawbacks: (1) their decision-making is opaque, due to the non-linear nature of their computations and their sheer size; (2) while well-trained DNNs can give excellent results on average, they are also prone to occasional catastrophic (and unpredictable) failures; and (3) DNNs are typically trained on such massive amounts of data that it is difficult or impossible to curate the data so as to remove unwanted biases, which the DNNs may then pick up during training.
In order to come to terms with the opaque decision-making in black box models, many researchers have turned to what is known as explainable AI, in which one typically builds a secondary model that is simpler (and ideally human-interpretable) and that approximates the black box model. However, it is not always clear to what degree the secondary model is a faithful representation of the (much more complex) original black box model. Moreover, the secondary model, being much simpler than the model it is supposed to explain, may not be able to provide a useful explanation of what the original black box actually does (or how it does it).
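To make the surrogate-model idea concrete, the following Python sketch (a generic illustration, not taken from our own systems) trains a black box neural network on synthetic data and then fits a shallow decision tree to the network's predictions. The final line measures the fidelity with which the surrogate reproduces the black box, which is precisely the quantity whose adequacy is questioned above; the data set, model choices, and hyperparameters are arbitrary assumptions made only for this example.

# Illustrative sketch (hypothetical example): approximating a black box
# classifier with a simpler surrogate model, as in typical post-hoc
# explainable-AI approaches. Data and hyperparameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data (stand-in for a real application).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black box" model: a neural network classifier.
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X_train, y_train)

# Secondary (surrogate) model: a shallow decision tree trained to mimic
# the black box's predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
# A low value means that the "explanation" does not faithfully represent
# the model it is meant to explain.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")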
By contrast, in interpretable AI, we strive to build models and systems that are human-interpretable by construction, i.e. that consist of human-interpretable primitives. An interpretable model need not be small or simple, but it should consist of components that can be individually understood by a human, and it should allow a human observer to follow the model's reasoning, step by step, through the chain of actions taken by such human-interpretable components. An example is our dialogue manager DAISY, which not only consists of human-interpretable primitives, but also (by design) is able to generate a human-interpretable explanation of its actions. It should be noted that, unfortunately, many researchers tend to use the (wholly different) terms interpretability and explainability interchangeably, something that creates quite a bit of confusion.
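As a minimal illustration of what interpretability by construction can mean, the sketch below (a hypothetical toy example, not DAISY's actual implementation) builds a decision procedure from named, human-readable rules and records the step-by-step chain of actions leading to each decision; the rule names and the triage scenario are invented for this example.

# Hypothetical toy example (not DAISY): a decision procedure composed of
# human-interpretable primitives, each of which records the step it takes
# so that the full reasoning chain can be inspected afterwards.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str                          # human-readable description of the step
    condition: Callable[[Dict], bool]  # when the rule applies
    action: str                        # decision taken if the condition holds

@dataclass
class InterpretablePipeline:
    rules: List[Rule]
    default_action: str = "refer the case to a human operator"
    trace: List[str] = field(default_factory=list)

    def decide(self, case: Dict) -> str:
        # Apply the rules in order, logging every step of the reasoning.
        self.trace.clear()
        for rule in self.rules:
            if rule.condition(case):
                self.trace.append(f"{rule.name} -> {rule.action}")
                return rule.action
            self.trace.append(f"{rule.name} -> not applicable")
        self.trace.append(f"no rule applied -> {self.default_action}")
        return self.default_action

# Made-up triage rules: each component is individually understandable,
# and the trace shows exactly how the decision was reached.
pipeline = InterpretablePipeline(rules=[
    Rule("temperature above 40 C", lambda c: c["temperature"] > 40.0, "urgent care"),
    Rule("temperature above 38 C", lambda c: c["temperature"] > 38.0, "see a doctor"),
])
decision = pipeline.decide({"temperature": 38.6})
print(decision)                    # see a doctor
print("\n".join(pipeline.trace))   # the step-by-step reasoning chain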
Why is interpretability important? In many cases, it is not. For example, when processing speech, as long as the system is able to accurately determine what a person says, it might not matter very much how the system does so. However, in other applications, especially those that involve high-stakes decision-making, interpretability is key. Such applications occur, for instance, in medicine and healthcare, automated driving, and credit scoring. Despite the importance of interpretability, much current research is focused entirely on black box models, perhaps partly because of a (mythical) belief that only such models can reach top performance. That is not true, but even if it were, the performance of a model must generally be weighed against other aspects, such as its safety and accountability, two aspects that arise naturally in interpretable models but are completely alien to DNN-based models.
Now, if interpretability is a key aspect, why is so much research focused exclusively on DNN-based models? There can be many reasons, one being the ease with which one can set up DNN-based systems (e.g. classifiers), due to the availability of many ready-made code libraries for DNNs, primarily in Python. This, in turn, has led to a situation in which alternative models are sometimes not even considered; indeed, their existence is not always known to end users. Moreover, since DNN-based models are often seen as representing the state of the art in applied computer science, they perhaps tend to get more media coverage than other types of models. Also, for industrial applications, there might be strong financial incentives for using a completely opaque, giant black box rather than a (non-patentable) much simpler system. It is safe to say that the importance of interpretability has not yet been emphasized sufficiently, even though one of the key aspects of AI-based systems (as measured in interviews with potential end users) is indeed accountability. Finally, interpretability (and related aspects such as accountability, safety, and fairness) is also a central concept in proposed legislation related to AI, both in the EU (the "right to an explanation") and in the US (the Algorithmic Accountability Act).
Ongoing research projects (Note: these are examples; the page is under construction)
DAISY
The aim of this project is to develop a fully
interpretable, general-purpose dialogue manager for
conversational AI.
Publications:
Wahde, M. and Virgolin, M. "DAISY: An implementation of five core principles for transparent and accountable conversational AI", International Journal of Human-Computer Interaction, pp. 1-18, 2022, https://doi.org/10.1080/10447318.2022.2081762
Wahde, M. and Virgolin, M. "The five Is: Key principles for interpretable and safe conversational AI", in Proc. of the 4th International Conference on Computational Intelligence and Intelligent Systems (CIIS2021), pp. 50-54, 2021, https://doi.org/10.1145/3507623.3507632
Tranzport
In this
project, we are developing a method for
automated trajectory planning for a fleet of
autonomous vehicles.
Publications:
Wahde, M., Bellone, M., and Torabi, S. "A method for real-time dynamic fleet mission planning for autonomous mining", Autonomous Agents and Multi-Agent Systems, vol. 33, pp. 564-590, 2019, https://doi.org/10.1007/s10458-019-09416-y
Courses
Stochastic optimization methods (FFR105, FIM711), 1st
quarter (Aug. - Oct.)
Intelligent Agents (TME286), 3rd quarter (Jan. - March)
Autonomous Robots (TME290), 4th quarter (March - May)
Introduction to Artificial Intelligence, 4th quarter (March - May)
Humanoid robotics (TIF160, FIM800), 1st quarter (Aug. -
Oct.)
Current group members
Mattias Wahde, PhD, Professor, Group leader
Krister Wolff, PhD, Docent, Vice Head of department
Peter Forsberg, Adjunct Associate Professor
Ola Benderius, PhD, Docent, Associate Professor
Marco Della Vedova, PhD, Associate Professor
Björnborg Nguyen, PhD student
Krister Blanch, PhD student
Contact person: Prof. Mattias Wahde, mattias.wahde@chalmers.se