Alan Dix
Explaining Ourselves – people, computers and AI
https://alandix.com/academic/talks/Bath-2025-explaining-ourselves/
today I am not talking about …
• qualitative–quantitative reasoning
• deep digitality and digital thinking
• next generation UX tools
• long tail of small data
• physicality
• now
• digital light
• walking round Wales
• virtual crackers and slow time
• digital humanities and community heritage
• modeling dreams, regret and the emergence of self
plugs
2nd edition out now!
2nd edition 2026
plus …
AI for HCI
HCI the Basics
AI for Social Justice
This project has received funding from the European Union’s Horizon Europe research and innovation programme
under Grant Agreement No. 101120763.
Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the
European Union or HaDEA. Neither the European Union nor the granting authority can be held responsible for them.
it takes two to tango
a synergistic approach to
human-machine decision making
https://tango-horizon.eu/
AI and User Interaction
HCI for AI
• Rich Systems
• Intelligent Interfaces
• Ethics
• Social Justice
• Governance
• Human-like Computing
• Interfaces for AI developers
• Big Data and Evaluation
AI helping UI
UI helping AI
front-end user-facing
back-end developer-facing
start human
human – human
communication
human–{X} interactions
• human–human
• common ground, collaborative principle, accountability
• human–world–human
• feedthrough, onomatopoeic action
• human–object
• ecological psychology, affordance, epistemic action
• human–technology
• technology evolution, designed affordances,
Norman’s loop
traditional UI/UX/CSCW
design
designer
& developer
AI design
algorithm choice
rules, policies
data
AI
designer
& developer
epistemic interaction
good AI+HCI
not about developing the most accurate AI
but creating the most effective and enjoyable
overall human-technical system
synergy
cooperation and adaptation
adapting AI for human interaction
adapting user interfaces for AI
adapting interaction for AI
‘best’ UI – short term gain
epistemic interaction – long term gains
epistemic action
wondering
what’s inside?
just
peek
epistemic interaction
redesign user interactions
to make more information
available
for machine adaptation
document results – option 1 – scroll
scroll down
to see
more results
document results – option 2 – accordion
press ‘+’
to expand
interesting
items
which option? – epistemic interaction …
scroll vs accordion: A/B user testing
scroll a little better
accordion gives more data for AI adaptation
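The accordion's advantage can be sketched in code: each '+' press is an implicit relevance signal the system can log and later use for adaptation. This is an illustrative sketch, not from the talk; the names `InterestLog` and `on_expand` are hypothetical.

```python
# Hypothetical sketch: accordion expand events as implicit relevance signals.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class InterestLog:
    """Counts which result items a user chose to expand."""
    expands: Counter = field(default_factory=Counter)

    def on_expand(self, item_id: str) -> None:
        # Each '+' press is an epistemic action: it reveals interest
        # without the user explicitly rating anything.
        self.expands[item_id] += 1

    def top_items(self, n: int = 3):
        return [item for item, _ in self.expands.most_common(n)]

log = InterestLog()
for item in ["doc2", "doc5", "doc2"]:
    log.on_expand(item)
print(log.top_items(2))  # doc2 expanded most often
```

A plain scrolling list yields no equivalent per-item signal, which is why the accordion, even if marginally worse in an A/B test, feeds the adaptive system better.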
coherence
human–AI communication
training data
advice / prediction
explanation (XAI)
gender and ethnic bias in black-box ML
Query by Browsing
coherence
humans expect human
explanations to be consistent
through time
AI explanations should be too
training data
advice / prediction
explanation (XAI)
S. Myers, N. Chater, Interactive explainability: Black boxes, mutual understanding and what it would
really mean for AI systems to be as explainable as people, 2024. doi:10.31234/osf.io/ha37x
… but how?
Types of coherence
training decision and explanation: X, dX, eX
new decision: Y, dY, eY
model M → updated model M′
updated training and model: X, d′X, e′X
potential incoherence:
• dX ≁ d′X
• eX ≁ eY
• dY ≁ eX[Y]
• eX ≁ e′X
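One of these incoherence types (eX vs e′X: does the updated model still explain the same training input the same way?) can be sketched as a simple check. This is an illustrative sketch only; treating an explanation as a set of salient features and comparing by Jaccard overlap is an assumption, not the talk's method.

```python
# Illustrative check for one type of incoherence: e_X vs e'_X —
# does the updated model M' still explain training input X the
# same way the original model M did?

def explanation_coherent(e_old: set, e_new: set, min_overlap: float = 0.5) -> bool:
    """Treat an explanation as a set of salient features and compare
    the Jaccard overlap before and after retraining."""
    if not e_old and not e_new:
        return True
    jaccard = len(e_old & e_new) / len(e_old | e_new)
    return jaccard >= min_overlap

e_X = {"income", "age", "postcode"}       # explanation from M
e_X_new = {"income", "age", "savings"}    # explanation from M'
print(explanation_coherent(e_X, e_X_new))  # overlap 2/4 = 0.5 -> True
```

A system that flags incoherent pairs could then either constrain retraining or explain the change itself, keeping explanations consistent through time as human listeners expect.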
Partial patch model
set of all possible inputs
patches: L1 ⇢ R1, L2 ⇢ R2, L3 ⇢ R3, L4 ⇢ R4, L5 ⇢ R5
issues:
• size and shape of patches
• dealing with overlap
• areas of greater or lower patch density
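The patch idea can be made concrete with a one-dimensional toy: local patches Li, each carrying its own rule Ri, partially cover the input space, so overlaps and gaps must be handled explicitly. All names here are illustrative assumptions, not the talk's formulation.

```python
# Sketch of a partial patch model: patches L_i -> R_i partially cover
# the input space; a prediction must cope with overlap and with gaps.

def make_patch(lo, hi, rule):
    """A patch is a region test plus a local rule."""
    return (lambda x: lo <= x < hi, rule)

patches = [
    make_patch(0, 10, lambda x: "low"),
    make_patch(8, 20, lambda x: "mid"),    # overlaps the first patch
    make_patch(30, 40, lambda x: "high"),  # leaves a gap before it
]

def predict(x, patches, fallback="unknown"):
    matches = [rule(x) for inside, rule in patches if inside(x)]
    if not matches:
        return fallback            # x falls outside every patch
    return matches[0]              # overlap policy: first patch wins

print(predict(9, patches))   # inside two patches -> first wins: "low"
print(predict(25, patches))  # in the gap -> "unknown"
```

The slide's issues map directly onto this sketch: patch size and shape fix how far each local rule generalises, the overlap policy decides which rule wins, and patch density determines where the fallback fires.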
talking back
communication
training data
advice / prediction
explanation (XAI)
critique
explanation
QbB with user explanation – global features
QbB with user explanation – local features
XUI
explainable user interfaces
why XUI?
https://vimeo.com/user78028525/xui-bhci2025
the problem
what just happened?
what did I do?
how do I do this again?
all of us, but especially the elderly, the less IT-literate,
those with poor motor control, or anyone after an interruption
we need explainable user interfaces
why needed – locus of control
early models – user in control
CLI & GUI, Norman’s execution–evaluation cycle
never just the user
other people, physical sensors, timing
now
notifications, AI
who is in control?
why needed – complexity and scale
ever smaller screens
ever larger data and computation
why needed – shrinking gaps
theories of language – clarity => distinctions
binary opposition (Saussure), information transmission (Goodman)
UI – reducing boundaries
spatial
fat finger, two/three finger swipe
temporal
click vs double-click, changing meanings, appearing targets
semantic
spelling correction, autocomplete
can explanations help?
can explanations help?
understanding, control, empowerment
when things go wrong
undo and repair
when things worked
replay/redo
memory in a post ‘recognition rather than recall’ world
providing mutual help
third party helpers – what happened here?
sharing knowledge – here’s what I did
what XUI might be like?
what XUI might be like?
commands and scripts
what happened when – scripts and undo
factoring applications
what XUI might be like?
commands and scripts
what happened when – scripts and undo
what happened here?
focus on object, e.g. WS2
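The "commands and scripts" idea above can be sketched as a command pattern with an explainable history: every UI action is logged as a reversible command, so the system can answer "what just happened?", replay it, or undo it. A minimal sketch under those assumptions; the class and command names are hypothetical.

```python
# Sketch of an explainable command history: each UI action is a
# reversible command; the log doubles as a human-readable script.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Command:
    name: str
    do: Callable[[], None]
    undo: Callable[[], None]

class History:
    def __init__(self):
        self.log = []

    def run(self, cmd: Command):
        cmd.do()
        self.log.append(cmd)

    def explain(self) -> str:
        # Answers "what just happened?" as a script, most recent last.
        return "; ".join(cmd.name for cmd in self.log)

    def undo_last(self):
        if self.log:
            self.log.pop().undo()

doc = []
h = History()
h.run(Command("type 'hello'", lambda: doc.append("hello"), lambda: doc.pop()))
h.run(Command("type 'world'", lambda: doc.append("world"), lambda: doc.pop()))
print(h.explain())   # type 'hello'; type 'world'
h.undo_last()
print(doc)           # ['hello']
```

The same log supports the later slides' uses: replay/redo when things worked, undo and repair when they didn't, and a shareable "here's what I did" script for mutual help.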
how to do it
how to do it
• within application
• UI architecture crosses boundaries:
• monolithic (e.g. Seeheim) – lexical/syntactic/semantic
• components (e.g. MVC) – maybe more difficult?
• cross-application
• OS standards and protocols?
• model-based user interfaces
• in theory
• and practice e.g. React
lexical level
Apple Events
(Numbers)
take aways …
learning from human communication
… but different!
• epistemic interaction
• implicitly revealing intentions
• coherence
• can AI be as consistent as people (or more so?) :-/
• talking back
• explicitly explaining ourselves to AI
• explainable UI
• busting the myth of ease of use
Explaining Ourselves
people, computers and AI
https://alandix.com/academic/talks/Bath-2025-explaining-ourselves/


Editor's Notes

  • #16 Lawrence Alma-Tadema's water-colour of an ambivalent Pandora, 1881 https://en.wikipedia.org/wiki/Pandora%27s_box#/media/File:Lawrence_Alma-Tadema_10.jpeg https://www.publicdomainpictures.net/en/view-image.php?image=82495&picture=cat-peeking-around-corner
  • #17 Lawrence Alma-Tadema's water-colour of an ambivalent Pandora, 1881 https://en.wikipedia.org/wiki/Pandora%27s_box#/media/File:Lawrence_Alma-Tadema_10.jpeg https://www.publicdomainpictures.net/en/view-image.php?image=82495&picture=cat-peeking-around-corner
  • #43 Qurren, CC BY-SA 3.0 <http://creativecommons.org/licenses/by-sa/3.0/>, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:5.25-inch_floppy_disk.jpg Eric Gaba, Wikimedia Commons user Sting, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Seagate_ST33232A_hard_disk_inner_view.jpg BalticServers.com, CC BY-SA 3.0 <https://creativecommons.org/licenses/by-sa/3.0>, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:BalticServers_data_center.jpg Arielinson, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Control_Room_DSC0028.jpg Raralu4440, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons https://commons.wikimedia.org/wiki/File:Apple_Watch_5_40mm_on_my_desk.jpg