Publications

by Keyword: Theory of mind

Demirel, B., Moulin-Frier, C., Arsiwalla, X. D., Verschure, P. F. M. J., Sánchez-Fibla, M., (2021). Distinguishing Self, Other, and Autonomy From Visual Feedback: A Combined Correlation and Acceleration Transfer Analysis. Frontiers in Human Neuroscience 15, 560657

In cognitive science, Theory of Mind (ToM) is the mental faculty of assessing the intentions and beliefs of others. It requires, in part, distinguishing incoming sensorimotor (SM) signals and attributing them to either the self-model, the model of the other, or a model of the external world, including inanimate objects. To gain an understanding of this mechanism, we perform a computational analysis of SM interactions in a dual-arm robotic setup. Our main contribution is that, under the common fate principle, a correlation analysis of the velocities of visual pivots is shown to be sufficient to characterize the self (including proximo-distal arm-joint dependencies), to assess motor-to-sensory influences, and to identify the other by computing clusters in the correlation dependency graph. A correlational analysis, however, is not sufficient to assess the non-symmetric/directed dependencies required to infer autonomy, the ability of entities to move by themselves. We subsequently validate three measures that can potentially quantify a metric for autonomy: Granger causality (GC), transfer entropy (TE), and a novel "Acceleration Transfer" (AT) measure, which estimates the instantaneous transfer of acceleration between visual features and from which one can compute a directed SM graph. Autonomy is then characterized by the sink nodes of this directed graph. The results of this study show that although TE can capture the directional dependencies, the rectified subtraction operation denoted here as AT is both sufficient and computationally cheaper.
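To make the two-stage pipeline concrete, the following is a minimal Python sketch. The correlation-clustering step follows the abstract directly (common fate over velocity traces); the acceleration_transfer function is only an illustrative guess at the AT measure, since the abstract names it a rectified subtraction over instantaneous acceleration transfer but gives no formula. The lag, thresholds, and the lagged-cross-correlation formulation are all hypothetical.

import numpy as np

def correlation_clusters(velocities, threshold=0.8):
    """Group visual features whose velocity traces are strongly
    correlated ("common fate"); each cluster is a candidate entity
    (self or other). `velocities` has shape (T, n_features).
    The threshold value is an assumption, not from the paper."""
    corr = np.corrcoef(velocities.T)
    n = corr.shape[0]
    parent = list(range(n))  # union-find over the thresholded graph

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

def acceleration_transfer(acc, lag=1):
    """Hypothetical stand-in for the paper's AT measure: the rectified
    difference between lagged cross-correlations of accelerations in the
    two directions. AT[i, j] > 0 suggests feature i's acceleration leads
    feature j's. `acc` has shape (T, n_features)."""
    T, n = acc.shape
    AT = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            lead = np.corrcoef(acc[:-lag, i], acc[lag:, j])[0, 1]
            trail = np.corrcoef(acc[:-lag, j], acc[lag:, i])[0, 1]
            AT[i, j] = max(0.0, lead - trail)  # rectified subtraction
    return AT

def sink_nodes(AT, eps=0.05):
    """Sinks of the directed SM graph: features that receive influence
    but transfer none. Per the abstract, autonomy is characterized by
    these sink nodes."""
    out_deg = (AT > eps).sum(axis=1)
    in_deg = (AT > eps).sum(axis=0)
    return [i for i in range(AT.shape[0]) if out_deg[i] == 0 and in_deg[i] > 0]

Unlike GC or TE, which need model fitting or density estimation over windows of data, a rectified subtraction of this kind is a cheap pointwise operation, which is consistent with the abstract's claim that AT is computationally cheaper.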

JTD Keywords: Agency, Attention, Autonomy, Cognitive development, Computational cognition, Developmental psychology, Model, Sensorimotor learning, Theory of mind


Arsiwalla, X. D., Freire, I. T., Vouloutsi, V., Verschure, P., (2019). Latent morality in algorithms and machines. Biomimetic and Biohybrid Systems: 8th International Conference, Living Machines 2019 (Lecture Notes in Computer Science), Springer, Cham (Nara, Japan) 11556, 309-315

Can machines be endowed with morality? We argue that morality in the descriptive or epistemic sense can be extended to artificial systems. Following arguments from evolutionary game theory, we identify two main ingredients required to operationalize this notion of morality in machines: the first is a group theory of mind, and the second is an assignment of valence. We make the case for the plausibility of these operations in machines without reference to any form of intentionality or consciousness. The only systems requirements needed to support the above two operations are autonomous goal-directed action and the ability to interact with and learn from the environment. Building on this, we outline a theoretical framework based on conceptual spaces and valence assignments to gauge latent morality in autonomous machines and algorithms.
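As a toy illustration of the two ingredients named above, the sketch below models a conceptual space as a metric feature space and valence as a scalar field over it, assigned to novel outcomes by similarity to labeled prototypes. The region names, feature dimensions, and valence scores are all hypothetical; this is not the authors' framework, only one plausible reading of "conceptual spaces plus valence assignment".

import numpy as np

# Prototype outcomes as points in a 2-D conceptual space whose axes
# are (harm_to_group, benefit_to_group). All values are illustrative.
PROTOTYPES = {
    "share_resource": np.array([0.1, 0.9]),
    "hoard_resource": np.array([0.6, 0.2]),
    "sabotage_peer":  np.array([0.9, 0.0]),
}
VALENCE = {"share_resource": +1.0, "hoard_resource": -0.3, "sabotage_peer": -1.0}

def latent_valence(outcome):
    """Assign valence to a novel outcome by inverse-distance weighting
    over prototype outcomes in the conceptual space."""
    num, den = 0.0, 0.0
    for name, proto in PROTOTYPES.items():
        w = 1.0 / (np.linalg.norm(outcome - proto) + 1e-6)
        num += w * VALENCE[name]
        den += w
    return num / den

# A goal-directed agent's action can then be "morally gauged" by the
# valence of the outcome it produces for the group the agent models.
print(latent_valence(np.array([0.2, 0.8])))  # near "share" -> positive

Note that nothing here requires intentionality or consciousness: the agent only needs to act toward goals and to learn the valence field from its environment, matching the two systems requirements stated in the abstract.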

JTD Keywords: Autonomous systems, Ethics of algorithms, Goal-directed action, Philosophy of morality, Qualia, Theory of mind


Freire, I. T., Arsiwalla, X. D., Puigbò, J. Y., Verschure, P., (2018). Limits of multi-agent predictive models in the formation of social conventions. Frontiers in Artificial Intelligence and Applications (ed. Falomir, Z., Gibert, K., Plaza, E.), IOS Press (Amsterdam, The Netherlands), Volume 308: Artificial Intelligence Research and Development, 297-301

A major challenge in cognitive science and AI is to understand how intelligent agents might predict the mental states of other agents during complex social interactions. What are the computational principles of such a Theory of Mind (ToM)? In previous work, we investigated hypotheses of how the human brain might realize a ToM of other agents in a multi-agent social scenario. In particular, we proposed control-based cognitive architectures to predict the behavior of other agents in a game-theoretic task (Battle of the Exes). Our multi-layer architecture implements top-down predictions from adaptive to reactive layers of control and bottom-up error feedback from reactive to adaptive layers. We tested cooperative and competitive strategies among different multi-agent models, demonstrating that while pure reinforcement learning (RL) leads to reasonable efficiency and fairness in social interactions, other architectures can perform better in specific circumstances. However, we found that even the best predictive models fall short of human data in terms of the stability of social convention formation. To explain this gap between humans and predictive AI agents, in this work we propose introducing the notion of trust, in the form of mutual agreements between agents, which might enhance stability in the formation of conventions such as turn-taking.
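The sketch below is a toy, hypothetical version of the setting described above: a Battle-of-the-Exes-style coordination game in which two players choose between a high-payoff and a low-payoff option and score nothing on a collision. The payoff values, the stateless Q-learner standing in for the "pure RL" baseline, and the trust agent's fixed turn-taking agreement are all illustrative assumptions, not the authors' implementation.

import random

HIGH, LOW = 4.0, 1.0  # illustrative payoffs

def payoff(a, b):
    """Both picking the same option is a collision worth nothing;
    otherwise the 'high' chooser gets HIGH and the other gets LOW."""
    if a == b:
        return 0.0, 0.0
    return (HIGH, LOW) if a == "high" else (LOW, HIGH)

class QAgent:
    """Minimal stateless Q-learner, standing in for the pure-RL baseline."""
    def __init__(self, alpha=0.1, eps=0.1):
        self.q = {"high": 0.0, "low": 0.0}
        self.alpha, self.eps = alpha, eps
    def act(self):
        if random.random() < self.eps:
            return random.choice(["high", "low"])
        return max(self.q, key=self.q.get)
    def learn(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

class TrustAgent:
    """Agent committed to a mutual turn-taking agreement: alternate who
    takes the high-payoff option. A stand-in for the trust notion
    proposed above."""
    def __init__(self, goes_first):
        self.turn = goes_first
    def act(self):
        choice = "high" if self.turn else "low"
        self.turn = not self.turn
        return choice

# Pure-RL pair: each learns independently and can collide or settle
# into an unfair split rather than a stable convention.
p, q = QAgent(), QAgent()
for _ in range(500):
    a_act, b_act = p.act(), q.act()
    ra, rb = payoff(a_act, b_act)
    p.learn(a_act, ra)
    q.learn(b_act, rb)

# Trust pair: the agreement yields immediate, stable turn-taking.
a, b = TrustAgent(True), TrustAgent(False)
print([payoff(a.act(), b.act()) for _ in range(6)])
# -> (4,1), (1,4), (4,1), ... a stable, fair convention

The contrast illustrates the abstract's point: independent predictive learners can reach reasonable efficiency and fairness, but a shared agreement stabilizes the convention itself, which is what the human data exhibit.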

JTD Keywords: Cognitive Architectures, Game Theory, Multi-Agent Models, Reinforcement Learning, Theory of Mind