Publications

by Keyword: Cognitive Architectures

Guerrero-Rosado, O., Verschure, P., (2021). Robot regulatory behaviour based on fundamental homeostatic and allostatic principles. Procedia Computer Science 190, 292-300

Animals in their ecological context behave not only in response to external events, such as opportunities and threats, but also according to their internal needs. As a result, the survival of the organism is achieved through regulatory behaviour. Although homeostatic and allostatic principles play an important role in such behaviour, how an animal's brain implements these principles is not yet fully understood. In this paper, we propose a new model of regulatory behaviour inspired by the functioning of the medial Reticular Formation (mRF). This structure is spread throughout the brainstem and has shown generalized Central Nervous System (CNS) arousal control and fundamental action-selection properties. We propose that a model based on the mRF provides the flexibility needed for implementation in diverse domains, while allowing the integration of other components, such as place cells, to enrich the agent's performance. Such a model will be implemented in a mobile robot that will navigate replicating the behaviour of the sand-diving lizard, a benchmark for regulatory behaviour. © 2020 Elsevier B.V. All rights reserved.
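The core idea of regulatory behaviour (internal deviations from setpoints driving action selection) can be illustrated with a minimal sketch. This is not the authors' mRF model; the variables, setpoints, and winner-take-all rule are illustrative assumptions only.

```python
# Illustrative sketch of homeostatic action selection: the agent acts on
# whichever internal need deviates most from its setpoint.
# (Variable names, setpoints, and actions are assumed, not from the paper.)

SETPOINTS = {"temperature": 37.0, "energy": 1.0}

def drives(state):
    """Drive = absolute deviation of each internal variable from its setpoint."""
    return {k: abs(state[k] - SETPOINTS[k]) for k in SETPOINTS}

def select_action(state, actions):
    """Winner-take-all over drives: address the most urgent need first."""
    d = drives(state)
    need = max(d, key=d.get)
    return actions[need]

state = {"temperature": 33.0, "energy": 0.9}   # cold, slightly hungry
actions = {"temperature": "bask_in_sun", "energy": "forage"}
print(select_action(state, actions))  # temperature deviates most -> "bask_in_sun"
```

In a full model the drives would also be modulated allostatically, i.e. setpoints would shift in anticipation of future demands rather than remaining fixed.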

JTD Keywords: Action selection, Allostasis, Animal brain, Animals, Behavior-based, Brainstem, Central nervous systems, Cognitive architectures, Homeostasis, Regulatory behaviour, Reticular formation, Robots


Freire, I. T., Urikh, D., Arsiwalla, X. D., Verschure, P., (2020). Machine morality: From harm-avoidance to human-robot cooperation. Biomimetic and Biohybrid Systems: 9th International Conference, Living Machines 2020 (Lecture Notes in Computer Science), Springer International Publishing (Freiburg, Germany) 12413, 116-127

We present a new computational framework for modeling moral decision-making in artificial agents based on the notion of ‘Machine Morality as Cooperation’. This framework integrates recent advances from cross-disciplinary moral decision-making literature into a single architecture. We build upon previous work outlining cognitive elements that an artificial agent would need for exhibiting latent morality, and we extend it by providing a computational realization of the cognitive architecture of such an agent. Our work has implications for cognitive and social robotics. Recent studies in human neuroimaging have pointed to three different decision-making processes, Pavlovian, model-free and model-based, that are defined by distinct neural substrates in the brain. Here, we describe how computational models of these three cognitive processes can be implemented in a single cognitive architecture by using the distributed and hierarchical organization proposed by the DAC theoretical framework. Moreover, we propose that a pro-social drive to cooperate exists at the Pavlovian level that can also bias the rest of the decision system, thus extending current state-of-the-art descriptive models based on harm-aversion.
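The arbitration between the three valuation systems described above can be sketched in a few lines. The weights, payoff values, and the form of the Pavlovian pro-social bias below are illustrative assumptions, not the DAC implementation from the paper.

```python
# Sketch of arbitration between Pavlovian, model-free, and model-based
# valuation, with a hard-wired pro-social bias at the Pavlovian level.
# (Weights and values are assumptions for illustration.)

def pavlovian(action):
    """Innate bias: cooperation carries intrinsic value."""
    return 1.0 if action == "cooperate" else 0.0

def model_free(action, q_table):
    """Cached (habitual) action values learned from past reward."""
    return q_table.get(action, 0.0)

def model_based(action, simulate):
    """Value obtained by forward-simulating the action's consequences."""
    return simulate(action)

def decide(actions, q_table, simulate, w=(0.3, 0.3, 0.4)):
    """Weighted combination of the three systems; pick the best action."""
    def value(a):
        return (w[0] * pavlovian(a)
                + w[1] * model_free(a, q_table)
                + w[2] * model_based(a, simulate))
    return max(actions, key=value)

q = {"defect": 0.6, "cooperate": 0.5}                     # habit favours defection
sim = lambda a: {"defect": 0.5, "cooperate": 0.6}[a]      # lookahead favours cooperation
print(decide(["cooperate", "defect"], q, sim))            # bias tips it: "cooperate"
```

Note how the Pavlovian term biases the overall decision toward cooperation even when the habitual (model-free) values alone would favour defection, which is the kind of system-wide bias the abstract proposes.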

JTD Keywords: Morality, Moral decision-making, Computational models, Cognitive architectures, Cognitive robotics, Human-robot interaction


Moulin-Frier, C., Puigbò, J. Y., Arsiwalla, X. D., Sanchez-Fibla, M., Verschure, P., (2018). Embodied artificial intelligence through distributed adaptive control: An integrated framework. ICDL-EpiRob 2017: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, IEEE (Lisbon, Portugal), 324-330

In this paper, we argue that the future of Artificial Intelligence research resides in two keywords: integration and embodiment. We support this claim by analyzing the recent advances in the field. Regarding integration, we note that the most impactful recent contributions have been made possible through the integration of recent Machine Learning methods (based in particular on Deep Learning and Recurrent Neural Networks) with more traditional ones (e.g. Monte Carlo tree search, goal babbling exploration, or addressable memory systems). Regarding embodiment, we note that the traditional benchmark tasks (e.g. visual classification or board games) are becoming obsolete as state-of-the-art learning algorithms approach or even surpass human performance in most of them, which has recently encouraged the development of first-person 3D game platforms embedding realistic physics. Building on this analysis, we first propose an embodied cognitive architecture integrating heterogeneous subfields of Artificial Intelligence into a unified framework. We demonstrate the utility of our approach by showing how major contributions of the field can be expressed within the proposed framework. We then claim that benchmarking environments need to reproduce ecologically valid conditions for bootstrapping the acquisition of increasingly complex cognitive skills through the concept of a cognitive arms race between embodied agents.

JTD Keywords: Cognitive Architectures, Embodied Artificial Intelligence, Evolutionary Arms Race, Unified Theories of Cognition


Freire, I. T., Arsiwalla, X. D., Puigbò, J. Y., Verschure, P., (2018). Limits of multi-agent predictive models in the formation of social conventions. Frontiers in Artificial Intelligence and Applications (ed. Falomir, Z., Gibert, K., Plaza, E.), IOS Press (Amsterdam, The Netherlands), Volume 308: Artificial Intelligence Research and Development, 297-301

A major challenge in cognitive science and AI is to understand how intelligent agents might be able to predict mental states of other agents during complex social interactions. What are the computational principles of such a Theory of Mind (ToM)? In previous work, we have investigated hypotheses of how the human brain might realize a ToM of other agents in a multi-agent social scenario. In particular, we have proposed control-based cognitive architectures to predict the model of other agents in a game-theoretic task (Battle of the Exes). Our multi-layer architecture implements top-down predictions from adaptive to reactive layers of control and bottom-up error feedback from reactive to adaptive layers. We tested cooperative and competitive strategies among different multi-agent models, demonstrating that while pure reinforcement learning (RL) leads to reasonable efficiency and fairness in social interactions, other architectures can perform better in specific circumstances. However, we found that even the best predictive models fall short of human data in terms of stability of social convention formation. In order to explain this gap between humans and predictive AI agents, in this work we propose introducing the notion of trust in the form of mutual agreements between agents that might enhance stability in the formation of conventions such as turn-taking.
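A turn-taking convention of the kind the abstract describes can be sketched with a toy coordination game. The payoff matrix and the alternation rule below are illustrative assumptions in the spirit of Battle of the Exes, not the authors' experimental setup.

```python
# Sketch of a turn-taking agreement in a Battle-of-the-Exes-style game:
# both agents prefer the high-value spot, but colliding there yields nothing.
# (Payoff values and the alternation rule are assumed for illustration.)

PAYOFFS = {
    ("high", "low"):  (4, 1),   # agent A takes the prize, B concedes
    ("low", "high"):  (1, 4),   # roles reversed
    ("high", "high"): (0, 0),   # collision: neither scores
    ("low", "low"):   (1, 1),   # both settle for the small reward
}

def turn_taking(round_idx):
    """Mutual agreement: alternate who takes the high-value spot each round."""
    return ("high", "low") if round_idx % 2 == 0 else ("low", "high")

totals = [0, 0]
for r in range(10):
    choice_a, choice_b = turn_taking(r)
    pay_a, pay_b = PAYOFFS[(choice_a, choice_b)]
    totals[0] += pay_a
    totals[1] += pay_b
print(totals)  # alternation is collision-free and fair: [25, 25]
```

Because each agent trusts the other to honour the agreement, the convention avoids collisions entirely and splits the surplus evenly, which is the stability property that purely predictive agents in the study failed to reach.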

JTD Keywords: Cognitive Architectures, Game Theory, Multi-Agent Models, Reinforcement Learning, Theory of Mind