By Keyword: Morality


Freire, I. T., Urikh, D., Arsiwalla, X. D., Verschure, P. (2020). Machine morality: From harm-avoidance to human-robot cooperation. Biomimetic and Biohybrid Systems: 9th International Conference, Living Machines 2020 (Lecture Notes in Computer Science), Springer International Publishing (Freiburg, Germany), 12413, 116-127.

We present a new computational framework for modeling moral decision-making in artificial agents based on the notion of ‘Machine Morality as Cooperation’. This framework integrates recent advances from the cross-disciplinary moral decision-making literature into a single architecture. We build on previous work outlining the cognitive elements that an artificial agent would need to exhibit latent morality, and we extend it by providing a computational realization of such an agent's cognitive architecture. Our work has implications for cognitive and social robotics. Recent human neuroimaging studies have pointed to three distinct decision-making processes, Pavlovian, model-free, and model-based, each defined by distinct neural substrates in the brain. Here, we describe how computational models of these three cognitive processes can be implemented in a single cognitive architecture using the distributed and hierarchical organization proposed by the DAC (Distributed Adaptive Control) theoretical framework. Moreover, we propose that a pro-social drive to cooperate exists at the Pavlovian level and can also bias the rest of the decision system, thereby extending current state-of-the-art descriptive models based on harm-aversion.

Keywords: Morality, Moral decision-making, Computational models, Cognitive architectures, Cognitive robotics, Human-robot interaction
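
The three-process arbitration described in the abstract above might be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all function names, the linear weighting, and the additive bias mechanism are assumptions made here to show how a Pavlovian pro-social prior could bias model-free and model-based valuations.

```python
# Illustrative sketch of arbitration among Pavlovian, model-free, and
# model-based valuation, with a Pavlovian pro-social bias on the other
# two systems. Names, weights, and structure are assumptions, not the
# authors' code.

def pavlovian_value(action):
    # Hard-wired pro-social prior: cooperation carries innate positive valence.
    return 1.0 if action == "cooperate" else 0.0

def model_free_value(action, q_table):
    # Cached action values learned from past reward (e.g. via Q-learning).
    return q_table.get(action, 0.0)

def model_based_value(action, simulate):
    # Value obtained from explicit forward simulation of outcomes.
    return simulate(action)

def choose_action(actions, q_table, simulate,
                  w_pav=0.3, w_mf=0.3, w_mb=0.4, bias=0.2):
    """Pick the action maximizing a weighted sum of the three valuations;
    the Pavlovian term additively biases the habitual and planning systems."""
    def total(action):
        pav = pavlovian_value(action)
        mf = model_free_value(action, q_table) + bias * pav   # bias on habits
        mb = model_based_value(action, simulate) + bias * pav # bias on planning
        return w_pav * pav + w_mf * mf + w_mb * mb
    return max(actions, key=total)
```

With these illustrative weights, the pro-social prior tips the choice toward cooperation when learned values are weak, while a sufficiently strong learned value for defection can still override it.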

Arsiwalla, X. D., Freire, I. T., Vouloutsi, V., Verschure, P. (2019). Latent morality in algorithms and machines. Biomimetic and Biohybrid Systems: 8th International Conference, Living Machines 2019 (Lecture Notes in Computer Science), Springer, Cham (Nara, Japan), 11556, 309-315.

Can machines be endowed with morality? We argue that morality in the descriptive or epistemic sense can be extended to artificial systems. Following arguments from evolutionary game theory, we identify two main ingredients required to operationalize this notion of morality in machines: the first is a group theory of mind; the second is an assignment of valence. We make the case for the plausibility of these operations in machines without reference to any form of intentionality or consciousness. The only system requirements needed to support these two operations are autonomous goal-directed action and the ability to interact with and learn from the environment. On this basis, we outline a theoretical framework based on conceptual spaces and valence assignments to gauge latent morality in autonomous machines and algorithms.

Keywords: Autonomous systems, Ethics of algorithms, Goal-directed action, Philosophy of morality, Qualia, Theory of mind