A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior

Published at an international conference (AIES '21).

Available online at the ACM Digital Library.

This paper proposes to use symbolic (logic-based) judging agents to reason about and judge the behavior of learning agents. The judging agents act as a reward function: they encode the moral values that the learning agents must respect, and send rewards reflecting how well the learning agents' actions comply with those values.
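
To make this loop concrete, here is a minimal Python sketch under simplifying assumptions; the class names, rules, and reward scheme are illustrative and are not taken from the paper. A judging agent encodes one moral value as symbolic rules and turns its judgment of a learning agent's action into a scalar reward.

from dataclasses import dataclass
from typing import Callable, Dict, List

Action = Dict[str, float]  # a continuous, multi-dimensional action

@dataclass
class MoralRule:
    """A named symbolic rule: returns True when the action complies with it."""
    name: str
    predicate: Callable[[Action], bool]

class JudgingAgent:
    """Encodes one moral value as rules; judges actions with a reward in [0, 1]."""

    def __init__(self, moral_value: str, rules: List[MoralRule]):
        self.moral_value = moral_value
        self.rules = rules

    def judge(self, action: Action) -> float:
        # One possible scheme: reward = fraction of rules the action satisfies.
        satisfied = sum(rule.predicate(action) for rule in self.rules)
        return satisfied / len(self.rules)

# Illustrative "ecology" judge for a Smart Grid-like setting (hypothetical rules).
ecology_judge = JudgingAgent(
    moral_value="ecology",
    rules=[
        MoralRule("limit_grid_purchases", lambda a: a.get("buy_energy", 0.0) <= 0.5),
        MoralRule("avoid_overconsumption", lambda a: a.get("consume", 0.0) <= 0.8),
    ],
)

print(ecology_judge.judge({"buy_energy": 0.3, "consume": 0.9}))  # 0.5

Because the rules live in the judging agents rather than in the learning algorithm, they can be inspected and updated by designers or regulators independently of the learned policies.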

Abstract:

The recent field of Machine Ethics is experiencing rapid growth to answer the societal need for Artificial Intelligence (AI) algorithms imbued with ethical considerations, such as benevolence toward human users and actors. Several approaches already exist for this purpose, mostly either by reasoning over a set of predefined ethical principles (Top-Down), or by learning new principles (Bottom-Up). While both methods have their own advantages and drawbacks, only a few works have explored hybrid approaches, such as using symbolic rules to guide the learning process, combining the advantages of each. This paper draws upon existing works to propose a novel hybrid method using symbolic judging agents to evaluate the ethics of learning agents' behaviors, and accordingly improve their ability to behave ethically in dynamic multi-agent environments. Multiple benefits ensue from this separation between judging and learning agents: agents can evolve (or be updated by human designers) separately, benefiting from co-construction processes; judging agents can act as accessible proxies for non-expert human stakeholders or regulators; and finally, multiple points of view (one per judging agent) can be adopted to judge the behavior of the same agent, which produces richer feedback. Our proposed approach is applied to an energy distribution problem, in the context of a Smart Grid simulator, with continuous and multi-dimensional states and actions. The experiments and results show the ability of learning agents to correctly adapt their behaviors to comply with the judging agents' rules, including when rules evolve over time.
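
The multi-perspective aspect mentioned in the abstract can also be sketched: several judging agents, one per moral value, each score the same learning agent's action, and their feedbacks are combined into the scalar reward used for learning. The averaging scheme and the toy judges below are assumptions for illustration, not the paper's actual aggregation.

from statistics import mean
from typing import Callable, Dict, List

Action = Dict[str, float]
Judge = Callable[[Action], float]  # maps an action to a reward in [0, 1]

def combined_reward(judges: List[Judge], action: Action) -> float:
    """Combine per-moral-value judgments into one scalar reward (here, a mean)."""
    return mean(judge(action) for judge in judges)

# Two toy judges, one per moral value (purely illustrative thresholds).
def ecology(action: Action) -> float:
    return 1.0 if action.get("buy_energy", 0.0) <= 0.5 else 0.0

def inclusiveness(action: Action) -> float:
    return 1.0 if action.get("give_energy", 0.0) >= 0.1 else 0.0

print(combined_reward([ecology, inclusiveness],
                      {"buy_energy": 0.2, "give_energy": 0.3}))  # 1.0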

Citation:

Rémy Chaput, Jérémy Duval, Olivier Boissier, Mathieu Guillermin, and Salima Hassas. A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior. In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society, Virtual Event, USA, May 2021, pp. 13–23. DOI: 10.1145/3461702.3462515

BibTeX:

@inproceedings{chaput:emse-03318195,
  TITLE = {{A Multi-Agent Approach to Combine Reasoning and Learning for an Ethical Behavior}},
  AUTHOR = {Chaput, R{\'e}my and Duval, J{\'e}r{\'e}my and Boissier, Olivier and Guillermin, Mathieu and Hassas, Salima},
  URL = {https://hal-emse.ccsd.cnrs.fr/emse-03318195},
  BOOKTITLE = {{AIES '21: AAAI/ACM Conference on AI, Ethics, and Society}},
  ADDRESS = {Virtual Event, USA},
  PUBLISHER = {{ACM}},
  SERIES = {AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society},
  PAGES = {13--23},
  YEAR = {2021},
  MONTH = May,
  DOI = {10.1145/3461702.3462515},
  KEYWORDS = {Ethics ; Machine Ethics ; Multi-Agent Learning ; Reinforcement Learning ; Hybrid Neural-Symbolic Learning ; Ethical Judgment},
  HAL_ID = {emse-03318195},
  HAL_VERSION = {v1},
}