Acceler-AI Project

Acceler-AI (Adaptive Co-Construction of Ethics for LifElong tRustworthy AI) is an ongoing research project funded by the French national research agency ANR (2023-2027). The project's main goal is to enable the adaptive co-construction of ethics in and for a long-lived intelligent system.

As many applications involving Artificial Intelligence (AI) may have a beneficial or harmful impact on humans, there is an ongoing societal and research debate about how to incorporate ethical capabilities into them. This urges AI researchers to develop more ethically capable agents, shifting from ethics in design to ethics by design with “explicit ethical agents” able to produce ethical behaviour through the integration of reasoning about, and learning of, ethics. Doing so requires addressing three non-functional requirements: diversity, to cope with the richness and non-monolithic nature of approaches to ethics in human societies (e.g., knowledge, values); longevity, to cope with the long-lasting evolution of intelligent systems and their running environment; and trustworthiness, to be acceptable to society.

Hence, the project's main goal is to enable the adaptive co-construction of ethics in and for a long-lived intelligent system while addressing these requirements. This raises three challenges:

  • Human-centric AI: how the system achieves its goal while following ethical principles and human values;
  • Safe AI: how to ensure that the system operates within specified boundaries while remaining sufficiently autonomous to learn ethics and adapt in response to evolving contexts and objectives;
  • Adaptive AI: AI systems should adapt both on the technical side (e.g., openness) and on the societal side (e.g., lifelong learning, shifting expectations, and contemporary accepted moral values and norms).

Acceler-AI proposes a multi-disciplinary research programme and adopts a human-centric perspective to investigate the co-construction of “ethics”. It proposes a systemic approach that dynamically couples three adaptive processes: 1) top-down injection of “human ethical preferences/values/moral rules” via continuous, non-invasive human-agent interaction; 2) bottom-up lifelong learning and co-evolution (between humans and AI systems) of adaptive ethical behaviour; 3) a normative regulation process that bounds the ethics learning process.
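To make this coupling concrete, the minimal Python sketch below shows where each of the three processes could plug into a single learning loop. It is purely illustrative: the toy moral values, actions, norm, and all function names are hypothetical assumptions, not Acceler-AI's actual architecture or code.

    import random

    # Toy setting; every name and value below is a hypothetical assumption.
    MORAL_VALUES = ["well-being", "fairness"]
    ACTIONS = ["consume", "share", "idle"]
    CONTEXTS = ["day", "night"]

    # (1) Top-down injection: human preference weights over moral values,
    # assumed to be refreshed through non-invasive human-agent interaction.
    human_preferences = {"well-being": 0.7, "fairness": 0.3}

    # (3) Normative regulation: norms bound which actions may be explored.
    def permitted(actions, context):
        # Toy norm (illustrative only): sharing is forbidden at night.
        return [a for a in actions if not (context == "night" and a == "share")]

    # Hypothetical per-value rewards that an ethical-judgment component could
    # emit, scalarised with the current human preference weights.
    def moral_reward(action):
        per_value = {"well-being": 1.0 if action == "consume" else 0.0,
                     "fairness": 1.0 if action == "share" else 0.0}
        return sum(human_preferences[v] * per_value[v] for v in MORAL_VALUES)

    # (2) Bottom-up lifelong learning: a tabular, bandit-style update per context.
    q_table = {(c, a): 0.0 for c in CONTEXTS for a in ACTIONS}
    alpha, epsilon = 0.1, 0.2

    for step in range(1000):
        context = random.choice(CONTEXTS)
        legal = permitted(ACTIONS, context)  # normative bound on exploration
        if random.random() < epsilon:
            action = random.choice(legal)    # explore within the norms
        else:
            action = max(legal, key=lambda a: q_table[(context, a)])  # exploit
        reward = moral_reward(action)        # preference-weighted ethical signal
        q_table[(context, action)] += alpha * (reward - q_table[(context, action)])

In a full system, the preference weights would evolve through continuous interaction with users, the reward would come from a learned ethical-judgment component, and the norm set itself could be revised over time to keep bounding the learning as accepted values shift; the sketch only indicates where each process enters the loop.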

Partners

  • UCBL1/LIRIS
  • Mines Saint-Etienne/LIMOS
  • Lyon Catholic University (UCLy)

Students

  • Rémy Chaput (post-doctoral researcher from 04/2023 to 04/2024). Subject: Multi-agent reinforcement learning for the co-construction of ethical behaviours
  • Timon Deschamps (PhD student since 11/2023). Subject: Multi-objective and multi-agent reinforcement learning for the co-construction of ethical behaviours
  • Marceau Nahon (Master 1 internship in 2023). Subject: Multi-agent system learning ethical behaviour: tools for the study of users’ ethical preferences

Publications

  • Rémy Chaput, Laëtitia Matignon & Mathieu Guillermin (2023). “Learning to identify and settle dilemmas through contextual user preferences”. IEEE International Conference on Tools with Artificial Intelligence (ICTAI), 8 November 2023, Atlanta (USA). DOI: 10.1109/ICTAI59109.2023.00075. HAL: hal-04349804.
  • Marceau Nahon, Aurélien Tabard & Audrey Serna (2024). “Capturing stakeholders’ values and preferences regarding algorithmic systems”. IHM ’24: 35e Conférence Francophone sur l’Interaction Humain-Machine, March 2024.

Funding

This project is funded by the ANR (Agence Nationale de la Recherche) under the AAPG 2022 call, grant ANR-22-CE23-0028-01.