ECIÉA

  • Scientific lead: CHAPUT Rémy
  • Project lead: SyCoSMA
  • Start date: 2026-01-01
  • End date: 2027-12-31
  • Scientific partner: LIMOS

With the increasing use of Artificial Intelligence (AI), it is more important than ever to ensure that these systems behave in a way that is aligned with our moral values, so as to guarantee beneficial use. This requires several capabilities from the AI system: being able to represent moral values; being able to exhibit a behavior that takes these values into account; and being able to explain this behavior to humans (regulators, users, or other stakeholders) so they can verify that the moral values are indeed respected.

The ECIÉA project (Explication du Comportement d’un agent IA Éthiquement Aligné – Explaining the behavior of an ethically-aligned AI agent) proposes to implement these capabilities through a hybrid combination of abstract argumentation and Reinforcement Learning (RL). RL provides agents with adaptability, but requires a reward function to guide learning towards the desired behavior. Argumentation is an attractive framework for representing moral values, as it is fairly close to human reasoning.
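To make the argumentation side concrete, here is a minimal sketch of Dung-style abstract argumentation: arguments, an attack relation, and the grounded extension (the arguments that survive as the least fixed point of the defence operator). The arguments and attacks below are purely illustrative, not taken from the project itself.

```python
def grounded_extension(arguments, attacks):
    """Grounded extension: least fixed point of the characteristic
    function F(S) = {a | every attacker of a is attacked by some member of S}."""
    def defended_by(s):
        out = set()
        for a in arguments:
            attackers = {b for (b, c) in attacks if c == a}
            if all(any((d, b) in attacks for d in s) for b in attackers):
                out.add(a)
        return out

    extension = set()
    while True:
        nxt = defended_by(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Illustrative moral dilemma: 'share_data' is attacked by 'privacy',
# which is itself attacked by the unchallenged 'consent_given'.
args = {"share_data", "privacy", "consent_given"}
atts = {("privacy", "share_data"), ("consent_given", "privacy")}
print(grounded_extension(args, atts))  # {'consent_given', 'share_data'}
```

Here the grounded extension accepts `share_data` because its only attacker, `privacy`, is itself defeated by `consent_given`; this is the kind of human-readable judgement the project aims to exploit.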

Using argumentation to design the reward functions of learning agents will make it possible to include non-AI experts in the design process, and to explain the resulting behavior in terms of rewards (i.e., what drives the agent towards this specific behavior).
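One simple way such an argumentation-derived judgement could shape a reward is sketched below. All names here are hypothetical assumptions for illustration (the actual ECIÉA design may differ): a table `moral_judgement`, assumed to be precomputed from the experts' argumentation framework, assigns +1 to state-action pairs whose supporting argument is accepted and -1 to those whose argument is defeated, and this signal is mixed into the task reward.

```python
# Hypothetical judgements derived from an argumentation framework:
# +1 if the argument supporting the action is accepted, -1 if defeated.
moral_judgement = {
    ("low_battery", "share_data"): -1,   # e.g. a privacy argument defeats sharing here
    ("low_battery", "ask_consent"): +1,
}

def shaped_reward(state, action, task_reward, weight=0.5):
    """Combine the task reward with the moral judgement, so the learned
    behavior can later be explained in terms of its two components."""
    return task_reward + weight * moral_judgement.get((state, action), 0)

print(shaped_reward("low_battery", "share_data", task_reward=1.0))   # 0.5
print(shaped_reward("low_battery", "ask_consent", task_reward=0.0))  # 0.5
```

Because the moral component is an explicit, separable term, an explanation can point at it directly ("the agent avoided `share_data` because the privacy argument penalized it"), which is the explainability benefit the paragraph above describes.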