The IDEAL project consists of implementing a mechanism of early-stage developmental learning (Piaget, 1937) in an artificial agent situated in a simulated environment. More precisely, we will implement a mechanism that lets the agent autonomously engage in a bottom-up, hierarchical organization of behavioral schemes as it interacts with its environment.
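To make the intended mechanism concrete, the following is a minimal sketch of such bottom-up scheme organization: whenever two schemes are enacted successfully in sequence, the agent records the pair as a composite scheme, and composites can themselves be composed, yielding a hierarchy. The Scheme and Agent classes, the reinforcement weights, and the simulated success rate below are illustrative assumptions, not the project's actual implementation.

import random

class Scheme:
    """A behavioral scheme: either a primitive action or a pair of sub-schemes."""
    def __init__(self, pre, post):
        self.pre, self.post = pre, post   # pre is None for primitive schemes
        self.weight = 1                   # reinforced each time the sequence recurs

    def label(self):
        if self.pre is None:
            return self.post              # primitive: post is just an action name
        return f"({self.pre.label()} {self.post.label()})"

class Agent:
    def __init__(self, actions):
        # The agent starts with only primitive schemes, one per innate action.
        self.schemes = [Scheme(None, a) for a in actions]
        self.previous = None              # last successfully enacted scheme

    def choose(self):
        # Weighted-random selection: reinforced schemes are tried more often.
        return random.choices(self.schemes, [s.weight for s in self.schemes])[0]

    def learn(self, scheme, succeeded):
        # Bottom-up composition: two consecutive successful enactions are
        # recorded as a composite scheme, which can itself be composed later.
        if succeeded and self.previous is not None:
            for s in self.schemes:
                if s.pre is self.previous and s.post is scheme:
                    s.weight += 1         # the pair recurred: reinforce it
                    break
            else:
                self.schemes.append(Scheme(self.previous, scheme))
        self.previous = scheme if succeeded else None

# Toy run with a made-up environment in which enactions succeed 70% of the
# time (a real environment would determine success from the agent's situation).
agent = Agent(["step", "turn", "touch"])
for _ in range(200):
    s = agent.choose()
    agent.learn(s, succeeded=random.random() < 0.7)

# Print the five most reinforced schemes; nested labels such as
# "((step turn) touch)" show the hierarchy that emerged from experience.
for s in sorted(agent.schemes, key=lambda s: -s.weight)[:5]:
    print(s.label(), s.weight)

In this sketch, weighted-random selection merely stands in for the agent's motivational system; the actual selection and enaction mechanisms studied in the project are expected to be considerably richer.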
This work seeks to confirm both an emergentist and a constructivist hypothesis of cognition. According to these hypotheses, an observer can attribute cognitive phenomena (e.g., knowledge, emotion, intention, anticipation) to the agent while observing its activity, provided that the agent's behavior can appropriately self-organize. These hypotheses have often been related to Heidegger's (1927) philosophy of mind, as discussed, for example, by Sun (2004). They also relate to constructivist epistemologies (Le Moigne, 1995) and to situated (Suchman, 1987) and embodied (Wilson, 2002) theories of cognition.
To situate our technical approach within the field of artificial intelligence, we can refer to Newell and Simon's (1975) physical symbol system hypothesis. We subscribe to the hypothesis in its weak sense: we will use computation to generate intelligent behavior. We do not, however, subscribe to its strong sense: we will not implement symbolic computation based on symbols with a pre-attributed denotation. Instead, we will study how knowledge appears to emerge from the agent's activity, and appears to become meaningful to the agent because it is grounded in that activity (Harnad, 1990).
Although we do not follow a symbolic computational modeling approach, we plan to implement our model in a cognitive architecture, because cognitive architectures have proven effective for implementing behavior organization mechanisms. For example, our preliminary study (Georgeon, Morgan, & Ritter, 2010) used Soar (Laird & Congdon, 2009) and was inspired by Chaput's (2004) Constructivist Learning Architecture.
The emergentist and constructivist hypotheses are well supported by the philosophical and cognitive science literature. There is still, however, a need for computer implementations that attempt to validate them. This project addresses that need: our implementation will either confirm these hypotheses or show us where they are incorrect.
In either case, our work will inform the developmental approach to artificial intelligence. In doing so, it will contribute to applications in the emerging field of developmental robotics (Weng et al., 2001). In addition, our implementation will contribute to modeling learning phenomena exhibited by natural organisms and, as a model, will support our understanding of these organisms.
If the hypotheses are confirmed, we will produce online demonstrations of agents capable of self-development in a simulated environment. These demonstrations will fuel the ethical debate regarding the status of future self-motivated agents that will appear sentient and will exhibit increasingly elaborate behavior.
For a working example, see the demo of the Ernest project.
We are especially addressing the following challenges: