ICREA, Institut de Biologia Evolutiva (UPF-CSIC) Barcelona
It is now generally recognized that human mental development is a long process that gradually constructs extraordinarily complex structures in interaction with the environment, tutors, and other individuals, including peers. Many of these structures can only be acquired when other structures are already in place. For example, fine-grained grasping with the fingers is only possible once rudimentary control of arm movements has been established. A central challenge for emulating development in (robotic) agents is to orchestrate the order in which skills and competences are acquired. There are several methods. For example, tutors can carefully scaffold the complexity of the environment for learning and then gradually increase the challenge. Here I investigate mechanisms by which learners themselves regulate the complexity of the challenges they tackle in harmony with the skills they have already acquired. I look in particular at mechanisms inspired by Csikszentmihalyi's flow theory and focus on how this theory suggests ways to orchestrate autonomous language learning.
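As a toy illustration of this kind of self-regulated challenge selection (the class name, thresholds, and the skill/difficulty model below are my own illustrative assumptions, not the mechanism presented in the talk), a learner can keep picking tasks whose estimated success probability falls inside an intermediate "flow band", neither too easy nor too hard:

```python
import random

random.seed(0)  # for reproducibility of the sketch

class FlowLearner:
    """Toy learner that picks tasks whose estimated success rate sits in a
    'flow band' (neither too easy nor too hard). Illustrative only."""

    def __init__(self, difficulties, low=0.3, high=0.7):
        self.skill = 0.1                   # scalar skill level in [0, 1]
        self.difficulties = difficulties   # candidate task difficulties in [0, 1]
        self.low, self.high = low, high    # flow band on success probability

    def success_prob(self, d):
        # Tasks easier than the current skill succeed more often.
        return max(0.0, min(1.0, 1.0 - (d - self.skill)))

    def pick_task(self):
        # Prefer tasks inside the flow band; otherwise take the closest one.
        in_band = [d for d in self.difficulties
                   if self.low <= self.success_prob(d) <= self.high]
        pool = in_band or self.difficulties
        return min(pool, key=lambda d: abs(self.success_prob(d) - 0.5))

    def step(self):
        d = self.pick_task()
        if random.random() < self.success_prob(d):
            self.skill = min(1.0, self.skill + 0.05)  # practice raises skill
        return d

learner = FlowLearner([i / 10 for i in range(11)])
chosen = [learner.step() for _ in range(50)]
# As skill grows, the tasks inside the flow band drift toward harder ones,
# so the learner's curriculum escalates without any external tutor.
```

The key design choice is that task selection depends only on the learner's own estimate of its competence, so the ordering of challenges self-organizes.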
As noted by Poincaré, Helmholtz and Nicod, the only way our brains can know about the existence, dimensionality, and structure of physical space is by sampling the effects of our actions on our senses. In this talk we show how a simple algorithm based on coincidence detection will naturally extract the notion of space. It can do this without any a priori knowledge about how the brain is connected to the sensors or body, and for arbitrary sensors and effectors. Such a mechanism may be the method by which animals’ brains construct spatial notions during development, or it may have evolved over evolutionary time to allow animals to act in the world. The algorithm has applications for self-repairing robotics and sensor calibration in unknown hostile environments.
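A minimal sketch of the coincidence-detection idea, under the simplifying assumption that each sensor has a local receptive field on a hidden 1-D line (all parameters are illustrative, not the algorithm of the talk): pairwise co-firing counts alone, with no knowledge of where the sensors are, recover the sensors' neighborhood structure.

```python
import random

random.seed(1)

# Hidden geometry the "brain" knows nothing about: 10 sensors at
# unknown positions on a line, each with a local receptive field.
positions = [random.random() for _ in range(10)]
RADIUS = 0.2
n = len(positions)

# Count coincidences: two sensors fire together when the same event
# falls inside both of their receptive fields.
C = [[0] * n for _ in range(n)]
for _ in range(5000):
    event = random.random()
    active = [i for i, p in enumerate(positions) if abs(p - event) < RADIUS]
    for i in active:
        for j in active:
            if i != j:
                C[i][j] += 1

def inferred_nearest(i):
    # Sensors that fire together most often are inferred to be neighbors.
    return max((j for j in range(n) if j != i), key=lambda j: C[i][j])

def true_nearest(i):
    return min((j for j in range(n) if j != i),
               key=lambda j: abs(positions[i] - positions[j]))

# For most sensors, the coincidence-based neighbor is the true spatial one.
matches = sum(inferred_nearest(i) == true_nearest(i) for i in range(n))
```

Because the expected coincidence count decreases monotonically with the distance between two receptive fields, the count matrix implicitly encodes a metric on the sensors, which is what makes the approach robust to arbitrary, unknown wiring.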
Research Director at Inria, and head of the Inria/Ensta-ParisTech FLOWERS team (France).
A great mystery is how human infants develop: how they progressively discover their bodies, how they learn to interact with objects and social peers, and how they accumulate new skills throughout their lives. Constructing robots, and building mechanisms that model such developmental processes, is key to advancing our understanding of human development, in constant dialog with the human and life sciences.
I will present examples of robotic models of curiosity-driven learning and exploration, and show how developmental trajectories can self-organize, starting from discovery of the body, then object affordances, then vocal babbling and vocal interactions with others. In particular, I will show that the onset of language emerges spontaneously from such sensorimotor development. I will also explain how such developmental learning mechanisms can be highly efficient for robot learning of motor skills in high dimensions, such as learning omnidirectional legged locomotion or object manipulation.
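One common way to sketch curiosity-driven exploration (a simplified, assumed model, not the exact mechanism of the talk) is to let the learner choose, among several sensorimotor "regions", the one whose prediction error has recently dropped the most, i.e. the region with maximal learning progress:

```python
import random

random.seed(0)

# Three "regions" with different learnability: prediction error decays
# fast, slowly, or not at all (pure noise). All values are illustrative.
decay = {"easy": 0.90, "hard": 0.995, "noise": 1.0}
error = {k: 1.0 for k in decay}
history = {k: [1.0, 1.0] for k in decay}

def learning_progress(k):
    # Progress = most recent drop in prediction error for that region.
    return history[k][-2] - history[k][-1]

picks = []
for t in range(300):
    # Epsilon-greedy choice of the region with the highest learning progress.
    if random.random() < 0.1:
        k = random.choice(list(decay))
    else:
        k = max(decay, key=learning_progress)
    picks.append(k)
    error[k] *= decay[k]          # practicing a region reduces its error
    history[k].append(error[k])

# The learner focuses first on the quickly learnable region, then moves on
# once its progress is exhausted, and never dwells on unlearnable noise:
# a developmental trajectory self-organizes without an external curriculum.
```

Note that maximizing learning progress, rather than raw error, is what keeps the learner away from the pure-noise region, whose error is high but never improves.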
Lund University Cognitive Science.
In research on human communication and child development, the role of attention has become central. In the lecture, I will present some of this research and discuss its implications for how to develop human-robot communication that is as natural as possible. I focus on two questions: (1) How can a robot use the attention of a human to understand what the human wants to communicate? (2) How can a robot control the attention of a human in its communication? For the first question, the robot must follow human gaze or pointing and achieve joint attention; results improve if the robot has a model of the interests or goals of the human. For the second question, there are three main methods: speaking, looking, and pointing. I will present some results from an ongoing project involving linguistic communication between an iCub and a human and show the importance of attention in the process. Finally, I will present some experiments concerning how humans interpret robot pointing, which turns out to depend heavily on the bodily configuration of the robot.
Department of Neuroscience, Rappaport Faculty of Medicine and Research Institute, Technion - Israel Institute of Technology, Haifa.
Recently there have been major leaps in the scientific understanding of the brain's internal navigation system. Several related cell types have been discovered in the brain: place cells, grid cells, head-direction cells, and border cells. These cells are believed to be part of a cognitive map responsible for representing the brain's internal sense of space. This brain system exemplifies one of the rare cases in which the internal algorithm of a mammalian neural network could be deciphered. While the phenomenology of these cells is now quite well understood, many questions remain: How are these cells connected into a network? How are they generated? How could they be read out? In this lecture I will describe these major questions and suggest some avenues connecting the theory of these cells with the growing body of experimental evidence about them.
Goal reasoning actors are highly autonomous; they can decide for themselves what goals to pursue. This requires substantial interpretation of the actor's recent observations. In this talk, I will describe recent work on behavior recognition, plan recognition, and explanation generation in support of goal deliberation, along with applications of goal reasoning that our group is pursuing concerning the control of autonomous unmanned vehicles.
Piotr Boltuc, The Louise Hartman Schewe and Karl Schewe Professor of Liberal Arts and Sciences, professor of philosophy, University of Illinois Springfield.
BICA philosophy is the idea that there is nothing in human and animal cognitive architectures that cannot be instantiated (not merely replicated, whatever the difference) in a sufficiently advanced biologically inspired cognitive architecture. This radical claim may follow from the physical interpretation of the Church-Turing thesis. Here are two examples of philosophical problems in BICA:
A. All cognitive architectures have merely indirect ontological access to empirical reality, but levels of such access differ. This is true of biological and biologically inspired architectures as well as of AI. Systems that are purely reactive are empirically the closest to ontology. The more complex the mind-maps a system creates, the further it moves from direct interaction with reality. This is the problem of empirical access. The problem is well known in human epistemology, but it is even clearer in robotics.
Olivier Georgeon, in his recent work, points to the opposite problem. If we use a cognitive architecture “to solve problems that we model a priori (e.g., playing chess, etc.), then the model of the problem constitutes a reality as such, and the cognitive architecture receives a representation of the current state of the problem as input data. In this case, the architecture has access to its noumenal reality (the problem space).” I would call this the Platonic scale of ontological access, where mathematical equations are identical with reality, but the problem is their fit as a model of the empirical world they are supposed to describe and predict, an old problem in philosophy of science.
B. If we come to understand how a human brain operates, we should also know how it produces a first-person stream of consciousness. To understand anything at the BICA level is to be able to reverse-engineer it. Hence, we should be able to reverse-engineer first-person consciousness.
This claim has philosophical as well as engineering implications. Most people today think that computation of complex data is the gist of first-person consciousness; this is in part because they view the first-person stream of consciousness as spooky (a dualistic remnant of religious notions of the soul). But a simpler hypothesis is that the stream of consciousness is more like hardware (a stream of light generated by a light bulb, or a reflection generated by a mirror); there is nothing spooky about those. Information is just the content engrafted in such a stream. Hence, to preserve one's conscious self is to preserve the stream, whereas to preserve the content of such a stream is merely to preserve information about it.
National Research Council of Italy. http://www.pa.icar.cnr.it/cossentino/
Holons are the basis for building very scalable yet simple architectures. They spring from Koestler's observation that the concepts of 'whole' and 'part' have no absolute meaning in reality: a whole or a part can easily be identified in many contexts, yet at the same time they can be seen as opposites. This philosophical concept has a perfect correspondence in software architecture. Nowadays it is very common to approach complex systems as systems of systems. These can be seen as intrinsically recursive: each of the composing systems may be decomposed into its components, which in turn may be addressed individually or regarded as assemblies of (sub-)systems/components/classes. Each part, at whatever level of abstraction, has the dignity of a complete entity (a whole), but at the same time it may be further exploded at a finer level of detail (into parts). Holons offer a great way of representing complex systems and solving several real-world problems, but their recursive, dynamic nature may be a challenge at design time. In this talk, holons will be the common denominator of a path that discusses the design of holonic systems and their great contribution to achieving runtime system-level adaptation of cognitive multi-agent systems, for instance during the execution of norm-constrained workflows. The presented contribution of holons towards system adaptation lies in the hierarchical, self-similar structure of the holonic architecture. Holons allow the decomposition and representation of intentional systems that achieve effective goal-oriented solutions, while at the same time becoming a proficient structure to be learned for future reuse.
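The whole/part duality described above maps naturally onto a recursive composite structure. The following is only a minimal sketch with assumed names, not the design methodology discussed in the talk:

```python
class Holon:
    """A holon is simultaneously a whole (it can act on its own) and a part
    (it can be composed into a super-holon). Illustrative sketch only."""

    def __init__(self, name, parts=None):
        self.name = name
        self.parts = parts or []

    def capacity(self):
        # A whole's capacity emerges from its own plus its parts' capacities.
        return 1 + sum(p.capacity() for p in self.parts)

    def flatten(self):
        # The same entity can be addressed individually or exploded into its
        # sub-holons, at any depth of the hierarchy.
        yield self
        for p in self.parts:
            yield from p.flatten()

# A system of systems: each node is both a component and an assembly.
worker1, worker2 = Holon("w1"), Holon("w2")
team = Holon("team", [worker1, worker2])
org = Holon("org", [team, Holon("w3")])
```

Because every level exposes the same interface, client code never needs to know whether it is talking to a leaf or to an entire sub-system, which is what makes the architecture scalable yet simple.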
Applied Cognitive Science Lab. College of IST. Penn State University. http://www.frankritter.com/.
There is a need for high-level languages to help create low-level BICA behaviour. I'll present an example compiler for creating ACT-R models from hierarchical task analyses for a non-iterative, 30-minute task, with which we created models of 11 levels of expertise in an afternoon. The models start with about 600 rules each and learn about another 600 rules over 100 trials. We compared these models to human data over four trials (N=30), and the novice model fit both the aggregate and individual data best (or nearly best). This work shows that high-level compilers can help manage the complexity of large models. I'll then note some future work, including microgenetic analysis and modeling of learning curves on the individual subtasks, and also look at forgetting of these tasks after delays ranging from 6 to 18 days.
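To give a flavor of what such a high-level compiler does (this is a hypothetical toy rule format, not the actual ACT-R compiler described in the talk), one can walk a hierarchical task analysis and emit a pair of goal-stack production rules per subtask:

```python
# A hierarchical task analysis as (task-name, subtasks) tuples.
# Task names here are invented for illustration.
task = ("do-task", [
    ("prepare", [("read-display", []), ("set-switch", [])]),
    ("execute", [("press-start", []), ("monitor", [])]),
])

def compile_rules(node, parent="top"):
    """Emit push/pop rules so subtasks run in order under their parent goal."""
    name, subtasks = node
    rules = [f"IF goal={parent} THEN push {name}"]
    for sub in subtasks:
        rules += compile_rules(sub, parent=name)
    rules.append(f"IF {name} done THEN pop {name}")
    return rules

rules = compile_rules(task)
# Each of the 7 nodes in the hierarchy yields one push and one pop rule,
# so a modest task tree already expands into a sizeable flat rule set,
# which is why hand-writing such models is costly.
```

The point of the sketch is the leverage: the analyst edits a small tree, and the compiler regenerates the full rule set mechanically.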