
Robotic semantics based on conceptual spaces

Peter Gärdenfors

Present-day chatbots, such as GPT-3, show an impressive capacity to hold a sensible dialogue with a human about almost any topic. A closer inspection of the dialogues, however, reveals that the chatbots are unable to talk about topics that require embodiment. In this sense, the chatbots are not semantically grounded. To get around these problems for robots and other artificial systems, a semantics that is grounded in perception and action is required.

I present the theory of conceptual spaces as a theoretical framework that can be used for such a semantics. The spaces consist of domains, most of which are grounded in perception and action. They also allow a simple way of representing natural concepts: such concepts are assumed to correspond to convex regions of a conceptual space. The semantics of the major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation.
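As a minimal illustration of the convexity claim (my own sketch, not part of the talk), consider nearest-prototype categorization in a toy three-dimensional color domain. Under a Euclidean metric, the regions assigned to the prototypes form a Voronoi tessellation, and Voronoi cells are always convex; the prototype points and names below are purely illustrative.

```python
import math

# A toy conceptual space: a single three-dimensional "color" domain.
# The prototype points are hypothetical placeholders.
COLOR_PROTOTYPES = {
    "red":   (1.0, 0.0, 0.0),
    "green": (0.0, 1.0, 0.0),
    "blue":  (0.0, 0.0, 1.0),
}

def categorize(point):
    """Assign a point to the concept whose prototype is nearest.

    Under a Euclidean metric this partitions the space into Voronoi
    cells, which are convex -- one way of realizing the claim that
    natural concepts occupy convex regions of a conceptual space.
    """
    return min(COLOR_PROTOTYPES,
               key=lambda name: math.dist(point, COLOR_PROTOTYPES[name]))

print(categorize((0.9, 0.2, 0.1)))  # -> red
```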

The key semantic idea is then to use events as the fundamental structures for the semantic representations of an artificial system. Events are modeled in terms of conceptual spaces and mappings between spaces. An event is represented by two vectors: a force vector representing an action and a result vector representing the effect of the action. The two-vector model is then extended with thematic roles, so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be combined into semantic structures that represent the meanings of sentences. It is argued that a semantic framework based on events can generate a general representational framework for human-robot communication. An implementation of the framework involving communication with an iCub robot will be described.
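As a sketch of how the two-vector model with thematic roles might look as a data structure (my own construction; the two-dimensional vectors and the verb lookup are illustrative assumptions, not the implementation described in the talk):

```python
from dataclasses import dataclass

Vector = tuple[float, float]  # assumed two-dimensional for simplicity

@dataclass
class Event:
    """Two-vector event model: a force vector (the action) and a
    result vector (the change it causes in the patient)."""
    agent: str
    patient: str
    force: Vector   # action exerted by the agent
    result: Vector  # effect of the action on the patient

    def describe(self) -> str:
        # Hypothetical mapping from force direction to a verb; a real
        # system would map force/result patterns onto a verb lexicon.
        verb = "pushes" if self.force[0] > 0 else "pulls"
        return f"{self.agent} {verb} {self.patient}"

event = Event(agent="the robot", patient="the box",
              force=(1.0, 0.0), result=(0.5, 0.0))
print(event.describe())  # -> the robot pushes the box
```

In a fuller system the result vector, and not only the force vector, would constrain verb choice, since the same force can have different effects on the patient.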


Lecture
