Context is Knowing Who You Are Talking To

A few years ago I ran across some very interesting research into the origins of language performed by Luc Steels at the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel (see “Synthesising the Origins of Language and Meaning Using Co-Evolution, Self-Organisation and Level Formation”, Luc Steels, July 26, 1996). He basically set up some robots and had them play what he called “language games”. Here’s part of the Abstract:

The paper reports on experiments in which robotic agents and software agents are set up to originate language and meaning. The experiments test the hypothesis that mechanisms for generating complexity commonly found in biosystems, in particular self-organisation, co-evolution, and level formation, also may explain the spontaneous formation, adaptation, and growth in complexity of language.

Keywords: origins of language, origins of meaning, self-organisation, distributed agents, open systems.

1 Introduction

A good way to test a model of a particular phenomenon is to build simulations or artificial systems that exhibit the same or similar phenomena as one tries to model. This methodology can also be applied to the problem of the origins of language and meaning. Concretely, experiments with robotic agents and software agents could be set up to test whether certain hypothesised mechanisms indeed lead to the formation of language and the creation of new meaning.

Interestingly enough, I ran into this work a couple of years after I had written down an early musing about context (see my earlier post “The origin of a context”). When I came across Luc Steels’ research, I was struck by how similarly I had framed the issue. While his experiments were about much more than context, it was certainly encouraging that the results of the experiments he carried out corroborated my naive expressions.

Apparently, a few years later (1999-2000), the Artificial Intelligence Laboratory at the Vrije Universiteit continued the experiment with a much richer experimental and linguistic setup than the original work. The introduction to this further research (apparently funded in part by Sony) even depicts a “robot” conversation very much like the conversation I describe in my post (only with better graphics…).

The basic setup of the experiment was as follows: two computers, two digital cameras, and microphones and speakers that let the software “talk”. Each camera had some sort of pointing mechanism (a laser pointer, I think) and faced a whiteboard on which various shapes of different colors were arrayed randomly. The two software agents took turns pointing their laser pointers at the shapes and generating various sounds. As the game continued, each agent would try to mimic the sounds it heard the other make while pointing at specific objects. Over time, the two agents converged on the same sounds when pointing at the same objects.

In terms of what I consider to be context, these experiments showed that it was possible for two “autonomous agents” to come to an agreement on the “terminology” they would mutually use to refer to the same external perceptions (images of colored objects seen through a digital camera). Once trained, the two agents could “converse” about the objects, even pointing them out to each other and correctly finding the objects the other referred to.
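To make the idea concrete, here is a toy sketch of this kind of language game in Python. It is my own simplification, not the Steels implementation: the object names, the made-up “sounds”, and the adopt-on-failure rule are all assumptions for illustration.

```python
# A minimal sketch of a naming/language game, assuming a simplified setup:
# two agents take turns as speaker and hearer; the speaker names a randomly
# chosen object (inventing a word if it has none), and the hearer adopts the
# speaker's word whenever its own guess does not match.
import random
import string

OBJECTS = ["red_circle", "blue_square", "green_triangle"]  # shapes on the whiteboard

def invent_word():
    """Make up a short random 'sound' for an object."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(4))

class Agent:
    def __init__(self, name):
        self.name = name
        self.lexicon = {}  # object -> preferred word

    def speak(self, obj):
        if obj not in self.lexicon:
            self.lexicon[obj] = invent_word()
        return self.lexicon[obj]

    def hear(self, obj, word):
        """Return True if the word matches this agent's own term; otherwise adopt it."""
        if self.lexicon.get(obj) == word:
            return True
        self.lexicon[obj] = word
        return False

def play(agents, rounds=200):
    """Run repeated speaker/hearer rounds and return the success rate."""
    successes = 0
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        obj = random.choice(OBJECTS)          # "pointing" at a shape
        word = speaker.speak(obj)             # the "sound" the speaker makes
        if hearer.hear(obj, word):
            successes += 1
    return successes / rounds

a, b = Agent("A"), Agent("B")
print("success rate:", play([a, b]))
print("shared lexicon:", a.lexicon == b.lexicon)
```

In this stripped-down version the hearer simply adopts the speaker’s word whenever it guesses wrong, which is enough for two agents to end up with identical lexicons after a few hundred rounds.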

These experiments also showed that if you take software agents that have been trained separately (each with other partner agents) and put them together, they go through a period of renegotiating terms and pronunciations. The robot experiments showed a dramatic, destructive period in which the robots almost start over, generating an entirely new language, but the two agents finally converge again on something they agree on.
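Reusing the toy Agent and play sketch above, the same dynamic can be imitated crudely. In this simplification the agents just overwrite each other’s terms rather than inventing a whole new language, so the dramatic destructive phase is glossed over.

```python
# Continuing the toy sketch above: train two pairs separately, then pair
# agents from different "communities" and let them renegotiate their terms.
a, b = Agent("A"), Agent("B")
c, d = Agent("C"), Agent("D")
play([a, b])                   # A and B settle on one vocabulary
play([c, d])                   # C and D settle on a different one
print("before:", a.lexicon, c.lexicon)
play([a, c])                   # A and C overwrite terms until they agree again
print("after: ", a.lexicon, c.lexicon)
```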

I’m not sure whether the study continued to research context per se. The later study included “mobile” agents and permitted interactions with several agents in a consecutive fashion, which showed the slow “evolution” of the language: a convergence of terminology across a larger group of agents. I suspect that unless the experimenters explicitly looked for context, they may have missed this detail (I’d be interested in finding out).
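For what it’s worth, the toy play() function above already accepts more than two agents, so the larger-group convergence can be sketched the same way (again, purely illustrative):

```python
# Reusing the toy sketch above: a larger population of agents interacting
# pairwise still converges on a shared vocabulary, just more slowly.
population = [Agent(str(i)) for i in range(10)]
print("success rate over 5000 rounds:", play(population, rounds=5000))
```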

What would have been terrific is if each agent had also kept track of WHO it was talking to, as well as what was being talked about. It is that extra piece of information that makes up a context. If an agent were able to learn the terminology of one agent, then learn the terminology of another, it could act as a translator between the two by keeping track of who it was talking with (and switching contexts…). In my view, context is just the recognition of who I’m talking to, and thus the selection of the correct variant of language and terminology for that audience.
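Here is a hedged sketch of what that might look like: an agent that keeps a separate lexicon per conversation partner and can translate a term from one partner’s vocabulary into another’s. The class, method names, and example words are my own illustration, not anything from the Steels experiments.

```python
# A sketch of "context" as I mean it: one lexicon per conversation partner,
# plus a translate step that switches between those contexts.
class ContextualAgent:
    def __init__(self):
        self.contexts = {}  # partner name -> {object -> word}

    def learn(self, partner, obj, word):
        """Remember which word a given partner uses for a given object."""
        self.contexts.setdefault(partner, {})[obj] = word

    def translate(self, word, from_partner, to_partner):
        """Find the object 'word' names for one partner, return the other's word."""
        source = self.contexts.get(from_partner, {})
        target = self.contexts.get(to_partner, {})
        for obj, w in source.items():
            if w == word:
                return target.get(obj)
        return None

t = ContextualAgent()
t.learn("Agent A", "red_circle", "bova")   # hypothetical words for illustration
t.learn("Agent C", "red_circle", "miru")
print(t.translate("bova", "Agent A", "Agent C"))  # -> "miru"
```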

Our human ability to switch contexts so easily comes from our ability to remember how various concepts are described within specific communities. I’ve always said that until a computer can have a conversation with me and then come up with its own data structures to represent the concepts we discuss, I’ll stay employed… Now I’m getting a little worried…
