Just What Is Meaning? A Lay Perspective


The Origin of Symbols, Code and Meaning

Memories are NOT CODED. They are ANALOG recordings, not unlike phonograph records or the photographs of the era before digital cameras. There is some evidence that memories are stored in a manner similar to holograms within the medium of the brain. Memories may include recordings of coded information; this would be how symbols are recognized.

Only when communication between brains is needed does CODE come into play. One brain must create appropriate SYMBOLS which represent the information. These symbols must be physicalized in some manner, because the only input mechanisms available to the other brain are the five senses of the body. Information is packaged and lumped; nuances and unimportant details are necessarily removed; symbols are selected and generated. If the other brain is receptive, the symbols are sensed by the body, evoking the memory centers of the second brain. Communication is completed if the second brain understands the code and “remembers” the meaning in its own analog memory.

The Origin of Language

The brain records sensory inputs as memory. The mind constructs an internal symbol system describing the sensory information in ways the human body can communicate or relate. Details of the input which the mind cannot put a name to may be memorable, but they cannot be communicated. Have you ever experienced something that you were unable to describe to someone else who had not experienced it?

Two people who have experienced the same or similar types of events can have a conversation about it and begin to form a language. Language is a shortcut to memory. It is the human capacity for the invention of vocabulary that sets us apart from other creatures (and from computers). If two people share a new experience, they will be able to talk about it by recognizing the same features in the sensory record and describing it in terms that evoke the same memory in the other person. Eventually, they form a unique vocabulary of shorthand symbolic terms and phrases that permits efficient communication. This is how strangers who meet at 12-Step Meetings are able to express themselves and understand each other.

But if only one of the two has experienced the events, there is no referent memory in the other. Think of the old saw “a picture is worth a thousand words.” Have you ever heard a new piece of music and tried to explain it to someone who hasn’t heard it? It takes a great deal of explanation and is ultimately a failure.

Consider another example: wine connoisseurs.

These people have an intense sensitivity to subtle features of taste and smell, making their experience of wine very rich in information. More importantly, they have been able to attach vocabulary to these differences in unique ways that allow them to communicate with other wine experts. Of course, their success at communicating is predicated on the existence of other individuals with similar talents and experiences. When they try to explain wine to someone without that sensitivity of taste, their words merely confuse, or sound hilariously out of place.

 This is one example of how “context” arises in human communication.

 What does this suggest for our major theme? 

  1. The features that are recognized in the sensory record depend first on the individuals whose senses recorded them.
  2. The features that are chosen for communication depend on the interests and needs of the individuals doing the communicating. Features that at first do not seem to contribute to the remembrance of the experience are often ignored or discounted.
  3. The vocabulary describing and naming these features depends both on the individual who sensed them and on the people to whom they try to explain the sensation. Through trial and error, the person trying to communicate will hit upon terms that find resonance in the audience.

How Community Changes The Artist’s Conception

The Artist and the Standard Interpretation

  • The Artist creates her artwork, with a particular symbolic meaning in mind.
  • The Art Dealer/Gallery Owner tries to explain what the artist had in mind.
  • The Art Critic sees something somewhat different by projecting his own notions on the work.
  • The Art Historian synthesizes what she’s heard and, unwittingly, guesses at some of the original intent.
  • Ultimate truth is the one written by History, so over time, this final interpretation becomes the accepted meaning.


Types of Information Flow

In a previous post a week or so ago, I riffed on an example of communication between two mountain hikers suggested by Barwise and Seligman (authors of a theory of “information flow”). I made the initial distinction between information flowing within a shared context (in the example, this was the context of Morse Code and flashlight signals) and information flowing from observations of physical phenomena.
Both types of information movement are covered by Barwise and Seligman’s theory. I propose a further classification of examples of information flow, one which will become important as we discuss the operations of individuals across and within bridging contexts.

Types of Flows

Symbols are created within a context for various reasons. There is a difference between generic information flow and symbolic communication.
Let’s consider a single event whereby information has flowed and been recognized by a person. There are three possible scenarios which may have occurred.

1. Observation/Perception: the person experiences some physical sensation; the conditions of the perception lead the person, within the context of that perception (and his mental state), to recognize the sensation as significant. In this case, the person recognizes that something has occurred that was important enough to become consciously aware of its occurrence. This is new information, but it is not necessarily symbolic information.

2. Inference/Deduction: a person in the mental state corresponding to a particular context applies a set of “rules of thumb” over a set of observations (likely, but not necessarily exclusively, of the first type). Drawing on logical inference defined by his current context, he draws a conclusion which follows from these observations, generating new information. This is new information in the sense that without the context to define the rules of inference, those particular perceptions would not have resulted in the “knowledge” of the inferred conclusion. They would remain (or dissipate) uninterpreted and unrelated forever.

3. Interpretation/Translation: this is the only type of information flow that happens using exclusively symbolic mechanisms. In this type of flow, the person receiving the flow recognizes not only the physical event but also that the observed phenomenon is symbolic: in other words, that some other person has applied additional meaning to the phenomenon (created a symbol or symbols from the physical medium by attaching an additional concept to it). The perceiving person doesn’t simply register the fact of the physical event; he also recognizes that the physical phenomenon satisfies certain context-driven rules of material selection and construction, indicating that some other person intentionally constructed it. From this knowledge the perceiver concludes, assuming he is familiar with the encoding paradigm of the sender’s context, that there is an intended, additional message (meaning) associated with the event. The perceiving party is said to share the context of the sending party if he is also able to interpret/translate the perceived physical sign and recognize the concepts placed there by the sender. In this scenario, the person receiving the message is NOT creating new information. All of the information in this flow was first realized and generated by the message’s sender. (This will be an important detail later as we apply this trichotomy to the operation of software.)
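Since this trichotomy will later be applied to software, it may help to restate it as a tiny classifier. This is purely my own illustrative encoding (the type names and the two boolean questions are mine, not Barwise and Seligman’s):

```python
from enum import Enum, auto

class FlowType(Enum):
    OBSERVATION = auto()     # type 1: raw perception registered as significant
    INFERENCE = auto()       # type 2: new conclusion drawn via context-supplied rules
    INTERPRETATION = auto()  # type 3: decoding symbols intentionally placed by a sender

def classify_flow(sender_intended: bool, context_rules_applied: bool) -> FlowType:
    """Classify a single recognized event by the two questions the text asks:
    was the phenomenon intentionally constructed by another person, and were
    contextual rules of inference applied to produce something new?"""
    if sender_intended:
        return FlowType.INTERPRETATION  # only this type is exclusively symbolic
    if context_rules_applied:
        return FlowType.INFERENCE       # new information created by the perceiver
    return FlowType.OBSERVATION         # new, but not necessarily symbolic
```

Note the ordering of the tests: a symbolic message may also involve inference, but once the perceiver recognizes a sender’s intent, the flow belongs to the interpretation type.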

In all three types of information flow, as described by Barwise and Seligman, the flow depends on the regularities of the physical world. This regularity requirement runs from the regularity of physical phenomena, through the reliability of the perceptual apparatus of the perceiver, all the way to the consistency of the encoding paradigm defined by the sender’s context.

Peirce’s Modes of Relationship

According to a terrific survey book on semiotics by Daniel Chandler that I’m reading now, Charles Peirce classified signs as symbolic, iconic, or indexical. If I understand Chandler’s summary, the first two types of information flow I’ve described depend at minimum on Peirce’s indexical signs, alternatively called “natural signs,” because these arise from the natural perception of reality independent of context. Both iconic and symbolic signs are recognizable only within a context, making both fall under my “interpretation” type of information flow.

For the most part, I will treat iconic and symbolic signs as the same sort of thing for now.

A Concept is Born: Sense Memory and Name Creation

June 24, 1988

Experience is characterized by memory of sensory information in all its detail. Analysis of this data can be applied retroactively. I can remember that:

“Yes, the sky was grey and windy just prior to the tree falling behind me.”

and thereby come to understand a set of events later, in some other context. This sensory memory aids abstraction and analysis because it acts as the raw material out of which abstractions can be built. Thus it is possible at a later date to reflect on past events and discover related occurrences where before there was only unorganized memory.

Learning of patterns is continuous:

“What was that?”

This question initially gets very simplistic answers when asked by toddlers and children. It takes nearly twenty years for humans to be able to talk about philosophy in a formal way. But as slight variations on simple occurrences of events are experienced, the agent (learner) begins to organize subclasses of the same general event, especially if the social world provides him a useful distinction with which to characterize the subclass. As this happens, the subclass name becomes a synonym for the general idea.

Creative research by the agent (learner) is characterized by the creation of new distinguishing marks and the choosing of a class name for those marks. Communication with others regarding the subclass then becomes a matter of describing those marks, providing the shorthand name, and obtaining agreement from the others that both the marks and the name are apropos.

And thus a concept is born…

Context is Knowing Who You Are Talking To

A few years ago I ran across some very interesting research into the origins of language performed by Luc Steels at the Artificial Intelligence Laboratory of the Vrije Universiteit Brussel (see “Synthesising the Origins of Language and Meaning Using Co-Evolution, Self-Organisation and Level Formation,” Luc Steels, July 26, 1996). He basically set up some robots and had them play what he called “language games.” Here’s part of the abstract:

The paper reports on experiments in which robotic agents and software agents are set up to originate language and meaning. The experiments test the hypothesis that mechanisms for generating complexity commonly found in biosystems, in particular self-organisation, co-evolution, and level formation, also may explain the spontaneous formation, adaptation, and growth in complexity of language.

Keywords: origins of language, origins of meaning, self-organisation, distributed agents, open systems.

1 Introduction. A good way to test a model of a particular phenomenon is to build simulations or artificial systems that exhibit the same or similar phenomena as one tries to model. This methodology can also be applied to the problem of the origins of language and meaning. Concretely, experiments with robotic agents and software agents could be set up to test whether certain hypothesised mechanisms indeed lead to the formation of language and the creation of new meaning.

Interestingly enough, I ran into this work a couple of years after I had written down an early musing about context (see my earlier post “The origin of a context”). When I encountered Luc Steels’s research, I was struck by how similarly I had framed the issue. While his experiments were about much more than context, it was certainly encouraging that their results corroborated my naive expressions.

Apparently, a few years later (1999–2000), the Artificial Intelligence Laboratory at the Vrije Universiteit continued the experiment with a much richer experimental and linguistic setup than the original work. The introduction to this further research (apparently funded in part by Sony) even depicts a “robot” conversation very much like the conversation I describe in my post (only with better graphics…).

The basic setup of the experiment was as follows: two computers, two digital cameras, and microphones and speakers permitting the software to “talk.” The cameras had some sort of pointing mechanism (laser pointers, I think) and faced a whiteboard on which shapes of various colors were arrayed randomly. The two software agents took turns pointing their lasers at the shapes and generating various sounds. As the game continued, each agent would try to mimic the sounds it heard the other make while pointing at specific objects. Over time, the two agents were able to replicate the sounds when pointing at the same objects.
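The language game described above can be caricatured in a few lines of code. This is only an illustrative sketch under my own simplifying assumptions (two agents, instant adoption of the speaker’s word, invented syllables standing in for the robots’ sounds), not Steels’s actual model:

```python
import random

def naming_game(objects, rounds=2000, seed=42):
    """Minimal sketch of a Steels-style naming game between two agents.

    Each agent keeps a lexicon mapping object -> word. Each round, a randomly
    chosen speaker "points" at an object and utters its word for it (inventing
    one if it has none); the hearer adopts the speaker's word. Over many
    rounds the two lexicons converge on a shared vocabulary."""
    rng = random.Random(seed)
    lexicons = [{}, {}]  # one lexicon per agent

    def invent_word():
        # random consonant-vowel syllables, standing in for the robots' sounds
        return "".join(rng.choice("bdgklmnprst") + rng.choice("aeiou")
                       for _ in range(2))

    for _ in range(rounds):
        speaker, hearer = rng.sample([0, 1], 2)
        obj = rng.choice(objects)
        word = lexicons[speaker].setdefault(obj, invent_word())
        lexicons[hearer][obj] = word  # hearer aligns with the speaker
    return lexicons

shapes = ["red-circle", "blue-square", "green-triangle"]
a, b = naming_game(shapes)
# the two agents end up with identical names for every shape discussed
```

In the real experiments words compete and are reinforced or inhibited over time, so convergence is gradual rather than immediate, but the end state is the same: a shared vocabulary negotiated without any central dictionary.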

In terms of what I consider to be context, these experiments showed that it was possible for two “autonomous agents” to come to agreement on the “terminology” they would mutually use to refer to the same external perceptions (images of colored objects seen through a digital camera). Once trained, the two agents could “converse” about the objects, even pointing them out to each other and correctly finding the objects referred to when mentioned.

These experiments also showed that if you take software agents who have been trained separately (with other partner agents) and put them together, they will go through a period of renegotiation of terms and pronunciations. The robot experiments show a dramatic, destructive period in which the robots almost start over, generating an entirely new language, but finally the two agents again converge on something they agree on.

I’m not sure whether the study continued to research context per se. The later study included “mobile” agents and permitted consecutive interactions with several agents. This showed the slow “evolution” of the language (a convergence of terminology) among a larger group of agents. I suspect that, unless the experimenters explicitly looked for it, they may have missed this detail (I’d be interested in finding out).

What would have been terrific is if each agent had kept track of WHO it was talking to as well as what was being talked about. It is that extra piece of information which makes up a context. If an agent were able to learn the terminology of one agent and then learn the language of another, it could act as a translator between the two by keeping track of whom it was talking with (and switching contexts…). In my view, context is just the recognition of whom I’m talking to, and thus the selection of the correct variant of language and terminology to adapt to that audience.
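That translator idea can be sketched directly. The names here (“robot_A,” “bogu,” and so on) are hypothetical, chosen only for illustration: an agent that keys its vocabulary by interlocutor can mediate between two partners who use different terms for the same concept.

```python
class ContextualAgent:
    """Sketch of an agent whose "context" is knowing whom it talks to:
    it keeps a separate lexicon per conversation partner and switches
    lexicons when the partner changes."""

    def __init__(self):
        self.lexicons = {}  # partner -> {concept: term}

    def learn(self, partner, concept, term):
        """Record that this partner uses this term for this concept."""
        self.lexicons.setdefault(partner, {})[concept] = term

    def say(self, partner, concept):
        """Choose the term this partner will understand, if any."""
        return self.lexicons.get(partner, {}).get(concept)

    def translate(self, term, from_partner, to_partner):
        """Hear a term in one partner's vocabulary, recover the concept,
        and re-express it in the other partner's vocabulary."""
        for concept, t in self.lexicons.get(from_partner, {}).items():
            if t == term:
                return self.say(to_partner, concept)
        return None

translator = ContextualAgent()
translator.learn("robot_A", "red-circle", "bogu")
translator.learn("robot_B", "red-circle", "mipa")
# translator.translate("bogu", "robot_A", "robot_B") yields "mipa"
```

The whole trick is the outer dictionary key: drop the partner key and the agent collapses back into a single-context speaker, which is roughly where the published experiments stopped.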

The human ability to switch contexts so easily is due to our ability to remember how various concepts are described within specific communities. I’ve always said that until a computer can have a conversation with me and then come up with its own data structures to represent the concepts we discuss, I’ll be employed… Now I’m getting a little worried…

The Origin of a Context

On this blog and in the writings of many other people through history, the idea of “context” as a component of the definition, interpretation, and usage of symbols plays a large role. Be it called “situational” or “cultural” or any of a number of sometimes more, sometimes less academic notions, context provides the key (just as a cipher’s key unlocks an encrypted message) to interpreting any message. Without knowing the context, many messages will be uninterpretable, or even worse, unrecognizable.

But what is “context,” really? Where does it come from? Here is my decidedly informal description.


Two people thinking their own thoughts meet for the first time

A conversation starts and one tells a story.

The other listens, interpreting silently what she hears into her own experiences.

She then responds, reflecting what she thought she heard, but with a variation or two. 

Conversation Begins

The first person agrees with some of her response. He hadn’t at first thought of the variation, but now that she’s mentioned it, he knows she’s on to something.


Conversation Ends, Context Begins

The two part company, carrying a memory of their conversation.

When they meet again, they will reinforce and reiterate their common perceptions of the matter. This is the origin point of CONTEXT: the set of principles and concepts the two agree about, and the shared vocabulary they have used to describe them.

Good Summary on How Engineers Define Symbols

An interesting summary of how software engineers are constrained to develop data structures based on their locality appears in a comment by “katelinkins” at this blog discussing a book about how “information is used.” I think, however, it ends on a note of wishful thinking, suggesting that engineers don’t really

…KNOW and UNDERSTAND the code…

and implying that additional effort by them will permit

validating the representations upfront to aid in development of common taxonomy and shared context

I wasn’t sure whether the comment was suggesting that only software engineers “continually fall short” in this effort, or suggesting a greater human failing.

While software developers can be an arrogant lot (a description of “information arrogance” appeared earlier in that discussion stream, and we can definitely fall into that trap, as can anyone), it is not always arrogance that causes our designs not to fit everyone’s expectations exactly.

Software developers do define symbols based on their regional context. But it gets even more constrained than that: they must define the “symbology” based on what they know at a particular point in time, and from a very small circle of sources, even if the software is intended for broad usage.

The fundamental problem is that there is ALWAYS another point of view. The thing that I find endlessly fascinating, actually, is that even though a piece of software was written for one particular business context (no matter how broad or constrained that is), someday, somewhere, a different group of users will figure out how to use the system in an entirely different context.

So, for example, a software application written for the US market that gets sold overseas and is used productively anyway, even if not completely or in the same fashion, is a tremendous success in my mind. This is how an application such as SAP (the product of German software development) has had such great success (if not such great love) worldwide!

I don’t believe there is such a thing as a “universal ontology” for any subject matter. In this I think I’m in agreement with some of the other posts in this discussion thread, since the same problem arises in organizing library indexes for the various types of “information seeker” in any search. While having different sets of symbols and conceptions among a diverse set of communicating humans can muddy the space of our discourse, we at least have the capacity to compartmentalize these divergent views and switch between them at will. We can even become expert at switching contexts and mediating between people from different contexts.

One of the big problems with software is that it has to take what can be a set of fuzzy ideas, formalize them into a cohesive pattern of structure and logic that satisfies a certain level of rigor, and then “fix in cement” these ideas in the form of bug-free code. The end result is software that had to choose among variations and nuances which the original conceptions may never have tried to resolve. Software generally won’t work at all, at least in the most interesting parts of an ontology, if there is a divergence of conception within the body of intended users.

So in order to build anything at all, the developer is forced to close the discussion at some point and try to get as much right as is useful, even while recognizing that variations are left unhandled. Even in a mature system, where many of these semantic kinks have been worked out through ongoing negotiation with a particular user community, the software can never be flexible enough to accommodate every semantic variation which presents itself over time without being revised or rewritten.

In the software development space, this fundamental tension between getting some part of the ontology working and getting all points of view universally right in a timely fashion has been one of the driving forces behind all sorts of paradigm shifts in best practices and architectures. Until software can have its own conversation with a human and negotiate its own design, I don’t see how this fundamental condition will change.
