You Can’t Store Meaning In Software

I’ve had some recent conversations at work which made me realize I needed to make some of the implications of my other posts more obvious and explicit. In this case, while I posted a while ago about How Meaning Attaches to Data Structures, I never really carried the conversation forward.

Here is the basic, fundamental mistake that we software developers (and others) make in talking about our own software. Namely, we start thinking that our data structures and programs actually and directly hold the meaning we intend. That is, if we do things right, our data structures, be they tables with rows and columns or POJOs (Plain Old Java Objects) in a Domain layer, just naturally and explicitly contain the meaning.

The problem is that whatever symbols we create in the computer, the computer can only hold structure. Our programs are only manipulating addresses in memory (or on disk) and only comparing sequences of bits (themselves just voltages on wires). Through the programming process, we developers create extremely sophisticated manipulations of these bits, constantly translating one sequence of bits into another in some regular, predictable way. This includes pushing our in-memory patterns onto storage media (typically constructing a different pattern of bits) and pushing our in-memory patterns onto video screens in forms directly interpretable by trained human users (such as displaying ASCII numbers as characters in an alphabet, forming words in a language which can be read).

This is all very powerful, and useful, but it works only because we humans have projected meaning onto the bit patterns and processes. We have written the code so that our bit symbol representing a “1” can be added to another bit symbol “1” and the program will produce a new bit symbol that we, by convention, will say represents a value of “2”.

The software doesn’t know what any of this means. We could have just as easily defined the meaning of the same signs and processing logic in some other way (perhaps, for instance, to indicate that we have received signals from two different origins, maybe to trigger other processing).
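To make this concrete, here is a small sketch (in Python, purely illustrative) of the same 32-bit pattern read under three different conventions. Nothing in the bits themselves selects the "correct" reading; each interpretation is a convention we bring to the same physical signs.

```python
import struct

# One fixed 32-bit pattern.
bits = 0x41424344

as_int = bits                                 # read as an integer: 1094861636
as_bytes = bits.to_bytes(4, byteorder="big")  # read as four raw bytes
as_text = as_bytes.decode("ascii")            # read as ASCII text: "ABCD"
as_float = struct.unpack(">f", as_bytes)[0]   # read as an IEEE-754 float (~12.14)

# The bit pattern never changed; only our projected meaning did.
print(as_int, as_text, as_float)
```

The software performs each conversion flawlessly, but it has no basis for saying whether those bits "are" a number, a word, or a measurement; that choice lives entirely with us.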

Why This Is Important

The comment was made to me that “if we can just get the conceptual model right, then the programming should be correct.” I won’t go into the conversation more deeply, but it led me to thinking about how to explain why that was not the best idea.

Here is my first attempt.

No matter how good a conceptual model you create, how complete, how general, how accurate to a domain, there is no way to put it into the computer. The only convention we have as programmers, when we want to project meaning into software, is to define physical signs and processes that manipulate them in a way consistent with the meaning we intend.

This is true whether we manifest our conceptual model in a data model, or an object model, or a Semantic Web ontology, or a rules framework, or a set of tabs on an Excel file, or an XML schema, or … The point is that the computer can only store the sign portion of our symbols, never the concept. So if you intend to create a conceptual model of a domain, and have it inform and/or direct the operation of your software, you are basically just writing more signs and processes.

Now if you want some flexibility, there are many frameworks you can use to create a symbolic “model” of a “conceptual model” and then tie your actual solution to this other layer of software. But in the most basic, reductionist sense, all you’ve done is write more software manipulating one set of signs in a manner that permits them to be interpreted as representing a second set of signs, which themselves have meaning only in the human interpretation.
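As an illustration, here is a minimal, hypothetical sketch of such a layered “model of a model” (the concept names and structure are invented for this example). The “ontology” below is itself just more data structures, more signs, and the code that walks it is just more sign manipulation:

```python
# A toy "ontology" layer: concepts and is_a links, stored as plain data.
# (These names are illustrative, not from any real framework.)
ontology = {
    "Customer": {"is_a": "Party", "properties": ["name", "address"]},
    "Party":    {"is_a": None,    "properties": ["id"]},
}

def properties_of(concept: str) -> list[str]:
    """Collect properties up the is_a chain -- pure symbol shuffling."""
    props = []
    while concept is not None:
        node = ontology[concept]
        props.extend(node["properties"])
        concept = node["is_a"]
    return props

# The program "knows" that a Customer inherits id from Party only in the
# sense that it moves strings around in a way we interpret as inheritance.
print(properties_of("Customer"))  # ['name', 'address', 'id']
```

Whether `"Customer"` designates a paying person, a database row, or nothing at all is decided entirely outside the program, in the heads of the people who read and write it.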

The Syntactics of Speech: What a Language Permits You to Say Is Less Than What You Know

I found this article intensely interesting. It corroborates and validates some of my own ideas about how language and symbols are used in communication. Namely, it suggests that even though a language may not contain structures and syntactic rules allowing for the precise designation of a concept, that does not mean that such a concept cannot be communicated to and understood by someone who uses that language. It may just take a lot more time to convey the thought. It may also be difficult to confirm the listener’s understanding, because the only language they have available to respond in is the same one as the original message (which, we said, could not directly convey the meaning).

NY Times article

Context As Observer

Consider a context as a reflection of one point of view: a frame or lens through which the external environment is observed. The “things” that “matter” to the context are the events or features which are both:

  • VISIBLE – or otherwise perceptible, and
  • NAMEABLE – or describable/categorizable

If something is imperceptible, then obviously there will be nothing to notice – no “referent”. For this purpose, imagined perceptions count as “perceptible”. If the thing which could be perceived is not nameable or otherwise describable within the context, then the context has not noticed it, and for the context it does not exist.

That is to say, that a reality exists independent of any particular context, but in terms of the point of view of the context, that which the context has no expression for lies outside of the context. If context is the perceiver, then the indescribable reality outside of the context may as well not exist, for all the benefit the context gains from it.

Every context that exists is limited to the perception of only a subset of reality. Is there a limit to the perception of reality if we take into account the sum total of all contexts in existence today, and all those which existed in the past? Yes; otherwise one would expect that invention and discovery would cease.

Context is a feature of communication. It is not reality, which is the referent of the communication.

An example comes to mind from the physical world. One context may be the one in which the speed of a particle is important. Another may be the one in which the position of the particle in space and time is important. Then there is the context of Quantum Mechanics, which first recognized that there were two other contexts (although it did not call them this) and that one interferes with the other. In QM, due to the known limitations of the physical world and of our ability to perceive it at a particular level, these two contexts can never observe the same exact phenomenon. An observer in one context who observes one aspect of the particle necessarily changes the condition of the particle so that the other condition is no longer perceptible.

This seems really trivial, until we broaden the idea out to more complex contexts. The world is an analog, continuous place. Even the most complex context however can only perceive and name certain aspects, and is unaware of or finds inexpressible other aspects.

This is the place where poets and artists find creative expression and energy, between the lines of the necessarily constrained contexts of their own ability to communicate.

Out of the whole continuity of experience and phenomena which is the world about us, we are selective about the things we notice and think and speak about. Why one observation is made instead of another is based wholly on the things we find “remarkable”.

We remark on the things that are remarkable to us. By this I mean, the things we wish to convey or communicate are the things we find words to express. This “finding of words” includes inventing words and turns of phrase. After all, we each bring to the human table a uniqueness of vision commensurate to our talents, proclivities and experience.

Those to whom we successfully impart our observations, through the act of their understanding the message, enter into the context of discourse of those observations. Once in that context, they may corroborate or elaborate on the original observations, broadening and enriching the context. Over time our collective observations become codified and regular, our terminology more richly evocative and concise, such that we may begin to speak in a shorthand.

Where a paragraph once was needed, now a sentence – where once a sentence now a single term…

As we start recognizing more and more examples of a phenomenon, we invent a sublanguage which, when used within the context (and with the proper participants – see the definition of context – i.e., with other people who share this context), is perfectly understandable.

An extreme example of differences in contexts would be the contrast of elementary school arithmetic versus obscure branches of mathematics research. The concepts which matter in the one are inconceivable in the other; the notation and terminology of the one are indecipherable in the other.

Consider the origin and usage of the term “Ponzi scheme”. The original scheme of the type was perpetrated by a man named Ponzi. Anyone who has operated a similar scheme since can now be referred to using the name of that one notorious example. In recent years, the largest Ponzi scheme ever perpetrated was the brainchild of Bernie Madoff. Time will tell whether future outrageously immense Ponzi schemes will be given a new moniker.

We might ask: in what sense do we say that a “context” is an “observer”? There are a few ways we can use this analogy. First, a context is the product of communication among individual humans. It is the participation in the communication, in sending and receiving messages, that creates the scope of the communication. What is communicated is the shared observations of the participant community.

Context Is:

Communication == Community == Communication

Information transfer among a group of individuals who share a common interest.

The language used is necessarily constrained, at first informally but later perhaps more rigidly as communication becomes more focused. Difficult observations require lots of talk. Once the idea has been grasped, however, less and less is needed to evoke the memory of the original idea, until a single term from the original description can be used as a stand-in.

It is not the abstract notion of a context that actually does the observing. Rather it is the community members themselves, the humans, who do the observing. The subject of communication is necessarily the things of interest to the community. But an individual who observes something is not necessarily participating in the context. Only the observations that are shared and received are part of the context.

There is a second sense in which the context can be described as the observer, at an abstract level. While the context is formed from the collective interests and communication of the group of humans, eventually the context becomes prescriptive. The extent and content of the shared sublanguage then define the type and content of the observations that can be made by the members of the context. An observation that falls outside of the context’s prescriptive rules for content and structure is likely not to be understood (received). If it is not received, it may as well not have happened; hence such messages fall out of context. The more constrained and formalized the context, the more explicit and succinct the observations that can be carried by that context, but also the narrower the variety of observations.

Successful study of the constraints and observations within a context occurs in much of the “social sciences”. Much can be deduced about what is important within a community by analysing the rules and limits of the communication that community’s context permits. In particular, a sense of the portion of existence important to the context can be deduced from the study of the observations communicated within that context.

Good Summary on How Engineers Define Symbols

An interesting summary of how software engineers are constrained to develop data structures based on their locality is presented in a comment by “katelinkins” at this blog discussing a book about how “information is used”. I think, however, it ends on a note of wishful thinking, in suggesting that engineers don’t really

…KNOW and UNDERSTAND the code…

and implying that additional effort by them will permit

validating the representations upfront to aid in development of common taxonomy and shared context

I wasn’t sure whether the comment was suggesting that only software engineers “continually fall short” in this effort, or if she was suggesting a greater human failing.

While software developers can be an arrogant lot (I saw a description of “information arrogance” earlier in this discussion stream, and we can definitely fall into that trap, as anyone else can), it is not always arrogance that causes our designs to fall short of everyone’s expectations.

Software developers do define symbols based on their regional context. But it gets even more constrained than that, because they must define the “symbology” based on what they know at a particular point in time and from a very small circle of sources, even if the software is intended for broad usage.

The fundamental problem is that there is ALWAYS another point of view. The thing that I find endlessly fascinating, actually, is that even though a piece of software was written for one particular business context (no matter how broad or constrained that is), someday, somewhere, a different group of users will figure out how to use the system in an entirely different context.

So, for example, a software application written for the US market that gets sold overseas and is used productively anyway, even if not completely or in the same fashion, is a tremendous success in my mind. This is how applications such as SAP (the product of German software development) have had such great success (if not such great love) worldwide!

I don’t believe there is such a thing as a “universal ontology” for any subject matter. In this I think I’m in agreement with some of the other posts on this discussion thread, since the same problem arises in organizing library indexes for various types of “information seeker” in any search. While having different sets of symbols and conceptions among a diverse set of communicating humans can muddy the space of our discourse, we at least have the capacity to compartmentalize these divergent views and switch between them at will. We can even become expert at switching contexts and mediating between people from different contexts.

One of the big problems with software is that it has to take what can be a set of fuzzy ideas, formalize them into a cohesive pattern of structure and logic that satisfies a certain level of rigor, and then “fix in cement” these ideas in the form of bug-free code. The end result is software that had to choose among variations and nuances which the original conceptions may never have tried to resolve. Software generally won’t work at all, at least in the most interesting parts of an ontology, if there is a divergence of conception within the body of intended users.

So in order to build anything at all, the developer is forced to close the discussion at some point and try their best to get as much right as is useful, even while they recognize there are variations left unhandled. Even in a mature system, where many of these semantic kinks have been worked out through ongoing negotiations with a particular user community, the software can never be flexible enough to accommodate all manner of semantic variation which presents itself over time without being revised or rewritten.

In the software development space, this fundamental tension between getting some part of the ontology working and getting all points of view universally right in a timely fashion has been one of the driving forces behind all sorts of paradigm shifts in best practices and architectures. Until computer software can have its own conversation with a human and negotiate its own design, I don’t see how this fundamental condition will change.
