Functions On Symbols

Data integration is a complex problem with many facets. From a semiotic point of view, a great deal of human cognitive and communicative processing capability is involved in resolving it. This post enters the discussion at a point where a number of necessary terms and concepts have not yet been described on this site. Stay tuned, as I will begin to flesh out these related ideas.

You may also find one of my permanent pages on functions to be helpful.

A Symbol Is Constructed

Recall that we are building tautologies showing equivalence of symbols. Recall that symbols are made up of both signs and concepts.

If we consider a symbol as an OBJECT, we can diagram it using a Unified Modeling Language (UML) notation. Here is a UML Class diagram of the “Symbol” class.

UML Diagram of the "Symbol" Object

The figure above depicts how a symbol is constructed from both a set of “signs” and a set of “concepts”. The sign is the arrangement of physical properties and/or objects following an “encoding paradigm” defined by the members of a context. The “concept” is the meaning which that same set of people (the context) has projected onto the symbol. When meaning is projected onto a physical sign, a symbol is constructed.
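To make the figure concrete, here is a minimal sketch of the same structure in Python. The class and attribute names are my own illustration, not taken from the original UML diagram:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    """The physical half of a symbol: an arrangement of physical properties
    or objects, encoded according to a convention agreed upon by a context."""
    medium: str      # e.g. "keyboard circuit", "magnetic platter", "screen"
    encoding: str    # the "encoding paradigm" defined by the context
    value: bytes     # the concrete physical arrangement, abstracted here

@dataclass(frozen=True)
class Concept:
    """The meaning a set of people (a context) projects onto a sign."""
    context: str     # the community that holds the meaning
    meaning: str     # an informal statement of the projected meaning

@dataclass(frozen=True)
class Symbol:
    """A symbol exists only when a concept has been projected onto a sign."""
    sign: Sign
    concept: Concept
```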

Functions Impact Both Structure and Meaning

Symbols within running software are constructed from physical arrangements of electronic components and the electrical and magnetic (and optical) properties of physical matter at various locations (this will be explained in more depth later). The particular arrangement and convention of construction of the sign portion of the symbol defines the syntactic media of the symbol.

Within a context, especially within the software used by that context, the same concept may be projected onto many different symbols of different physical media. To understand what happens, let’s follow an example. Let’s begin with a computer user who wants to create a symbol within a particular piece of software.

Using a mechanical device, the human user selects a button representing the desired symbol and presses it. This event is recognized by the device, which generates a new instance of the symbol in its own syntactic medium: a pulse of current on a closed electrical circuit on a particular wire. When the symbol is placed in long-term storage, it may appear as a particular arrangement of microscopic magnetic fields of various polarities at a particular location on a semi-metallic substrate. When the symbol is in the computer’s memory, it may appear as a set of voltages on various microscopic wires. Finally, when the symbol is projected onto the computer monitor for human presentation, it forms a pattern of phosphorescence against a contrasting background, allowing the user to perceive it visually.

Note that throughout the last paragraph, I did not mention anything about what the symbol means! The question arises: in this sequence of events, how does the meaning of the symbol get carried from the human, through all of the various physical representations within the computer, and then back out to the human again?

First of all, let’s be clear that at any particular moment, the symbol that the human user wanted to create through his actions actually becomes several symbols – one symbol for each different syntactic representation (syntactic medium) required for it to exist in each of the environments described. Some of these symbols have very short lives, while others have longer lives.

So the meaning projected onto the computer’s keyboard by the human:

  • becomes a symbol in the keyboard,
  • is then transformed into a different symbol in the running hardware and operating system,
  • is transformed into a symbol for storage on the computer’s hard drive, and
  • is also transformed into an image which the human perceives as the shape of the symbol he selected on the keyboard.

But the symbol is not actually “transforming” in the computer, at least in the conventional notion of a thing changing morphology. Instead, the primary operation of the computer is to create a series of new symbols in each of the required syntactic media described, and to discard each of the old symbols in turn.

It does this trick by applying various “functions” to the symbols. These functions may affect the structure (syntactic medium) of the symbol, and possibly also the meaning itself. Most of the time, as the symbol is copied and transferred from one form to another, the meaning does not change. Most of the functions built into the hardware making up the “human-computer interface” (HCI) are “identity” functions, transferring the originally projected concept from one syntactic medium to another. If this were not so, and the symbol printed on the key I press were not the symbol I see on the screen after the computer has “transformed” it from keyboard to wire to hard drive to wire to monitor screen, then I would conclude that the computer was broken or faulty, and I would cease to use it.

Sometimes, it is necessary or desirable that the computer apply a function (or a set of functions called a “derivation”) which actually alters the meaning of one symbol (concept), creating a new symbol with a different meaning (and possibly a different structure, too).
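Continuing the illustrative Symbol sketch from above (again, the function names are hypothetical and not a description of any particular system), an identity function re-expresses the sign in a new syntactic medium while carrying the concept across unchanged, whereas a derivation yields a symbol with a different meaning:

```python
from typing import Optional

def identity_transfer(symbol: Symbol, medium: str, encoding: str,
                      value: bytes) -> Symbol:
    """Re-create the symbol in a different syntactic medium.
    The sign changes; the projected concept is carried over unchanged."""
    return Symbol(sign=Sign(medium, encoding, value), concept=symbol.concept)

def derive(symbol: Symbol, new_meaning: str,
           new_sign: Optional[Sign] = None) -> Symbol:
    """One step of a derivation: produce a new symbol whose meaning differs
    from the original's (and whose structure may differ as well)."""
    sign = new_sign if new_sign is not None else symbol.sign
    return Symbol(sign=sign,
                  concept=Concept(symbol.concept.context, new_meaning))
```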

Bridge Contexts: Meaning in the Edgeless Boundary

Previously, I’ve written about the idea of the “edgeless boundary” between semiospheres for someone with knowledge of more than one context. This boundary is “edgeless” because to the person perceiving it, there is little or no obvious boundary.

In software systems, especially in situations where different software applications are in use, the boundary between them, by contrast, can be quite stark and apparent. I’ll describe the reasons for this in other postings at a later time. The nutshell explanation is that each software system must be constrained to a well-defined subset of concepts in order to operate consistently. The subset of reality about which a particular application system can capture data (symbols) is limited by design to those regularly observable conditions and events that are of importance to the performance of some business function.

Often (in an ideal scenario), an organization will select only one application to support one set of business functions at a time. A portfolio of applications will thus be constructed through the acquisition/development of different applications for different sets of business functions. As mentioned elsewhere on this site, sometimes an organization will have acquired more than one application of a particular type (see ERP page). 

In any case, information contained in one application oftentimes needs to be replicated into another application within the organization.  When this happens, regardless of the method by which the information is moved from one application to another, a special kind of context must be created/defined in order for the information to flow. This context is called a “bridging context” or simply a “bridge context”.

As described previously, an application system represents a mechanized perception of reality. If we anthropomorphize the application, briefly, we might say that the application forms a semiosphere consisting of the meaning projected onto its syntactic media by the human developers and its current user community, forming symbols (data) which carry the specifically intended meaning of the context.

Two applications, therefore, would present two different semiospheres. The communication of information from one semiosphere to the other occurs when the symbols of one application are deconstructed and transformed into the symbols of the other application, with or without commensurate changes in meaning. This transformation may be effected by human intervention (as through, for example, the interpretation of outputs from one system and the re-coding/data entry into the other), or by automated transformation processes of any type (i.e., other software).
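As a concrete, if contrived, sketch of such a transformation (the field names and code values are invented for illustration), suppose application A records a customer's status as "A"/"I" while application B expects "ACTIVE"/"INACTIVE". The bridging context holds exactly the knowledge needed to carry that meaning across:

```python
# Hypothetical mapping known only within the bridging context.
STATUS_MAP = {"A": "ACTIVE", "I": "INACTIVE"}

def bridge_customer(record_a: dict) -> dict:
    """Deconstruct a symbol from system A and re-express it in system B's
    syntactic form. The concept (the customer's status) is preserved even
    though the sign (field name and code value) changes."""
    return {
        "customer_id": record_a["cust_no"],         # renamed in transit
        "status": STATUS_MAP[record_a["stat_cd"]],  # re-encoded for system B
    }

print(bridge_customer({"cust_no": 1001, "stat_cd": "A"}))
# -> {'customer_id': 1001, 'status': 'ACTIVE'}
```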

“Meaning” in a Bridging Context

Bridging Contexts have unique features among the genus of contexts overall. They exist primarily to facilitate the movement of information from one context to another. The meaning contained within any Bridging Context is limited to that of the information passing across the bridge. Some of the concepts and facts of the original contexts will be interpretable (and hence will have meaning) within the bridging context only if they are used or transformed during this flow.  Additional information may exist within the bridge context, but will generally be limited to information required to perform or manage the process of transformation.

Hence, I would consider that knowledge held or communicated by an individual (or system) operating within a bridging context, but which is otherwise unrelated to either of the original contexts or to the process of transference, would exist outside of the bridging context, possibly in a third context. As described previously, the individual may or may not perceive the separation of knowledge in this manner.

Special symbols called “travellers” may flow through untouched by transformation and unrecognized within the bridging context. These symbols represent information important in the origin context which may be returned unmodified to the origin context by additional processes. During the course of their trip across the bridging context(s) and through the target context, travellers typically will have no interpretation, and will simply be passed along in an unmodified syntactic form until returned to their origin, where they can then be interpreted again. By this definition, a traveller is a symbol that flows across a bridge context but which only has meaning in the originating context.
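A small, hypothetical sketch of a traveller (field names invented for illustration): the bridge translates the fields it understands and copies the origin system's opaque reference verbatim, never interpreting it, so a later flow can hand it back to the origin, where it regains its meaning:

```python
def bridge_order(record_a: dict) -> dict:
    """Translate the fields meaningful to the target context, and pass the
    traveller 'order_ref' through untouched; it has no interpretation here."""
    return {
        "quantity": int(record_a["qty"]),    # translated: meaningful to target
        "order_ref": record_a["order_ref"],  # traveller: opaque outside system A
    }

print(bridge_order({"qty": "3", "order_ref": "A-77f0c2"}))
# -> {'quantity': 3, 'order_ref': 'A-77f0c2'}
```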

Given a path P from context A to context B, the subset of concepts of A that are required to fulfill the information flow over path P are meaningful within the bridging context surrounding P. Likewise, the subset of concepts of B which are evoked or generated by the information flowing through path P, is also part of the content of the bridge context.  Finally, the path P may generate or use information in the course of events which are neither part of context A nor B. This information is also contained within the bridge context.

Bridge contexts may contain more than one path, and paths may transfer meaning in any direction between the bridged contexts. For that matter, it is possible that any particular bridging context may connect more than two other contexts (for example, when an automated system called an “Operational Data Store” is constructed, or when a messaging interface, such as those underlying Service-Oriented Architecture (SOA) components, is built).

An application system itself can represent a special case of a bridging context. An application system marries the context defined by the data modeller to the context defined by the user interface designer. This is almost a trivial distinction, as the two are generally so closely linked that their divergence should not be considered a sign of separate contexts. In this usage, an application user interface can be thought of as existing in the end user’s context, and the application itself acts to bridge that end user context to the context defining the database.

Indicators of Semiospheric Boundary

I was reading an entry at Sentence First on the surprise the author experienced when the word “razzed” was used in the body of a book on semantics, and drew a connection between the reported experience of the blog’s author upon finding the term and the notion of “semiospheres” described by Yuri Lotman.

Lotman describes the following thoughts in his paper “On the Semiosphere”, (Sign Systems Studies 33.1, 2005):

The division between the core and the periphery is a law of the internal organisation of the semiosphere. The dominant semiotic systems are located at the core.

This suggests to me that one would expect the most frequent usage of terms important to a semiosphere to occur in the dominant core. Since a semiosphere is a continuum all the way to its edges (sharing an edge with another semiosphere), these symbols from the core should eventually and occasionally appear at the edges.

Lotman also said:

The border of semiotic space is the most important functional and structural position, giving substance to its semiotic mechanism. The border is a bilingual mechanism, translating external communications into the internal language of the semiosphere and vice versa.

When the semiosphere identifies itself with the assimilated “cultural” space, and the world which is external to itself … then the spatial distribution of semiotic forms takes the following shape in a variety of cases: a person who, by virtue of particular talent … or type of employment … belongs to two worlds, operates as a kind of interpreter, settling in the territorial periphery … whilst the sanctuary of “culture” confines itself to the deified world situated at the centre.

The author of the post at Sentence First described himself as residing at the “elbow of Europe”, and his blog has as a catch phrase “An Irishman’s blog about the English language. Mostly.” The book he was reading was written in the 1930s by an American for an American lay audience, the subject being the esoterica of semiotics.

What I found ironic/interesting is how these two semiotic events (the book and the blog post) illustrate Lotman’s points so nicely.

The book on semiotics was written by one of these special people with connections to two different worlds of semiosis (one of semiotic academia, and the other of the larger American culture of his time) and is a purposeful attempt to relate the esoteric concepts of the one in the vernacular of the other. From this, and to the point raised by the Sentence First author, I judge the use of the term “razzed” to be a great example of the book’s attempt to reach that core audience.

The second semiotic act provides a narrative of the experience of someone sitting on the boundary between two semiospheres. While I hesitate to try to label and define this second pair of semiospheres, I think the fact that the author had to pause when he encountered the term in the context of the book shows this boundary clearly.

What I liked about the posting was that, while Lotman’s discussion is a good illustration of the semiotic landscape, and how semiotic boundaries tend to operate, Sentence First’s posting tells the story of the experience of someone actually sitting on and experiencing one of those boundaries.

Putting these two things together shows, I think, one more thing as well. As an individual person, through my life experience, I gather and collect “semiotic technique” – in other words, I learn how to communicate – in many different contexts. Being human, I tend to distill, fuse, combine, contrast, and ultimately integrate all of these semiotic capabilities into my own personal arsenal of communication. Thus armed, I am able to translate one semiotic act from one context into any of a thousand others. It is my human condition to do so.

Thus while Lotman seems to imply that a special sort of person is needed to act at the semiotic boundary, I think that it takes no special talent in particular to do so. Rather, I think it merely depends on finding a person who is immersed in both contexts and who is willing to translate between them.
