Overlapping Context and Fuzzy Edges

Parent-Child Context Relationships: Intersection/Union

3/1/2005

The following figures depict some notional ideas for how to graphically describe some of the interesting relationships among contexts as they occur in a large, formal organization. The idea occurred to me that there must be some way of describing the similarities and differences in the concepts and discourse of the various subgroups of an organization (any organization). In the diagram, each oval represents a defined organizational group established by the business to allocate and accomplish all of the work necessary for the business to function. Each oval within another oval represents a specific group of individuals working in that business, until we reach the largest oval representing all employees in all groups. Even this largest oval exists in a larger context, that of the culture at large.

The discussion which follows touches on some incomplete ideas about how the concepts, signs and symbols within a given context relate to those of both smaller child and larger parent contexts.

Graphical depiction of Parent Child Contexts

Above: A Bird's Eye View of Nested Contexts; Below: Cross Section View of Nested Contexts

“Inheritance” of concepts flows down from the broadest context to the narrowest. This is not like the inheritance of properties in an object-oriented paradigm, so the term may need to be changed. The idea is really that, in the absence of an explicit statement of a concept in a lower-level context, the members of the community may defer to the definition of that concept from one of the broader contexts above them. In other words, the larger community of humans may have defined the concept, and the more detailed context may neglect to reiterate it, preferring instead to use the larger context’s definition.

On the other hand, any concept defined in a broader context may be re-defined at a more detailed level. This may or may not be intentional, or even noticed by members of either the larger context or the more insular one. Even when noticed, it typically doesn’t cause a problem in normal human discourse, as the humans are able to translate between the contexts and hold both definitions in their minds.

Contexts at different levels that do not share the same lineage may define a concept in different ways. If their members do not interact under normal circumstances, this still poses no problem of communication or data integration. Problems arise from this layering and locality-driven conceptualization when the information must be shared, either directly through a point-to-point interface (as happens in workflow integration problems) or through some roll-up to a common conceptual parent context (as happens in reporting and business intelligence problems). This is the origin of the “single version of the truth” goal that many organizations now take as a given best practice.

“Inheritance” of concepts flows down. What this means is that concepts defined in the parent’s broader context may still hold meaning in the narrower child context. Exceptions and replacements are not limited to concepts from the immediate parent, but can happen with any concept defined above. Each context layer, almost by definition, will also define concepts that are uniquely its own. This is one of the sources of intra-organizational argument and confusion, as the same terms (syntactic medium) may be used to refer to two slightly (or even grossly) divergent ideas within the same corporate context.

Not every symbol will be meaningful in every child context; the process of transference can filter out concepts as well as borrow them. At each contextual layer, shared structure may be given different meanings. Lack of specificity or explicitness of definition at a layer does not imply automatic inheritance from above; it can also reflect a vagueness of thought or a lack of agreement about a fringe aspect.

The vacuum created, however, tends to favor the wholesale borrowing of the concept from the parent context.
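To make the deferral-and-override idea concrete, here is a minimal sketch in Python (the class and the example terms are my own illustration, not part of any formal model): a child context answers from its own definitions when it has them, and otherwise walks up the chain of parent contexts.

```python
# Toy sketch of concept lookup in nested contexts. A child context defers to
# its parent chain when it has no local definition, and a local definition
# overrides any ancestor's version of the same term.

class Context:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.definitions = {}          # term -> locally agreed meaning

    def define(self, term, meaning):
        self.definitions[term] = meaning

    def lookup(self, term):
        """Return (meaning, defining_context) or (None, None) if undefined anywhere."""
        ctx = self
        while ctx is not None:
            if term in ctx.definitions:
                return ctx.definitions[term], ctx.name
            ctx = ctx.parent           # defer upward to the broader context
        return None, None


culture = Context("culture at large")
company = Context("company", parent=culture)
billing = Context("billing department", parent=company)

culture.define("customer", "anyone who buys something")
billing.define("customer", "an account with at least one open invoice")

print(billing.lookup("customer"))   # local override wins
print(company.lookup("customer"))   # falls back to the broadest definition
```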

Each context layer is complete in its own right. The sizes shown in the diagram may suggest a relative size of content, but this is just an artifact of the notation. A child context may define an infinite number of concepts over time, just as its parent context does. Theoretically, each context could be depicted or described in full without reference to the broader parent contexts.

Not every concept defined within any particular layer will wind up represented within some application software used by the humans participating in that context. However, if the humans in that context have acquired software to support their activities, the concepts within that system will naturally conform to the context, although they may force the context to be changed to reflect limitations and capabilities that the software imposes.

The reality is, of course, much more complicated than the diagram suggests. Since the context at each level is defined by the humans who inhabit and communicate within it, new members may introduce or adapt concepts from other contexts that are unrelated to the hierarchy of autonomy and control. Rather than attempt to trace the origin point of every such concept across all contexts, I recommend treating these few concepts either as being of local origin, or as part of a bridging context between the local context and the context of origin. The choice should rest only on the value to be gained from either point of view.

Bridging contexts are new contexts established to bridge between some subset of concepts from each of two different contexts. These are established when new information communication between the two contexts is required. The bridging context can be recognized by the relative sparseness of the conceptual inventory, and by the fact that the lineage of the concepts is limited to two (or perhaps a handful at most) otherwise disjoint contexts.

Most transaction-oriented interfaces, as well as any data interface between two functionally disparate systems (of any type), are defined within a bridging context limited to just the mediating symbols.
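As a rough illustration of how sparse such a bridging context can be, here is a Python sketch (the field names and the two systems are invented for the example): the bridge defines only the mediating symbols and their correspondence, and everything else in either source context simply has no meaning there.

```python
# Hypothetical bridging context for a transaction interface: only the handful
# of mediating symbols is defined; everything else stays out of scope.

ORDER_TO_FULFILLMENT_BRIDGE = {
    # symbol in the order-entry context -> symbol in the fulfillment context
    "cust_no": "customer_id",
    "sku":     "item_code",
    "qty_ord": "quantity",
}

def translate(order_record: dict) -> dict:
    """Carry only the mediating symbols across the bridge; drop the rest."""
    return {target: order_record[source]
            for source, target in ORDER_TO_FULFILLMENT_BRIDGE.items()
            if source in order_record}

print(translate({"cust_no": "C-102", "sku": "A-77", "qty_ord": 3,
                 "internal_note": "rush"}))   # the note does not cross the bridge
```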


Brass Tacks and Comparability

So I thought I should try to explain “comparability” very simply. Reading my previous posts, which were derived from larger texts, I see that I spend a lot of time on generalities, and I think the main point is getting missed. So here’s me getting down to brass tacks on the subject.

A computer CPU is a very basic electrical device. Send it a stream of electrons and a command to “add”, and it returns another stream of electrons representing a purely “mechanical” (i.e., unintelligent) electrical result. That CPU doesn’t know anything about semantics, or whether the switches and gates it opens and closes should appropriately be applied to those particular data streams. It just does what it was designed to do given that particular sequence of electron streams. If the streams are comparable before they get to the CPU, then the output will be meaningful. If they are not comparable, then the output (and being a CPU, there will be some output) will not be meaningful.

So the job of the software is to manipulate each symbol before presenting it to the CPU. In particular, the software needs to take each symbol and replace it with one that MEANS the same as the original symbol, but which will present itself to the CPU as COMPARABLE to the other symbols.

Comparability has to be put into the computer, through the software, by a human being. In particular, it is the human who understands when one data stream is not comparable to another, and it is the human being who writes the code to change one stream so that it becomes comparable to the other.

So what really are we talking about? Let me make a non-computer example to show the point.

2 + 00000010 = IV

If I take a pencil and write the above string of characters on a piece of paper and show it to another computer programmer, after a few moments I would expect that person to agree that this is a correct mathematical statement:

 two plus two equals four

Part of the person’s success in understanding the original statement is that they are able to parse each symbol in the string, interpret the MEANING of each symbol, and then translate each into COMPARABLE numeric ideas.

If the computer CPU could experience each symbol as I’ve written it (let’s agree that each of the symbols depicted here would have a similar diversity of structure in the computer as they do here on the page), then we can immediately grasp what comparability is. The CPU does not know what the symbols mean; it cannot make the interpretation just by looking at the symbols as they are presented and come to the same conclusion as the human.

Look at what I, the human, did to provide you, the reader, with a more readable version of the equation: I replaced each symbol with another one that meant the same, but which appeared as a mutually comparable symbol:

  • 2   –>  two
  • +  –>  plus
  • 00000010  –>  two
  • =  –>  equals
  • IV  –>  four

Before the CPU can compare the symbol “2” to the symbol “00000010”, they must both be replaced with two other symbols, each with the standard interpretation of “two”. These new symbols must be structured to flow through the CPU in such a way that their very structure is modified by the CPU to create a third symbol whose standard interpretation has the meaning “four”. The “plus” symbol must be translated into the CPU’s “ADD” instruction, and the “equals” symbol is represented by the stream of electricity leaving the CPU with the resulting symbol.
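Here is a minimal Python sketch of that replacement step (the parsing rules are my own illustration, just enough to cover the three notations in the example): each written symbol is swapped for a comparable one that carries the same meaning, a plain integer, and only then is the machine asked to add and compare.

```python
# Sketch of the "2 + 00000010 = IV" example: each symbol is replaced with one
# that means the same thing but is comparable to the others (a Python int)
# before any arithmetic is attempted.

ROMAN = {"I": 1, "V": 5, "X": 10}

def roman_to_int(s: str) -> int:
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = ROMAN[ch]
        total += -v if nxt in ROMAN and ROMAN[nxt] > v else v
    return total

def normalize(symbol: str) -> int:
    """Replace a written symbol with a comparable one (an int) that means the same."""
    if set(symbol) <= {"0", "1"} and len(symbol) == 8:
        return int(symbol, 2)          # "00000010" -> 2
    if set(symbol) <= set(ROMAN):
        return roman_to_int(symbol)    # "IV" -> 4
    return int(symbol)                 # "2" -> 2

left = normalize("2") + normalize("00000010")   # comparable symbols: the ADD is meaningful
assert left == normalize("IV")                  # "equals" checked on the comparable forms
print(left)                                     # 4
```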

Functions On Symbols

Data integration is a complex problem with many facets. From a semiotic point of view, quite a lot of human cognitive and communicative processing capability is involved in its resolution. This post enters the discussion at a point where a number of necessary terms and concepts have not yet been described on this site. Stay tuned, as I will begin to flesh out these related ideas.

You may also find one of my permanent pages on functions to be helpful.

A Symbol Is Constructed

Recall that we are building tautologies showing equivalence of symbols. Recall that symbols are made up of both signs and concepts.

If we consider a symbol as an OBJECT, we can diagram it using a Unified Modeling Language (UML) notation. Here is a UML Class diagram of the “Symbol” class.

UML Diagram of the "Symbol" Object

The figure above depicts how a symbol is constructed from both a set of “signs” and a set of “concepts”. The sign is the arrangement of physical properties and/or objects following an “encoding paradigm” defined by the members of a context. The “concept” is really the meaning which that same set of people (the context) has projected onto the sign. When meaning is projected onto a physical sign, a symbol is constructed.
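For readers who prefer code to class diagrams, here is a minimal Python rendering of the same idea (the attribute names are my own shorthand, not taken from the diagram): a symbol pairs a sign, a physical arrangement under some encoding paradigm, with the concept a particular context projects onto it.

```python
# Minimal sketch of the "Symbol" class: a sign (a physical arrangement under
# some encoding paradigm) plus the concept a context projects onto it.

from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    medium: str        # e.g. "voltages on a wire", "ink on paper"
    encoding: str      # the encoding paradigm agreed by the context
    value: str         # the particular arrangement, e.g. "00000010"

@dataclass(frozen=True)
class Symbol:
    sign: Sign
    concept: str       # the meaning projected onto the sign
    context: str       # which community did the projecting

two_on_disk = Symbol(
    sign=Sign(medium="magnetic fields on a platter", encoding="binary", value="00000010"),
    concept="the quantity two",
    context="accounting system users",
)
print(two_on_disk)
```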

Functions Impact Both Structure and Meaning

Symbols within running software are constructed from physical arrangements of electronic components and the electrical and magnetic (and optical) properties of physical matter at various locations (this will be explained in more depth later). The particular arrangement and convention of construction of the sign portion of the symbol defines the syntactic media of the symbol.

Within a context, especially within the software used by that context, the same concept may be projected onto many different symbols of different physical media. To understand what happens, let’s follow an example. Let’s begin with a computer user who wants to create a symbol within a particular piece of software.

Using a mechanical device, the human user selects a button representing the desired symbol and presses it. This event is recognized by the device, which generates a new instance of the symbol using its own syntactic medium: a pulse of current on a closed electrical circuit on a particular wire. When the symbol is placed in long-term storage, it may appear as a particular arrangement of microscopic magnetic fields of various polarities at a particular location on a semi-metallic substrate. When the symbol is in the computer’s memory, it may appear as a set of voltages on various microscopic wires. Finally, when the symbol is projected onto the computer monitor for human presentation, it forms a pattern of phosphorescence against a contrasting background, allowing the user to perceive it visually.

Note that through all of the last paragraph, I did not mention anything about what the symbol means! The question arises: in this sequence of events, how does the meaning of the symbol get carried from the human, through all of the various physical representations within the computer, and then back out to the human again?

First of all, let’s be clear that at any particular moment, the symbol the human user wanted to create through his actions actually becomes several symbols – one symbol for each different syntactic representation (syntactic medium) required for it to exist in each of the environments described. Some of these symbols have very short lives, while others live longer.

So the meaning projected onto the computer’s keyboard by the human:

  • becomes a symbol in the keyboard,
  • is then transformed into a different symbol in the running hardware and operating system,
  • is transformed into a symbol for storage on the computer’s hard drive, and
  • is also transformed into an image which the human perceives as the shape of the symbol he selected on the keyboard.

But the symbol is not actually “transforming” in the computer, at least in the conventional notion of a thing changing morphology. Instead, the primary operation of the computer is to create a series of new symbols in each of the required syntactic media described, and to discard each of the old symbols in turn.

It does this trick by applying various “functions” to the symbols. These functions may affect the structure (syntactic medium) of the symbol, and possibly also the meaning itself. Most of the time, as the symbol is copied and transferred from one form to another, the meaning does not change. Most of the functions built into the hardware making up the “human-computer interface” (HCI) are “identity” functions, transferring the originally projected concept from one syntactic medium to another. If this were not so, if the symbol printed on the key I press were not the symbol I see on the screen after the computer has “transformed” it from keyboard to wire to hard drive to wire to monitor screen, then I would conclude that the computer was broken or faulty, and I would cease to use it.

Sometimes it is necessary or desirable that the computer apply a function (or a set of functions called a “derivation”) which actually alters the meaning of one symbol (concept), creating a new symbol with a different meaning (and possibly a different structure, too).
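A small Python sketch of the distinction, using plain dicts as stand-in symbols (the field names and the extended-price example are invented): an identity function re-encodes the sign in a new syntactic medium while carrying the concept across unchanged, whereas a derivation produces a symbol whose meaning is new.

```python
# Identity functions vs. derivations, using plain dicts as toy symbols.

def identity_transfer(symbol: dict, new_medium: str, new_sign: str) -> dict:
    """Same concept, new syntactic medium (what most HCI functions do)."""
    return {"medium": new_medium, "sign": new_sign, "concept": symbol["concept"]}

def derive_total(price: dict, quantity: dict) -> dict:
    """A derivation: combines two symbols into one whose meaning is new."""
    total = float(price["sign"]) * int(quantity["sign"], 2)   # in-memory signs are binary in this toy
    return {"medium": "memory", "sign": str(total), "concept": "extended price"}

keypress  = {"medium": "keyboard circuit", "sign": "2", "concept": "quantity ordered"}
in_memory = identity_transfer(keypress, "memory", "00000010")
on_screen = identity_transfer(in_memory, "monitor phosphors", "2")

unit_price = {"medium": "memory", "sign": "9.95", "concept": "unit price"}
print(on_screen["concept"])                    # unchanged by the identity functions
print(derive_total(unit_price, in_memory))     # new symbol, new meaning
```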

A Concept is Born: Sense Memory and Name Creation

June 24, 1988

Experience is characterized by memory of sensory information in all its detail. Analysis of this data can be applied retroactively. I can remember that:

“Yes, the sky was grey and windy just prior to the tree falling behind me.”

and therefore come to understand a set of events later, in some other context. Using this sensory memory aids abstraction and analysis because it acts as the raw material out of which abstractions can be built. Thus it is possible at a later date to reflect on past events and discover related occurrences where before there was only unorganized memory.

Learning of patterns is continuous:

“What was that?”

This question initially gets very simplistic answers when asked by toddlers and children. It takes nearly 20 years for humans to be able to talk about philosophy in a formal way. But as slight variations on the simple occurrences of events are experienced, the agent (learner) begins to organize subclasses of the same general event, especially if the social world provides him a useful distinction with which to characterize the subclass. In doing so, the subclass name becomes a synonym for the general idea.

Creative research by the agent (learner) is characterized by the creation of new distinguishing marks and the choosing of a class name for those marks. Communication with others regarding the subclass then becomes a matter of describing those marks, providing the shorthand name, and obtaining agreement from the others that both the marks and the name are apropos.

And thus a concept is born…

Bridge Contexts: Meaning in the Edgeless Boundary

Previously, I’ve written about the idea of the “edgeless boundary” between semiospheres for someone with knowledge of more than one context. This boundary is “edgeless” because to the person perceiving it, there is little or no obvious boundary.

In software systems, especially in situations where different software applications are in use, the boundary between them, by contrast, can be quite stark and apparent. I’ll describe the reasons for this in other postings at a later time. The nutshell explanation is that each software system must be constrained to a well-defined subset of concepts in order to operate consistently. The subset of reality about which a particular application system can capture data (symbols) is limited by design to those regularly observable conditions and events that are of importance to the performance of some business function.

Often (in an ideal scenario), an organization will select only one application to support a given set of business functions at a time. A portfolio of applications will thus be constructed through the acquisition or development of different applications for different sets of business functions. As mentioned elsewhere on this site, sometimes an organization will have acquired more than one application of a particular type (see the ERP page).

In any case, information contained in one application oftentimes needs to be replicated into another application within the organization. When this happens, regardless of the method by which the information is moved from one application to another, a special kind of context must be created or defined in order for the information to flow. This context is called a “bridging context” or simply a “bridge context”.

As described previously, an application system represents a mechanized perception of reality. If we anthropomorphize the application, briefly, we might say that the application forms a semiosphere consisting of the meaning projected onto its syntactic media by the human developers and its current user community, forming symbols (data) which carry the specifically intended meaning of the context.

Two applications, therefore, would present two different semiospheres. The communication of information from one semiosphere to the other occurs when the symbols of one application are deconstructed and transformed into the symbols of the other application, with or without commensurate changes in meaning. This transformation may be effected by human intervention (as through, for example, the interpretation of outputs from one system and the re-coding/data entry into the other), or by automated transformation processes of any type (i.e., other software).

“Meaning” in a Bridging Context

Bridging contexts have unique features among the genus of contexts overall. They exist primarily to facilitate the movement of information from one context to another. The meaning contained within any bridging context is limited to that of the information passing across the bridge. Some of the concepts and facts of the original contexts will be interpretable (and hence will have meaning) within the bridging context only if they are used or transformed during this flow. Additional information may exist within the bridge context, but it will generally be limited to information required to perform or manage the process of transformation.

Hence, I would consider that knowledge held or communicated by an individual (or system) operating within a bridging context, which is otherwise unrelated to either of the original contexts or to the process of transference, would exist outside of the bridging context, possibly in a third context. As described previously, the individual may or may not perceive the separation of knowledge in this manner.

Special symbols called “travellers” may flow through untouched by transformation and unrecognized within the bridging context. These symbols represent information important in the origin context which may be returned, unmodified, to the origin context by additional processes. During the course of their trip across the bridging context(s) and through the target context, travellers typically will have no interpretation, and will simply be passed along in an unmodified syntactic form until returned to their origin, where they can then be interpreted again. By this definition, a traveller is a symbol that flows across a bridge context but which only has meaning in the originating context.

Given a path P from context A to context B, the subset of concepts of A that are required to fulfill the information flow over path P is meaningful within the bridging context surrounding P. Likewise, the subset of concepts of B which are evoked or generated by the information flowing through path P is also part of the content of the bridge context. Finally, the path P may generate or use information in the course of events which is part of neither context A nor context B. This information is also contained within the bridge context.
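As a rough sketch of such a path P in code (the contexts, field names, and batch id are all invented for the illustration): the bridge carries the mapped subset of A’s concepts over to B’s, passes traveller symbols through uninterpreted, and adds only the bookkeeping the transfer itself requires.

```python
# Illustrative path P across a bridge context: mapped concepts are
# interpreted, travellers pass through untouched, everything else is dropped.

BRIDGE_MAPPING = {              # subset of A's concepts -> subset of B's concepts
    "patient_id": "subject_id",
    "visit_date": "encounter_date",
}
TRAVELLERS = {"source_row_key"}   # meaningful only back in context A

def flow_over_path(record_from_a: dict) -> dict:
    record_for_b = {}
    for field, value in record_from_a.items():
        if field in BRIDGE_MAPPING:
            record_for_b[BRIDGE_MAPPING[field]] = value    # interpreted in the bridge
        elif field in TRAVELLERS:
            record_for_b[field] = value                    # passed along unmodified
        # anything else from context A has no meaning here and is dropped
    record_for_b["bridge_batch_id"] = 42                   # path-local bookkeeping
    return record_for_b

print(flow_over_path({"patient_id": "P-9", "visit_date": "2009-05-01",
                      "source_row_key": "A-17", "ward_notes": "n/a"}))
```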

Bridge contexts may contain more than one path, and paths may transfer meaning in any direction between the bridged contexts. For that matter, it is possible for a particular bridging context to connect more than two other contexts (for example, when an automated system called an “Operational Data Store” is constructed, or when a messaging interface such as those underlying Service-Oriented Architecture (SOA) components is built).

An application system itself can represent a special case of a bridging context. An application system marries the context defined by the data modeller to the context defined by the user interface designer. This is almost a trivial distinction, as the two are generally so closely linked that their divergence should not be considered a sign of separate contexts. In this usage, an application user interface can be thought of as existing in the end user’s context, and the application itself acts to bridge that end user context to the context defining the database.

Packaged Apps Built in Domains But Used In Contexts

Packaged applications are software systems developed by a vendor and sold to multiple customers. Those applications which include some sort of database and data storage, especially, are built to work in a “domain”.

The “domain” of the software application is an abstract notion of the set of contexts the software developers have designed the software to support. While the notion of “domain” as described here is similar to and related to the notion of “context”, the domain of the software only defines the potential types of symbols that can be developed. In other words, the domain defines a syntactic medium (consisting of physical signs, functions and transformations on those signs, and the encoding paradigm).

But the software application domain is NOT its context. Context, when applied to software applications, is defined by the group of people who use the software together.

There’s a difference, therefore, between how developers and designers of business software think about and design their systems, and how those systems are used in the real world. No matter how careful the development process is, no matter how rigorous and precise, no matter how closely the software matches the business requirements, and no matter how cleanly and completely the software passed its tests, the community using the software will eventually be forced to bend it to a purpose for which it was never intended.

This fact of life is the basis of several relatively new software development paradigms, including Agile and Extreme Programming, and of the current Service-Oriented Architecture. Each of these reflects the recognition that the business will not pause and wait while IT formally re-writes and re-configures application systems.

One of the shared tenets of these practices is that because the business is so fluid, it is impossible to follow strictly formal development methods. In SOA, the ultimate ideal is a situation where the software has become so configurable (and so easy to use) that it no longer requires IT expertise to change its behavior. The business users themselves are able to modify the operation of the software daily, if necessary.

The Context Continuum

So my previous post about the “Origins of a Context” was grossly simplistic. That is, however, a good way to get a basic idea out there. Obviously there are many complex factors and layers of influence that affect the extent and content of a context.

One way to look at context is as a continuum from the very small to the very large. This “size” measurement is a reflection of the number of people who share the context, not necessarily the size of the population of concepts and symbols within it.

As I’ve said in other places, a context is defined by its membership first, and its content second.

Hence, by my definition, the smallest context is defined by a single human being. That person would create contexts of a private nature: mementos of their life and personal mnemonics. If the person were artistic, they might create art and artifacts of personal importance. These personal symbols would remain private until the person shares them with someone else.

As soon as they have been shared, even if only with one other person, these artifacts take on additional meaning and become community symbols. Once they have been placed into a larger community, further refinement and re-enforcement of the symbol becomes a community activity. For the original “artist”, their conception can take on a life of its own, and they may lose control over it.

As more and more people become aware of a symbol, the context broadens. But in addition, the symbol itself will begin to change its meaning, either becoming much more generic and broad, or tightening up to some narrowly exclusive idea. As soon as this happens (and it happens almost immediately after the symbol begins to be shared), correct interpretation of the symbol must, by definition, take into account which context’s version of the symbol is being considered. Some writers have referred to this issue as one of identifying the “situational” meaning of the symbol, while others talk about the symbol’s “frame”. In my mind these are the same thing as what I’m calling “context”.

So what does this continuum of contexts look like? I’ve drawn a first draft diagram of the smooth transition from personal symbol to the “semiosphere”. It identifies the types and relative sizes of contexts and presents some of the names of their various features. It also shows where in the continuum various types of study and research fall.

I make no claims of absolute accuracy here, and invite comments from experts in these fields (and any others who want to project onto my template).

 

Continuum of Context from Single Person to Semiosphere

 
