Software Applications As Perception

“The agent has a scheme of individuation whereby it carves the world up into manageable pieces.” — K. Devlin, “Situation Theory and Situation Semantics”, whitepaper, 2004, Stanford University.

A software application creates and stores repeated examples of symbols defined within the context of a particular human endeavor, representing a perceived conceptual reality, and encoded into signs using electromagnetic syntactic media. While the software may be linked through automated sensors to an external environment, it is dependent on human perception and translation to capture and create these symbols. Business applications are almost entirely dependent on human perception to recognize events and observations. That said, while the original “perceptions” are made by human agents, the software, by virtue of the automation of the capture of these perceptions, can be said to “perceive” such events (although this should be considered a metaphor).

Application design is in large part the crystallization of a particular set of perceptions of the world for purposes of providing a regular, repeatable mechanism to record a set of like events and occurrences (data). In essence, the things important to be perceived (concepts) either for their regularity or their utility by some human endeavor (context) will determine the data structures (signs) that will be established, and therefore the data (symbols) that can be recorded by the software system.

The aspects important to the recognition and/or use of these repeated events (e.g., the inferences and conclusions to be derived from their occurrence) determine the features, qualities, and relationships that the application will record.

Good application design anticipates the questions that might be usefully asked about a situation, but it also limits the information to be collected to certain essentials. This is done purposefully because of the fundamental requirement that the attributes collected must be perceived and then encoded into the symbology within the limited power of automated perceptual systems (relative to human perceptual channels).

In other words, because a human is often the PERCEIVER for an application, the application is dependent on the mental and physical activity of the person to capture (encode) the events. In this role, while the human may perceive a wealth of information, the limits of practicality imposed by the human-computer interface (HCI) guarantees that the application will record only a tiny subset of the possible information.
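The gap between what the human PERCEIVER notices and what the application actually records can be sketched in a few lines. This is a hypothetical illustration; the event, field names, and `SaleRecord` schema are all invented for the example.

```python
# Hypothetical illustration: a human perceives a rich event, but the
# application's data structure (the "sign") admits only a few fields.
from dataclasses import dataclass

# Some of what the human at the counter might perceive about one sale:
perceived_event = {
    "customer_mood": "hurried",
    "weather": "raining",
    "item": "umbrella",
    "price": 12.99,
    "small_talk": "asked about the bus schedule",
}

@dataclass
class SaleRecord:  # the schema fixed at design time
    item: str
    price: float

# Encoding: only the attributes the designers chose to perceive survive.
record = SaleRecord(item=perceived_event["item"],
                    price=perceived_event["price"])
print(record)  # → SaleRecord(item='umbrella', price=12.99)
```

Everything outside the schema (the mood, the weather, the small talk) is perceived by the human but lost to the application, which is exactly the inequality described above.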

This does not pose any particular problem, per se (except in creating a brittleness in the software in the face of future contextual change), but just illustrates further how the context of the application is more significantly constrained than either the perceived reality or even the boundaries formed from the limits of human discourse of the event. This inequality can be represented by this naive formulation:

Μ(Ac) << Μ(Hc)

The meaning contained in the Application A defined by the context c is much less than the meaning (information) contained in the Human H perception of the context.

It is important also to note that:

Μ(Ac) is a subset of Μ(Hc)

The meaning contained in the Application A is a subset of the meaning contained in the Human H.

No aspect of the application will contain more information than what the human can perceive. This is not to imply that the humans will necessarily be consciously aware of the information within the application. There are whole classes of applications which are intended to collect information otherwise imperceptible to the human directly. In this manner, applications may act as augmentations of human perceptual abilities. But these applications do not of themselves create new conceptions of reality a posteriori to their development; rather, they are designed explicitly to search for and recognize (perceive) specific events beyond the perception of natural human senses. Even in these situations, the software can only recognize and record symbols representing the subset of possible measurements/features that their human designers have defined for them.

Hence, while software applications may be said to perceive the world, they are limited to the perceptions chosen a priori by their human designers.


EXAMPLE: Syntactic Medium in an Anchor State

Just what is an “Anchor State”? An example will explain this better.

Take an “extract-transform-load” (ETL) process in a Data Warehouse application that copies data from one system (a database) to another based on some criteria. In particular, the example organization needs to capture customers’ names for use in a business intelligence application measuring the success of marketing mass-mailings. An ETL process will be defined (in terms used within the Metamorphic Modeling convention) as a transformation from a source Anchor State (source) to a target Anchor State (target). The syntactic medium of the source application contains a table called “EMPLOYEE”. This data structure has been co-opted by the user organization to include customer information. The organization has chosen to use this table to represent customers since it is the only data structure available in their system that associates a person’s name to an address, telephone number and e-mail account, and it has no other means of recording this information about its customers.

 The source Anchor State has been constrained, therefore, to the “EMPLOYEE” data structure, and to the set of symbols within that medium which represent customers. That same medium, in a different Anchor State, may have been constrained to the set of “managers”.

 So, how does the ETL process recognize the set of symbols within the “EMPLOYEE” data structure that represent customers? The user organization realized that the application containing this data structure also contained a table called “EMPLOYEETYPE” which contains user-defined codes for defining types of employees. This table’s primary key is a coded value stored in a field named “EMPTYPE”, which also appears as a foreign key reference in the “EMPLOYEE” table. The organization decided to create a symbol, namely a code in this EMPLOYEETYPE table to represent the “customer type”. Then, whenever they want to record information about a customer in the EMPLOYEE table, they assign this code value to the “EMPTYPE” column on the row representing this customer.

 The following figure depicts a portion of an “Entity Relation Diagram” which defines the “EMPLOYEE” and “EMPLOYEETYPE” tables in this application. It also shows a subset of the values contained within the “EMPLOYEETYPE” table, as defined by this organization.

Example Employee Table Data Model

As can be seen in the figure, there are actually three different “EMPLOYEETYPE” codes defined to represent the concept of “customer”. These are EMPTYPE values 5, 6, and 7, representing “Customers”, “Premier Customers” (which the organization has defined as important customers), and “Good Customers”. Aside from the “business practice” that these three types can be used to differentiate “customers” from other types of entities, there is nothing intrinsic to the structures that indicates this. Hence, from an application standpoint, all types are equal and will be manipulated in the same way.

From the point of view of the ETL under development, however, the significance of the usage of these three codes is critical to its proper operation. The source Anchor State for the ETL is defined as the set of raw symbols within the “EMPLOYEE” table that have one of the “customer” type code values in their corresponding EMPTYPE column. For this ETL (transformation), the EMPTYPE column and its values represent the semantic marker for “customer” in this source Anchor State. The Anchor State therefore consists of the data structures, “EMPLOYEE” and “EMPLOYEETYPE”, and the constraint that only the rows of the “EMPLOYEE” table where the EMPTYPE value is 5, 6, or 7 define what the ETL should consider to be “customers”.


All pages are Copyright (C) 2004, 2009 by Geoffrey A. Howe
All Rights Are Reserved

Software as Semantic Choice

When I design a new software system, I have to choose which parts of reality matter enough to capture in the data (data being little bits of information stored symbolically and in great repetitive quantities). I can’t capture the entirety of reality symbolically; software is another example in life of having to divide an analog reality into discrete named chunks, choosing some and leaving others unmentioned.

This immediately sets the system up for future “failure” because at some point, other aspects of the same reality will become important. This is what artificial intelligence calls “brittleness”, a quality which bedeviled the expert system movement and kept it from becoming a mainstream phenomenon. This is also a built-in constraint on semantic web work, but I’ll leave that for another post.

Take quantum physics research as an example: there’d be no point in writing one application to capture both the momentum and position of a quantum particle in a database because, as we all know, only one or the other can be measured precisely at any one time. Thus we choose to capture the one that’s important to our study, and we ignore the other.

This is why a picture is worth a thousand words: because it is an analog of reality and captures details that can remain unnamed until needed at a future time.

This is also why we say that in communication we must “negotiate reality”. We must agree together (software developer and software user) what parts of reality matter, and how those parts are named, recognized, and interact.

In reading a recent thread on Library Science, it sounds like in the “indexing and abstracting” problem (used to set up a searchable space for finding relevant documents), a choice has to be made about what we think searchers will most likely bring with them in order to find the information they seek. But by virtue of making one choice, we necessarily eliminate other choices we might have made which may have supported other seekers better.

This is an interesting parallel, and I must assume that I’ll find more as this dialog continues.

Good Summary on How Engineers Define Symbols

An interesting summary of how software engineers are constrained to develop data structures based on their locality is presented in a comment by “katelinkins” at this blog discussing a book about how “information is used”. I think, however, it ends on a note of wishful thinking, suggesting that engineers don’t really

…KNOW and UNDERSTAND the code…

and implying that additional effort by them will permit

validating the representations upfront to aid in development of common taxonomy and shared context

I wasn’t sure whether the comment was suggesting that only software engineers “continually fall short” in this effort, or if she was suggesting a greater human failing.

While software developers can be an arrogant lot (I saw a description of “information arrogance” earlier in this discussion stream, and we can definitely fall into those traps, as anyone else can too), it is not always arrogance that causes our designs not to fit exactly everyone’s expectations.

Software developers do define symbols based on their regional context. But it gets even more constrained than that, because they must define the “symbology” based on what they know at a particular point in time and from a very small circle of sources, even if the software is intended for broad usage.

The fundamental problem is that there is ALWAYS another point of view. The thing that I find endlessly fascinating, actually, is that even though a piece of software was written for one particular business context (no matter how broad or constrained that is), someday, somewhere, a different group of users will figure out how to use the system in an entirely different context.

So, for example, the software application written for the US market that gets sold overseas and is used productively anyway, if not completely or in the same fashion, is a tremendous success, in my mind. This is how such applications as SAP (the product of German software development) have had such great success (if not such great love) worldwide!

I don’t believe there is such thing as a “universal ontology” for any subject matter. In this I think I’m in agreement with some of the other posts on this discussion thread, since the same problem arises in organizing library indexes for various types of the “information seeker” in any search. While having different sets of symbols and conceptions among a diverse set of communicating humans can muddy the space of our discourse, we at least have a capacity to compartmentalize these divergent views and switch between them at will. We can even become expert at switching contexts and mediating between people from different contexts.

One of the big problems with software is that it has to take what can be a set of fuzzy ideas, formalize them into a cohesive pattern of structure and logic that satisfies a certain level of rigor, and then “fix in cement” these ideas in the form of bug-free code. The end result is software that has had to choose among variations and nuances which the original conceptions may never have tried to resolve. Software generally won’t work at all, at least in the most interesting parts of an ontology, if there is a divergence of conception within the body of intended users.

So in order to build anything at all, the developer is forced to close the discussion at some point and try their best to get as much right as is useful, even while they recognize there are variations left unhandled. Even in a mature system, where many of these semantic kinks have been worked out through ongoing negotiations with a particular user community, the software can never be flexible enough to accommodate all manner of semantic variation which presents itself over time without being revised or rewritten.

In the software development space, this fundamental tension between getting some part of the ontology working and getting all points of view universally right in a timely fashion has been one of the driving forces behind all sorts of paradigm shifts in best practices and architectures.  Until the computer software can have its own conversation with a human and negotiate its own design, I don’t see how this fundamental condition will change.

Chasing the Chimera: Searching for Universal Truth in the Data Center

There’s a widespread belief in the data community (sometimes stated and sometimes just implied) that not only does the pursuit of the definition of a universal Single Version of Truth have “obvious technical merits”, but that it is crucial to our collective success. Having spent an entire career helping customers in many different industries codify and fabricate business systems, including participating in more than a few attempts at establishing a single version of truth by standardizing data, I have been surprised by my own revelation in recent years that we, as an industry, have been chasing an unreachable, and possibly an undesirable, chimera.

It’s like the old riddle about how to swallow an elephant. The solution is to take small bites, and just keep at it. This is a common metaphor used whenever a large project to standardize an enterprise’s data is begun. The problem is, trying to create that all-encompassing, single standard for all of the data in the organization is not really comparable to eating a rotting elephant corpse. You’re not really eating a finite mass of elephant at all! A more appropriate metaphor would be to consider that you are actually chewing the grass on the edge of a vast plain, and it just keeps growing faster than you can chew!

The value of some data standardization cannot be denied. Re-engineering selected areas can result in better data quality, timeliness and actual value. Certainly we have seen that the wheels of e-commerce can be sped up by careful selection of the right standard. For some practitioners, however, this “piecemeal” approach is insufficient and may even detract from the ultimate goal. These practitioners have seen how much good came from a little standardization and rationalization, and conclude that taking the practice to its logical conclusion should reap the ultimate benefit.

The problem with this logic is that it fails to take into account the cost of completion. My point is that no matter how valuable the end point is expected to be, the number of systems that come on and off line, the number of changes to the business, the number of external business partners, the number of external standards bodies, and the number of mergers and acquisitions mean that an organization will never reach that end state.

Some people may agree with me on this point, and others may not. However, even those who might agree with me on the ultimate likelihood of success may still take the same old approach to the problem: convening a steering committee of diverse end users, locking them in a room for weeks on end, and forcing them to define an abstract but universal data dictionary, only to find that major portions are already out of date, that major subject areas are still missing, or, worse still, that most people outside of this pressure-cooker committee disagree with or do not understand the result!

An alternative approach to this search for the universal would be to recognize that diversity of meaning and representation will be a given in any sufficiently large organization of humans, and to address this inevitability directly. This can be accomplished by creating a “federated data dictionary” following these rules:

  1. Don’t attempt to “swallow the elephant” – try “mapping the terrain” instead by creating well-documented data dictionaries of each context.
  2. Document the context that defined a concept in the first place.
  3. Only standardize as much as is necessary to knit together those portions of the enterprise that must work together, and do no more.
  4. Create a “data thesaurus” in addition to the data dictionaries that describes and documents the equivalence of meaning between the data structures of the different contexts, but only for those which must touch each other across the enterprise.
  5. Focus on the points of integration between the contexts first, where data flows from one context to another.
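These rules can be sketched as data. The sketch below is hypothetical: the two contexts, the column names (`CUST_NM`, `CALLER_NAME`), and the integration point are all invented to show the shape of a federated data dictionary plus thesaurus, not any particular organization's content.

```python
# Hypothetical sketch of a "federated data dictionary": each context
# keeps its own well-documented dictionary (rules 1 and 2)...
federated_dictionary = {
    "sales_context": {
        "CUST_NM": "Legal name of the purchasing party, as entered "
                   "by the order-entry clerk.",
    },
    "support_context": {
        "CALLER_NAME": "Name given by the person opening a ticket; "
                       "may be a nickname.",
    },
}

# ...while the "data thesaurus" records equivalences of meaning only
# where contexts must touch each other (rules 3, 4, and 5).
data_thesaurus = [
    {
        "concept": "customer name",
        "equivalent_terms": [
            ("sales_context", "CUST_NM"),
            ("support_context", "CALLER_NAME"),
        ],
        "integration_point": "nightly ticket-to-account matching feed",
    },
]

for entry in data_thesaurus:
    pairs = ", ".join(f"{ctx}.{term}" for ctx, term in entry["equivalent_terms"])
    print(f"{entry['concept']}: {pairs}")
```

Note that nothing forces `CUST_NM` and `CALLER_NAME` to be renamed or merged; the thesaurus documents their equivalence at the one point where the two contexts exchange data, and leaves each local dictionary alone.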

Isn’t it time we recognize that diversity exists? Maybe if we stop the never-ending chase for the universal, we’ll realize that diversity has its value too, and start trying to do a better job accommodating it.

Looking For The Semiotic Layperson

In searching for kindred spirits out there, I found a number of individual posts which I thought I could use to elucidate some of my own opinions. The following are mini-quotes from some of the people I’ve noticed online who appear to be thinking about symbols, meaning and communication in some fashion. I know there are lots of others, these just struck me as particularly interesting.

kristof28 has the same idea that I do about how symbols work:

Semiotics deals with the production of meaning. A perfectly sensible view of meaning would say that as I am the writer of this sentence so I put the meaning into it and that you, the reader, are the receiver so you take the meaning out. Semiotics is the science of understanding how signs work and how meaning emerges from the relationship between the sender and receiver.

What I would add to their basic statement is that the meaning that the receiver takes out of the message may not be exactly the same as the meaning that the sender put in. The more closely the two communicators share a common context, the more closely aligned will be their understanding. The less sharing before the message, the more likely that the message received will be different than intended.

cjc89 focuses on semiotics as the study of a larger societal process:

it is important to keep in mind that the key to semiotics is an attempt to define how meaning is socially produced (and not individually created). In this light, it will always be subject to power relations and struggles. Furthermore, meaning is always negotiated – it is never static.

In my mind, what “society” does with a symbol is to reinforce it, repeat it, and in this way amplify it. The most commonly shared concepts packaged in the most commonly recognized symbols will tend to get the most use and hence will tend toward relatively more people receiving the same message. But “society” is really a set of individual people. So it is through the popularity among a large set of people that certain symbols and concepts hold sway. I know I’m nit-picking a little here.

iheartunswjourno seems to share a worry about the power of the media:

Choosing to suppress or engage certain arbitrary relations that exist between the signifier and the signified, effectively oppressing or supporting the political agendas of their society. It is quite a scary reality to realize that the media is subtly constructing how we perceive the world.

While I agree that the bombardment of the majority conception of meaning through mass-produced symbols can be hard to counteract, I actually hold out the hope that we as individuals do have power to create meaning, at least within a sphere of influence.

(The “semiotic” term for this would be “semiosphere”, apparently)

I don’t believe in the existence of “meaning” living outside of the individual. I recognize the volume of symbolic detritus – the notion of our being surrounded by other people’s messages – certainly. And, yes, I recognize that the most powerful will control what is said in the most official channels, but none of us has to merely succumb and accept the message.

The notion of meaning being negotiated is spot on. That’s how it works between two people, and that’s how it works within a society. The miracle of it is that we humans are able to shift between points of view (contexts) with such ease that we often don’t even notice that we have done so. So while we might disagree with the consensus opinion of our countrymen, we are able to reach common ground with our next-door neighbors.

And that’s just the thing that gets the larger process moving, talking with your neighbors and coming to agreement on some aspect of reality.

Every individual can choose to accept or reject the overwhelming flow, or to create their own discourse.  And that is part of our heritage as human beings.

How to Enculturate

This post is about the basic preconditions needed for two people to communicate. It is a naive, basic description, and I know that. However, it can be a useful way to think about and discuss in lay terms the technical aspects of acts of communication.

When I think about semantics and symbology, I focus on how meaning flows from one person to another. There are several components that have to come together in order for meaning to transfer between people.

First of all, two people must share the same context, even if it is not an exact fit. Without having some commonality of experience, however tenuous, there can be no communication. Now this context may be based on shared experience (e.g., attending the same event, reading the same book) or parallel experience (e.g., becoming a parent, learning to drive a car).

With that precondition established, then the next element that must exist is that some physical mechanism (i.e., a syntactic medium) must be available that can both be manipulated and sensed by both individuals.

There would be no sense in writing on posters to communicate with a blind person across a great distance, or whispering a song to a deaf person from behind them, unless a second medium is also employed (such as having a third person read the poster aloud, or sign the song).

With a medium chosen that satisfies both conditions for both persons, then one person has to put the meaning into the medium using an established convention. In other words, the intended meaning of the message must be “encoded” onto the medium in such a way that both the sender of the message and the intended receiver of the message agree on the meaning conveyed.

These are the three minimal conditions required for communication between any two or more parties. In summary:

  1. Shared Context
  2. Physical Media that can be manipulated and sensed by both
  3. Agreed Upon Encoding

The only other elements required are that there be something to communicate and that the two individuals have the volition to try.
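The three conditions above can be demonstrated with a toy encoding. The "code books" below are invented for illustration: two parties share a medium (a written digit), but communication only succeeds when they also share the encoding convention.

```python
# Hypothetical sketch: the same sign decodes differently under
# different conventions, so condition 3 (agreed-upon encoding)
# is what actually carries the meaning across.
convention_a = {"1": "yes", "2": "no"}   # the sender's code book
convention_b = {"1": "no", "2": "yes"}   # a rival code book

sender_meaning = "yes"
# The sender encodes the meaning onto the shared medium (a digit):
encoded = next(k for k, v in convention_a.items() if v == sender_meaning)

# A receiver sharing convention A recovers the intended meaning...
assert convention_a[encoded] == "yes"
# ...while a receiver using convention B decodes something else entirely.
assert convention_b[encoded] == "no"
```

Both receivers satisfy conditions 1 and 2 (they share context and can sense the medium), yet only the one with the agreed encoding gets the message the sender put in, which is the asymmetry the post describes.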
