

Chapter 7

Measurement, Categorization and Bi-level Computational Memory



This book presents a model of human memory and situational logics. It is argued that this model may properly ground computational knowledge management technology through an extension of formalism. In 2005 we began to see that pure and applied mathematics is a type of ontological modeling. Before moving into the formalism for duplication and similarity analysis in the next two chapters, this chapter will deepen the arguments for this extension already put forward in previous chapters.


We often take for granted the human brain's evaluation of context, completeness and inferential consistency. However, in spite of determined effort, evaluations of this type are not yet made by computational systems. We feel that the artificial nature of formalism, as well as the mechanical nature of computer processors, limits the current generation of computer architecture. As though only one kind of computation existed, we worship the serial computer, leaving optical computing and quantum computing aside. The limitation of today's formalism has its effect even outside the theory of computing. In previous chapters, we have argued that the closed nature of formalism has bounded what science can accomplish. As a result, our artificial intelligence, in spite of costing perhaps a trillion dollars, is missing most features clearly available to living systems. For example, living systems understand their environments by acting on the world and observing the consequences of these acts. Learning is, in this sense, directly from experience. Computational systems have, as yet, no way to directly interact with the complex nature of human experience. With optical computers the story is a different one (Farhat, 1996; 1998).

Humans use natural language to support our understanding of the rich world we live in. This "knowledge source" differentiates humans from other biological entities. For humans, learning can be mediated by these natural "knowledge artifacts". In this sense, language allows humans to experience in an indirect fashion. We build an analogous structure for the computing device using substructural and ultrastructural artifacts, in the form of symbol systems, to achieve a similar result for the next generation computing system. These artifacts introduce a type of memory and anticipation function to the device. From memory and anticipation come an internal language, and then the steady states produced by a system image when confronted by external stimuli.

In fact, natural language can be seen as one form of human cultural knowledge. In the sense expressed by the philosopher of science Karl Popper, language is one of the artifacts that captures and preserves social knowledge of the world, externalizing personal knowledge and creating a common means of communication between individuals. Though this is true, individual words and sentences are not the container of consciousness. They are content used by the brain system to manage individual perceptions. This distinction between content and awareness has suggested the possibility that computational systems can be designed to understand their own form of natural language, and more generally a class of cultural artifacts that can even be used outside of computer science, including having impact on art and architecture. The computing devices will collectively and individually make a direct and independent contribution to the social and personal knowledge of humans. To act on this suggestion, we need not speak about the mechanisms of awareness, but rather about those mechanisms that create the contents of awareness. The machine we wish to create is not one with awareness, but one with an almost infinite memory and a means to anticipate.

Given the economic rewards that would accompany emergent language understanding technology, were it to be available, we can expect incremental movement towards a hoped-for validation of the suggestion that a new computational architecture will bring new rewards. In this chapter we will examine the consequences of this possibility and re-describe a proper computational architecture. The viewpoints presented earlier in this book are thus re-presented here so that the reader can have yet another opportunity to see the reasoning behind the tri-level architecture.

From neuropsychology and neurochemistry we know that the structural architecture for human reasoning is multi-level. In spite of what the strong Artificial Intelligence (AI) community will tell us, the evidence from outside the AI literature cannot be dismissed as uninformed. We have conjectured that reasoning works through categorical assignments at each of these levels. Research contained in (Edelman, 1987; Pribram, 1991; Schacter & Tulving, 1994; and Levine, Parks & Prueitt, 1993) is consistent with this conjecture and reflects how our views fit within an extensive scholarship.

In the development of these views, we have been motivated by the hope that computers might use a procedure to perform a computational analog to cognitive production structures, such as those in the biological mechanisms of human memory. I have been helped along the path to this conjecture through discussions with many scholars, whom I often think of as the BCN Group (see Appendix B).

Because of the nature of our discussions, we have invented the term "stratified categorization" to map out the requirement that the architecture we develop have a theory of emergence. We have felt that this is a necessary step towards proper models of the physical processes involved in perceptual measurement, categorization, and memory. To support the development of stratified procedures, notation is introduced in this chapter, and in the preceding two chapters, that allows some discussion of measurement, stratified categorization, and multi-level memory within a distributed database.

The notation was developed as part of consulting work that I did on an architecture for the control and storage of information about the content of text and image collections. The collections are part of over two billion pages of classified materials residing, in a scattered fashion, in government agencies. The notation, though simple, was considered to be overly complex by the agencies. Thus the formalism that I developed was only partially adopted in the declassification projects. However, the full use of the notation remains the core of how such a massive project could be organized.

The notation is intended to ground the practice of knowledge management including those aspects that rely on computers and those that do not. The separation of many of the problems that information systems are asked to address, into these two components, is vital to understanding what follows.

Defining the Problem

The management of 25-year-old government classified materials was never our primary interest during the period 1995 - 1998. The declassification problem was one that needed attention due to an Executive Order (EO) that President Clinton signed in 1995. The BCN Group members realized that, if addressed completely, the EO could be the impetus for fundamental change in the nature of knowledge management. However, this impetus must challenge the traditional view of formalism, logic and computation. Thus the limited adoption of part of our work, by declassification offices, was an important step forward - even if only a small one, and one made with great reluctance. The notion was to create the symbol systems for ultrastructure and produce situational logics that would aid in managing the nation's classified materials.

Two problems characterized the work on declassification. The first was the resistance of the Directors of the Offices of Declassification to the very notion of declassification, a resistance still complemented by a profound incompetence. The second problem has to do with the undesirability of having a tri-level architecture existing only in the intelligence community. It was the second problem that led to the publication of this book, and to the posting of the early manuscript drafts on the web beginning in 1998.

Since 1998, a more generic problem has caught our interest. This problem is how to manage the special type of knowledge artifacts that might come to exist in computers, in the context of personal or corporate knowledge management. This is where our interest now lies, and our hope is that the economics of tri-level technology will aid in its development and widespread use. We feel that the development of personal knowledge management software is where a contribution can be given to society. Web technologies and virtual information management systems seem to be the most promising means to implement our ideals. So in 1998, the BCN Group began to observe the several classes of Multiple User Environments (MUEs) and Multiple User Domains (MUDs) that have come to exist on the web.

So where are we in 1999? We know that commercial computational knowledge management requires a foundational theory in which system builders can account for the natural granularity and overlap of passage boundaries in text, and of concepts in the interpretation of text. Beyond this, we also need a theory of gestures and states, and an abstract (i.e., generalized) notion of state / gesture pairing. Both of these "features" must be tied to the tri-level architecture so that the decisions made by workers are encoded into a systemic language that produces the knowledge management. The foundational theory has been implemented as a tri-level process in prototypes that were built using Oracle ConText software to represent the thematic content of text. Some experimental work has been completed on the results of using the voting procedure as an adaptive message routing mechanism. However, the experimental work has not been fully developed due to the considerable costs associated with software development.

As BCN Group colleagues talked about the generic problem, we agreed that human interpretation-judgment was an essential feature that we had to account for. Our extensive review of the literature in cognitive neuropsychology, human factors, complex systems and Russian applied semiotics made this clear. In the architecture that we developed, the phenomenon of interpretation was accounted for by having the substructure of inferential aggregation fully enumerated and yet left without semantics. Semantics is added separately from substructural aggregation, from user input of judgment, system image, and external affordance structures related to ultrastructure and encoded in category policies (see Appendix A).

Using the prototype we developed, we take it as given that the descriptive enumeration of the kinds of things that are relevant, to a class of observations, will naturally separate into three relative levels:

{ substructure, middle, and contextual }

The enumeration of each of these three levels is a proper objective for knowledge management. This objective establishes the context of knowledge management technology development and use. Enumeration is observational and not based on theory.
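The three relative levels can be made concrete with a small data-structure sketch. This is our own illustration, not an implementation taken from the prototypes discussed in this chapter; every class name and field name below is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Substructure:
    """An enumerated substructural element, e.g. a theme token."""
    token: str

@dataclass
class MiddleEntity:
    """A middle-level entity composed, in context, from substructure."""
    name: str
    parts: list = field(default_factory=list)  # list of Substructure

@dataclass
class Context:
    """The contextual level: how middle-level entities relate."""
    relations: list = field(default_factory=list)  # (entity, entity, label) triples

# Enumerating the three relative levels for a toy observation:
t1, t2 = Substructure("commerce"), Substructure("regulation")
clause = MiddleEntity("commerce clause", [t1, t2])
ctx = Context([(clause, clause, "self-reference")])
```

The point of the sketch is only the separation of concerns: substructure is enumerated without semantics, middle-level entities aggregate substructure, and the contextual level records relations among middle-level entities.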

The observational grounding for our enumeration methods is based on three important facts. First, empirical observation, by humans, determines a set of active observables needed to model and control a complex system over a specific time period. We assume, however, that the number of observables is always small. This means that, in order to control the system, the set of properties that we must enumerate about middle level entities is small. The second fact is that the elements in the substructure of a complex system can periodically be altered radically. This radical substitution of the substructural memory store allows a combination of substructure within context to model the elements of the middle level. The third empirical observation is that the context is defined by how middle level entities move around within an ordered ecosystem. This allows the context to be tokenized in the form of Petri nets, Bayesian nets, cognitive graphs (Sowa) or other types of graph constructs.

So if we are looking at concepts, then we need a concept taxonomy of elementary tokens and relationships between tokens. If we are looking at structural properties of page segments, as the substructural elements of scanned images of text, then the pages themselves become the middle level and these middle level elements are placed into the context of the document using the voting procedure.

Tokenization and binding

Human concepts are, of course, not easily described by tokenization of any type. However, it is not so difficult to imagine that a specific text, such as the Constitution of the United States, could be associated with a specific cognitive graph. The concepts are described one at a time, with due diligence made to be accurate and complete. Having met the pragmatic requirement of doing useful work, we realized, as is often the case, that a good effort will produce reasonable value. Various representational methods exist. For example, if we use knowledge engineering methodology, each concept would be related to a location, or node, in the graph and the important relationships between the concepts would be designated by relationships between graph locations.

As an exercise, we imagine the mental state of a Constitutional scholar. We can imagine the scholar thinking about concepts brought forth by the interpretation of text. However, the full situation is not so simple, since we must mix notions of mental states (which require neuropsychology to discuss), cognitive graphs (which require natural language understanding tools and cognitive science to discuss), and deductive logic (which is used in artificial intelligence).

We can immediately see that the granularity of the mental state is not so easy to delineate, nor are the boundaries of concepts defined by crisp set theory. So how can we proceed? Our good efforts have given us some value, but the results are limited somehow. Again, the answer seems to be related to the nature of interpretation and emergence, and to the theorems, by Gödel, Church and others, on completeness and consistency.

Continuing with our example, the Constitution can be consistently interpreted by assuming a specific point of view. Within any single view, one expects inferential consistency; however, one should also expect that the "largest" cognitive graph of the Constitution subsumes all of these views. In this way, all points of view are represented in a single cognitive graph, which is a virtual cognitive space containing symbolic reference to all of its possible interpretants.

It is reasonable to assume that this virtual space can have no "preferred" assignment of meaningfulness, since meaningfulness is generally regarded as a reduction of possibility to a single point of view. A single point of view has an implied situational logic that holds the reasoning together. Reasoning is about consequences in a world view and thus is bound together with any assignment of meaningfulness.

The binding, then, of meaningfulness into an otherwise not-bound bag of substructural elements occurs in the context that is supplied by the third 'contextual' level. Perhaps the binding of substructure and context occurs in the special circumstances coincident with the emergence of a mental image. This is clearly consistent with our understanding of the formation of chemical compounds from bags of atoms. Chemical production processes in our environment supply the context and the atoms supply the material.

In the human brain, global consistency is likely "sought" through local phase coherence in electromagnetic phenomena in dendritic trees, as well as through intersystem modulation and adaptive (evolutionary) resonance (Pribram, 1971; 1991; Grossberg, 1972a; 1972b). In biological neural systems, resonance works through structural invariance and reentrant signaling between amygdala and hippocampus, cerebellum and associational cortex, hippocampus and frontal lobes, etc. Thus, it is reasonable to suppose that situational logics must account for compartmentalized coherence and the emergence of invariant forms of behavior.

In the tri-level architecture, context, coherence and completeness must have proper roles in assembling a computational projection from a large "situation-less" virtual cognitive type structure. One suspects that the assignment should "project" onto specific cognitive structures an interpretation that is appropriate to the situation. But it seems clear that this projection must follow some distributed set of rules that accounts for these roles of context, coherence and completeness. This is indeed a very difficult problem, but one that is automatically solved each time any human reads a book or becomes involved in a conversation.


Using the tri-level architecture, universal (aggregated substructural compounds) semantic spaces can be built without fixed situational evaluation. Truth evaluation about situational aspects can be separated from an underlying representation of the components of prototypical concepts. Truth evaluation, about properties of aggregated constraints, can be distributed and will allow real time human perturbations of the aggregation rules. So, in a fashion that is similar to human implicit memory, the situational evaluation becomes emergent in real time. The mechanisms of this emergence have to be explained as the results of the voting procedure and the related management of categorization. This explanation begins with the architectural descriptions in Chapter 8.

Component representation, in our architecture, allows one to address the need for pattern completion and inference. These features have important consequences. In real contexts, the nature of things changes, and the completion and inference must stay tuned to these changes. Perhaps it is also true that only a little detail may be known about the nature of the components, even as they change. The Mill's logic introduced in Chapter 9 allows a part-to-whole study of relationships between emergent situations, of a specific class, and the components that are partially known to make up the situation. Pattern completion and inference, then, occur into the prototypes of situations. This is as it should be, for logic is the study of how thoughts "follow", and thoughts are complex natural processes.

The information management systems produce representations of patterns in the form of database records, indices of database structures, case logic, and stored query elements. In some cases, either Bayesian (statistical) analysis or AI produces some additional organization to the data structures. In rare cases, the Information Technology architecture supports a model of content which serves as a model for concept identification and retrieval by concept. It is via this model that one can use adaptive technology, including the use of event profiles.

The tri-level architecture identifies the mental image as a projection from a virtual cognitive space. The space is virtual since it exists only in the sense of possibilities that are emergent from the memory substructure when the memory is activated in the temporal domain. The cognitive space is a model of the past, when mixed with anticipation. It is not about the present moment, since the present moment brings its own nature into that mix. But situational interpretations of cognitive substructures are clearly composed based on statistical profiles, of some sort, derived from past experience. In natural complex systems, the prototypes are well represented as statistical artifacts that can act in a formative fashion in a present moment. The assembly and binding process is distributed by nature across many instances in slightly different circumstances. This temporally distributed nature allows the formative character of the prototype to be instantiated. The period of time over which the distribution is made allows a statistical representation, as well as aligning the metabolic or ecological process into well established, but softly indeterminate, paths within processes that are well represented as switching networks (Kauffman, 1993; Edelman, 1989).

Following this line of thought, in the voting procedure, the election of category membership is distributed. The process is also algorithmic and thus perturbation of the voting may allow appropriate contextual interpretation of some uniqueness about the present situation, where meaningfulness is found. This perturbation allows the injection of intentionality, for example from the user during decisions.

Context also involves a degree of expectation. For the tri-level architecture this presents a problem, since expectation must be built up over a long period of time. But expectations about the possibilities of the future are not to be explained in any simple way. Expectations must account for the ecological circuits that treat the individual as one of many interacting agents, that is to say as an atom of a particular type. Business process modeling is generally about an enumeration of client and production life cycles as defined in a model of the company's business ecosystem.

In practice, circuit analysis does produce interesting, but fixed and static, cognitive spaces defined within a situational context. This may be partially useful in business process modeling, but it is not sufficient for text understanding. However, using the notation in the next two chapters, one can add to this static representational space by having available a data structure for encoding substructural valence (see also Chapter 8). The emergence of meaning is then via a combination, or aggregation, process, in context. Procedurally defined logical entailment, in the form of rules and data structures, is then brought to bear from various types of situational inference and within a logic produced to express interpretations.

The voting procedures combine some surface elements of QAT with the deep features of connectionist architectures. These procedures synthesize local rules with global rules to produce a stratified categorization policy for represented objects. The categorizations form a collection in which the "cause" of the category structure can be assumed to be dynamic. This non-stationary feature is the primary one that we are exploring in this chapter, and it leads to our definition of "semiotically open sets". In particular, the dynamic nature of these category policies will be seen in the way we develop some elementary notions from topological logic. First we need to deconstruct a bit of logic.

The QAT languages

The QAT languages form a subsumptive relationship:

L_oi ⊂ L_i ⊂ L_e ⊂ L'_e ⊂ L''_e .

Various types of evaluation functions can be triggered only during the extension of the second internal language Li to the first external language Le. It is at this point of extension that we introduce the non-von Neumann elements in two ways. The first is by opening up the sequence of computations to user input as a perturbation of membership rules. The second is by considering elements of optical and quantum computing (a subject that is not discussed in this book). As already mentioned, in standard QAT this assignment of evaluation is via a multi-valued logic on a set of hypotheses that are defined in Li. The multiple values are necessary to encode judgments about plausibility and completeness.

QAT separates the formal treatment of logical atoms, in Loi, from both generalization of atoms and formation of compound statements, in Li, and evaluation of conjectures, in Le. By using the Russian QAT languages in this way, we have a formal means to separate the atomic structure of semantic graphs from various distinct regimes of situational evaluation. For example, a conservative scholar’s interpretations will likely define some part of a cognitive graph that can be judged to be contradictory to a liberal scholar’s interpretive viewpoint. Thus different parts of the cognitive graph and different sets of evaluation rules would be involved in the two interpretations. The two sets of atoms and two sets of evaluation functions lead to different inferences.

The results of the interpretive act are consequent to the structural relationships between atomic elements, in Loi, but subject to the evaluation that occurs in Le. In spite of the difficulty of situational methodology, defining quasi-formal relationships between scholarly viewpoints allows the substructures of an emergent cognitive graph to serve as a situational language about mental states, thus providing the necessary parallel between cognitive science and the stratified computing architecture.

Our atoms and inference rules

In our system, elementary fact-like statements are given logical evaluations based on where a passage has been placed in a text collection. Generic conjecture-like statements are also given a computed evaluation based on the distribution of category placements. This computed evaluation is a deduction of truth value given specific hypotheses and assumptions about facts.

In the Russian form of QAT, deductive chains are carried out using, as logical compounds, the semi-lattice of subsets of substructures that "sign" or represent causes of situations. These substructures can be formed from thematic analysis of natural language, using any one of several commercial systems.

For example, one class of deductive chains identifies Structured Query Language (SQL) statements over the set of themes and the Boolean operators. We feel that the concept of query can be expanded through the use of situational logic and distributed processing. For example, QAT-defined "blocks" and "covers" are derived as a general means to optimize deductive chains. For us, a block embodies the cumulative effect of negative knowledge in extended Mill's logic, and a cover is a condition where so-called minimal intersections occur in every representation of an object instance.
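A minimal sketch may help fix the idea of a Boolean "query over themes" of the kind such SQL statements would express. The passage theme sets and the query below are invented for illustration; they stand in for the thematic representations produced by commercial text analysis software.

```python
# Toy passage representations: each passage id maps to its set of themes.
passages = {
    "p1": {"commerce", "regulation", "states"},
    "p2": {"speech", "press", "assembly"},
    "p3": {"commerce", "taxation"},
}

def query(passages, must_have, must_not_have=frozenset()):
    """Return passage ids whose theme sets satisfy the Boolean condition."""
    return sorted(
        pid for pid, themes in passages.items()
        if must_have <= themes and not (must_not_have & themes)
    )

hits = query(passages, must_have={"commerce"}, must_not_have={"taxation"})
# hits == ["p1"]
```

The `must_not_have` clause is the simplest possible stand-in for "negative knowledge"; a QAT block would accumulate such exclusions across a deductive chain rather than in a single query.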

The two notions of cover and block are, respectively, topological and logical in nature. However, the complete implementation of block and cover methods from the QAT deductive core requires a more advanced understanding of Russian QAT than we have yet managed, or at least more than what I can write down at this time.

In place of these deductive chains we use a simpler set of one-step declarative statements of the form:

Fact like statements:

·         theme ti is an element of the representational set Tk for a passage.

·         theme ti is an element of the representational set T*k for a category.

·         document di was placed into category q during a training period.

Conjecture like statements:

·         the most representative set of minimal intersections, mi, of object representational sets shared by category g and category r is {m1, . . . , mn }

·         the most representative set of meaningful intersections , mi, shared by category g and category r is {m1, . . . , mn}

·         the set of meaningful intersections, mi, unique to category g is {m1, . . . , mn }

These statements are described by, and are evaluated "locally" against, representational systems derived from past examples.
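The conjecture-like statements above can be illustrated with ordinary set operations. The theme sets for the two categories are hypothetical, and plain set intersection is only a crude stand-in for the QAT notion of a minimal intersection, which carries additional structure.

```python
# Hypothetical representational theme sets for two categories.
T_g = {"commerce", "regulation", "taxation"}   # category g
T_r = {"commerce", "regulation", "speech"}     # category r

# Candidate "meaningful intersections" shared by g and r.
shared = T_g & T_r

# Themes unique to category g (no counterpart in r).
unique_to_g = T_g - T_r
```

Here `shared` plays the role of {m1, . . . , mn} for the pair (g, r), and `unique_to_g` the role of the intersections unique to g.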

The general framework for specifying evaluation functions based on these statements is the Process Compartment Hypothesis, PCH (Prueitt, 1995; Chapter 1). Each evaluation is modeled as an emergent "compartment", where the compartment is formed through an aggregation of basic elements into an interpretant of the theme representation as one belonging to the categories.

Using the two types of evaluation functions, a voting procedure may designate category membership and relationships between categories that were not defined in the substructures of the basic elements. The relationships are distributed and emergent properties of the voting. It may be that the category relationships are implicit, not explicit, in the natures of the substructural elements. However, user perturbation of the aggregation process may be a cause of emergent relationships that are not predicted from substructural properties.
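A toy rendering may make the election concrete. In this sketch, each category's representational theme set casts a vote proportional to its overlap with the object's themes, and a user-supplied perturbation weight can shift the election. The function, the weights and all theme names are our own illustration, far simpler than the voting procedure itself.

```python
def vote(object_themes, category_reps, perturbation=None):
    """Elect a category for an object from theme overlap, with optional
    user perturbation of the membership scores."""
    perturbation = perturbation or {}
    scores = {}
    for cat, rep in category_reps.items():
        overlap = len(object_themes & rep) / max(len(rep), 1)
        scores[cat] = overlap * perturbation.get(cat, 1.0)
    winner = max(scores, key=scores.get)
    return winner, scores

reps = {"g": {"commerce", "regulation"}, "r": {"speech", "press"}}
winner, scores = vote({"commerce", "regulation", "speech"}, reps)
# winner == "g"; a perturbation such as {"r": 3.0} overturns this election
```

The perturbation argument is the point of the sketch: it is the hook through which user judgment is injected into an otherwise algorithmic election.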

Due to either new knowledge or fundamental changes in the target of investigation, it is possible that new observables will be introduced and that old observables will be modified or removed. This possibility is reflected in a dynamic reconfiguration of the category policy, as discussed shortly. The last sections of this chapter provide the proper notation for discussing this reconfiguration of category policy.

To motivate this issue a bit more, we point out again that context is situational, and this means that observables should be relative. Using the conjectures of the PCH, the new observables can be recognized automatically, and updates to category representations, substructure representations and computed conjectures (about category assignments) can be made while leaving the process open.

Such updates will often lead to non-monotonic effects. So in commercial knowledge management implementation, change management principles should be carefully employed in order to not produce community reactions to the nature of change itself. Moreover, the context itself may change over time and thus the old observables may no longer be valid. Re-measurement and change management is thus critical.

Primacy of measurement

In biological systems, the measurement process is more primitive than perception. This establishes an important parallel to formal logic. In logical systems the measurement process is often involved in the formation of axioms and inference rules, and forgotten afterwards. The same is true for the scientific awareness about the issue of biological measurement; outside of a small literature the measurement problem is completely ignored. Because the logical treatment of non-static measurement is so difficult, we need some understanding of how biology handles the measurement problem.

Our intuition tells us that the human brain recomputes the full set of observables as a matter of habit. However, we must state that this intuition depends on an analogy between cognitive and algorithmic deductions. Self and system image is a key notion within this analogy that will be developed elsewhere (Prueitt, 1987).

A "self image", or concept of self, provides stability to the human psychological system. So does the memory system. To achieve the same type of stability and control over our system, we feel that we can use the statistical artifacts that are stored in a database of substructure representations. But we also allow the voting to be influenced by external input.

The measurement process is more primitive than the evaluation of logical atoms. In the tri-level architecture, we build situational logics after a "transparent" measurement produces a fresh measure of object and substructure invariances. Only after this "proper measurement" and the development of a new situational logic, from the ground up, can we expect to find complete relevance from the derived axiomatic system.

The way in which the situational logic is built is not complicated, only complex. By this we mean that the logic is a consequence of the application of the voting procedure, which when written down is only three pages long. The voting procedure itself is distributed and, in our definition of complexity, complex if it has emergent features. The linkage between the elements of the category policy is emergent and is how one looks at the issue of reconfiguration of policy. Thus the voting procedure is complex, but not complicated.

The parallel to human mental induction

Our immediate purpose is to describe how one can take some set of signs referent to substructural invariants and impose a situational deductive apparatus that produces results similar to conceptual emergence.

This is not merely the "open systems" problem of encoding the non-monotonicity of situational inference in a deductive logic, but an open systems requirement for a new situational logic every time the relevant set of observables changes.

Russian QAT treats this problem in a unique way. It was realized that inferential non-monotonicity is essentially a property of the evaluation functions, and these do not get introduced until after the measurement process has built fundamental observables and an unevaluated notation is available. Thus a great deal of formal symbol manipulation can be done on sets of basic symbols without asking whether or not something is "true". For example, the manipulation can be in probability spaces, or it can occur as an evolutionary computation.

The relationship between things can be gauged at the substructural level to produce something akin to the atomic periodic table in chemistry. Probability distributions can be built, as is so often done in the methods of proprietary commercial full text retrieval systems, and thus used with Bayesian analysis to produce a weak form of inference. The strong form of inference works from an encoding of tokens that mark the presence or absence of properties in middle level entities. This presumably must use something like the way we have represented the completion of Mill's logic canons in Chapter 8. This strong form of inference is "cross level" and thus complex.
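The "weak form of inference" mentioned above can be sketched as a naive Bayes classifier over substructural tokens. The corpus, categories, and counts below are invented for illustration; this is a minimal sketch of the probabilistic route, not the voting procedure itself.

```python
from collections import Counter
from math import log

def train(labeled_docs):
    """Estimate category counts and per-category token counts
    from (tokens, category) pairs in a toy training set."""
    cat_counts = Counter()
    token_counts = {}          # category -> Counter of tokens
    vocab = set()
    for tokens, cat in labeled_docs:
        cat_counts[cat] += 1
        token_counts.setdefault(cat, Counter()).update(tokens)
        vocab.update(tokens)
    return cat_counts, token_counts, vocab

def classify(tokens, cat_counts, token_counts, vocab):
    """Return the category with the highest posterior log-probability,
    using Laplace smoothing for unseen tokens."""
    total_docs = sum(cat_counts.values())
    best, best_score = None, float("-inf")
    for cat, n in cat_counts.items():
        score = log(n / total_docs)
        counts = token_counts[cat]
        denom = sum(counts.values()) + len(vocab)
        for t in tokens:
            score += log((counts[t] + 1) / denom)
        if score > best_score:
            best, best_score = cat, score
    return best

docs = [(["trade", "tariff"], "econ"), (["goal", "match"], "sport"),
        (["market", "trade"], "econ"), (["team", "goal"], "sport")]
model = train(docs)
print(classify(["trade", "market"], *model))  # econ
```

The sketch shows why this inference is "weak": the assignment rests only on token statistics, with no representation of cross-level structure.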

Elemental notions for topological logic

Classical set theory assumes a fixed set membership rule. Fixed set membership rules cannot be powerful enough to model natural complex systems over time periods in which fundamental changes in causation occur. It cannot be so due to arguments that are best put forth in the work of the theoretical biologist Robert Rosen (1985), and the theoretical physicist Roger Penrose (1993).

The observables that are needed to describe, control or merely model natural complex systems are not static. Thus a primary distinction is made between complex systems and Newtonian systems, in which the observables are always reducible to mass, energy and momentum. So we need a way of talking about the class of observables that are relevant to a specific investigation or observation of a situation. I have coined the phrase "semiotically open set" to assist in our discussion.

Let us again consider some notation. We have defined a semiotically open set to be of the form:

{ e1 , e2 , . . . , en , ℵo }

where the special symbol ℵo has the property of adding, deleting or modifying other elements of the set, and ei is an observable, for i = 1, . . . , n.

We know that the set of observables must be considered as a semiotically open set in which, at any time, a new class of states may be measured from this construct. Conceivably, in the next generation of machine-human interfaces, the integer n will depend on the situational logic developed by the conversion of cognitive processes to deductive inference on data structures. In the voting procedure, n depends on the stress between the categorization policy and the user community's satisfaction with routing and retrieval outcomes.

Let's see how the notation works. Suppose, for example, that the odd-numbered observables were removed and some of the other observables modified.

{ e1 , e2 , . . . , en , ℵo } → { e2 , e4 , . . . , em , ℵo }.

m would be the largest even index number and

ei → ei′

could be a non-trivial replacement based on similarity measures. Suppose then that three new observables were added:

{ e2 , e4 , . . . , em , ℵo } → { e2 , e4 , . . . , em , em+1 , em+2 , em+3 , ℵo }

The exercise above gives a sense of how category policies would be changed, and how one might model the categorical aspects of emergent mental images.
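The bookkeeping in this exercise can be mimicked in a short program. The class below is a speculative sketch: the special symbol is modeled as an operator that rewrites the observable set, and all names are ours, introduced only for illustration.

```python
class OpenSet:
    """A 'semiotically open set' sketch: a list of observables plus a
    special operation (standing in for the aleph symbol) that can add,
    delete, or modify elements of the set itself."""

    def __init__(self, observables):
        self.observables = list(observables)

    def apply(self, transform):
        """The special symbol in action: rewrite the set in place."""
        self.observables = transform(self.observables)
        return self

s = OpenSet([f"e{i}" for i in range(1, 7)])        # { e1 , ... , e6 }

# Remove the odd-indexed observables, as in the worked example above.
s.apply(lambda es: [e for e in es if int(e[1:]) % 2 == 0])
print(s.observables)                               # ['e2', 'e4', 'e6']

# Add three new observables, extending the even-indexed set.
s.apply(lambda es: es + ["e7", "e8", "e9"])
print(s.observables)                               # ['e2', 'e4', 'e6', 'e7', 'e8', 'e9']
```

The point of the sketch is only that the membership rule is itself mutable data, not a fixed predicate as in classical set theory.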

In order to catch fundamental changes in causation and linkage, it is absolutely essential that we identify the active causes, and properties, in situations through periodic measurement. Only after proper measurement can we evaluate the meaning of atoms, and only as meaningfulness is "re-established" does a new notion of monotonicity reassert itself. This is a "monotonicity of reasonableness" that is softer and less well defined than we find in the deductive syllogism of strong AI.

In most cases, we expect that some regular combination of substructures of graphs in the set of basic elements will model the interpretant. This provides a great utility, but one that we must use with an awareness of the non-stationarity of natural systems.

Other types of semiotically open sets are useful to our discussion.

The emergence of a computational interpretant must be open to human interaction, and thus an open class of computational procedures is required to support computational emergence of interpretants from a set of basic elements (also an open set). These procedures can be selected, declaratively, by a human but must be executed algorithmically by a computer. The same problem arises that we addressed in representations. The procedures need to be adaptively constructed in real time to fit situations. Thus we need a substructure that produces a semiotically open set of procedures through emergence. In this way, the process of emergence can be perturbed by human interaction at the point of an (irreversible) objective decision in real time.

Wavelets, global analysis, and immune systems

A computer model of human interaction is likely to be in the form of some declarative description of objectives in interaction with a stratified memory system. It should not be surprising that the representation of objectives might be managed by computational architectures based on current experimental neuropsychological evidence and on theoretical immunology. The model of human interaction, as conceived by the author, would use a Gabor function to encode wavelet representations into a temporal span.

Wavelet representations, when used as retrieval indexes, are computationally efficient. And wavelets can store the full representation of substructure and binding elements related to category relationships. Thus a paradigm is defined that would allow global indexing of the structural and conceptual content of very large text and image collections.
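As a toy illustration of the Gabor functions mentioned above, the following sketch builds a one-dimensional Gabor kernel (a Gaussian-windowed cosine) and uses its inner product with a signal as a crude retrieval score. All parameter values are illustrative assumptions, not values drawn from the architecture described here.

```python
import math

def gabor(t, sigma=1.0, freq=1.0, phase=0.0):
    """Real 1-D Gabor function: a cosine localized by a Gaussian window."""
    return math.exp(-t * t / (2 * sigma * sigma)) * \
           math.cos(2 * math.pi * freq * t + phase)

# Sample the interval [-2, 2] at 401 points.
ts = [-2.0 + 4.0 * i / 400 for i in range(401)]
kernel = [gabor(t, sigma=0.5, freq=2.0) for t in ts]

# Inner product as a crude "index lookup": large when the signal shares
# the kernel's frequency, near zero otherwise.
match = sum(k * math.cos(2 * math.pi * 2.0 * t) for k, t in zip(kernel, ts))
mismatch = sum(k * math.cos(2 * math.pi * 7.0 * t) for k, t in zip(kernel, ts))
print(match > abs(mismatch))  # True
```

The selectivity shown here, localized in both time and frequency, is what makes Gabor-type coefficients plausible as compact retrieval indexes.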

This use of wavelets, e.g. as a global index, does introduce some topics that cannot be treated fully at this time. Much needs to be studied if the techniques introduced in this book are used to build large scale tri-level knowledge acquisition and management systems. However, even without the experimentation that is indicated, it seems clear from theoretical argument alone that a "computational immune system" might use clonal selection based methods to produce a model of what is, and is not, in protected knowledge bases. The same argument suggests a means to produce interactive dynamic computational segmentation of areas in a traditional knowledge base, such as the one used by the software product, Oracle ConText.

It is thus natural that we might talk about conceptual antigens that protect the coherence of interpreted points of view (Prueitt, unpublished).

The selection of conceptual antigens might involve deselecting pattern recognizers where the patterns are the minimal intersections of representational sets. This same set of minimal intersections could serve as the basis elements of the Gabor transform for computational knowledge management in parallel with what is envisioned to occur in brain systems (Pribram, 1991).

Measurement processes

This section addresses the measurement problem within a paradigm motivated by models of brain function.

We propose to follow the guidelines published in the work of the Russian QAT school, but with certain modifications to fit the problem of natural language understanding. The guidelines suggest that, like QAT, an open deductive system could be constructed to reason about meaning, coherence, and categorization. However, our theory of categorization provides us a strategy to make real time changes to some essential data structures. These data structures are then interpreted and manipulated further by the rules of a symbol system or perhaps an artificial neural network or evolutionary programming. The guidelines call for a notion of signs that must be quite general since we wish to account for the phenomena of human memory and memory’s role in selective attention. Thus part of the sign system would be distributed and thus implicit in nature. The nature of the sign system must involve openness to new observables and to the assignment of meaningfulness by interpretants.

We believe that the biological model tells us how natural intelligence overcomes certain limitations seen in formal logic and deductive structures. Using metabolic type processes, the constructed system of signs supports an awareness system for interpretation of the invariants of a complex system, either directly or through the imagery of an intermediate logic composed of memory substructures. The problem of understanding natural language is then taken as a special example of the problem of understanding complex systems.

To be successful, the system of signs must be directed at a situational grounding of logic about the unique characteristics of the object of investigation as well as its statistical properties. This is to be done using structures that are constructed episodically in parallel with perceptual measurement, the reorganization of the substructure of memories, and the dual composition of the contents of awareness.

Representation and measurement


The substructure may have representation in the form of explicit logical atoms and inference rules, as in Mill's logic, or implicit basins of attraction, as in connectionist systems. Let us look at some notation again to set our concepts in a more permanent fashion, and see how far our theory might lead us.

Let O be a set of interpretations of the perceived signs of an object. If w is a measurement device, then the collection of pre-experiences, w⁻¹(O), must be regarded as composed from some theoretical construct consisting of unmeasured states. The construct is modeled on notions about unmeasured quantum mechanical states.

The measurement process takes, as an object of reference, some ontology of spatiotemporal events and produces a more or less well formed set of structural constraints. These events are observed as measured invariants. An example of structural constraints is the valence or other affordance characteristics of elementary atoms when combined into molecules. A second example is the set of relationships between locations in a semantic net such as Oracle's Knowledge Catalog (OKC) or the cognitive graphics of TextWise Inc. These structural constraints are computational and are enforced when the database management system indexes text columns using linguistic analysis.

Using an OKC linguistic processor, O is regarded as a collection of well delineated passages or, if passages are not distinct and separated, simple written text or narrative. The measurement process w is a procedural computation of the theme phrase representations or, if available, syntagmatic units from the Oracle Knowledge Catalog. As an aside, OKC was the first "knowledge representational means" to be used by a voting procedure; however, there are other commercial taxonomy systems appearing in the marketplace and these can also be used with the procedure.

OKC was never able to deliver to the marketplace a method for situational representation. However, in the tri-level argumentation, the semantic rules for aggregation of logical atoms are specific to situational classes. Thus we could improve the categorical performance of OKC in full text routing tasks.

Using the tri-level architecture, the object of analysis is assumed to be in a context that comes from a model of the metabolic processes that use topological logic and lattice theory. Using QAT-like procedures a class of elemental atoms can, in theory, always be fully identified within the situational context. First, the set of subsets of the set of all atoms, called the power set on a universal set, defines a lattice. Connected and complete substructures of this lattice are semi-lattices.

These semi-lattices play an essential role in reducing the combinatorial explosion implicit in a blind application of Mill's logic to representational sets. In the tri-level architecture the use of logic is not blind. QAT multi-valued and non-monotonic plausible reasoning is used to identify important substructures in this semi-lattice as related to a set of hypotheses about structural similarity between events. Similarity and equivalence are then operationally defined via these hypotheses. Similarity and equivalence are thus treated as generalizations of the topological notion of closeness. The details of this generalization are yet to be written out and published.
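The lattice-theoretic vocabulary used here can be made concrete. The sketch below enumerates the power set of a three-atom universe and tests whether a family of subsets is closed under meet (intersection), i.e. whether it forms a meet-semilattice. The atoms and families are invented for illustration.

```python
from itertools import combinations

atoms = {"a", "b", "c"}

# The power set ordered by inclusion is a lattice:
# meet = intersection, join = union.
power_set = [frozenset(c) for r in range(len(atoms) + 1)
             for c in combinations(sorted(atoms), r)]

def closed_under_meet(family):
    """True when every pairwise intersection stays inside the family,
    i.e. the family is a meet-semilattice under inclusion."""
    return all(x & y in family for x in family for y in family)

family = {frozenset(), frozenset({"a"}), frozenset({"a", "b"})}
print(len(power_set))                 # 8 subsets of a 3-atom universe
print(closed_under_meet(family))      # True
# {a} and {b} meet at the empty set, which is absent here:
print(closed_under_meet({frozenset({"a"}), frozenset({"b"})}))  # False
```

The combinatorial point is visible even at this scale: the power set doubles with every new atom, which is why restricting attention to selected semi-lattices matters.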


We feel that QAT-like procedures might identify and constitutionalize structural constraints on combinations of atoms, thus extending the class of evaluation functions that have been developed via our experimental work on the voting procedure. These constraints identify aggregations of the substructural, logical, atoms (Pospelov, 1986, p. 37). Again, we believe that semiotically open sets of atoms and rules of semantics, derived from structural constraints plus monotonic and non-monotonic deductive reasoning, should determine the substance of an intermediate logic in a context in which these atoms are combined into representational elements.

The proper extension of human memory to a machine based database must produce an emergent situational analysis, where stratified and entangled categories form as by-products of computing. In human reasoning, these categories are emergent in real time when a situation is being interpreted. This emergence is necessary to handle the novelty and non-stationarity of the natural world, as well as to constrain the otherwise combinatorial explosion that is faced by standard expert systems.

Our voting procedure provides deductive extensions to the inductive capability of a knowledgeable user. This means that the experience of expert input and iterated refinement, encoded within the tri-level architecture and intermediate logics, is made available to users in a natural way.

Generalizing the model of human memory

In this book, a correspondence is being discussed between the current generation of models reported in human memory research literature and some unique developments in Russian open and situational logics. Chapter 5 attempts to unify these models in support of computer based management of knowledge artifacts.

The unified model brings statistical representations of the "components" of the past to the present moment, and parcels the measured states of the world into a category policy. Expectation and goal formation brings the anticipation of the future to this same present moment. The model allows a specification of mechanisms that are observed to create a decomposition of experience into minimal invariances that are stored in different regions of the brain, and the recomposition of selected invariances as emergent mental states. Similar mechanisms are used in our machine version of tri-level deduction corresponding to human induction and cognition (see Figure 1).

So how does emergence come to exist, and what substance is combined together? These questions have been treated in general within various scientific disciplines. However, we need to treat the issue of emergence in an applied semiotic fashion, with an eye on how sign systems might assist, through procedural means, the autonomous "understanding" of natural language text to the degree necessary to place passages of text into categories.

Figure 1: The process flow that we take as an accurate model of human memory formation, storage and use.

Our work has involved a rather deep situational analysis. We know that the quality of the computational resources, memory stores and situational logics, affects the placement and the very definition of the crisp or stratified categories needed for knowledge management.

So there are three steps:

1.     the identification of a set of minimal elements that could indicate causation of properties,

2.     the development of a situational logic that predicts the properties of situations given a partial or complete list of minimal elements, and

3.     the maintenance of a second order system for changing the intermediate language to accommodate new information.

The first of these steps is addressed using a version of bi-level logical argumentation, relating structure to the properties of a functional whole.
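As a rough illustration of the three steps, the sketch below stands in for them with the simplest possible mechanisms: tokens common to all positive exemplars as "minimal elements" (in the spirit of Mill's method of agreement), subset containment as the situational rule, and intersection with new exemplars as second-order revision. The exemplars and properties are invented.

```python
def minimal_elements(exemplars):
    """Step 1: tokens shared by every exemplar of a property, a crude
    stand-in for 'minimal elements that could indicate causation'."""
    sets = [set(tokens) for tokens in exemplars]
    return set.intersection(*sets) if sets else set()

def build_rule(elements):
    """Step 2: a situational rule predicting the property whenever
    all of the minimal elements are present."""
    return lambda tokens: elements <= set(tokens)

def revise(elements, new_exemplar):
    """Step 3: second-order maintenance: shrink the element set when
    a new positive exemplar lacks some current elements."""
    return elements & set(new_exemplar)

pos = [["acid", "metal", "gas"], ["acid", "metal", "salt"]]
elems = minimal_elements(pos)              # {'acid', 'metal'}
rule = build_rule(elems)
print(rule(["acid", "metal", "water"]))    # True
elems = revise(elems, ["acid", "gas"])     # new exemplar lacks 'metal'
print(build_rule(elems)(["acid"]))         # True
```

The revision step is the interesting one: the intermediate language (here, just a set of required tokens) is itself changed by new information, rather than only the conclusions drawn from it.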

Notation for Category Policies


Let

C = { C1 , C2 , C3 , . . . , Cm }

be a set of categories, and

(C1 , C2 , C3 , . . . , Cm)

be an arbitrary ordering of C indexed by the set of natural numbers I = { 1, 2, 3, . . . , m }. If C has been defined by a training set, O, where exemplars from O have been "placed" into a category, then C is regarded as a categorization policy.

As we will see from the experimental write-up in Prueitt (in progress), a specific placement can be "crisp" where each exemplar is placed into only one category, or "stratified" where a set of categories are ordered by placement preference. This ordering, of a categorization policy, produces an assignment policy for each exemplar.

Figure 2: Bi-level architecture for categorization using memory

An ordering of the category sets is indexed by some permutation of the index set I. For example, if

C = { C1 , C2 , C3 , C4 }

then

C = ( C4 , C3 , C1 , C2 )

is a stratified placement policy in which category C4 is the first place assignment, category C3 is the second place assignment, category C1 is the third place assignment, and category C2 is the fourth place assignment.
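A categorization policy of this kind is easy to express in code. The sketch below is a toy under stated assumptions: the preference scores are invented, and ordering by descending score merely stands in for whatever procedure actually produces the stratified placement.

```python
def stratified_policy(scores):
    """Order categories by descending preference score: a 'stratified'
    assignment policy, i.e. a permutation of the category set."""
    return sorted(scores, key=scores.get, reverse=True)

def crisp_policy(scores):
    """A 'crisp' policy keeps only the first-place assignment."""
    return stratified_policy(scores)[0]

# Illustrative scores chosen to reproduce the ordering in the text.
scores = {"C1": 0.2, "C2": 0.1, "C3": 0.3, "C4": 0.4}
print(stratified_policy(scores))   # ['C4', 'C3', 'C1', 'C2']
print(crisp_policy(scores))        # C4
```

The crisp policy is thus a truncation of the stratified one, which matches the text's treatment of crisp placement as a special case.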

Let Q be a membership function that forms an assignment policy for an event instance Ok. The assignment policy may be crisp or stratified. In Figure 2, Q is shown as a composition of a number of subprocesses, each of which we assume can be defined procedurally or declaratively. The declarative definition is made by direct human judgment and recorded as a placement of training set event instances. In the procedural definition, the first process is one which forms a representational set for the event instance, through some "measurement device". These representational sets are placed into a data structure, but the procedural details of how this is done are left open (pending additional experimental work on the representation problem).

Q performs a bi-level integration of two processes, I and S, that are linked together to provide situational logic based on a set of "observables" produced by the measurement process w, and organized into category representations in the data structures of O and substructure representations in the data structures of V. Two pairs of complementary cross scale phenomena occur (see Figure 2). In each case, there is one movement from O to V as well as one movement from V to O (or into a situational analysis based on O). In both cases the movement, denoted g, from O to V is to construct substructural representations. The movement, g⁻¹, from V to O is an approximate inverse where a compositional closure, of semantic entailment, is to be made. Ideally, the search for closure constructively modifies both substructure representations and category representations.

The movement, g followed by S, will produce a deductive inference in the form of the assignment policy given a new event instance. In this case, the decomposition from O into substructural invariants is one that is specific to the event instance. This decomposition is followed by deductive inference using S. This composition is used in parallel with the interpretant I to produce an assignment of situational meaningfulness during emergence of wholes having parts that are the substructural invariants. The situational interpretation is about the assignment policy for this whole.
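The composition just described can be sketched as a pipeline. Everything below is a stand-in: w is a trivial tokenizer, the lexicon plays the role of the substructure store V, and S scores categories by counting shared invariants; none of this is the actual tri-level implementation.

```python
def w(raw_text):
    """Measurement: a raw event instance -> observable tokens."""
    return raw_text.lower().split()

def g(observables, lexicon):
    """Decomposition O -> V: keep the observables that match known
    substructural invariants (the lexicon stands in for V)."""
    return {tok for tok in observables if tok in lexicon}

def S(invariants, rules):
    """Situational deduction: invariants -> a stratified assignment,
    ordering categories by how many cue invariants they share."""
    scored = {cat: len(invariants & cues) for cat, cues in rules.items()}
    return sorted(scored, key=scored.get, reverse=True)

def Q(raw_text, lexicon, rules):
    """The membership function as a composition: S after g after w."""
    return S(g(w(raw_text), lexicon), rules)

lexicon = {"tariff", "trade", "goal", "match"}
rules = {"econ": {"tariff", "trade"}, "sport": {"goal", "match"}}
print(Q("New trade tariff announced", lexicon, rules))  # ['econ', 'sport']
```

What the toy preserves from the text is the shape of the computation: a decomposition into substructural invariants followed by situational deduction, yielding a stratified assignment policy rather than a single label.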

Generalization of our voting procedure

This section is purely theoretical, containing as it does some speculations about how the voting procedure might be extended.

In our private discussion and in unpublished work, we have found it necessary to develop the use of certain terms, such as "situational" and "emergent" logics, to reflect our need to embody cognitive type processes as computational procedures. This development of new language is necessary because classical terms and paradigms are simply not powerful enough to embody most of the processes thought to be necessary for autonomous text based situational analysis. It is our intention that this new language use should reflect analogy shared between the processes involved in natural intelligence and the required class of computer based algorithms.

The notion of a second order cybernetics is used to indicate the meta-rules that allow one to change the set of atoms, the rules of syntax and the rules of semantics within an axiomatic theory (Pospelov, 1986). Second order cybernetics is developed to convert various inductive transitions into well defined deductive procedures as new information is added. The openness of the overall system is controlled by the rules of this second order system.

The membership function Q is a complex transform that models the "ontological descent" from the unmeasured world of phenomena to a "theory" of knowledge based on memory of past experience. We feel that memory is stored at two levels, one being about the set of emergent wholes, in ecological context, and the other being about the set of substructural atoms. The descent is multi-leveled, because the memory in V is about substructural invariance, whereas the memory in O is about whole objects or whole categories.

The "theory" is an axiomatic theory that is extended to a quasi axiomatic theory by using two sets of internal QAT languages and three sets of external QAT languages. The languages are formal apparatus used to define algorithms as well as to provide required explanatory theory regarding new capabilities related to machine learning, machine understanding and knowledge management.

As noted before, the notation and the relationships between these languages are:

Loi ⊂ Li ⊂ Le ⊂ L′e ⊂ L″e

Our definition of these five languages is different from those described in Finn (1991). However, the spirit is the same. The first internal language, Loi , is a set of basic atomic symbols A, a set of connectives ( ∩ , ∪ , ¬ ), as well as the set of all well formed statements that can be formed with this set of logical atoms and this set of connectives. The second internal language, Li , includes, in addition to the combinatorial span of the first language, the quantifiers ( ∀ , ∃ ), the extension of the arithmetic on A to the algebraic notion of variables, and two specific inference connectives ( ⇒1 , ⇒2 ). These connectives were derived, by Victor Finn, from the logical canons of J. S. Mill, see (Finn, 1991; Chapter 9).

In our logic, the external languages are separated from the internal languages through the notion of emergence.

Li ⊂ Le

This follows the prescription by Finn that the inclusion relation between the second internal language and the first external language, Le, is one in which a "naive semantics" is realized in an evaluation of logical atoms (from Loi), fact-like statements and generic unevaluated hypotheses produced in Li.

The evaluation of fact-like statements is localized to single statements of the form:

p ⇒1 O,

where this is interpreted as "p is an empirical property of object O", and

s ⇒2 O

where this is interpreted as "subobject s is an empirical cause of a property of O".

The evaluation function of the first kind is a local evaluation that is based on known data derived from a training set. As in standard QAT, the first external language makes a multi-valued assignment of degree of truth to these fact-like statements, thus identifying facts and providing an evaluation of any conjectures that have been previously defined in the internal languages. These evaluations of the first kind are dependent on the data structures in O and V, as well as rules of deductive inference that are defined in the second external language.

However, we also have a second type of evaluation function, I, that is made using the procedural computations of Voting Procedures. The evaluation function of the second kind is a global evaluation that distributes evaluations of the first kind according to modifiable production rules. The evaluation is applied to a new event instance.

The evaluations of global hypotheses are conjecture-like statements of the forms:

p ⇒1 O,

where this is interpreted as "p is an inferred property of object O", and

s ⇒2 O

where this is interpreted as "subobject s is an inferred cause of a property of O".

For the purposes of text understanding experiments the local evaluations are facts directly derived from the training set of documents. The global evaluations are facts inferred about the test set of documents.
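The two kinds of evaluation can be caricatured in code. In this sketch, first-kind (local) evaluation reads fact-like pairs directly off a training set, while second-kind (global) evaluation distributes those facts to new event instances through a pluggable rule; the data and the similarity test are invented, and the real production rules are of course far richer.

```python
def local_evaluation(training):
    """First-kind evaluation: fact-like statements 'p is an empirical
    property of O', read directly from the training set."""
    facts = set()
    for obj, props in training.items():
        facts.update((p, obj) for p in props)
    return facts

def global_evaluation(facts, test_objects, similar):
    """Second-kind evaluation: distribute local facts to new event
    instances via a modifiable rule ('similar' is a stand-in)."""
    inferred = set()
    for new_obj, tokens in test_objects.items():
        for p, obj in facts:
            if similar(tokens, obj):
                inferred.add((p, new_obj))
    return inferred

training = {"doc1": {"about-trade"}, "doc2": {"about-sport"}}
test = {"doc3": {"doc1"}}   # toy representation: doc3 resembles doc1
similar = lambda tokens, obj: obj in tokens

facts = local_evaluation(training)
print(global_evaluation(facts, test, similar))
# {('about-trade', 'doc3')}
```

The division of labor matches the text: local evaluations are facts derived from the training set, and global evaluations are facts inferred about the test set by distributing the local ones.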

The second and third external QAT languages are used to describe the procedures that support evaluation functions of the first and second kind (respectively).

The five QAT languages are related by set inclusion, but also by an extension of the syntax of sets to Peircean notions of interpretant and semantics. The second order cybernetics is defined in the second and third external QAT language, and these second order languages are used to change the theory of structural constraints and the semantics as a function of real time information acquisition.

The next chapter will develop the duplicate detection formalism. This formalism is a first step in realizing the tri-level architecture.