
Chapter 2

Is Computation Something New?

Revised last on February 9, 2000


Introduction

Some scholars make a distinction between a formal system and a natural system and hypothesize that no natural system is in fact computational in the formal sense (Rosen, 1995). These scholars suggest that the forces shaping the evolution of a natural system are not computations. A historical perspective reminds us that computers were invented less than 60 years ago. Of course, mathematical formalism and mechanical science have been around longer than this short period, and natural language has a much longer history still. These are all knowledge constructs. The scholarship that distinguishes natural systems from formal systems must make this same distinction between any knowledge construct and a natural system. A knowledge construct exists only as part of a mental event. The construct may be signed by an artifact, used as a natural language token, but it has an unusual status: it is quite different from the chair one sits on. With computers we have done something new. The unusual status of the artifact has been implemented in information systems, and these systems have an existence all their own.

Computation may be a product of the laws of evolution, under which biological systems have shaped themselves to reflect natural order. Biological systems evolved specific "complex" mechanisms, including tools for the measurement of new observables during a process cycle expressed as action followed by perception. They are the components of the neural and immune systems in mammals. These tools are directed at creating, from explicit experience, an implicit representation of the world. Implicit representation is thought to be stored in multiple memory systems (Schacter & Tulving, 1994), and so it might be possible to ground an understanding of computation and cognition in the neuropsychological literature. The externalization of knowledge artifacts into computing machines is an extension of this evolution.

The distinction between formal and natural systems implies that computation is an evolved product of human introspection, and that the biological system has evolved specific capabilities for understanding and interacting with natural order in the world. For example, the concepts of "next to" and "succession" are derived from properties of the world that play a role in survival and propagation.

The hypothesis implies only that computation is a result of specific cognitive processes. Computation reflects a natural order in the world, but it lacks some key features of natural systems related to complexity.

Having suggested the possible origin of computation, it is necessary to make clear what computation is not. Computation is coincident with strong formalization in the Peano axioms. Thus the precision of formal systems has limitations that are reflected in Gödel's theorem and in the dialectic between the constructivist and intuitionist schools of mathematical logic. Limitations on precision are also apparent in the application of natural language to problems of situational control and modeling, and of logic to the modeling of natural systems. One consequence of these limitations on computation is the poor precision-recall graphs that measure current-generation full-text retrieval technologies.

The limitation that we talk about need not be with us for long. While the limitation is exceedingly difficult to understand, the understanding does suggest a way to overcome aspects of it. Again, we feel that the social need for web-based collaborative systems is a driving force that will define and overcome many aspects of the limitation to information technology that is directly inherited from formally closed computational paradigms, such as database design. One needs to introduce the notions of human memory, observation, and anticipation. This introduction is presented in the context of a theory of sign systems, semiotics, in this chapter and the following two chapters. The introduction is then re-presented in terms of the neuropsychology of perception in Chapter 5.

QAT and Plausible Reasoning

A history of formal and semiotic systems is given in D. Pospelov's book, Situational Control: Theory and Practice, published in Russian by Nauka in 1986. We have worked from a translation produced in 1991, and from a number of articles by Pospelov and Victor Finn and their students. Our effort has been to synthesize Russian applied semiotics with Western artificial intelligence. This synthesis addresses the distinction pointed out in the introduction.

The semiotic systems demonstrated to us by the Pospelov-Finn group suggest that machine-readable ontologies can be created using the extensive logical theory and heuristic methods developed by the Russian community. A machine-readable ontology makes a mapping between human concepts and the nodes and links of a graph. Relationships between concepts are identified as linkage between nodes. In very static situations such a representational schema can easily be built to represent the situation's ontic structure. Through this formal capture of structure in the world, it is possible to develop computational models of the situation's evolution in well-understood contexts. However, in non-static situations, or even in static situations in unknown contexts, it is only possible to represent a situation's evolution if a so-called "second order cybernetic" system is available. Second order cybernetics are sets of rule modification transforms relative to a specific class of situations, and relative to a set of consistent axioms with complete deductive span. The deductive span is the first order system.

Let us examine this more closely. Following Pospelov, we use the term "formal system" to refer to a four-term expression:

M = < T, P, A, R>                (1)

where T = set of basic elements, P = syntactic rules, A = set of axioms, and R = semantic rules. This is called a first order cybernetic system. The '< >' is just a shorthand way to indicate that T, P, A, and R are used together as a formal system.
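The four-term expression can be sketched as a simple data structure. The representation below is an illustrative assumption on our part, not Pospelov's machinery: formulas are strings, syntactic rules are functions that build expressions, and the "semantics" is a toy counting rule.

```python
from dataclasses import dataclass

# A minimal sketch of the first order system M = <T, P, A, R>.
# All concrete choices (string formulas, one formation rule,
# a counting "semantics") are illustrative assumptions.

@dataclass
class FormalSystem:
    T: set      # basic elements
    P: list     # syntactic rules: build well-formed expressions
    A: set      # axioms
    R: list     # semantic rules: assign meanings to expressions

M = FormalSystem(
    T={"p", "q"},
    P=[lambda a, b: f"({a} & {b})"],     # one formation rule: conjunction
    A={"p"},
    R=[lambda f: f.count("&") + 1],      # toy semantics: count the conjuncts
)

formula = M.P[0]("p", "q")
print(formula)           # (p & q)
print(M.R[0](formula))   # 2
```

The point of the bundling is only that the four components operate together; no one component is a formal system by itself.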

The rule modification transforms of a second order cybernetic system apply to the set of basic elements, T, to the syntactic rules, P, to the axioms, A, and to the semantic rules, R. This is what Pospelov proposed and what is basic to the methodology of the Russian situational control paradigm. A second order system is open in a way that is defined by what we have called, in Chapter 10, "semiotically open sets", i.e., open via changes made through an interface between the first order system and a human decision maker, or perhaps a complex computing device. The first order system is also open in this same way. The openness is to a reconfiguration or re-nomination of observables through what Howard Pattee (or Peter Kugler) calls the measurement problem. For Pospelov and Finn, the second order system is an imperfect way to attempt to manage transitions in the primary set of observables as well as in rules of inference and set membership.

In Pospelov's notation, a second order cybernetic system has the following form:

C = <M, C(T), C(P), C(A), C(R)>         (2)

where C(·) is an operator capable of introducing change to the associated set. In comparing equation 1 and equation 2, the reader should note that time has been introduced, but the rates of change may or may not be time dependent. There may be sudden changes where the state space itself changes.
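Equation 2 can be caricatured as a set of operators that rewrite the four components of the first order system. The tuple layout and the sample operators below (identity, enrichment by a new observable) are our own illustrative assumptions:

```python
# Sketch of equation 2: C = <M, C(T), C(P), C(A), C(R)>. Each operator C(.)
# may rewrite the corresponding component of the first order system
# M = (T, P, A, R). The sample operators are illustrative assumptions.

def second_order_step(M, c_T, c_P, c_A, c_R):
    """Apply the four change operators to M = (T, P, A, R)."""
    T, P, A, R = M
    return (c_T(T), c_P(P), c_A(A), c_R(R))

identity = lambda x: x
# Re-nomination of observables: a new basic element and a new axiom appear.
extend_T = lambda T: T | {"r"}
extend_A = lambda A: A | {"r"}

M0 = ({"p", "q"}, [], {"p"}, [])
M1 = second_order_step(M0, extend_T, identity, extend_A, identity)
print(M1[0])   # the basic elements now include 'r' (set order may vary)
```

Each application of the operators yields a new first order system; iterating the step is one way to picture the time dependence noted above.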

For the theoretical considerations, we assume that, given a fixed time, the second order cybernetic system will produce a first order cybernetic system. The question is about the degree of formalism that can be observed with this first order system.

From theoretical considerations, as yet without empirical verification, we can make some observations about stability. The second order system must be assumed to be stable over the period under discussion, or else the temporal scale is not properly defined by the first order rates of change. The second order system is not meaningful without the notion of time, and this notion should have specific time-irreversible aspects. These aspects give time a structure. The reader may note that the philosophically appealing notion of structured time is the basis for many aspects of the theory of process compartments and for specific properties of the tri-level architecture. For one thing, the tri-level architecture has three relative levels of organization, the middle one created, at least partially, through interaction with a slower time scale and a faster time scale.

In work in progress we are grounding the discussion to follow in neuropsychological data (Pribram, 1991; Levine & Prueitt, 1989; Levine, Parks & Prueitt, 1993) and systems theory (Kowalski et al., 1989; Prueitt, 1995). The grounding is not merely theoretical, else this treatment would be subject to the criticism commonly applied to formally closed computational systems. A pragmatic grounding is to be made in experiments with the implementation of computational systems as ontology representations derived from the clustering of theme vector representations of written language (Chapter 3).

Experimental grounding would open up the computational knowledge processing paradigm. Establishing the notation, and the theory, for this experimental work is the central contribution of the current work.

Finn's formulation

The author acknowledges assistance in preparing some notes on the notation of Victor Finn, presented below:

Regular Axiomatic Theory (AT) is a system

G = < S, R >                             (3)

where

·              S is a set of axioms

·              R is a set of inference rules.

The inference of a formula φ is a sequence of formulas

φ1, φ2, ... , φn                        (4)

where

1.           φ = φn

2.        φi (i = 1, 2, ..., n) are either axioms, or are obtained from the preceding formulas by applying the rules from the set of inference rules R
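The two conditions above can be checked mechanically. In the sketch below, formulas are strings and a rule is modeled as a function from the already-derived formulas to the formulas it licenses; both modeling choices, and the toy modus ponens rule, are illustrative assumptions rather than Finn's machinery.

```python
# Verify that a sequence of formulas is a valid inference: each member is an
# axiom or follows from earlier members by a rule in R.

def is_inference(sequence, axioms, rules):
    derived = []
    for f in sequence:
        if f in axioms or any(f in rule(derived) for rule in rules):
            derived.append(f)
        else:
            return False
    return True

# Toy rule: from "a" and "a->b" obtain "b" (modus ponens over string tokens).
def modus_ponens(prior):
    out = set()
    for f in prior:
        if "->" in f:
            ante, cons = f.split("->")
            if ante in prior:
                out.add(cons)
    return out

axioms = {"a", "a->b"}
print(is_inference(["a", "a->b", "b"], axioms, [modus_ponens]))  # True
print(is_inference(["b"], axioms, [modus_ponens]))               # False
```

In a regular AT the check is entirely mechanical; the contrast with QAT, below, is precisely that QAT admits material the checker cannot reach from the axioms alone.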

Quasi Axiomatic Theory (QAT) is a system

G = < S, S' , R >       (5)

where

·              S is a set of axioms

·              S' is a set of elementary empirical statements

·              R is the set of inference rules. But now R consists of two parts:

1.        R0 is a set of reliable rules (reliable rules are the regular logical deductive rules, which we apply to truth formulas to obtain a truth formula)

2.        R' is a set of plausible rules

For QAT we have two new notations: S' and R' .

What is the difference between S and S'? First of all, S' is an open set: we can enrich it with new information from a current situation. This openness to new information is the critical differentiator between AT and QAT. New information may alter older information, so that it becomes necessary to have inference based on plausible logics as well as traditional logic.
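The interaction between an open S' and a plausible rule in R' can be sketched as follows. The swan scenario and the rule itself are our illustrative assumptions, standing in for Finn's actual plausible inference rules:

```python
# Sketch of G = <S, S', R0 ∪ R'>: S is fixed, S' is open to enrichment,
# and a plausible (R') conclusion may be withdrawn when S' grows.

S = {"all-swans-are-birds"}          # closed axiom set (fixed)

def plausible_generalize(statements):
    # Hypothetical R' rule: if every observed swan is white,
    # tentatively conclude that all swans are white.
    swans = [s for s in statements if s.startswith("swan-")]
    if swans and all(s.endswith("white") for s in swans):
        return {"all-swans-are-white (plausible)"}
    return set()

S_prime = {"swan-1-is-white", "swan-2-is-white"}   # open empirical statements
print(plausible_generalize(S_prime))   # the tentative conclusion holds

# Enrichment from a current situation: a black swan is observed.
S_prime = S_prime | {"swan-3-is-black"}
print(plausible_generalize(S_prime))   # set(): the conclusion is withdrawn
```

No reliable rule in R0 behaves this way: a classical deduction, once made, is never retracted by new data.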

From some point of view, G is the limit of a sequence of systems G1, G2, ... , Gk, where

 Gj = < S, Sj' , R >  (6)

for (j=1,2,...,k)

and Gj reflects the j-th level (state) of our obtained knowledge of the domain. It is important to note here that the sequences can be ordered temporally because new information is irreversibly added through an interface with human systems. If the second order cybernetic system does not manage this interface, then the inference steps are completely logical and computational in nature. However, the action of the second order system changes the basis for future computation in a way that is "exo-logical", i.e., outside the first order axiomatic system.

S' (in distinction from S) reflects empirical knowledge, whereas S represents the invariant laws of the second order system (if fully known).

As one example, S might play the role of the United States Constitution, whereas S' might play the role of regulations imposed by a government agency. As another example, S might play the role of the laws of Newtonian mechanics, whereas S' might play the role of human communication in a social collective. Again, the second order system appears to fit over a much longer period of observation.

S should have the property that every element in S' is reachable from S through the inference rules. This is not a reductionist claim, but rather a claim that more complex systems are built through the action of universal laws. When the condition does not hold, it is necessary to rebuild the set S using the tri-level architecture developed in Chapter 12 and in the preceding chapter. If the condition again fails to be achieved, as is often the case, then new data mining must be initiated.
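The reachability condition can be tested by computing a forward closure of S under the rules and checking that it covers S'. The numeric rule below is an illustrative assumption; in practice the elements and rules would be formulas and inference rules:

```python
# Check the condition: every element of S' should be derivable from S.

def closure(S, rules, max_rounds=10):
    """Forward closure of S under the rules, to a bounded depth."""
    known = set(S)
    for _ in range(max_rounds):
        new = set()
        for rule in rules:
            new |= rule(known)
        if new <= known:
            break
        known |= new
    return known

def unreachable(S, S_prime, rules):
    """Elements of S' that the inference rules cannot reach from S."""
    return set(S_prime) - closure(S, rules)

# Toy rule: from x derive 2x, bounded to keep the closure finite.
double = lambda known: {x * 2 for x in known if x * 2 <= 16}
print(unreachable({1}, {4, 8}, [double]))   # set(): everything is reachable
print(unreachable({1}, {6}, [double]))      # {6}: S must be rebuilt
```

A non-empty result is the signal, described below, that the current S and rules do not completely model the evidence.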

Cases in which an emergent compartment does not provide a reachable path to one or more elements in S' are suspected of not being completely modeled by the elements of S and the inference rules. This situation is equivalent to a scientific theory with a body of empirical evidence where some of the evidence cannot be accounted for by the theory.

R' is a set of rules for plausible reasoning. Plausible reasoning is a process of constructing a sequence of formulas

φ1, φ2, ... , φn                                            (7)

where

1.        φ = φn and φ is the goal of reasoning

2.        φi (i = 1, 2, ..., n) is either an axiom, or a formula from S', or is obtained from the preceding formulas by applying the rules from R

The difference between our regular notion of inference and the notion of plausible reasoning consists of the following:

·              plausible reasoning is a process

·              S' is an open set, meaning that S' is actually changeable (unlike S, from an operational point of view)

·              plausible reasoning admits non-reliable rules, which is a weakening of classical inference rules

·              plausible reasoning admits meta-logical means, such as a test for consistency, a test for un-derivability, a test for completeness, etc.

More needs to be said, and will be said, on the notions of S' and R'. Briefly, we suggest that plausible reasoning can be transformed into reliable reasoning given a well-behaved set of empirical statements and a method for constructing (compressing) the set S' into a set of axioms S with a specific inferential logic (and perhaps a specific geometry) to reflect the composition of axioms into statements about evidence.

It is also important that logical variables have more than two evaluation symbols, such as "T" and "F". Not only is it important that logical inference be given more degrees of freedom than two values, but also that the second order cybernetic system be able to produce, and distinguish between, multiple formalisms. This is because the goal is to form a classical formal system, as nearly as possible, without discounting the evidence from empirical measurements.

Classical Boolean systems may not have the rich number theoretical qualities necessary to reflect the transformation from plausible reasoning to reliable reasoning. However, when we weaken classical logic only by extending the evaluation set from {T, F} to a linearly ordered set of symbols, then we strengthen the likelihood of successfully making this transformation.
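One standard way to extend the evaluation set to a linearly ordered set of symbols is to take conjunction and disjunction as min and max over the order (a Gödel-style choice). The symbol names below are illustrative assumptions:

```python
# Truth evaluation over a linearly ordered set of symbols rather than {T, F}.
# "U-", "U", "U+" are hypothetical intermediate (uncertain) values.

ORDER = ["F", "U-", "U", "U+", "T"]        # linearly ordered truth symbols
rank = {v: i for i, v in enumerate(ORDER)}

def conj(a, b):
    # conjunction is only as strong as its weakest conjunct
    return min(a, b, key=rank.get)

def disj(a, b):
    return max(a, b, key=rank.get)

print(conj("T", "U"))    # U
print(disj("F", "U+"))   # U+
print(conj("T", "T"))    # T : the classical values behave classically
```

Because the classical values sit at the ends of the order, a system whose evidence eventually forces every statement to "T" or "F" collapses back to Boolean logic, which is the transformation to reliable reasoning suggested above.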

We also need to consider the possibility that a single formal system might adequately reflect some evidence, while not being able to reflect other evidence that remains significant. This may have to do with an unknown geometric or topological feature of S'. Even though unknown, that condition does not allow one single consistent axiomatic system to stand for the empirical data by itself. S' may partition into categories, each capable of producing a separate and quite different formal system

Gi = < Si , Ri >

The application of this methodology to evidence forms a set of separate formal systems, each having different inferences. Each formalism, which we will call a "quasi formal system", may develop the essential properties that a classical logical or algebraic system has, i.e., completeness and consistency. Each system has its own context.

These properties may be approximated, but always at the risk of equating a natural system with a formal system. This is a critical issue that must be understood to judge the Russian work in the light that it was intended. For the Soviet school, the privileged work in situational control was grounded in a pragmatism linking the non-stationarity of the world with observations about how transient systems arise and express coherent behavior.

Why do we need more than one ontology?

Quasi formal systems may, in fact, mirror the stable regime of a naturally occurring process compartment (Prueitt, 1995; Chapter 1). Experimental and theoretical science supports the conjecture that natural processes compartmentalize through the constrained assembly of components. This stratification and compartmentalization is a part of the phenomenon studied by cognitive science. Experimental evidence that the formation of process compartments occurs within a larger system of constraints in the human brain can be found in Pribram (1991) and elsewhere.

A general theory of weakly linked oscillators has been under development by mathematicians and physicists for some years. Systems of these oscillators produce an emergent energy manifold that can transform many (subfeatural) systems into a single coherent system that is phase locked. The model given in the next section is an original use of the general theory of coupled oscillators and is presented in order to illustrate the principles of emergence.

To explore the analogy between natural and formal systems, it might be of value to consider non-stationary relationships between two or more quasi-formal systems, and to consider these relationships as an example of systemic differentiation of logical inference into multiple, but perhaps weakly linked, systems of logic. This is a generalization of both the general theory of weakly linked oscillators and quasi axiomatic theory, and one that the author believes can be properly grounded in the neuropsychology of awareness and cognition, where shifts in context are often necessary to properly account for individual differences that are experimentally observed.

Each of several systems can be referred to as a specific first order cybernetic system S'i, each sharing a common second order cybernetic system, as reflected in the inferential logic for S. Moreover, it is conceivable that each first order system is undergoing transformations C(S'i) of the type that Pospelov anticipated in equation 2. This model, combining the three areas of scientific work described, is a model of how the contents of mental awareness come into being and dissipate.

Knowledge representation systems must describe conditions for meaningful alignment under circumstances where the target and source text, for example, are both "interpreted" by correspondences to the combination of two or more, perhaps distinct, machine ontologies. Machine ontologies are token dictionaries that have rich relational information. The formal union of the two token sets is easy to accomplish, but now we need to recreate the relational algebra that governs expression by the mutual system. If we think about mental events, then we see that the problem of system fusion is related to a three-step process. First is the dissolution of the two systems. This dissolution affects ultrastructure placement (future anticipation) and substructure availability (memory). Ultrastructure placement is seen in an ordering of categories as an assignment policy (see Appendix A).

Substructure availability has to do with the location centric distribution of available resources that are candidates for an aggregation (creation) process to draw from. Second, there is substructural processing of atom states (tokens) according to the laws of the substructure. The ultrastructure is likely external to a process of system fusion, though the ultrastructure, like Platonic Forms, may be changing slowly, or abruptly, due to other factors. Third, there is the mutual expression of the two systems now in a fused form. The two systems may maintain relative autonomy, but now the expression is entangled in an ordered fashion. The ordering should express the intentionality of the two individual systems as mutually formed. A new entity, in the form of a partnership, now exists alongside the two original entities and the other realities of the middle world.

What is required is the integration of two or more compartmentalized sets of rules and objects into a new compartment. This is like the joining of two opposing points of view into one that subsumes both. Various accommodations are required if this subsumption is to be successful in creating a new system where the values (affordances) from the self (system) image of the two subsumed systems are fully taken into account. The subsumption needs both competition and collaboration and a will to accept changes in the rules of expression. Integration is more likely to occur if it takes place within a larger context (the second order system) and the opposing views are entangled in this context; however, integration can play out in a number of ways. For natural systems, tracing the entanglement of two or more ontologies must occur within a specific situation. This entanglement in natural systems can be more fully modeled with the QAT formalism than with the traditional axiomatic systems that are used in the West.

For formal systems, the joining of distinct ontologies will predictably trigger a finite number of paradoxes that must be resolved by known facts about the situation. In some cases the paradoxes are resolved through the formation of competing compartments. This would hold for either logical entailment or dynamic (natural) entailment. The logical entailment is to be handled by AT and QAT systems, whereas the natural entailment can be modeled by stratified dissipative systems and transformations of energy spectrum (see Chapter 1 and Pribram, 1991, Appendix).

A system that resolves these paradoxes will produce informational complementarity, because multiple viewpoints are accommodated, and the emergence of a new system for understanding both ontologies and their natural inter-relationships.

The nature of paradox, complementarity and emergence has physical correlates that are studied at the quantum level by physicists. We can borrow some of the formal tools developed to study elementary particle interaction, and extend quantum mechanical analytical methods to address the hard problems found in computational document understanding (Chapter 3). First, one can borrow the notion that a finite and quantal distribution of possible interpretations is driven by an underlying, and knowable, organization of the world. This organization, in the world, is seen in the development of category theory at three levels: thematic substructure, semantic properties, and context.

The tri-level organization enables the disambiguation of meaning in most cases. In cases where novel emergence must occur in order to find an appropriate container for representation, one can use the notion of entanglement and the formation of new compartments through complementarity and observation.

Dmitri Pospelov was not alone in identifying, in the early 1960s, a flaw in control theory based on closed formal systems. Western architectures for commercial knowledge management and full text retrieval suffer from the same flaw. Formal systems require a closure on what can be produced from rules of inference about signs and relationships between signs. This means that the formal system, no matter how well constructed, will not be able to perfectly model the changes in a non-stationary world. This flaw has been propagated by the mistaken assumption that a computational system is completely similar to a natural system. The tri-level architecture considers this flaw in order to develop the foundation for knowledge management done right.

Cross scale stability

In this section, we consider an illustrative bi-level formalism based on a model of energy distribution in brain regions. It is not tri-level for the reason that the notation for the third level has not been worked out. However, it does demonstrate the phenomenon of emergence in a way that is different from the simulations developed in the artificial neural network community. Specifically, a comparison can be made between the tri-level architecture and the Adaptive Resonance Theory (ART) architecture of Carpenter and Grossberg (1991). The ART architecture is based on the need to separate the memory substructure from the categories of context. Like the tri-level architecture, the substrate and the context are mixed together in a distributed fashion to produce categorization. In ART, as the acronym suggests, the mixing of features and context is done in an elegant fashion. However, our argument continues to be that the process that produces orienting attention in the biological system involves the emergence of compartmentalized energy manifolds (Pribram, 1991). This emergence is not a feature of either the ART simulation or the voting procedure. Neither has the openness to change that has been the central innovation of Russian quasi axiomatic theory.

The mathematics that we now turn to has an explicit phenomenon of emergence, as do physical systems of magnetic oscillators for which the mathematics is an exact model. In the most general case, however, consider a connected surface in a high dimensional Euclidean space and a set of initial conditions for a trajectory. The trajectory will describe the evolution of a full set of observables for the compartment; for example, these might be position, velocity, momentum and mass. But the observables may be something more complex than these classical observables from the field of physics. The observables may be the affordance paths that lead from one basin of attraction (in an ecosystem) to another basin (Scott & Prueitt, in progress). A simple relationship, expressed in equations 1 - 4 below, captures this model.

Mathematical notions as artifacts

An individual dissipative system might have the internal form:

H(x, dx/dt) = 1/2 m (dx/dt)^2 + V(x) + D(dx/dt) + E(x)                           (1)

where the first term on the right hand side is an internal energy function, the second corresponds to architectural constraints, and the last two terms correspond to flows of energy originating from outside the compartment. The compartment is said to be closed when the dissipative, D(dx/dt), and escapement, E(x), terms sum to zero over some time interval. If the reader is not mathematically oriented, it is safe to simply skip this section, as the point being made here is addressed to the mathematics community.

The first term represents the basin of attraction classically associated with a potential whose expression depends on the value of the right hand side as well as kinetics described by V(x). As already noted, an initial condition x0 is required to fully define the compartment. In many cases the trajectory so specified in state space will move into an invariant set. This invariant set can be: (1) a (zero dimensional) point of equilibrium, i.e., a singularity; (2) a closed limit cycle; (3) a spiral into a singularity; (4) a radial motion to a singularity; or (5) a more exotic trajectory.
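Case (3) can be seen numerically. The sketch below integrates a damped oscillator, m x'' = -V'(x) - c x' with V(x) = k x^2 / 2, which is one concrete instance of the energy form in equation 1 with a dissipative term and no escapement; the parameter values and the crude Euler integrator are illustrative choices.

```python
# A dissipative compartment spiraling into a point singularity:
# successive oscillation peaks shrink toward the equilibrium at x = 0.

m, k, c = 1.0, 1.0, 0.3      # mass, stiffness, damping (illustrative)
x, v = 1.0, 0.0              # the initial condition x0 defines the compartment
dt = 0.01

peaks = []
prev_v = v
for _ in range(10000):                    # integrate to t = 100
    a = (-k * x - c * v) / m              # acceleration from V'(x) and damping
    x, v = x + v * dt, v + a * dt         # forward Euler step
    if prev_v > 0 and v <= 0:             # turning point: record the amplitude
        peaks.append(abs(x))
    prev_v = v

print(peaks[:3])
print(all(b < a for a, b in zip(peaks, peaks[1:])))  # True: a spiral inward
```

Setting c = 0 instead would give case (2), a closed limit-cycle-like orbit of constant amplitude (up to integration error).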

Architectural constraints, V(x), reflect the connectivity between regional process centers as in a traditional connectionist layered network and more complex non-layered associative networks (as within a single Grossberg type gated dipole). The term 1/2 m (dx/dt)^2 is, of course, generalized from the kinetic energy term that appears in the standard model of the undamped pendulum.

A very general theory can now be proposed.

In a single compartment, higher order terms, existent at transitions, may be modeled by generalizing equation 1 to:

H(x,dx/dt) = U(dx/dt) + V(x) + p(x,dx/dt)                 (2)

where U is a potential space, V is a kinetic space, and p is a polynomial in x and dx/dt. Here x and dx/dt are coordinates generalized from position and velocity to systemic degrees of freedom and their associated rates of change.

In the early part of the formation phase, two general systems properties seem to operate:

1.        Duplicate invariance is eliminated through phase locking (induction to memory stores).

2.        The relative importance of individual members of a feature set is ordered through competitive/cooperative circuits (formation of category and assignment policies).

These two properties are the foundational properties for machine-level implicit memory and adaptive categorization.
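The two properties above can be caricatured on a multiset of features: duplicates collapse into a single stored invariant (property 1), and the survivors are ranked by a competitive score (property 2). Using frequency as the stand-in salience score is an illustrative assumption:

```python
from collections import Counter

def form_category(features):
    """Collapse duplicate features, then order them competitively."""
    counts = Counter(features)            # duplicates collapse (property 1)
    # competitive ordering: frequency as a stand-in salience (property 2)
    return [f for f, _ in counts.most_common()]

print(form_category(["edge", "edge", "color", "edge", "motion", "color"]))
# ['edge', 'color', 'motion']
```

The output is both a memory induction (each feature appears once) and an assignment policy (the order in which features claim category membership).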

Dissipative coupled harmonic oscillators have a fast and a slow time scale, reflecting the average duration of events. The fast scale is the oscillatory trajectory whose control is modeled by equation 2. A simple coupling relationship between oscillation phases:

dφ/dt = ω + Σ c G(φ)                 (3)

creates the slow time scale.

The components of the vector φ are the phases (angle measure, or circuit state in a finite state machine) of a set of weakly coupled rotators; ω is the intrinsic rotational inertia of the rotators; c is a vector of coupling measures. G is a non-linear periodic vector transformation that depends on the pairwise differences between rotator phases.

The entrainment of phase differences produces a constraining manifold, through the rates of change, having relative minima (basins of attraction). Oscillations in the slow time scale recognize the field dynamics of this manifold.
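Entrainment under equation 3 can be demonstrated with the classical choice of G as the sine of pairwise phase differences (a Kuramoto-type coupling); the oscillator count, coupling strength, and initial phases below are illustrative assumptions.

```python
import math

# dφ/dt = ω + Σ c G(φ) with G = sin of pairwise phase differences.
# Identical intrinsic frequencies plus positive coupling entrain the
# ensemble: the spread of phases shrinks toward a phase-locked state.

n = 5
omega = [1.0] * n                        # intrinsic rotation rates
c = 0.5                                  # coupling strength
phi = [0.0, 1.0, 2.0, 3.0, 4.0]          # initial phases (radians)
dt = 0.01

def spread(ph):
    return max(ph) - min(ph)

s0 = spread(phi)
for _ in range(5000):                    # integrate to t = 50
    dphi = [omega[i] + (c / n) * sum(math.sin(phi[j] - phi[i])
                                     for j in range(n))
            for i in range(n)]
    phi = [phi[i] + dphi[i] * dt for i in range(n)]

print(spread(phi) < s0)   # True: the rotators lock into one compartment
```

The phase-locked ensemble is the "single coherent system" of the slow time scale: viewed from outside, the five rotators now behave as one.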

At the fast time scale we have a large number of subsystems, i = 1, 2, ..., n, each defined as a modification of equation 2 through shunting and additive functions, Si and Ai:

Hi(x, dx/dt) = U(dx/dt) + (Ai(φi)/Vi(x) + Si(φi)) Vi(x) + pi(x, dx/dt)        (4)

Si and Ai depend on the ith component of the vector φ. These are architectural relationships between fast time compartments, as expressed via kinetic energy, or metabolic activity.

Because U(dx/dt) is generally parabolic, each dissipative system has, depending on the shape of the last two terms on the right hand side, multiple invariant sets related again to its basins of attraction. The basins produce the fast time scale oscillations.

Cross scale synchronism, between fast and slow scales, occurs under the condition that an initial condition in one of the embodied basins produces a limit cycle whose frequency evenly divides the frequency of the variable φi. The limit cycle, viewed as a microprocess, is seen as a simple rotator when viewed from the macro observation frame. The other, non-zero-dimensional, invariant sets cause more complex phenomena.

Reasonable assumptions about the functions Ai and Si, i.e., that Ai and Si are finite sums of polynomials, lead to the following conjecture:

Conjecture (The Stability Conjecture): If all subsystems in the form of equation 4 exhibit cross scale synchronism with a system in the form of equation 3, then the combined system is stable; i.e., the number of oscillators and the number of basins of attraction will remain constant. In this case the system in the form of equation 3 is called the "exact system image" of the systems in the form of equation 4.

Of course, "exact system images" are likely to occur only in mathematical models. Natural systems have residue as well as resonance, and residue is an important mode of control for intentional systems.

There is a theory of residue and resonance that is implicit in the categorical relationship to the author's model of stratified dissipative systems. Rather than work out these relationships, we have developed the Mill's logic and the voting procedure. The reason we chose this path is as follows. There is limited support in the universities for the radical interdisciplinary work required in both pure logic and neuropsychology. We felt that commercial needs for knowledge management done right might well develop as the tri-level architecture is implemented as software products. In this way we may be able to fund the necessary academic-type work outside of the academic establishment (see the notion of "use-philosophy" in the Preface).

Compartments arise in the process hierarchy that are themselves dissipative systems constrained by micro/macro influences. The situation can become quite complex. To manage this complexity we use the language of a tree structure with multiple branches. The nodes of these branches are compartments.

One branch of a temporally stratified process hierarchy has m levels of compartment ensembles {Ek}, where m is an integer dependent on time scale and observation. Each compartment within the ensemble Ek has the form of equation (4) but with time variable tk, where tk = ak * t1 and {ak} is a finite, positive, monotone-decreasing sequence with a1 less than 1. Multiple branches can be established below any compartment, thus accounting for embedded compartments.
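The time scales of one branch can be sketched directly from the definition tk = ak * t1. This is an illustrative helper only; the function name and the sample sequence {ak} are assumptions, and the constraints on {ak} (finite, positive, monotone decreasing, a1 < 1) are enforced as stated in the text.

```python
def compartment_time_scales(t1, a):
    """Time variables tk = ak * t1 for one branch of the hierarchy.

    a -- the sequence {ak}: finite, positive, monotone decreasing,
         with its first element less than 1 (per the text).
    """
    assert a and a[0] < 1, "a1 must be less than 1"
    assert all(x > 0 for x in a), "sequence must be positive"
    assert all(a[i] > a[i + 1] for i in range(len(a) - 1)), \
        "sequence must be monotone decreasing"
    return [ak * t1 for ak in a]

# Each ensemble Ek runs on a proportionally faster clock than t1.
print(compartment_time_scales(1.0, [0.5, 0.25, 0.1]))
```

Embedded compartments would correspond to calling the same construction again below any node of the branch, each time with that compartment's own time variable playing the role of t1.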

The resulting computational model relies on the selection of a manifold that is stable only within a finite period; e.g., within the corresponding compartment. Under the stability conjecture, it is assumed that a one-to-one correspondence is established between one basin of attraction in each manifold and the members of a "community" of sub-compartments.
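The one-to-one correspondence assumed under the stability conjecture can be made concrete as a simple pairing. This is a hypothetical sketch, not the author's construction; the function name and the string labels are illustrative assumptions.

```python
def exact_image_correspondence(basins, subcompartments):
    """Pair each basin of attraction in the manifold with one member of
    the "community" of sub-compartments. A one-to-one correspondence
    requires equal counts; otherwise no exact system image exists.
    """
    if len(basins) != len(subcompartments):
        raise ValueError("no one-to-one correspondence possible")
    return dict(zip(basins, subcompartments))

# Two basins paired with a community of two sub-compartments.
print(exact_image_correspondence(["basin_1", "basin_2"],
                                 ["sub_A", "sub_B"]))
```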

The span of a manifold is measured by the continuum of time,

t ∈ [ ta, tb ].

Over the span of stability the manifold is observed to retain a basic shape and configuration of attractor regions. This representation is degenerate, i.e., one-to-many, and yet it is deterministic within the compartment. A notion of degeneracy in this respect is consistent with Edelman's notion of response degeneracy (Edelman, 1987). We accept the philosophical argument that the representation should have non-algorithmic singularities at ta and tb. These singularities enable the illusion of the Minkowski time cone within the compartment's endophysics.

The stratification of processes

Two important aspects of the intuition behind the notion of a process compartment are

1.        that within the life span of the compartment the evolution rules are fixed, and

2.        the period over which the nominated set of rules operate is finite.

These two aspects often appear as a logical paradox.

Fixed evolution rules are reflected in the independence of equation 3 from equation 4. The interaction between the separated (compartmentalized) levels is via an entrainment that is expressed in terms of frequency.

The slow process, once formed from the faster ones, keeps the subfeatures stable via cross-scale synchronization, and thus the slow process indirectly ensures a stable process environment.

Both the micro and the macro process can be represented in a single spectral domain that is organized via energy distributions as reflected in frequency and amplitude. This fact is implicit in the formulation of holonomic theory by Pribram (1971, 1991).

Stability of compartments can be challenged by external perturbation, such as optic flow at the retina. In certain cases, the perturbation will collapse the system image, producing the singularity at tb. In other cases the collapse could come from the depletion of resources in the compartment itself. In both cases, the mechanisms that store memory must take up the phase relationships between the active frequency channels.

The collapse frees up energy and kinematic constraints formerly enslaved by the now collapsed compartment. These are recycled into the environment and form the energy distribution and mass to be incorporated into new compartments. The singularity at ta allows for a dynamic restructuring as a new synergism is formed through cooperative dynamics.

During a short formative period the dissipative system might have a very general internal form, where the right-hand side is a polynomial:

H(x,dx/dt) = q(x,dx/dt),                                 (5)

and the dimension of the observation vector might be considered infinite. At the end of this very short formative period, the dimension of the observation vector becomes finite and is in fact minimized by a correspondence to the relative minima of the emerging ensemble manifold. These representations then partition into potential, kinetic, dissipative, and escapement terms, as in equation 1.

In the visual system, one can conjecture that singularities form due to the normal flow of energy through the lens and retina. These would be implicated in the top-down expectancies that group singularities into recognizable patterns. The same principle can be used to recognize patterns in data flow as part of personal, commercial, national-intelligence, or scientific data mining and warehousing processes.