
 

 

5/14/2004 8:11 AM

 

Future Research Note:

Why is this bead important to understanding Orbs →

 

 

 

Stratification Theory and the Anticipatory Web Technology

 

 

 

 

Discussion about the nature of mathematics and logic

Main argument for stratification theory

Anticipatory Web Technology

Additional discussion on double articulation

Application to learning theory

In Summary

 

 

 

 

 

 

 

 

 


Discussion about the nature of mathematics and logic

 

Genetic algorithms, and related formalisms, assume that natural systems have "fully formalizable" utility functions.  When a genetic algorithm is run on a computer, a specific utility function is defined.  This utility function interacts with individual formal constructions, such as linear chains of symbolic information, to provide a global set of constraints.  Most often these utility functions are simple and not dynamic.  But even if the programmer defines a dynamic utility function, the iterations performed in the interactions are not coupled to real environmental forces.  The computer program runs in a special type of isolation from the real world. 
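
To make this concrete, the sketch below shows the kind of loop being described: a population of bit strings is scored, generation after generation, by a single utility function fixed in advance by the programmer.  The encoding, the particular utility function, and all parameter values are my own illustrative choices rather than anything specified in this note; the point of the sketch is only that the score never couples to real environmental forces.

```python
import random

def utility(chromosome):
    """A fixed, programmer-defined utility function (here, simply the count
    of 1-bits).  Nothing in the loop below couples this score to real
    environmental forces."""
    return sum(chromosome)

def run_ga(pop_size=20, length=16, generations=50, mutation_rate=0.05):
    # Individuals are linear chains of symbolic information (bit strings).
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # The same static utility constrains every individual, every generation.
        survivors = sorted(population, key=utility, reverse=True)[:pop_size // 2]
        # Variation here is mutation only, to keep the sketch minimal.
        population = [[bit ^ (random.random() < mutation_rate)
                       for bit in random.choice(survivors)]
                      for _ in range(pop_size)]
    return max(population, key=utility)

print(run_ga())
```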

 

In evolutionary programming one is primarily interested in modeling a natural system that is subject to evolution.  One can view evolutionary computing, genetic algorithms, and related mathematics as a precise model of the governance of a natural system.  An assumption is made about a class of utility functions, each precise and mathematical in nature.  This assumption very often leads individuals to confuse a mathematical model with the natural system that the mathematics models.  This confusion is not always an issue, since in many cases the natural system can be precisely modeled.  Stratification theory attempts to account for the more general case. 

 

As a side note, stated to avoid a common misunderstanding, something should be said about the assumption that natural systems are all computers.  A weaker form of this assumption is that a natural complex system's description, and/or control, can be regarded as having a computational part and a non-computational part.  However, one has to be very careful in suggesting that some aspects of the natural complex system can be modeled as if fully computational while other aspects cannot be treated in this way.  How is one to know?  Under what criterion does one know when a system's behavior is being modeled correctly?  If a specific system is highly engineered, then this is less of a problem.  But even in this case, the physical engineered system is not a computation. 

 

It is just that in this case the physical system has been constructed to behave in a regular fashion, in a fashion that is precise and predictable.  If there is a difference between the computational model and the physical system, then we will most likely regard the physical system as being broken.  Something else is going on with complex natural systems such as psychological or social systems.  Here we understand that simulations do not precisely model human behavior. 

 

Of course, one can regard a computer simulation as being an approximation, but what goes missing when we do this?  In Chapter 1 of my on-line book, I develop the notion that a specific complex system may have periods of behavioral stability where a simple formalism is sufficient.  However, this same system may approach and pass through states where a great deal of instability exists, or where stability of only specific types exists.  Here the terms "stability" and "instability" are actually framed by the regularity of the structure and function of a system with respect to its environment.  It is this regularity that mathematics and logic attempt to capture completely. 

 

 


 

Main argument for stratification theory

 

In an alternative perspective, stratification theory suggests that there is a common sub-structural stratum to any "kind" of natural phenomenon.  In physics, the great debate is between string theory and quantum theory.  In either case, the physics of reality is the same physics on which the biological processes are realized, including the social processes. 

 

It is reasonable to assume that all human social discourse has a common sub-structural stratum, much like physical chemistry has the common sub-structural stratum of the physical atoms.  The same type of organizational stratification one sees in physics and biology is seen in those processes that bring awareness to individual humans about the world. 

 

The conjectured common substructure is atomic.  By using the term "atom" we mean "not-a-compound", and also that the substructural elements are each members of a natural category.  The category of all helium atoms, for example, does not make any distinction between one helium atom and a second helium atom.  How natural categories come into being is an open problem for natural science.  But clearly the past and the future somehow become entangled in the present.  This is why some quantum physicists talk about an anticipatory universe (Robert Shaw, personal communications 1997–2002). 

 

The practical basis for stratification theory must be empirical.  The natural categories are as they are; one must observe that they exist even if no complete theory exists that fully accounts for each category and how each category functions.  A great flexibility arises because of the special needs that anticipatory systems have to know the world and to act in that world.  A category acquires one or more functions, and a (sub)structural set of causes allows the category to function in different roles in real environments.  Sub-structure fulfills the requirements of a category depending on the environmental availability of atoms. 

 

Machine representations of ontology, such as those expressed in the Web Ontology Language (OWL), have not dealt with either the issue of variability of components fulfilling the same function or the issue of the same set of basic elements fulfilling different functions.  In the Orb notation this is the issue of ambiguation/disambiguation.  The notion advanced by the Semantic Web community, largely centered around Tim Berners-Lee and the W3C standards organization, is not correct, because this community acts as if semantics can always be completely specified prior to a contextualizing situation.  The Anticipatory Web alternative suggests that one can measure the invariance in computer data, encode it into simple structures, and manipulate these structures to present knowledge representation to a human being, or to any other fully anticipatory biology. 
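
As a hypothetical illustration of the ambiguation/disambiguation issue (none of the names or data below come from the Orb work or from OWL), the sketch shows that the same atom can participate in different functions, that different atom sets can fulfill the same function, and that resolution becomes possible only once a contextualizing situation supplies additional atoms.

```python
# Hypothetical illustration only: structure alone underdetermines function.
# The same atom can take part in different functions, and different atom
# sets can fulfill the same function.
function_of = {
    frozenset({"bank", "river"}): "landform",
    frozenset({"bank", "loan"}): "financial institution",
    frozenset({"shore", "river"}): "landform",   # different atoms, same function
}

def interpret(atoms, context_atoms):
    """Disambiguate only when contextual atoms arrive with the situation,
    rather than fixing the meaning in advance."""
    key = frozenset(atoms) | frozenset(context_atoms)
    return function_of.get(key, "ambiguous: context insufficient")

print(interpret({"bank"}, {"river"}))   # -> landform
print(interpret({"bank"}, set()))       # -> ambiguous: context insufficient
```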

 


Anticipatory Web Technology

 

In our proposed new generation of Anticipatory Web technologies, measurement and presentation of structure is done using a tri-level computational architecture.  In the tri-level architecture the "memory" of the invariance of the past is encoded separately into categorical abstractions (cA), called cA atoms.  The human user then uses various utility functions and rule sets as anticipatory constraints to organize a presentation of groupings of cA atoms for additional manipulation.  The architecture is human-centric. 
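
A minimal sketch of this tri-level separation is given below, using my own stand-ins for each level: word-pair co-occurrence counts stand in for the encoded cA atoms, and a frequency threshold plus a focus term stand in for the user-supplied utility functions and rule sets.  The actual encoding and rule formats are not specified in this note.

```python
from collections import Counter
from itertools import combinations

# Level 1: encode invariants of past data as categorical abstraction (cA) atoms.
# Here an "atom" is simply a recurring word pair; this is an illustrative
# stand-in, not the actual Orb encoding.
def encode_ca_atoms(documents):
    atoms = Counter()
    for doc in documents:
        words = sorted(set(doc.lower().split()))
        atoms.update(combinations(words, 2))
    return atoms

# Level 2: the human user supplies utility functions / rule sets as
# anticipatory constraints (here, a minimum-frequency rule and a focus term).
def apply_user_rules(atoms, min_count=2, focus=None):
    return {pair: n for pair, n in atoms.items()
            if n >= min_count and (focus is None or focus in pair)}

# Level 3: present the grouped atoms back to the human for manipulation.
docs = ["the web anticipates the user",
        "the user anticipates the web",
        "semantic web standards"]
print(apply_user_rules(encode_ca_atoms(docs), min_count=2, focus="web"))
```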

 

This architecture can be exceedingly simple to implement on a computer: 

 

1)     because a fractal compression of the cA atoms is available using a new innovation called a key-less hash table (one possible reading of this structure is sketched after this list);

2)     because retrieval from the key-less hash table is faster by two orders of magnitude than retrieval from an indexed relational database;

3)     because the encoded cA atoms have no pre-imposed organization;

4)     because data organizational processes are fast;

5)     because data organization can be defined as differential and formative.
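
The note does not describe how the key-less hash table works, so the following is only one plausible reading, stated as an assumption: if each cA atom is itself a small fixed-width code, the atom's own bit pattern can serve directly as the slot address, so no key is stored alongside the value and no key comparison is performed on retrieval.

```python
# One plausible reading of a "key-less" store (an assumption, not the actual
# innovation): the cA atom's own fixed-width code is used directly as the
# slot address, so no key is stored and no key comparison happens on lookup.
class KeylessTable:
    def __init__(self, bits=16):
        self.slots = [None] * (1 << bits)   # direct-addressed array
        self.mask = (1 << bits) - 1

    def put(self, atom_code, payload):
        self.slots[atom_code & self.mask] = payload

    def get(self, atom_code):
        return self.slots[atom_code & self.mask]   # single array access

table = KeylessTable()
table.put(0x2A17, {"count": 3})
print(table.get(0x2A17))
```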

 

Atoms need not be merely grouped into bags, or written as linear chains (as done with the mathematical modeling called genetic algorithms).  A general theory of convolutional organization has been defined in the Orb (Ontology referential base) notational paper.

 

As a result of this work, a real-time thematic representation of social discourse (everywhere and in any natural language) is possible.  But to make this possible one needs to separate the measurement of structure, generally in the form of linguistic co-variations, from the interpretation of these structures' meaning. 

 

The memetic expression is a whole phenomenon, culturally shared as a natural category or archetype, defined by usage and context.  The memetic atoms have separate realities from these expressions.  This is what one means by "stratification" of phenomena. 

 

The nature of the sets of atoms underlying complex expression (such as gene or meme expression) has to be observed and taken as empirical fact.  A theory can be constructed to describe the facts.  But formal mathematical theory cannot be used to adequately derive the empirical facts in cases involving human experience. 

 

The categorical difference between a formal system and a natural system is discussed in my on-line book (Chapter 2).  But this difference between formal systems and natural systems must be discussed in the context of human knowledge experience.  In memetic science, the difference between formal constructions and actual reality is an essential one.  What is missing is the empirical science relating this substructural invariance to the behavior of cognitive function. 

 

It may be that a set of atoms is open to the emergence of novelty within that set.  However, the stability of such a set is a property that can be observed empirically, at least in the case of physical atoms.  The stability of compositional elements in human speech production seems also to be demonstrated by empirical studies, but in this case there does seem to be the possibility that the phonemic set itself can undergo transformations. 

 

The separate reality of atoms from compounds is seen, in my work, much as C. S. Peirce saw the physical atoms as having a reality separate from the chemical compounds.  This is a structure/function issue. { + }  { # }


Additional discussion on double articulation

 

The aggregation of atoms into a compound has a set of causes; some of these are related to the nature of the atoms and some are related to the set of environmental conditions.  This position is widely understood to be instrumental in the emergence of operational compounds whose function is at least as dependent on the environmental conditions as on the compositional elements that get bound together.  I will cite only Gerald Edelman's notion of response degeneracy, because the issue is ubiquitous in the natural sciences.  A longer discussion is warranted.

 

This absence of a true atomic theory is consistent with most genetic algorithm modeling and even artificial neural network modeling.  Genetic algorithms have operations that split the informational string and then recombine the parts into a new informational string.  But this recombination of elements is not fully stratified in the sense that I am defining in my work on anticipatory technology.
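
The split-and-recombine operation referred to here is the standard one-point crossover, sketched below on arbitrary strings.  Note that the operation treats every position of the string uniformly; it carries no distinction between a substructural atom and the compound that the atom helps to form.

```python
import random

def one_point_crossover(parent_a, parent_b):
    """Split two informational strings at the same point and recombine the
    parts.  Every position is treated uniformly; the operation has no notion
    of a stratified atom/compound distinction."""
    cut = random.randrange(1, min(len(parent_a), len(parent_b)))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

print(one_point_crossover("AAAAAAAA", "BBBBBBBB"))
```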

 

The concept of double articulation mentioned in { # } is one way to sharpen what I hope to be a proper reading of C. S. Peirce.  In any case, the problem is in determining function given only information about structure.  The development of the anticipatory technology called Ontology referential bases (Orbs) assumes that humans can determine the semantic function of word co-occurrences in ways that deductive apparatus (for example, Web Ontology Language (OWL) inference mechanisms) sometimes fail at. 

 

The application of Orbs to complex manufacturing processes is suggested in several of my research notes.  In this application domain, the problems related to natural complexity are present in a less difficult form.  The physical system approaches "tipping points" where decisions regarding small changes in forces or environmental elements can greatly vary the production outcomes.  Physical production systems having these decision points include pharmaceutical production, production of genetic materials, silicon production, and nanotechnology production.

 

In mathematical simulations, a formal utility function provides a constraint on function as simulated in the computations.  But these utility functions are missing the measurement and response mechanisms that are present in human cognitive acuity.  The claim is that no mathematical formulation has been developed that can fully account for these measurement and response mechanisms, because the mathematics is not stratified in the way that the natural systems are.  Double articulation is logically inconsistent.  

 

Protein molecule transcription might be an appropriate empirical target to help us see that the mathematics of genetic algorithms is just too simple to be a complete model.  In areas of application such as subject matter indicators (or, as Michael Lissack calls them, catalytic indicators) or complex manufacturing control, these models are only marginally useful. 

 

Individual protein conformational dynamics has not been modeled successfully.  Yes, statistical models exist which provide useful information about protein behavior.  Something similar can be said about educational theory.  Statistical models exist which provide useful information about individual learning behavior.  But our work on anticipatory learning environments develops categorical invariants that are observed in behavior and attempts to produce behavioral atoms that, when aggregated together in holistic compounds, predict learning outcomes.  Again, in reference to the theory of natural categories, a compound is holistic because it is shaped in both function and (sub)structure by the natural categories existing separately as environmental archetype and as sub-structural atoms. 
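
As a hypothetical illustration of behavioral atoms aggregated into compounds (the categories, compounds, and predicted outcomes below are invented for this sketch, not taken from our case studies), a compound is simply a co-occurring set of observed behavioral categories, and prediction is a lookup against compounds that experience has shown to be meaningful.

```python
# Hypothetical illustration only: behavioral categories (atoms), compounds,
# and predicted outcomes are invented for this sketch.
predictive_compounds = {
    frozenset({"abandons problem after first error",
               "asks peer for procedure"}): "at risk: avoids independent practice",
    frozenset({"re-reads problem statement"}): "likely to self-correct",
}

# A "compound" is a co-occurring set of behavioral atoms observed over time.
observed_compound = frozenset({"abandons problem after first error",
                               "asks peer for procedure"})

print(predictive_compounds.get(observed_compound, "no prediction"))
```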

 

In the natural environment, the emergence of an informational element may then be seen to have a similarity to certain formal routes to mathematical chaotic phenomena.  Again, we contrast the mathematics of emergence with the actual phenomenon of emergence, as discussed by I. Prigogine in his book "The End of Certainty" (1997). 

 

In stratification theory, due to the environmental constraints (regularities), the naturally emerging phenomenon is "tamed" from what would otherwise be chaotic in nature, and something "typical of the situation" arises.  The mutual involvement between the natural system and the mechanical aspects of the emergence has been called "mutual induction". 

 

These are very difficult issues related to nature/nurture.  I am suggesting that stratification theory is necessary to account for any of a class of natural phenomena for which there is an issue with respect to the prediction of function given only knowledge of structure.  I follow the work of a school of Russian scholars in a field called quasi-axiomatic theory, with a few modifications that depend on the work of Robert Rosen and my interpretation of Karl Pribram's holonomic theory of brain function.


 

Application to learning theory

 

Why might a person, or a society, not learn something that should be easy to learn?  Why has the inability to learn been hard to model using formal systems?  My research on this issue has been developed empirically using case studies.  I let professional experience with neural network and immune system models influence how I interpret the case studies.  But I see that the mathematical models fall far short of being complete. 

 

The full reality of an individual is rooted within experience that occurs in real time, whereas mathematics is formulated as a set of abstractions whose nature is less than complete within the context of the moment.  In the study of human mental event formation, and the exchange of knowledge between humans, the abstraction of Hilbert mathematics is not sufficient, precisely because the real-time pragmatics involves emergence and novelty. 

 

In our study of human social discourse, it is necessary to engage the human cognitive acuity so that what is missing in the formalism can be supplied by the more complete mechanisms supporting human memory and human anticipation.  The empirical scientist, and educator, must become more engaged in understanding those aspects of learning behaviors that have no complete formal description.

 

Bear with me a moment and allow me to make a point regarding a hypothesis on the origin of cultural differences.  We make the point that a pragmatic grounding in an experiential framework will create natural differences in cultural viewpoint.  Computer simulations assume that differences arise out of natural environments to produce the initial states of the informational strings.  That there are differences is only observed.  No theory of how cultural traits come to be is included in the simulations.

 

What can also be observed is that these differences will be protected by an immune-type mechanism.  Nothing about most computer simulations takes into account the observation that a more complex phenomenon is active in determining the acquisition of a trait. 

 

Conceptual (also called memetic) isolation may be "given an origin" other than those resulting from "merely" genetic-type evolution, as modeled by a genetic algorithm.  In the late 1960s, J. J. Gibson studied the development of traits from anticipatory responses involving action-perception cycles.  The school of thought founded by his work is ecological psychology.  A journal in this area is published by Lawrence Erlbaum Associates Inc. 

 

In a recent article in the Journal of Memetics, Derek Gatherer writes "maximization of memetic isolation requires an intuitively unlikely combination of low cultural interaction, high migration and no selection".

 

And this is clearly informative.  So the fundamental issue I raise should not in any way be taken to detract from illustrative work of this type. 

 

One experiential framework for the non-transmission of memetic expression might be what Ben Whorf discussed with regard to the non-translatability of certain concepts from one culture to a different culture, where the absence of a relevant common experience does not allow the translation to have a box to fit into.  Clearly cultural interaction can increase the common experience, but not always.

 

But there must be more to the model than simple genetic algorithms.  I raise this issue in the context of my private research on what I have called learned learning disability in mathematics classes.

 

In this case, an isolated meme has massive common experiences that end up reinforcing the self-image of a non-math learner, and inhibiting what would otherwise be a simple transmission of a trait.  It is not a question of cultural exchanges but of self-image.

 

As an adult learner, the student cannot experience the meme related to arithmetic operations involved in adding fractions.  This is because there is an active inhibition of the meme’s transmission caused by the self-image of the learner. 

 

Something more is needed than simple genetic algorithms to model this phenomenon, and in fact most actual phenomena related to the development of memetic expression. 

 

 


 

In Summary

 

Stratification theory defines a "level of organization" to be those things that can interact with each other within some single set of conservation laws.

 

The theory suggests that any "level of organization" has a sub-structure reality and a set of ecological affordances.  The level is then in fact a type of persistent entanglement between two other levels of organization that are kept separated from each other by the middle level of organization. 

 

Living systems and social systems become rooted in some set of pragmatics that can resist the exchange of memetic expressions, in spite of massive cultural exchanges.  The mechanisms involve the emergence of function through the aggregation of substructural elements (called atoms).  Roger Penrose's work ("Shadows of the Mind", 1996) can be read to suggest that there are aspects of this emergence of function that are not reducible to formalism consistent with Hilbert-type mathematics.

 

However, a non-Hilbert-type formalism, such as quasi-axiomatic theory, can go further in modeling the emergence-of-function phenomenon.  But perhaps more important, stratification theory suggests that understanding phenomena of this type requires a human-centric use of formalism: one that assumes value from cognitive priming guided by the formalism but also assumes that the complete understanding of the specific phenomenon lies outside any formalism. 

 

It is this suggestion, regarding the limitations of classical formalism, which I have used to differentiate my concept of an Anticipatory Web of information from Tim Berners-Lee's notion of a Semantic Web of information. 

 

One gets the notion of anticipatory systems acting within an organizationally stratified physical world.  From this notion one is able to take into account the pragmatic axis, where individuality and uniqueness exist as causal forces in predicting outcomes. 

 

Dr. Paul Prueitt

Director, BCNGroup

Virginia

May 14, 2004