
 

 

 

On the representation by graphs

(Beginnings of a dialog)

3/29/2004 12:05 PM

 

 

Paul:

 

I've had a chance to look through material on the web about your research.  I must say that I find it very tough going.  My main problem is that you give a syntax, but not a semantics, for your theory.

 

So, e.g., at www.bcngroup.org/area2/KSF/Notation/notation.htm, you use the ordered-triple notation

 

<o(j), r, o(i)>

 

and explain that o(i), o(j) are "objects from a set O = { o(i) | i is over an index set }".

 

But what is "r"?  If it's the relation between o(j) and o(i), then what is the ordered triple?

 

And what are "objects"?

 

Similarly for "classes" and for "class:object" pairs; moreover, must the index sets be the same as for objects?

 

And what are "atomic constructions".  For that matter, what does the notation

 

"A={ a(i) | i = 1 , . . . n }"

 

mean? 

 

Cordially,

BR

 

Reply

 

Wonderful note.

 

You are precisely correct.  There is, by design, no theory of semantics in the notational paper, except as indicated by the ambiguation/disambiguation operators.  In the context of a controlled-vocabulary-based taxonomy, various reconciliation activities are needed to determine context and meaning.  SchemaLogic Inc. has an excellent knowledge-management system for this kind of reconciliation over controlled vocabularies.

 

Even then, the Orb theory of semantics is much weaker than so-called formal semantics and the deductive inferences that are built on top of a theory of formal semantics.

 

The Orbs measure structural invariance and present this invariance in the form of local topological neighborhoods.  Preliminary visualization software is given at:

 

InOrb Technology Tutorials
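As one concrete (and purely illustrative) reading of "local topological neighborhood," an Orb can be pictured as nothing more than a bag of ordered triples <a, r, b> over the atoms, with the neighborhood of an atom being the set of triples that touch it.  The Python sketch below assumes that minimal reading and uses invented data; it is not the InOrb software, and the names build_orb and neighborhood are mine.

    # Minimal sketch: an "Orb" pictured as a bag of ordered triples <a, r, b>,
    # with the local neighborhood of an atom defined as every triple that
    # mentions it.  Illustrative only; not the InOrb implementation.
    from collections import defaultdict

    def build_orb(triples):
        """Index triples by the atoms they touch."""
        index = defaultdict(set)
        for a, r, b in triples:
            index[a].add((a, r, b))
            index[b].add((a, r, b))
        return index

    def neighborhood(orb_index, atom):
        """Local topological neighborhood: all triples containing the atom."""
        return sorted(orb_index.get(atom, set()))

    # Hypothetical data for illustration.
    triples = [
        ("enzyme", "co-occurs", "substrate"),
        ("enzyme", "co-occurs", "reaction"),
        ("reaction", "co-occurs", "rate"),
    ]
    orb = build_orb(triples)
    print(neighborhood(orb, "enzyme"))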

 

The atoms A = { a(i) | i = 1, ..., n } are identified when one knows the structural co-occurrences.

 

An atom is defined, loosely, whenever a word co-varies with another word.  More generally, an atom is anything that re-occurs as part of something else.  The compounds in which a specific atom occurs are unlikely to all be of exactly the same type, and the atom may play various functions even where the compounds are of the same type.  This is the nature of structure/function dependencies, for which quasi-axiomatic theory was developed.
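Taking the loose reading literally, an "atom" can be approximated as any token that re-occurs across the measured text.  The sketch below does only that; the tokenizer, the recurrence threshold, and the sample texts are my assumptions for illustration, not part of the notational paper.

    # Sketch: treat as "atoms" the tokens that re-occur across a corpus.
    # The tokenizer and the recurrence threshold are illustrative choices.
    from collections import Counter
    import re

    def find_atoms(texts, min_count=2):
        counts = Counter()
        for text in texts:
            counts.update(re.findall(r"[a-z]+", text.lower()))
        # A = { a(i) | i = 1, ..., n }: the recurring tokens.
        return {token for token, c in counts.items() if c >= min_count}

    texts = [
        "the enzyme binds the substrate",
        "the substrate enters the reaction",
        "the enzyme speeds the reaction",
    ]
    print(sorted(find_atoms(texts)))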

 

The co-occurrence of words, for example, is then a “non-specific” type of relationship.  The relationship is structural in a specific way, as determined by the instrumentation and measurement of structure.  However, the meaning of the relationship is non-specific.
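One way to picture this non-specificity is to emit a triple <o(j), r, o(i)> whenever two atoms are observed in the same window, with r carrying only the bland label "co-occurs" until some later reconciliation assigns it a meaning.  The window size, the label, and the sample sentence below are illustrative assumptions, not the measurement actually used.

    # Sketch: windowed co-occurrence producing triples <a, r, b> in which
    # the relation r is deliberately non-specific ("co-occurs").
    def co_occurrence_triples(tokens, window=3, r="co-occurs"):
        triples = set()
        for i, a in enumerate(tokens):
            for b in tokens[i + 1 : i + window]:
                if a != b:
                    triples.add((a, r, b))
        return triples

    tokens = "enzyme binds substrate inside the reaction".split()
    for t in sorted(co_occurrence_triples(tokens)):
        print(t)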

 

Of course, a theory of relationship can develop in several ways.  The instrumentation and measurement step can quite easily be replaced by a general framework construction.

 

The notational paper treats objects as compounds of atoms, so one can have more than one level of organization.  These are organizations of structure that are found (measured) to have a structural relationship of some type.  The layers of structural relationships can be addressed formally.  However, the notion of formal semantics here is never as strict as the notion often expressed in the “formal semantics” literature.  I do not review this literature, because, on principle, I judge that meaning cannot be formalized.  This may be a mistake, but I do not see how.
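To make "more than one level of organization" concrete: a compound can be represented as a named grouping whose parts are atoms or other compounds, so the same structural bookkeeping applies at each layer.  The sketch below is only one such reading, and the Unit class and its data are my own illustration.

    # Sketch: compounds as named groupings whose parts may be atoms or
    # other compounds, giving layers of organization.  Illustrative only.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Unit:
        name: str
        parts: tuple = field(default_factory=tuple)  # empty for an atom

        def atoms(self):
            """Flatten the layers down to the atomic level."""
            if not self.parts:
                return {self.name}
            found = set()
            for p in self.parts:
                found |= p.atoms()
            return found

    # Level 0: atoms; level 1: a compound of atoms; level 2: a compound of compounds.
    a, b, c = Unit("a"), Unit("b"), Unit("c")
    ab = Unit("ab", (a, b))
    abc = Unit("abc", (ab, c))
    print(abc.atoms())  # {'a', 'b', 'c'}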

 

The atoms are those measured structural invariances found by instrumentation, such as text parsing or some other mechanical instrumentation like a device recorder.  The compounds are also observational in nature.

 

The problem of predicting "meaning" is similar to the problem of predicting function from structure in speech understanding and in physical chemistry.  It is not only a hard problem; it may not be precisely solvable using only a standard computer.  The core concern is that emergence does not often follow pre-established rules.  Human thought, for example, is a sequence of events that emerge under several sets of constraints, not all of which are well known, and almost none of which lends itself to modeling with deductive logics.  (This is the critical and controversial opinion.)

 

Peirce addressed this, as did the Russian applied semiotics community that I was fortunate to study with in the mid-to-late 1990s.  Stratified information theory comes out of this viewpoint and out of the Russians’ work on extending Peirce’s and Mill’s logic.

 

The Human-centric Information Production (HIP) paradigm suggests that if tight action-perception cycles are designed to measure instrumented structure, then meaning is experienced as humans look at the local "subject matter indicators" (catalytic indexicals).

 

http://www.bcngroup.org/beadgames/InOrb/one.htm

 

Topic maps (not OWL/RDF ontology) are then the optimal means to encode a representation of meaning as additional annotation.
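The role described here can be pictured as simply attaching a human-supplied annotation to a subject after the person has looked at its subject-matter indicators in the action-perception cycle.  The record below is a minimal stand-in for that idea, not the XTM topic-map format; the annotate function and its data are assumptions made for illustration.

    # Sketch: record a human-supplied meaning annotation against a subject,
    # alongside the structural indicators the person was shown.  A minimal
    # stand-in for a topic-map entry, not the XTM format.
    def annotate(subject, indicators, human_annotation, store):
        store.setdefault(subject, []).append({
            "indicators": sorted(indicators),   # what the person looked at
            "annotation": human_annotation,     # meaning, as experienced
        })
        return store

    store = {}
    annotate("enzyme",
             {("enzyme", "co-occurs", "substrate"),
              ("enzyme", "co-occurs", "reaction")},
             "catalytic protein discussed in the kinetics passages",
             store)
    print(store["enzyme"])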

 

HIP maps well to the Tri-level architecture, where a “memory” of past invariances is developed as a set of atoms and a set of organizational patterns is fitted with a Mill’s logic to provide a plausible reasoning aid.
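As one concrete reading of "fitted with a Mill’s logic": Mill’s method of agreement looks across the remembered situations in which some outcome was observed and keeps the atoms common to all of them as plausible, not deductive, candidates for what matters.  The sketch below applies only that one method to invented data; it is not the Tri-level architecture itself.

    # Sketch: Mill's method of agreement over a "memory" of past situations.
    # Each situation is the set of atoms that were co-present; the method
    # keeps the atoms common to every situation where the outcome occurred.
    # A plausible-reasoning aid, not a deduction.  Data invented for illustration.
    def method_of_agreement(memory, outcome):
        relevant = [atoms for atoms, outcomes in memory if outcome in outcomes]
        if not relevant:
            return set()
        common = set(relevant[0])
        for atoms in relevant[1:]:
            common &= set(atoms)
        return common

    memory = [
        ({"enzyme", "substrate", "heat"}, {"reaction"}),
        ({"enzyme", "substrate", "cold"}, {"reaction"}),
        ({"substrate", "heat"}, set()),
    ]
    print(method_of_agreement(memory, "reaction"))  # {'enzyme', 'substrate'}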

 

But the new attempt by the Tri-level architecture to impose a theory of meaning is one that cannot be undertaken at this time.  We first need to develop the Orb technologies so that the measurement of structure is done cleanly.

 

PSP