
 

5/22/2004 9:03 AM

 

On the generality of the particular

 

Key questions on Common Upper Ontology

(new beads are edited for a few days until the grammar is correct)

 

Amnon,

 

I copy part of your note of May 22nd, 2004, here for reference.

 

I am beginning to appreciate that we're on the same page, though there is much for me to examine in what you've spent years studying and building.

 

The RUG (automated rule generation) patent may be aligned with part of your thinking.  However, TAIParse is not an analyzer that generates analyzers.  It's merely a general analyzer.  And most of the time, I neglect the automated rule generation facilities in VisualText in favor of purely manual methods, though it's been tugging at me as I work on a current application involving pre-categorized documents.

 

I see two main approaches to building systems:

 

(a) build a great system manually, generalize it, and incrementally automate the creation of the same system;

(b) start with automated methods, examine where they fall short, augment them, and iterate on that.

 

I suppose I work primarily with approach (a), whereas statistical NLP people use approach (b).  Both seem worthwhile to me, if used in pursuit of a practical system; a sketch of the iterative loop in (b) follows.
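As a sketch of that loop, consider the following, in which every name (induce_rules, evaluate, write_manual_rule) is hypothetical rather than any particular product's API:

def iterate_approach_b(corpus, labels, induce_rules, evaluate,
                       write_manual_rule, max_rounds=10):
    """Approach (b): start from automatically induced rules, then
    repeatedly examine where they fall short and augment them."""
    rules = induce_rules(corpus, labels)        # automated starting point
    for _ in range(max_rounds):
        _score, failures = evaluate(rules, corpus, labels)
        if not failures:                        # nothing left to fix
            break
        # A human examines the worst shortfall and writes a corrective rule.
        rules.append(write_manual_rule(failures[0]))
    return rules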

 

Whether we use RUG, your voting procedure, or other methods seems less important to me than the general notion of bootstrapping, or human-machine mixed initiative.  In this way, a general analyzer such as TAIParse could yet be part of a method that builds domain- and task-specific analyzers.
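To make the notion concrete, here is a minimal sketch of such a mixed-initiative loop; the names (propose_rules, human_accepts) are hypothetical and do not come from TAIParse or VisualText:

def bootstrap_domain_analyzer(general_analyzer, documents,
                              propose_rules, human_accepts):
    """A general analyzer drives a first pass; the machine proposes
    candidate domain rules and a human accepts or rejects each one."""
    domain_rules = []
    for doc in documents:
        analysis = general_analyzer(doc)            # general-purpose pass
        for candidate in propose_rules(analysis):   # machine initiative
            if human_accepts(candidate):            # human initiative
                domain_rules.append(candidate)
    return domain_rules    # accumulates a domain- and task-specific rule set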

 

First, it is very important to note that the type of conversation that you and I are having is rare.  I have discussed the social, political, and economic issues related to the constraints on deep scholarship regarding what would be the most useful text-analyzer technologies and techniques.  The knowledge sharing foundation concept, and my several proposals to DARPA, NIMA, NIST, and NSF, have addressed the need for a less encumbered research, development, and deployment (RD&D) environment in which intellectual contributions are disclosed publicly in exchange for assistance in filing provisional and full patents at low cost to the innovators.  This function is a planned responsibility of the BCNGroup Science Committee.

 

As we talk about how the Prueitt voting procedure can be used in the context of multi-step categorization of text elements (phrases, sentences, passages, paragraphs, sections, etc.), we are aware of the need for a “complete system”, much in the nature of the one I proposed developing under an investment of 532K from In-Q-Tel or another venture capital group.
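The voting procedure itself is not specified in this note; as a placeholder, the following sketch shows only a generic weighted-majority vote across categorizers at different granularities (phrase, sentence, passage), not the actual Prueitt procedure:

from collections import Counter

def vote(element, categorizers, weights=None):
    """Each categorizer (e.g., one per level: phrase, sentence, passage)
    casts a weighted vote; the category gathering the most weight wins."""
    weights = weights or [1.0] * len(categorizers)
    tally = Counter()
    for categorize, weight in zip(categorizers, weights):
        tally[categorize(element)] += weight
    return tally.most_common(1)[0][0]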

 

The issue of deployment cannot be separated from the issue of research.  This is where intelligence gathering and the reductionist paradigm have led to a methodological error in designing and using text analyzers.  The same problem will always exist in any machine-based system for understanding, controlling, or influencing any complex natural system having response degeneracy.  “Response degeneracy” is a term that Gerald Edelman used in “Neural Darwinism” to talk about the (sub)structure/function issue in biological (immunological and neural) systems.  The biology will fulfill a need using whatever specific resources are available in the present moment.  Likewise, the same set of resources may serve entirely different functions, depending not on the specific resources but on the environmental conditions.

 

Edelman, G. M. (1987). Neural Darwinism. New York: Basic Books.

 

So the deployment of a specific tool has to “evolve” into the present moment.  Only in this way can one capture the specific “pragmatic axis” observable in the present moment.  This requirement is intellectually more sophisticated than what most reductionists are capable of tolerating.  But the failure of a reductionist paradigm to capture the nature of response degeneracy is not a failure of the natural world.

 

The natural world is as it is.

 

Victor Finn’s foundational work on Russian quasi-axiomatic theory is referenced:

Finn, Victor (1991). Plausible Inferences and Reliable Reasoning. Journal of Soviet Mathematics, Plenum Publishing Corp., Vol. 56, No. 1, pp. 2201-2248.

Finn, Victor (1995). JSM-reasoning for control in open (+/-) worlds. In J. Albus, A. Meystel, D. Pospelov, and T. Reader (Eds.), Architectures for Semiotic Modeling and Situational Analysis in Large Complex Systems. AdRem, Bala Cynwyd, PA.

Finn, Victor (1996a). Plausible Reasoning of JSM-type for Open Domains. In Proceedings of the Workshop on Control Mechanisms for Complex Systems: Issues of Measurement and Semiotic Analysis, 8-12 December 1996.

Finn, Victor (1996b). Basic Concepts of Quasi-Axiomatic Theory. Presented at the QAT Teleconference, New Mexico State University and the Army Research Office, December 13, 1996.

The Russian cybernetics community was tasked with overcoming the American (reductionist-based) paradigm, which they felt had specific limitations.  For three years I advocated that the US Army develop a special project to study more deeply and extend the Russian work presented at four international conferences hosted by Tom Reader (Army Research Office).  Finn, Pospelov, and I made several proposals, and I went to Moscow in 1997 to make two presentations, one on quasi-axiomatic theory and one on the Prueitt voting procedure.  These proposals went nowhere, for several reasons, while the groups in Russia disintegrated.  Gennady Osipov was one of the few who managed to survive, and in 1998 he wrote me a note revisiting what might be possible in transferring this special knowledge to an American academic community.

 

 

The bead games continue to be developed:

 

www.ontologystream.com

 

 

PSP