
 

4/28/2004 5:52 PM

 

Key questions on Common Upper Ontology

 

Knowledge Sharing Foundation and CoreSystem

 

A look at some issues

 

The discussion on upper ontology led to an observation by Dean Allemang regarding Dr. Deborah McGuinness’s presentation at the SDForum event last week (4/20/2004).  We do not have a copy of the presentation; perhaps one can be provided.

 

Meanwhile, I will quote from a page linked from her web site.

 

Inference Web (IW) is a framework for explaining Semantic Web reasoning tasks by storing, exchanging, combining, abstracting, annotating, comparing and rendering proofs and proof fragments provided by reasoners embedded in Semantic Web applications and facilities. IW is expected to be flexible enough to address explanation requirements of a broad audience of Semantic Web users.

Why is Inference Web needed?

If users (humans and agents) are to use and integrate system answers, they must trust them. System transparency supports understanding and trust.

·              Thus, systems should be able to explain their actions, sources, and beliefs.

·              Also, if systems are hybrid, it is useful to work in an integrated yet separable manner.

These are some technical requirements for trusting system answers:

·              Provenance information - explain where source information came from: source name, date and author of last update, author(s) of original information, trustworthiness rating, etc.

·              Reasoning information - explain where derived information came from: the reasoner used, reasoning method, inference rules, assumptions, etc.

·              Explanation generation - provide abbreviated descriptions of the proof - may include reliance on a description of the representation language (e.g., DAML+OIL, OWL, RDF, ...), axioms capturing the semantics, rewriting rules based on axioms, other abstraction techniques, etc.

·              Distributed web-based deployment of proofs - build proofs that are portable, sharable, and combinable and that may be published on multiple clients; the registry is web-available and potentially distributed, ...

·              Proof/explanation presentation - presentation should have manageable (small) portions that are meaningful alone (without the context of an entire proof); users should be supported in asking for explanations and follow-up questions; users should get automatic and customized proof pruning, a web browsing option, multiple formats, customizability, etc.
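
To make the quoted requirements concrete, here is a minimal sketch in Python of the kind of annotated proof fragment they describe.  The class names, fields, and the rendering method are my own illustration, not the actual Inference Web schema; they simply show provenance information, reasoning information, and abbreviated explanation generation living together in one portable record.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Provenance:
    source_name: str        # where the source information came from
    last_update: str        # date of last update
    authors: List[str]      # author(s) of the original information
    trust_rating: float     # trustworthiness rating, e.g. 0.0 to 1.0


@dataclass
class ProofStep:
    conclusion: str                     # the statement this step establishes
    reasoner: str                       # which reasoner produced it
    inference_rule: str                 # rule applied, e.g. "modus ponens"
    assumptions: List[str] = field(default_factory=list)
    antecedents: List["ProofStep"] = field(default_factory=list)
    provenance: Optional[Provenance] = None

    def explain(self, depth: int = 0) -> str:
        # Render an abbreviated, human-readable explanation: each line is a
        # small portion that is meaningful without the entire proof.
        pad = "  " * depth
        lines = [pad + self.conclusion +
                 "  [" + self.inference_rule + " via " + self.reasoner + "]"]
        for step in self.antecedents:
            lines.append(step.explain(depth + 1))
        return "\n".join(lines)


# A two-step proof fragment, portable and combinable with other fragments.
fact = ProofStep("Socrates is a man", "asserted", "direct assertion",
                 provenance=Provenance("example KB", "2004-04-20",
                                       ["original author"], 0.9))
rule = ProofStep("all men are mortal", "asserted", "direct assertion")
derived = ProofStep("Socrates is mortal", "example-reasoner", "modus ponens",
                    antecedents=[fact, rule])
print(derived.explain())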

 

Several things need to be said about this.

 

First, one has to recognize the quality of the work that Dr. McGuinness and her colleagues have done.  There are uses and places for deductive reasoning systems that act on logic atoms defined with RDF.  But these deductive and formal systems do not function in the same way as the human brain.  A formal model of a “belief system” might be developed and implemented as part of a computer program.  But these computer programs do not believe. 

 

Second, the language used at W3C in this context treats things that have not been demonstrated as if they were facts.  For example, it treats humans and (computer-program-based) agents as if the two were categorically the same: both are “users.”  This is a hard AI position, is it not?  Why is it necessary to oversell the capacity actually achieved with RDF and inference logics?  There are simply too many open questions that are treated as if solutions will come if only we work long and hard enough.  Why can’t we use more careful language, so that certain confusions are not reinforced?

 

The alternative is well illustrated by some recent developments related to the XML Binary Characterization Working Group.  In particular, I point to Sandy Klausner’s point-by-point comment on the W3C’s requirements for a binary representation of a pre-parsed stream of Information Items (the XML Infoset).
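
As a purely illustrative aside, the following sketch shows what an exchange of pre-parsed Information Items might look like, in contrast to re-parsing angle-bracket text at every hop.  The token codes and layout are my own invention; they are not CoreTalk’s encoding and not anything the W3C has specified.

import struct

START_ELEMENT, ATTRIBUTE, TEXT, END_ELEMENT = 1, 2, 3, 4

def emit(token: int, payload: str = "") -> bytes:
    # Encode one information item as: 1-byte token, 4-byte length, UTF-8 payload.
    data = payload.encode("utf-8")
    return struct.pack("!BI", token, len(data)) + data

# <greeting lang="en">hello</greeting> as a stream of pre-parsed items:
stream = (emit(START_ELEMENT, "greeting")
          + emit(ATTRIBUTE, "lang=en")
          + emit(TEXT, "hello")
          + emit(END_ELEMENT))

# A receiver walks the stream without ever running an XML parser.
offset = 0
while offset < len(stream):
    token, length = struct.unpack_from("!BI", stream, offset)
    offset += 5
    payload = stream[offset:offset + length].decode("utf-8")
    offset += length
    print(token, repr(payload))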

 

I know Sandy’s work better than the work of the XML Binary Characterization Working Group.  But my sense is that the CoreTalk solution is closer to an optimal way of organizing binary exchanges between computers.  It is my opinion that the confusion of hard AI beliefs is too strongly reflected in the W3C work.  The exaggerations are easy to make and are reinforced by funding and public attention.  The alternative is more complex, and its natural science requires a different educational background than most acquire in computer science graduate schools.

 

In my opinion, the CoreTalk solution does not have this confusion, and because it does not, it is far more consistent with our notion of knowledge science.

 

I ask for some comment on the CoreTalk solution.  It is my opinion that the CoreTalk solution is neutral with respect to the AI position, whereas I am not sure that the W3C XML Infoset will be neutral with respect to the types of binary exchanges that OntologyStream envisions as part of the Anticipatory Web.