The Knowledge Sharing Foundation
Balancing the evaluation of DARPA computer science with NSF natural science
Authored by
January 4th, 2003
The AI/connectionist balance is not the only issue.
The connectionist paradigm has two forms of expression. One expression simply ignores work on the correspondence between a model and the features, functions, and structure of the natural world.
But the connectionist paradigm can take a different form of expression. This expression suggests that the mathematics of neural or genetic models is not rich enough to capture the true nature of something like human cognition. The conjectured limitation of Hilbert mathematics is a deep challenge to modern science.
The second class of models comes in many types, including differential equations, finite-state transforms, computer programs, and stochastic processes, as does the first form of connectionist expression. The mathematics serves as a window into the biology, but the more we study the mathematics of theoretical biology, the less biology we are able to see.
This second line of thought requires an open-loop architecture between the formalism and humans in the loop. This architecture accounts for the fundamental difference between logic running on a machine and human cognitive acuity. The OntologyStream technology has exploited this architecture in a way that is easy to understand.
The required human in the loop exists so that corrections can be made when a separation between the model and the natural system is discovered. The sensemaking literatures are relevant here, but the problems with computer science and Hilbert mathematics are not so easily addressed. So we have philosophical issues that need to be addressed by natural science.
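The correction cycle described above, in which a human flags each separation between model output and the natural system, can be sketched minimally as follows. This is an illustrative sketch only; the record structure, labels, and correction callback are hypothetical, not part of the OntologyStream technology.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Observation:
    text: str
    model_label: str   # what the formal model inferred
    human_label: str   # what the human analyst reports

def vet(observations: List[Observation],
        on_separation: Callable[[Observation], None]) -> int:
    """Compare model output against human judgment and route each
    model/natural-system separation to a human for correction."""
    separations = 0
    for obs in observations:
        if obs.model_label != obs.human_label:
            separations += 1
            on_separation(obs)  # the human in the loop corrects here
    return separations

# Hypothetical usage: two records, one of which the model mislabels.
records = [
    Observation("report A", model_label="benign", human_label="benign"),
    Observation("report B", model_label="benign", human_label="spoofed"),
]
flagged = vet(records, on_separation=lambda o: print("correct:", o.text))
```

The point of the sketch is the open loop: the machine's inference is never final, and the count of separations is itself a measurement of how far the model has drifted from the natural system.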
The question that we have raised is about the optimality of the current
evaluation and deployment decisions. Are these decisions really
informed by natural science? Are the
decisions biased towards a strict form of scientific reductionism?
As someone mentioned to me recently:
"I think you are very correct about the 'strong' AI position. It seems to include a *religious* belief about a Theory of Mind -- which even cognitive psychology rejected as impractical in the mid '80s. To quote the title of one article by Jenson, 'You Can't Play Twenty Questions with Nature.' The resolution was that mental models are appropriate to the extent that they are practical, such as in specific models for specific tasks, such as for human factors. In contrast, AI was founded with a teleological imperative to replace humans one day."
There are powerful people in academia and in the funding agencies who claim that "brain = mind". Francis Crick's book "The Astonishing Hypothesis" is one of many examples of this religious type of belief. But beyond this reductionist camp, there is a deep science literature that is not fundamentalist.
The core economic/security issue has to do with the interface between human cognitive processes and their world.
The issue comes around, in an important way, in how I have talked about a differential ontology that moves back and forth between Latent Semantic Indexing (measuring the linguistic variation in a text corpus) and relational models (like taxonomies or databases). The problem is how to put structure on data that is not highly structured, in circumstances where the data may be misrepresented, spoofed, incomplete, and inconsistent, and for which good models do not exist.
One can make the observation that the innovations that are MOST relevant to addressing current intelligence needs are not the ones being evaluated and deployed.
We call for a conference of natural scientists to conduct peer review of which software capabilities are to be tested, and HOW these are to be tested, for deployment.
In order for this review to occur, the government must recognize that
the expertise on this problem is not within the agencies; it is within a
science community that has not traditionally received federal support for
research.
We must step in to demonstrate that a new capability can easily be built, one that provides Human-centric Information Production (HIP) systems with human vetting of the fidelity of this information.
Many say, "Well, this is the system; what can we do about it?" How can something that does not exist, and that has been inhibited for a very long time, come into existence?
From a deep open scientific question regarding the nature of measurement, we may step into an analysis of whether the technology evaluation and procurement process is serving the nation, or serving the private interests of a few large vendor corporations. It certainly appears that there is a problem, and perhaps the appearance is enough to concern policy makers. Opening the evaluation process is the first step in making sure that the appearance of a problem is managed.
How do we raise this issue with industry leaders? Do they feel that the evaluation process makes rational sense? Is there a way to bootstrap business-to-business technologies by first solving the intelligence-vetting needs of the American intelligence community?
We have observed that there is a well-recognized mismatch between the needs of intelligence analysts and the systems that are developed by vendors. The question of business models so dominates this process as to make it highly burdened, as well as not fully informed about the deep open issues.
The Knowledge Sharing Foundation takes a
different approach towards the evaluation and deployment of core innovations.