Tuesday, August 31, 2004
Manhattan Project to Integrate Human-centric Information Production
A common language underlying Readware and InOrb Technology
I am glad to have your email. We are in agreement with Krieg’s paper on biological computing. Readware is about computing relations between anchors, or terminals, that we call words and concepts. When we get the start-up investment, Tom will help resolve all issues and will be available. Nathan and Anwar will provide excellent coding support.
The point is that relations are plentiful. Focusing on relations between facts yields significant information, while focusing on relations between incidentally encountered items (items independent of facts) can be entirely happenstance. Defining facts in terms of underlying structural primitives changes the nature of the tasks related to knowledge discovery in data, or in free-form text.
I believe this can be proved beyond a shadow of a doubt.
Tom and I have compiled facts about the concepts underlying all languages. We have done an imperfect job, but we have developed a substructural ontology that is very close to what actually exists in the real world. I will explain a bit further.
A concept like "Mother" or "Father" exists in every language, whether written or spoken 3,000 years ago or today. These are concepts that are independent of their representation, yet quite dependent upon their connections to a real, breathing entity. This entity or referent persists throughout time, often despite language or environment.
One of the key qualities of a concept or even a representation of an implied concept, is that it must withstand the test of time-- eons, in fact.
Many modern mothers are single parents and, by definition, also fulfill the social role of a father. Yet we do not call them fathers. This example brings to light the short-term stress created when the words in use become misaligned with their referents, as measured against the long-term, and thus relatively fixed, structural relationships that are captured in the Readware substructural ontology.
Letter semantics led us to see consonants and vowels as signs of connections between things, with those things represented by certain consonants combined in specific ways.
LS96 contains the best explanation and examples we know.
Letter Semantics is the science of how sounds and symbols correspond to real-world ideas. These ideas are quite objective. And they tell a story in their structure.
Their structure is the idea. As you have stated many times, one has to go slow here, since there are two levels. The primary, and least understood, idea is that things do connect substructurally.
Humans are needed to determine how those substructural connections combine into forms of meaning and significance. But algorithms and heuristics help us mere mortals in teasing out the substructural ontology.
Basically we identify and order things. We put them together by asking how they might fit together and how they can be held apart or separated. The Letter Semantic Matrix is really a table of potential patterns. Letter semantics showed us various ways to group words into coherent concepts that were not altogether obvious.
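To give a feel for what "a table of potential patterns" grouping words into candidate concepts might look like, here is a toy sketch. It is not the actual Readware implementation or the real Letter Semantic Matrix: the patterns, the concept labels, and the use of consonant skeletons with subsequence matching are all invented for illustration.

```python
# Toy sketch of a hand-built "table of potential patterns":
# consonant patterns mapped to candidate concept groups.
# All entries here are hypothetical, not Readware's real table.
PATTERN_TABLE = {
    "mtr": "maternal/source",
    "ptr": "paternal/authority",
    "str": "structure/order",
}

def consonant_skeleton(word):
    """Strip vowels, leaving a crude stand-in for a letter pattern."""
    vowels = set("aeiou")
    return "".join(ch for ch in word.lower()
                   if ch.isalpha() and ch not in vowels)

def is_subsequence(pattern, text):
    """True if the letters of `pattern` appear in order within `text`."""
    it = iter(text)
    return all(ch in it for ch in pattern)

def candidate_concepts(words):
    """Group words under every concept whose pattern their skeleton matches."""
    groups = {}
    for w in words:
        skel = consonant_skeleton(w)
        for pattern, concept in PATTERN_TABLE.items():
            if is_subsequence(pattern, skel):
                groups.setdefault(concept, []).append(w)
    return groups

# e.g. "mother" and "maternity" land in the same candidate group,
# while "structure" matches a different pattern.
print(candidate_concepts(["mother", "maternity", "pattern", "structure"]))
```

Note that the table only proposes *potential* groupings; as the next paragraph says, the hard work is deciding which candidates are real, sustainable concepts rather than accidental letter combinations.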
After that, it was all about identifying those concepts that are real, identifiable, and sustainable references, whether literal or figurative. That was the hard work. Without it, Readware would just be a system for locating letter combinations, with no way of determining whether they are possibly meaningful.
A natural inference is possible where the table is a table of potential patterns. In this exact way the Readware letter semantics table is similar to algebraic latent semantic indexing, except that the information in the matrix is hand-coded rather than being subject to the distortions of the numeric model (see Lev Goldfarb’s discussion on this). Karl Pribram quotes Smolensky’s concept of substructural coherence in Pribram’s book, as referenced in my introduction to “Foundations of Knowledge Science”.
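The contrast with latent semantic indexing can be made concrete with a small sketch. This is a minimal illustration under invented data: the term-document counts, terms, and concept labels are all assumptions, not anything from Readware. It shows the essential difference: LSI discovers its concept dimensions numerically from co-occurrence counts via a truncated SVD, so they are artifacts of whatever data happened to be sampled, whereas a hand-coded table fixes the term-to-concept assignment up front.

```python
import numpy as np

# Tiny invented term-document count matrix (rows: terms, cols: documents).
terms = ["mother", "father", "parent", "engine"]
A = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 0.0, 2.0],
    [1.0, 1.0, 1.0],
    [0.0, 3.0, 0.0],
])

# LSI-style: keep k latent dimensions derived from the data by SVD.
# The "concepts" are whatever directions the numbers suggest.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
error = np.linalg.norm(A - A_k)

# Hand-coded alternative: the assignment is authored by a person,
# so it cannot be distorted by sampling noise in A. (Labels invented.)
HAND_CODED = {"mother": "kinship", "father": "kinship",
              "parent": "kinship", "engine": "machinery"}

print("rank-%d reconstruction error: %.3f" % (k, error))
print("hand-coded concept for 'mother':", HAND_CODED["mother"])
```

The trade-off, of course, is the one the post describes: the hand-coded route demands the hard human work of deciding which concepts are real and sustainable, while the numeric route gets cheap coverage at the cost of the distortions Goldfarb discusses.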