An AS-IS Model and TO-BE Framework
Table of Contents
Section 1: Concept Definition
Joint Intelligence Virtual Architecture
Section 2: An AS-IS Model
Section 3: The Product Line from Acappella Software Inc.
Draft integration architecture
Four aspects
Section 4: A TO-BE Framework
Language development
The analogy to economic theory
The knowledge life cycle
Enumeration of cognitive processes
Section 5: TO-BE Framework Use Cases
The World Wide Web has made an unprecedented amount of data available using the HyperText Markup Language (HTML), but this abundance of data has come with its own set of problems. Specifically, a lack of organizational authority has created a “Wild West” environment in which it is very difficult to find, reuse, and identify authoritative data. A similar problem has existed in Intelink, where most web-based intelligence gathering and analysis has followed virtually the same procedures as those used to develop intelligence in hardcopy form.
Human analysis often follows standard operating procedures (SOPs) that are based on a hardcopy standard. However, hardcopy-based SOPs are not suitable for computer integration of dynamic work products, a limitation that is widely recognized: there is, for example, no automated methodology for adding and subtracting hardcopy. Computer networks provide a greater opportunity. Attempts to organize information using XML have improved the situation and given organizations some experience with the production of digital intelligence work.
Content tagging support in XML and location-dependent knowledge maps allow machine-level integration of authoritative data. Thus we have the possibility of automated integration of intelligence from various sources. However, this possibility confronts us with issues that can only be resolved by proper appeal to the cognitive and knowledge sciences. The promise of web-based knowledge management cannot be fulfilled by computer science alone.
The advent of XML has provided the potential to improve overall intelligence gathering and analysis. Research is still needed to automatically identify and describe a concept schema with authoritative tags at the element level. However, process models can rigorously map XML-delineated knowledge over existing systems, so timely and authoritative products can be dynamically generated based on the needs of a user. One needs only dedicated human resources and proper procedures; we are short on those resources, and in-place procedures seem inadequate. Authoritative tagging is difficult because intelligence products are recontextualized within a massive virtual database. Personalization of this recontextualization is necessary but hard to achieve using the standard Information Technology model.
Our approach is based on a specific innovation that enables individual analysts and information consumers to quickly develop and communicate using machine-readable topic taxonomies. These topic taxonomies standardize the flow of personally vetted information from one analyst to another and from one context to another. The information flow is encoded in XML and can be indexed using concept and themespace representations of policy or domain analysis. However, machine algorithms never subsume the introspection of humans. Thus our innovation is consistent with, and yet more advanced than, current methods.
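To make the idea concrete, here is a minimal sketch of what a machine-readable topic taxonomy might look like when encoded in XML and traversed in code. The element names, attributes, and topics are our own illustrative assumptions, not the actual encoding used by any system discussed in this paper.

```python
# Hypothetical XML encoding of a small topic taxonomy, traversed with the
# Python standard library. Names and structure are illustrative only.
import xml.etree.ElementTree as ET

TAXONOMY = """
<taxonomy domain="port-facilities" analyst="hypothetical">
  <topic id="t1" label="Environmental settings">
    <topic id="t1.1" label="Site topography"/>
    <topic id="t1.2" label="Climate"/>
  </topic>
  <topic id="t2" label="Transportation infrastructure"/>
</taxonomy>
"""

def walk(topic, depth=0):
    # Print the taxonomy tree so another analyst can review its structure.
    print("  " * depth + topic.get("label"))
    for child in topic.findall("topic"):
        walk(child, depth + 1)

root = ET.fromstring(TAXONOMY)
for topic in root.findall("topic"):
    walk(topic)
```

Because the taxonomy is plain XML, it can be routed, indexed, and re-organized by machines while remaining readable to the analyst who vetted it.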
The so-called “Acappella Innovation”, developed for commercial application, has centered on automated interview and report writing using Specialized Software Applications (SSAs). A framework for increasing enterprise productivity using these SSAs and auxiliary resources is proposed here for intelligence deployment. We are forwarding this White Paper as a proposal for a brief study period (3 months).
Joint Intelligence Virtual Architecture (JIVA): The focus of JIVA has been the modernization of intelligence analytical processes and methodologies. JIVA now provides the Defense Intelligence Community's all-source analytical workforce with the cognitive tools, automated capabilities, and training necessary to ensure successful mission accomplishment in an information-rich environment.
JIVA is oriented towards battlefield intelligence. JIVA has focused on the functions associated with the Analysis, Production, and Dissemination phases of the Intelligence Cycle. This includes cognitive analytical methods and procedures as well as administrative processes. JIVA also addresses the technical, automation, and related systems and support required for the processing of information and the dissemination of intelligence.
JIVA provides the battlefield commander dominant battlespace awareness by enhancing real-time situational awareness, information superiority, and an intelligence infrastructure that supports instantaneous intelligence production and dissemination.
The following are JIVA's primary objectives:
The analyst breaks down his job in several ways. After review of information and introspection, the analyst develops a report. He may not think to include all contextual elements. Most analysts shape relevant points on a subject into two or three lines and then add supporting data to complete the thought. They may include four or five relevant points in a single paragraph and then feel they have to add two or three additional paragraphs in order to provide sufficient background information.
Multi-dimensional data representation permits an individual analyst to choose how the data presented to him is displayed. For example, an analyst whose background is SIGINT looks at a map with multiple contact points on it and begins to map out the correlations between the patterns of the contacts and their relationships to each other. The imagery analyst looking at the same map and contact points assumes it is a display of the latest collection and starts identifying images for review based on their relationship to the nearest military facility. The analyst who has been trained in HUMINT capabilities sees the map as a depiction of strategic targets based on the proximity of a target to the nearest large city.
The SIGINT analyst wants to display the number of times a contact has been seen over the last six months, based on periodicity and duration of signal. The IMINT analyst wants to identify changes to the image that have been detected across the last several collections. The HUMINT analyst wants to know how long the contact has been identified as a strategic target. Each of these analysts can display the data in the format best suited to their task, based on their profile data.
Current intelligence systems leverage Knowledge Management concepts. These new commercial technologies have provided the government with a web-based process for intelligence production, and they have been quickly implemented in some production centers. Web-enabled tools provide content tags. In addition, process control over the production of intelligence documents gives managers editing capabilities for modifying content tags. These tags, and composite structures called profiles, are used to push and pull information around internal systems, improving precision and recall.
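For reference, the sketch below computes the two retrieval metrics named above for a hypothetical profile-driven push; the document sets are invented.

```python
# Precision: fraction of pushed documents that were relevant.
# Recall: fraction of relevant documents that were pushed.
def precision_recall(pushed: set, relevant: set) -> tuple:
    hits = pushed & relevant
    precision = len(hits) / len(pushed) if pushed else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 4 of 5 pushed documents were relevant, out of 8 relevant overall.
print(precision_recall({1, 2, 3, 4, 5}, {1, 2, 3, 4, 6, 7, 8, 9}))  # (0.8, 0.5)
```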
The knowledge management process within the intelligence community is relegated to certain key aspects of the intelligence production environment. However, external factors impact the ability of these production centers to provide timely and authoritative data to commanders in the field. There is thus a type of impedance mismatch between information available through computer networks and human source information. The impedance mismatch has been significantly reduced in recent years.
In addition to the analytical process used to produce intelligence information, the new high-value production centers maintain a workflow process to ensure the information being produced has been validated and is the “ground truth” for battle intelligence. This combination of analytical and workflow processes provides part of the foundation on which Knowledge Management must be implemented.
The other part of the foundation involves the representation of human knowledge. Knowledge Maps (K-maps) are representations of concept schemas and their relationships. The K-maps provide an ontology and navigational aids. In some cases, the K-maps may support automated inference, simulation, and what-if scenario construction.
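A minimal sketch of a K-map in this sense appears below, assuming concepts as nodes and named relationships as edges, with a one-hop lookup standing in for the navigational aids. The concepts and relations are invented for illustration.

```python
# A K-map as a labeled graph: concept -> [(relation, concept)].
from collections import defaultdict

class KMap:
    def __init__(self):
        self.edges = defaultdict(list)

    def relate(self, subject, relation, obj):
        self.edges[subject].append((relation, obj))

    def navigate(self, concept):
        # Navigational aid: everything one hop away from a concept.
        return self.edges.get(concept, [])

kmap = KMap()
kmap.relate("airfield", "located-near", "port city")
kmap.relate("airfield", "serviced-by", "rail spur")
print(kmap.navigate("airfield"))
```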
A limiting factor in the development of the intelligence community's knowledge management foundation is our traditional means of reporting and communication. Document creation within the intelligence community has not been designed with digital production in mind. Document production is based on Standard Operating Procedures (SOPs) designed for hardcopy production and distribution, and almost all intelligence production centers have retained those SOPs. Documents are produced and then handed over to web developers, who apply HTML to the document for publishing to secure web pages. This process makes the development of digital reporting and communication cumbersome and indirect.
Concept schemas have been developed from hardcopy documents that were created in a dynamic environment. Digital reporting and communication is then often handled by a specialist who is not as fully involved in the relevant situational analysis as those who produced the primary document resource.
The application of HTML to these documents captures only the format of the document, not its content. XML schemas, by contrast, are supported by K-maps, and concept schemas can be easily transformed into XML. The primary practical limitations of K-maps have been uncertainty of information, shifts in context, and changes in the underlying situation. In most cases, a human analyst must alter interpretations and schema properties in real time to accommodate these practical limitations.
As changes in interpretation and relevance occur, K-maps and XML schemas can be dynamically updated. There are high-quality industry standards to follow. The Object Management Group (OMG) standard for defining object schemas is the Unified Modeling Language (UML), and CASE tools exist that conform to OMG standards. These CASE tools provide a framework with which to design database schemas for Knowledge Maps and then apply those schemas directly to document type descriptions and content descriptions. The exchange format for UML is XML Metadata Interchange (XMI).
Using CASE tools and XML, comprehensive knowledge maps have been directed at improving digital production. These K-maps are used as a basis for normalizing structured content development, and therefore structured content reuse. The methodology requires the representation of knowledge and the reduction of this representation into forms that can be easily manipulated by human analysts and by algorithms.
The K-map provides a framework for simple and complex data enrichment that can include simple correlation models and heuristic or decision-tree logic. The correlational models use Shannon and Bayesian information theory to automatically extract possible thematic structure from linguistic and pattern analysis of text. These correlational models are also used in conjunction with XML tagging and the concept schemas in K-maps.
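As one concrete instance of such a Shannon-style correlation measure, the sketch below scores term pairs by pointwise mutual information over a toy document collection. The corpus, and the choice of this particular measure, are our own illustrative assumptions.

```python
# Pointwise mutual information (PMI) between term pairs, estimated from
# document co-occurrence. Positive scores suggest a thematic link.
import math
from itertools import combinations
from collections import Counter

docs = [
    {"port", "shipping", "cargo"},
    {"port", "cargo", "rail"},
    {"airfield", "rail", "fuel"},
    {"port", "shipping"},
]

n = len(docs)
term_count = Counter(t for d in docs for t in d)
pair_count = Counter(frozenset(p) for d in docs for p in combinations(sorted(d), 2))

def pmi(a, b):
    p_ab = pair_count[frozenset((a, b))] / n
    if p_ab == 0:
        return float("-inf")  # terms never co-occur
    # log P(a,b) / (P(a) P(b)), probabilities estimated by document frequency
    return math.log(p_ab / ((term_count[a] / n) * (term_count[b] / n)))

print(round(pmi("port", "shipping"), 3))  # co-occurring terms score high
print(pmi("port", "fuel"))                # -inf: no observed association
```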
So-called complex technologies are sometimes employed. KnowledgeBots or Artificial Neural Networks work, alone or in conjunction with each other, to surface new relationships within data archives. The KnowledgeBots are an outgrowth of distributed agent research in the areas of information warfare and battlefield simulation. Neural networks provide an evolutionary programming methodology that produces latent semantic indexing and emergent viewpoints.
Intelink search engines utilize heuristic and algorithmic search mechanisms. These mechanisms are seen in the Autonomy, Tacit Knowledge Systems, and Semio portal products, among other knowledge management systems. Within these systems, decision trees are constructed from rule-based models. Most of the tedium of constructing such systems involves the definition of many domain-specific rules (and perhaps procedures for generating rules from those rules). Artificial neural networks appear to accelerate efforts to reproduce human creativity, since the newer systems self-organize to form their own rules about what they experience.
Within the “learned environment of an artificial neural network”, we need only expose a network to a few judiciously chosen data points within any given database to generalize the relationship between all involved parameters and provide it the training it needs to create new object groups. In the midst of such training, connection weights between processing units grow and dissolve to simulate the changes in underlying schema and mechanisms. Thus a network of simulation states can be developed. Specific pathways develop within the network embodying sufficient logical, comparative, and algebraic relationships to accurately generalize what it has experienced. The knowledge management systems thereby reduce the production of models to an iterative application of historical examples to the network.
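The training loop just described can be illustrated at its smallest scale: a single perceptron whose connection weights grow and shrink as historical examples are applied iteratively. The data points, learning rate, and architecture are invented stand-ins for the much larger systems discussed above.

```python
# Minimal sketch: connection weights adjusted over repeated exposure to
# judiciously chosen examples until the unit generalizes them.
examples = [((0.9, 0.1), 1), ((0.8, 0.2), 1), ((0.1, 0.9), 0), ((0.2, 0.7), 0)]
w, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(50):  # iterative application of historical examples
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += rate * err * x1  # weights grow or shrink with each error
        w[1] += rate * err * x2
        bias += rate * err

print(w, bias)  # learned weights now generalize the examples
```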
The knowledge management system has become a knowledge vetting system that selectively focuses the attention of the analyst on various organized views of how the situation has been represented. In short, the design of artificially creative systems allows us to concentrate on the mechanisms of creativity rather than the acquisition of problem-specific knowledge. Of course, any cognitive reduction methodology has limitations. However, the reduction automates certain tasks, perhaps the most important being the time-consuming ordeal of gleaning all of the whys and wherefores within any given problem domain. The analyst can then view these “end-products” in order to accommodate the limitations of a reduction methodology.
Once we develop neural-network-based systems, we attain a purely connectionist paradigm. The paradigm is grounded in connectionist neurobiology and thus has the possibility of increased validity. We attain a degree of originality from these artificial systems using a limited palette of neurobiological analogs such as computational neurons and connection weights. These network-based creativity systems have had varying degrees of success in modeling human inventiveness, and part of the research community is now attempting to identify exactly what elements are needed to bridge the gap to artificial intelligence.
Distributed agents are mobile programs that can be disseminated over a network into data stores to retrieve information requested by a user through a search program. The search program is often itself a mobile agent or intelligent agency (software). K-maps provide input into the knowledge discovery process, as defined by internal models and algorithms, and attempt to translate information into a variety of contexts. The result is an enriched data set with tagged associations to structures in the K-map.
It is important to note that in the context of JIVA, an artifact of knowledge discovery is a natural process that will in fact parse data into information. The process is tailored to a specific usage that removes irrelevant data from the user's attention. In this way a K-map provides the analyst a tailored view of the data and reduces the time needed to access and assimilate information. This implies that the map must reflect the consumer's desired context by capturing the analyst's needs and requirements, which cannot be done with algorithms alone. The K-maps will facilitate access to the distributed knowledge bases that span the JIVA Enterprise to provide fast query responses. However, personal vetting of information processes is required to correctly recontextualize information from one interpretation to another.
Knowledge discovery is the process by which new relationships can be identified. As web-based Knowledge Management systems are implemented, the availability of and access to timely and authoritative data is magnified exponentially. Techniques such as Adaptive Probabilistic Concept Modeling algorithms are used to analyze, sort, and cross-reference unstructured data. Once compiled, this data is linked automatically via XML tags to provide additional resources for further reference. Summarized documents and related articles are automatically returned from a query according to the relevance of their XML tags and hypertext links. At this point the query results are unvetted. Data visualization technologies present multiple views of disparate data sources, including e-mail, databases, spreadsheets, presentations, archives, etc. Visualization of the concept representation from knowledge management systems is then utilized across the enterprise to provide a more concise picture of the available resources and information.
Inherent content management is the process by which content tagging is applied automatically, through the use of algorithms, to analyze, sort, and cross-reference disparate types of data in their native formats. Content management includes the automatic re-distribution of data categories based on a pre-established set of determinants. Often these determinants include only the number of queries against a particular theme or the number of accumulated documents that belong to sub-categories. Correlation and categorization lead to routing products that serve in the background. These products, with their XML tags and storage locations, are updated, hyperlinked, and presented to the user.
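A sketch of that determinant logic, assuming invented thresholds and theme statistics: a theme is promoted to its own routed category once query counts or accumulated documents cross a pre-established line.

```python
# Pre-established determinants for automatic re-categorization.
# The threshold values and themes are assumptions for illustration.
QUERY_THRESHOLD, DOC_THRESHOLD = 50, 200

def should_promote(stats: dict) -> bool:
    return (stats["queries"] >= QUERY_THRESHOLD
            or stats["documents"] >= DOC_THRESHOLD)

themes = {
    "coastal-radar": {"queries": 72, "documents": 40},
    "fuel-logistics": {"queries": 12, "documents": 35},
}
promoted = [name for name, s in themes.items() if should_promote(s)]
print(promoted)  # ['coastal-radar']
```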
Some possibly relevant information cannot be algorithmically processed to the degree indicated in the previous paragraph. The majority of portal applications available today (e.g., Verity, Semio, Autonomy, Tacit Knowledge Systems) provide various levels of user- and administrator-defined profiles and capabilities with regard to the automatic tagging, caching, or content management of associated data retrievals. This background process eliminates the need for manual labor in categorization and routing. However, humans must view these results with some skepticism, as sense making can occur falsely when information is merely well presented and yet has somehow lost its validity.
Enterprise-level K-maps require access to external data in order to provide relevant information. This access cannot depend on the structure of the database or on systems-level interfaces. This accessibility issue reinforces the importance of XML and XML-type technologies. There should be no requirement to invest in the transfer, or transposition, of data from these external disparate databases into the enterprise repository. One solution is to continue to process and store data in special meta-object repositories. Such facilities are located within the JIVA architecture with pointers and paths to databases.
Acappella Software has developed a new language for describing shared knowledge events. The language is reflected in a data-object model and in a consulting methodology. The data-object model is implemented within a software suite.
We have sought a philosophy of business enterprise that enhances Enterprise Productivity through knowledge acquisition, use and sharing activities. To advance our goals, we use the minimal complexity required by our interpretation of the knowledge science. We developed minimally complicated software in order to reduce the Information Technology burden on humans and financial resources. This technology integrates well with push-pull adaptive knowledge management as described in the AS-IS model of the JIVA system.
Our requested study will specify how best to manage a prototype implementation within the JIVA environment. Acappella Software will dedicate time from its office of knowledge sciences in a partnership with
Section 3: The Product Line from Acappella Software Inc.
Acappella Software produces Specialized Software Applications (SSAs). These SSAs organize structured question-asking activities. The activity of question-asking is prompted by a browser-enabled interface. A sample SSA is shown in Figure 1.
Figure 1: SSA Interface
The SSA itself is a simple tree-type data structure. This data structure is a topic taxonomy organized in a specific fashion. The organization and the content are originally created by a consultant/specialist team. These team-constructed SSAs are called Enterprise SSAs and generally have between 2,000 and 3,000 topics. However, the development of small (20-100 topics) situation-specific SSAs is feasible. These are called component SSAs and are used for rapid communication of structured information. A common architecture for both enterprise-level SSAs and component SSAs is recommended as a possible implementation model within JIVA.
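A minimal sketch of the SSA as described, i.e., a simple tree-type data structure whose nodes are topics and may carry questions. The class and field names are illustrative assumptions, not Acappella's actual schema.

```python
# The SSA as a topic tree; leaves and interior nodes may carry questions.
from dataclasses import dataclass, field

@dataclass
class Topic:
    label: str
    questions: list = field(default_factory=list)  # prompts for the respondent
    subtopics: list = field(default_factory=list)

def count_topics(topic: Topic) -> int:
    # Enterprise SSAs run 2,000-3,000 topics; components, 20-100.
    return 1 + sum(count_topics(t) for t in topic.subtopics)

ssa = Topic("Environmental Settings", subtopics=[
    Topic("Site topography", questions=["Describe shoreline access.",
                                        "Note elevation and drainage."]),
    Topic("Climate"),
])
print(count_topics(ssa))  # 3
```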
The development of an Acappella Enterprise Environment proceeds in five steps. The first step develops a multi-user, web-based database, such as the JIVA system, for managing relevant information. The system is modeled to provide an understanding of exactly where the Acappella technology will be most useful.
The second step involves the formalization of specific information vetting processes in line with accepted cultural practices. This is to be accomplished using Unified Modeling Language (UML) type use cases, in which the step-by-step enumeration of information processes is written down. The enumerated steps form a model of cognitive, communicative, and behavioral aspects related to existing cultural practice.
The third step involves the development of a knowledge-use map, indicating where in the enumerated processes one might deploy the SSA formalism to capture the structure of typical information flow. The knowledge-use map is part of a consulting methodology developed by Acappella Software. Once the use map is developed, additional analysis can be made regarding how knowledge is used in the UML-type model.
The fourth step is to develop medium-sized enterprise SSAs. The enterprise SSA is developed using the consulting methodology and results in a model of information flow within part of the organization. The model specifies information paths as well as the details that should be considered in the perception of information through introspection.
The fifth step is to develop a number of component SSAs that encode situationally specific information structure. These component SSAs are to be minimal in size to help with situational relevance, manageability, and training. They initially form prototype communication templates for sub-steps specified in a developed model. These component prototypes are expected to be modified in a structured fashion during use events. Situational modifications will be allowed as though the analyst were merely using natural language in an expressive fashion. However, a community vetting of the modifications will be incorporated as part of the information flow model and the enterprise SSAs. The vetting process will negotiate and validate new community language and new SSA components.
Component and enterprise SSAs are placed into a web warehouse and made available to the user community through a browser. The use of these SSAs produces assessment objects that contain information derived from structured analysis. The acquired assessment information is stored in an object-oriented fashion for future reference and for systemic analysis and trending (data mining).
Once assessment objects are in a data warehouse, various existing concept recognition and trending technologies can be used to produce a model of the evolution of various situations. The technologies that can be used are those that now exist as parts of the Autonomy, Tacit Knowledge Systems, and Semio portal products.
Both SSA and assessment elements from this warehouse are retrieved as required by users. When appropriate, information is acquired from situational analysis and encoded into various SSA structures, either as modifications to the SSA structure or as answers to questions posed by the structure. Information from other parts of the JIVA knowledge management system can be reviewed by human introspection in order to enrich SSA structure and related assessment objects.
The enterprise SSA serves two purposes. First, the SSA reminds users about details of a global UML model for information flow within the enterprise. Second, the SSA provides consistency and uniformity for many intelligence products across a large distributed enterprise. The behavioral consequence of this is that the user will conform naturally to the information flow model, thus providing uniform alignment to policy. Cross system coherence will increase due to increased expectation that work products have specific character. The overall system may express personality by a closer coupling between anticipation and response.
The use of component SSAs breaks the UML-type model into small steps that can be vetted by almost anyone. New staff members can be trained in the SSA system first by using simple components. Later on, the staff will be able to select appropriate components for specific reviews. Those who have become experienced in the uniform review process will develop new components. Styles of SSA use and construction will reinforce a sense of community and allow the development of the personalization and familiarization so essential to a real-time community.
The development of new components by staff allows one domain of expertise to be a prototype for multiple extensions of policy and knowledge into new areas.
Figure 1 indicates how SSA components would be pulled from a warehouse in order to conduct incremental steps, A, B, C, D. These steps start with a Screening Assessment and end up with a Situational Assessment made after the commander makes a go or no-go decision. The steps in between may vary, depending on the circumstances.
Figure 1: Component SSAs are selected by staff to vet a review process
Once enterprise SSAs are in place, one may extend SSA usage as a reporting and communication medium. A critical consideration is addressed by the SSA technology: through the use of UML-type modeling and consulting efforts, one is able to build a component SSA architecture within a web-based distributed system. This system provides a successful methodology within a rapidly growing usage community.
Draft integration architecture: To create enterprise SSAs, a consultant/specialist team uses the Acappella Knowledge Engineer and Business Process Engineer software. The process of developing an enterprise SSA is called the Acappella Consulting Methodology. To date, around a half dozen large SSAs have been produced; Word Weaver™, a professional product for clinical speech and language disorder assessment, is the most publicly accessible. Each enterprise SSA has a development cost of around $250,000.
The SSA organizes a universe of ideas into topics and questions. We acknowledge that the universe of ideas is present in the mind of the specialist. In building the enterprise SSAs, the consultant assists the specialist in representing the details of this universe of ideas in a re-organizable taxonomy. Thus, like natural language, the SSAs will reflect the nature of the communication between humans.
Figure 2: Data flow between component SSAs, enterprise SSAs, and JIVA meta-object stores
A first draft of the data flow model for implementing SSA technology is shown in Figure 2.
A) An analyst, or team, uses SSA resources to develop structured assessments based on question answering within small, specialized components. These assessments are forwarded into an Enterprise SSA and routed using workflow.
B) New component SSAs are created, or modified from prototypes.
C) Component SSAs are attached to the enterprise SSA.
D) A meta-object facility is used to link information content to data elements within the massive virtual JIVA database.
E) SSA assessments are archived for future reference.
As Figure 2 indicates, the relationship between component SSAs and enterprise SSA can be complex and yet shaped to the features of emergent situations. For example, a component SSA may capture part of a larger process of conducting a review of a situation. In the larger SSAs, this part can become a view of a subordinate process. The component SSA can also serve to produce a specific order to questions, or components, that are then viewed within the enterprise SSA by a larger community.
Let's look at the question of topic/question navigation in the SSA interface. In Figure 1, one sees 17 sections listed in the left window. In Figure 3, the 5th of these sections, Environmental Settings, is opened to show 6 topics. The first of these, “Site topography”, is selected. The selection of this topic causes the display of five questions. The questions themselves are attached to the topics by the consultant/specialist and can have any of a large number of formats, including full-text responses.
Figure 3. The SSA Interface with one of the topic/questions selected
Figures 1, 3, and 4 provide some sense of the SSA structure. A presentation of the Acappella software is necessary to obtain a first hand experience with the concept and operation of SSA technology.
Four aspects: One may model the knowledge event as having the following aspects:
1) the creation of the SSA and its resources
2) the use of the SSA by a specialist to conduct an assessment (for example, a screening review)
3) the answering of questions by a respondent
4) the generation of a report
Creation: Regarding the creation of the SSA and auxiliary resources, the consultant/specialist team does this for the enterprise SSAs.
The enterprise SSA organizes thinking about the flow of information in the large enterprise. The details have been worked out in advance, and these details are available as mental reminders. In this sense, the SSA is a workflow or project management product like Microsoft Project. There is, however, a natural and easy just-in-time selection of topics that allows the specialist to “navigate” through the universe of ideas.
Figure 4: Navigation from one topic to another using mouse clicks
Use: The SSA provides an overall structure to the conceptual representation of mental universes, and the navigation process itself causes an additional (and separate) structuring (for example, using the fast keys). The structuring of the navigation through topics is adaptive to how the specialist has navigated the topics up to a certain point. Of course, at any time, the specialist may easily jump anywhere in the list (see again the left window in Figures 1, 3 and 4). The structure of navigational bias can also be imposed using portal technologies and special situational logics, such as are being developed in a number of labs, for inclusion in meta-object facilities.
Answering: Topics are viewed as reminders that information is needed. A respondent supplies this information. The respondent also has choices. A set of questions is related to each topic. In some cases, the consultant/specialist team may have associated only one question; in other cases, the team may have associated a larger number of questions. But questions asked and not answered also provide information.
The doors to certain thematic structure in the SSA are opened or closed depending on what is answered or not answered. Freedom is given for both the respondent and the specialist to instantiate a specific relational organization within the SSA. This relational organization occurs indirectly due to responses captured in the SSA interface.
Report: A report is generated based on the questions answered. The report can be made in a high-quality narrative format or in an XML format. Narrative generation is a somewhat complex process that uses linguistics and the effort of the consulting team. The XML format has tag resources that allow linkage to meta-object facilities. The SSA auxiliary resources used in one assessment (for example, a screening review) can be the same as those used in a separate assessment. The report writer projects a specific semantic interpretation using the SSA, the respondent's data, and the SSA auxiliary resources.
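A hedged sketch of this report step appears below: answered questions are projected into either a narrative rendering or an XML rendering. The templates are invented; the actual report writer also draws on linguistics and the SSA auxiliary resources in ways not modeled here.

```python
# Two projections of the same answered questions: narrative and XML.
answers = {"Site topography": "Gently sloping shoreline with two access roads."}

def narrative(answers: dict) -> str:
    return " ".join(f"Regarding {topic.lower()}: {text}"
                    for topic, text in answers.items())

def as_xml(answers: dict) -> str:
    items = "".join(f'<finding topic="{t}">{a}</finding>'
                    for t, a in answers.items())
    return f"<assessment>{items}</assessment>"

print(narrative(answers))
print(as_xml(answers))  # the XML form carries tags for meta-object linkage
```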
Section 4: A TO-BE Framework
Individual humans, small coherent social units, and business ecosystems are all properly regarded as complex systems embedded in other complex systems. Understanding how events unfold in this complex environment is not easy. Knowing how to implement advanced knowledge technologies can be just as difficult. Process models can be usefully developed to show an as-is condition. Then a to-be condition can be established and from this condition benchmarks can be drawn and outcome metrics created.
The TO-BE framework often ignores certain difficult aspects of the complex environment and attempts to (1) navigate between AS-IS models and the perceived ideal system state, or (2) construct the AS-IS and TO-BE with the anticipation of process engineering and change management bridging the difference.
In this paper, we develop a TO-BE model using use case analysis about an abstraction. The abstraction is really a metaphor that we draw from the three categories of industry described in 20th-century economic theory. What we project into the TO-BE framework is a theory about how the emerging world knowledge economic system will develop. It is left to the reader, and to the requested study, to extrapolate this framework to the JIVA environment.
We fully realize that use case analysis is artificial to the degree that a specific objective reality is not the object of the modeling. As mentioned above, this often is a source of difficult IT implementation issues and lost productivity due to IT failures. However, in our exercise, the metaphor is our guide to the use cases. The use cases, in turn, give detail to this metaphor.
Language development: We need clear language to discuss internal and external information processing by humans, social units, and ecosystems. However, we must first point out that a certain conceptual knot can hold us back. Let's address this conceptual knot, and untie it, by separating the issues related to language.
Language and linguistics are relevant to our work for three reasons. First, the SSA technology is an extension to natural spoken languages. We will speak to how “SSA arithmetic” may become a new form of social communication. A community uses the terms of the language to point out what needs to be done. Our vision of the future has SSA technology used to perform communications between systems composed of many humans and many computer networks.
Second, we need to have a common understanding of what we are attempting to achieve during the early part of the knowledge revolution. TO-BE and AS-IS models and the use case analysis address this issue.
Third, the terminology used in various disciplines is often not adequate for interdisciplinary discussion. Thus we reach into certain schools of science, into economic theory and into business practices to find bridges between these disciplines. This work on interdisciplinary terminology is kept in the background, as there are many difficult challenges that remain not properly addressed.
These issues of language exist in a context. We make a distinction between computer computation, language systems, and human knowledge events. The Acappella Innovation specifically recognizes that the human mind binds together the topics of a SSA. The computer cannot do this for us; our society has simply over-invested in that false promise. The rules of how cognitive binding occurs are not captured in the data structure of the SSA, as this is regarded as counter to the Innovation. The Innovation reflects reality and draws a line demarcating exactly where algorithms can and cannot serve to replace humans. In our vision of the future, the human remains central to all knowledge events.
Language is one way in which knowledge is communicated within social units. Thus the use of SSAs is historically significant. The SSAs are a new type of communication medium. We claim that this medium is far better suited to intelligence gathering than are the current document based SOPs, as modified by meta-object tagging. This claim will have to be proven, or disproven.
The analogy to economic theory: We observe that the natural world is organized in such a way as to reveal nested event types. The nesting quality is a key feature of event paths within the business ecosystem. Traditional economic theory postulates that there are primary, secondary, and tertiary business events. Observations about the organization of industries seem to support the usefulness of this conjecture.
A corresponding view of knowledge event types is differentiated in a model having dependency inclusion and scale. Tertiary events depend on secondary events, and secondary events depend on primary events. Tertiary events are emergent phenomena that arise from (i.e., have a dependent inclusion on) secondary events. Secondary events arise from primary events.
Primary knowledge events are those that produce new materials for later use. The creation of a SSA is a primary event. Primary knowledge industries would create original SSAs from the whole cloth of human introspection.
Secondary knowledge events are those that are shaped by a community for delivery of a completed product. The character of the community constrains the acquisition, assessment, and use aspects to reflect the community's interest. The use of a SSA to make an assessment is a secondary event.
Tertiary knowledge events are those that correspond to the services sector of economic activity. Here the event reflects value as a service to other processes within our society. The social system imposes an evaluation of the knowledge event. The refinement of SSAs through outcome metrics is a tertiary event. The distribution of service fees for SSA use is a tertiary event.
The knowledge ecosystem modeled by these three categories of events is not one that exists today. It is a TO-BE framework that may have tenuous status in the present state of knowledge technology. The point is that some might believe that there will never be a knowledge ecosystem of the type modeled in this TO-BE framework.
Others might feel that through an anticipation of this specific organizational structure it might be possible to precipitate the very structure that we now only construct from imagination. There is in fact a strong knowledge science argument that anticipation is a constructive participant in how the social world becomes organized.
One can imagine that knowledge technology companies become aggressively funded by venture capital and the stock market. At first there is chaos in this market as money pours in and as many as one out of every four knowledge technology start-ups experiences phenomenal financial success. There would be some confusion as to what knowledge technology is, but definitions and standards would become informally recognized within a short period of time. Several definitive books would be published. At this time entrepreneurs would gravitate towards creating a “knowledge company”. These individuals would read and find out that there are (say) three types of knowledge industries: primary, secondary, and tertiary. They would be encouraged by peers to select one of these and to focus only on a specialized niche. Finally, market forces and whatever TO-BE framework happened to be accepted by the public would shape the knowledge ecosystem.
The scenario given above suggests how a TO-BE framework can be formative to the development of a specific structure within the world’s business ecosystem. The development of knowledge systems, as part of the world’s economic systems, will have clear implications to how intelligence is gathered and analyzed. Perhaps most importantly, the economic forces will develop the technology used in the intelligence space.
The knowledge life cycle: The knowledge life cycle specifies events within cycles. The Acappella SSA provides for the capture and reuse of a specific data structure.
Figure 5: Three classes of context for knowledge events
This data structure, and related content, is a primary raw material that we believe will be used as a commodity within the secondary knowledge industries.
Knowledge events can be mapped to the individual, social unit or ecosystem (see Figure 5). Individuals play essential roles in all knowledge events. Figure 5 provides a view that focuses on the systems capabilities related to assessment and evaluation. A different view focuses on the industries that might evolve to manage SSA marketing. The critical distinctions are that primary events produce raw materials, secondary events use raw materials for products and the tertiary events are service events.
Figure 6: Individual interaction with an environment
The knowledge event is formally separated into knowledge use and knowledge acquisition (Figure 6). An analysis of use cases shows that the distinction is useful primarily in considering the resources required for, and the consequences of, events. Correspondences between the knowledge ecosystem and the economic system provide a potential to impose an evaluation system on events. Evaluation thus becomes an integral part of how the SSAs are used within a community.
One may characterize the relationship between use and acquisition in a fashion similar to the relationship between rational and irrational numbers. Within each use event there is an acquisition event, and within each acquisition event there is a use event. This is simply the way such events are found to be organized. The enfolding of use within acquisition and acquisition within use reflects the nested nature of complexity in the natural world. On the real number line, the rationals and the irrationals are enfolded in a similar way. This enfolded relationship is critical to tracing use and acquisition through the knowledge cycle.
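The mathematical half of this analogy can be stated precisely: the rationals and the irrationals are each dense in the reals, so each set is enfolded in the other in the following sense.

```latex
% Between any two real numbers there is a member of each set, so neither
% set occupies an interval by itself -- the "enfolding" the analogy uses.
\forall x, y \in \mathbb{R},\ x < y \;\Longrightarrow\;
  \exists\, q \in \mathbb{Q},\ \exists\, r \in \mathbb{R}\setminus\mathbb{Q} :
  \; x < q < y \ \text{ and } \ x < r < y
```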
In addition to the distinction between use and acquisition, we represent any knowledge event as having two independent binding aspects: assessment and evaluation.
We see a reduction in the simplified model in Figure 6 where knowledge acquisition events are directed across the boundary between an individual’s awareness of self and the environment. Knowledge use events are directed from the self into an environment where there are many other individuals. The assessment occurs within the mind of the individual. The evaluation occurs as a function of collective social judgment about the outcome of the use of the SSA.
The assessment and evaluation aspects of knowledge events reveal an essential and observable nature. They both involve a binding phenomenon. The assessment is made by a whole system about something external that has been internalized from knowledge use and knowledge acquisition events. The evaluation is made by a whole system about something internal that has consequences to other things internal. It is thus natural to see assessment as being localized and evaluation as being global. The distinction, like the nesting of use and acquisition, reflects principles from knowledge science and the current theory of complexity.
There are two kinds of binding needs. One is for an assessment and one is for an evaluation.
Figure 7: The endophysics of assessment
Assessment is a creative process, and the creation of knowledge during assessment is a complex phenomenon. The assessment itself is internal to an individual's, or a group's, perception of self and environment. Facts and data are bound together in a fashion that cannot be represented as a set of formal rules.
The Acappella Innovation allows the production and review (evaluation) of knowledge objects created by individuals (or coherent groups) and reused within the larger enterprise. Evaluation is external to the individual. The action or behavior of the individual is reviewed and a systemic value is placed on this action or behavior.
Figure 8: The exophysics of evaluation
Enumeration of cognitive processes: In this subsection, we develop use cases based on:
1) The assumption that there exists a primary knowledge industry that produces “raw materials” for SSA products.
2) The assumption that a supporting methodology has been developed to reflect and automate the natural processes that occur during informed introspection about a situation.
These two assumptions are anticipatory. We expect any future knowledge system to have processes that support these two assumed conditions.
P1: Familiarization. The knowledge worker acquires familiarity with a specific domain.
1.1: Standard library and/or virtual data sources are searched to produce material for reading and synthesis
1.2: Time is given for reflection and introspection
1.3: Discussions are facilitated using meetings, travel, e-conferencing, phone calls.
P2: Indexing. The knowledge worker works to index materials that have been studied.
2.1: Library science methods are consulted to extend and re-contextualize information sources
2.2: Indexing is linked with XML tags to extend and contextualize information that may be associated with SSA resources
2.3: Associative and logical linkages are developed to automate the combination and separation of SSA fragments
P3: Localizing. Small self-contained SSA fragments are constructed into re-organizable units
3.1: The fragments are evaluated for consistency and completeness
3.2: Index information is positioned to support fragment configuration and fragment disassembly
3.3: Narrative scripting is positioned to support narrative reporting
P4: Simulation. The knowledge worker conducts some what-if exercises to determine if points of view have been left out, or redundancy left in.
4.1: SSA fragment arithmetic is used to gather and express viewpoints
4.2: Narrative generation undergoes quality control
4.3: Linkages to external information sources are tested for relevance
P5: Reorganization. The knowledge worker can make fundamental changes in the organization and content of SSA fragments within the domain of interest
5.1: The SSA fragments are regarded as one type of raw resource that will be combined into products by the secondary industry
5.2: The organization of the SSA fragments and other resources are made with the anticipation that someone else will use these resources.
P6: Critical Review. A review is made of the SSA fragments and SSA resources.
6.1: The critical review is oriented towards final release of standardized raw material for SSA and SSA resource production by secondary knowledge industries.
6.2: Critical review analysis is forwarded to industry management for trending and planning.
Section 5: TO-BE Framework Use Cases
We assume the following:
1) That the Acappella Innovation will replace the rule construction methodology followed by the Artificial Intelligence and Data Mining community
2) The primary sector is developed from a Consulting Methodology (the one that Acappella Software will develop in the near future.)
Low-resolution use cases are developed to encode and express our vision of what the secondary knowledge industry sector will come to expect from the primary sector. The types of processed raw materials are:
1) SSA fragments (defined below)
2) SSA auxiliary resources (like narrative script)
3) Indexing (both native indexing within an industry standard and indexing of “external” non-standard data sources)
The SSA fragments are a type of construct that Acappella Software invented. The purpose of the invention is to modularize knowledge constructs with specific attentional focus on some area of expertise. This focus requires study and contemplation and an objective view of the area of expertise.
The fragments are small SSAs, perhaps 10 to 40 topics in each fragment. They are made available based on some evaluation by the primary industry as to the value of the fragment. Similar evaluation occurs in mineral mining and in agricultural products.
The SSA auxiliary resources are built specific to a SSA fragment. These auxiliary resources are used in the Communication Product classes to produce a means to communicate the SSA fragment content. Communication can be one-to-one, with the receiver being a (smart agent) resource of a second fragment (SSA arithmetic).
Communication can be from the fragment into a narrative form. Communication can also be one-to-many and many-to-one, as detailed in our theory of complex data transfer.
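Since the patented SSA arithmetic is not publicly specified, the sketch below is only a speculative illustration of the general idea: two fragments are combined into one re-organizable unit, with shared topics merged and duplicate questions dropped. The merge rule and data layout are our own assumptions.

```python
# Speculative "SSA arithmetic": each fragment maps topic -> questions;
# combining fragments merges topics and deduplicates questions.
def combine(frag_a: dict, frag_b: dict) -> dict:
    merged = {t: list(qs) for t, qs in frag_a.items()}
    for topic, questions in frag_b.items():
        seen = merged.setdefault(topic, [])
        seen.extend(q for q in questions if q not in seen)
    return merged

harbor = {"Site topography": ["Describe shoreline access."]}
rail = {"Site topography": ["Locate rail spurs."],
        "Rolling stock": ["Count locomotives."]}
print(combine(harbor, rail))
```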
Indexing is not considered as a SSA resource since the indexing is for the purpose of high precision / high recall routing and retrieval. It has an “external function”, unlike the auxiliary resource. XML is now considered to be a high value solution to how informational context can be tagged. Thus we expect that XML or an extension of this will be used to index SSA fragments.
Use Cases: The primary knowledge industry sector produces raw materials that are then available for rapid assembly into products by a separate industry. This second industry sector depends on certain resources that are first produced in the primary sector. The nature of the resources can be conjectured, but these conjectures depend on the type of technology and philosophy that comes to exist in the future.
S1: Acquisition of raw materials
S1.1: Negotiation of value
S1.2: Match new raw materials to future product development needs
S1.3: Evaluate for possible just-in-time acquisition
S1.4: Local indexing is made to refine routing and retrieval
S2: Analysis and synthesis
S2.1: SSA fragments exhibit properties when combined using the patented SSA arithmetic
S2.2: Fragments and resources are evaluated in the context of product development and situational (just-in-time) response to client needs
S2.3: The market place for knowledge products communicates to individual companies within the secondary knowledge industry sector. This communication is managed using e-commerce portals.
S2.4: Indexes are used to develop market and production strategies
S3: Validation of communication
S3.1: Communication can be an aggregation of form and content expressed with the help of the theory of complex data transmission
S3.2: Communication involves the interpretation of language, and thus we consider the various interpretations that can be anticipated during the receiving of communication from an SSA fragment. These interpretations can be considered and modifications made to the product to provide a greater range of response.
S3.3: Global indexing is validated through the use of outcome metrics and reinforcement theory.