
2 Position

Complex systems designed to control real-world events are becoming commonplace, yet no feasible method exists for confirming that such systems behave as desired. These systems typically comprise several hundred thousand lines of code, and occasionally exceed one million lines. We define mission surety as a key property that a complex system must exhibit: the user can be sure that an event has actually occurred as reported, and that the software reacts to that event as expected. Current development methods attempt to demonstrate mission surety through extensive testing against a wide range of simulations. Often these efforts produce surprises (for example, the discovery that the tests are incomplete, so that an incorrect portion of the code is not exposed until late in the development cycle), which in turn lead to higher costs and missed schedules.

There are two principal strategies for achieving mission surety: first, develop (or obtain) reusable modules that can be included in systems, thus simplifying the development process; second, automate as many of the complex development tasks as possible. Numerous alternatives exist for most functions common to large complex systems, such as display managers, network infrastructure subsystems, and database managers; other tasks, however, are not well supported by Commercial Off-The-Shelf (COTS) software or other reusable forms. These tasks include splitting bulk data into pieces, validating the sequence and structure of the data, extracting meaning from the data, detecting runtime errors, and, finally, performing automatic error recovery. Unfortunately, these are complex and error-prone operations that easily compromise mission surety. Automating the development and implementation of these functions is the key to generating more capable systems at lower cost and with greater mission surety.
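
To make the nature of these tasks concrete, the sketch below shows, in C, a hand-built routine that splits a bulk buffer into records, validates each record's structure, and flags runtime errors. The "TYPE:PAYLOAD" record format is a hypothetical assumption for illustration; it is not drawn from the systems discussed here.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical record format, assumed purely for illustration:
       a bulk buffer holds newline-separated "TYPE:PAYLOAD" records. */

    /* Split the bulk data, validate each record's structure, and
       report (rather than silently drop) malformed records. */
    static int process_bulk(char *buf)
    {
        int errors = 0;
        char *line = strtok(buf, "\n");
        while (line != NULL) {
            char *sep = strchr(line, ':');
            if (sep == NULL || sep == line || sep[1] == '\0') {
                fprintf(stderr, "malformed record: %s\n", line);
                errors++;                 /* runtime error detected */
            } else {
                *sep = '\0';              /* extract meaning: type vs. payload */
                printf("type=%s payload=%s\n", line, sep + 1);
            }
            line = strtok(NULL, "\n");
        }
        return errors;
    }

    int main(void)
    {
        char bulk[] = "HDR:start\nDAT:42\nbroken\nEND:stop\n";
        return process_bulk(bulk) > 0 ? 1 : 0;
    }

Even this toy version must interleave control decisions with semantic actions; at realistic scale, that interleaving is precisely what makes such modules error prone.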

The ability to model a large set of system states appears to require a different strategy from that of achieving a high degree of correctness and completeness through component-based development or code reuse. We suggest that many "real-world" applications require the automatic generation of custom components to handle the inherent complexity of the problem.

The position taken here is the result of an investigation into the reasons why certain software components are considered problematical; that is, after numerous development cycles, the software is still perceived as error prone. All of the modules we investigated perform a similar function: transforming data from one structure into another. In each case, the developers attempted to apply various compiler-generation tools to the problem at hand, but failed because the tools were simply overwhelmed. The developers then attempted to hand-build components that are vastly more complicated (in terms of the number of states the system can enter) than modern compilers. Our conclusion is that the software can be improved through factoring it into a control (or state-modeling) module and a semantic-processing module. The former can be generated automatically if the interface between the two components can be specified with a formal language. This, in and of itself, is not a particularly new suggestion: the functions provided by the problematical modules are not that different from the first stages of a modern compiler. No competent computer scientist would attempt to develop a compiler without the tools to automatically generate the lexical (see [1]) and syntactic (see [2]) processing modules.
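
A minimal sketch of this factoring appears below, in C. The control module is a transition table of the kind a generator could emit from a formal specification of the interface; the semantic module is a set of hand-written actions. The three-state protocol, its state and event names, and the actions are assumptions made solely for illustration.

    #include <stdio.h>

    /* Hypothetical three-state protocol, assumed for illustration. */
    enum state { ST_IDLE, ST_ACTIVE, ST_REJECT, ST_COUNT };
    enum event { EV_START, EV_DATA, EV_STOP, EV_COUNT };

    /* Control (state-modeling) module: a transition table of the
       kind that could be generated automatically from a formal
       specification of the interface. */
    static const enum state next_state[ST_COUNT][EV_COUNT] = {
        /* ST_IDLE   */ { ST_ACTIVE, ST_REJECT, ST_REJECT },
        /* ST_ACTIVE */ { ST_REJECT, ST_ACTIVE, ST_IDLE   },
        /* ST_REJECT */ { ST_REJECT, ST_REJECT, ST_REJECT },
    };

    /* Semantic-processing module: hand-written actions, kept
       entirely separate from the state-transition logic. */
    static void on_start(void) { printf("session opened\n"); }
    static void on_data(void)  { printf("datum consumed\n"); }
    static void on_stop(void)  { printf("session closed\n"); }
    static void (*const action[EV_COUNT])(void) = { on_start, on_data, on_stop };

    int main(void)
    {
        enum state s = ST_IDLE;
        enum event script[] = { EV_START, EV_DATA, EV_DATA, EV_STOP };
        for (size_t i = 0; i < sizeof script / sizeof script[0]; i++) {
            enum state t = next_state[s][script[i]];
            if (t == ST_REJECT) {
                fprintf(stderr, "protocol violation\n");
                return 1;
            }
            action[script[i]]();   /* semantic action, decoupled from control */
            s = t;
        }
        return 0;
    }

Under this factoring, only the table needs regeneration when the protocol changes; the hand-written semantic actions, the part most prone to error, stay small and stable.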

Thus, our goal is to raise awareness that there are classes of problems that are not well suited to the formal-language tools available to the software engineering community. The first issue we have observed is that these tools handle small problems well, but not the problems of interest to engineers working on state-of-the-art systems. The second issue we wish to raise is that there are problems too complicated to be approached with a finite-state solution. We will illustrate this by discussing two applications requiring models more complicated than those typically associated with systems-development tools. The first is the context-sensitive grammar, and the second is directed backtracking, sometimes referred to as assumption-based truth maintenance. The former is an important and powerful tool for compactly representing large numbers of system states in situations where the complexity arises from the specific context in which the system is operating; the latter is appropriate when the complexity arises from multiple branches in the solution algorithm.
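
As a foretaste of the first application, the sketch below shows, in C, the simplest kind of context-sensitive constraint: a header field declaring how many payload items follow. Because the count is unbounded in principle, no fixed finite-state model can check it; the validator must carry the count forward as context. The message layout and values are assumptions for illustration only.

    #include <stdio.h>

    /* Hypothetical message layout, assumed for illustration:
       msg[0] declares how many payload items follow. A fixed
       finite-state recognizer cannot check an unbounded count;
       the validator must carry the declared count as context. */
    static int validate(const int *msg, int len)
    {
        if (len < 1)
            return 0;               /* no header at all */
        int declared = msg[0];      /* context established here... */
        return declared == len - 1; /* ...and consulted here */
    }

    int main(void)
    {
        int good[] = { 3, 10, 20, 30 };  /* declares 3, carries 3 */
        int bad[]  = { 3, 10, 20 };      /* declares 3, carries 2 */
        printf("good: %s\n", validate(good, 4) ? "accept" : "reject");
        printf("bad:  %s\n", validate(bad, 3)  ? "accept" : "reject");
        return 0;
    }

Real message formats nest such counts within one another, which is where the context-sensitive machinery discussed in the next section becomes necessary.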



 

Stephen J. Bespalko and Alexander Sindt
Sept. 2, 1997