Careful use of terms is important since terms are the building blocks of conceptual descriptions. Often the same term is used with many subtle variations in meaning, and different people define the term differently. Capturing these variations is the purpose of a descriptive glossary, which provides one-stop shopping for the range of uses of a term. As a concept base becomes well understood, more precise operational definitions can be identified that form the basis for a prescriptive glossary, that is, a glossary where a community has identified more precise meanings for terms, avoiding the vagueness and ambiguity of descriptive (overloaded) definitions.
At this stage, this document mainly provides descriptive terms, and those somewhat informally. Since the main goal is a better understanding of component software and extensible software architectures, it may be that the glossary itself never becomes the centerpiece of our work. Still, it is useful in defining a scope and baseline of concepts in this area.
There is no attempt at completeness here; that is, no attempt to cover all software architecture concepts, but rather to provide a useful starter glossary that can set the stage for asking a collection of R&D questions about OSAs.
OSAs claim to provide a framework for composing "services" (or the components which provide them) to form systems. If a domain is modular, it can be expected that an overall glossary is the concatenation of specific glossaries for each modular component concept. The same can be said of requirements, which are invariant statements about a concept. That is, both glossary and requirements can be partitioned. This is useful since it allows us to specify a collection of glossaries and requirements for the parts of a thing and then combine them (composition by concatenation) into a master vocabulary. Similarly, we might decompose an area by partitioning off concepts into sub-glossaries.
The form of glossary entries below can vary from English to a precise specification language. For our purposes at this stage of analysis, we use English. The format of glossary entries is
(General) when a new modular capability can be added to an already developed system via exposed interfaces. An example is Wordia, which extends Microsoft Word with limited HTML authoring capability.
() Application architectures are, broadly, architectures of the domain/application of interest and, more narrowly, sometimes application generators for specific domains. Workflow and CASE tools might be in either category, though they likely belong here. PDES STEP tools might fit.
(Building Construction) the structural abstractions (e.g., blueprint) and styles (families of related common variations) that define a class of structure (e.g., a cathedral) or a particular structure (e.g., my house). Architecture usually focuses on the big picture and not on details such as the color of my rug or the specific pictures on my wall, though such details can be viewed as architectural since they could be consonant or dissonant with the architecture's theme. There is no clear dividing line.
(Software Architecture, General) a static framework or skeleton (structure or set of conventions) that provides the form of a software system and the conventions, policies, and mechanisms for composing itself with subsystems, or component parts, that can populate the architecture. The architecture defines how the parts relate to each other, including constraints governing how they can relate. An abstract framework is one that has not been instantiated with specific subsystems. A concrete framework (a relative term) is one that has been (progressively) instantiated with specific subsystems as binding decisions are made. If a system is divided into parts (e.g., an architecture and its components), then there are interfaces that define how the parts intercommunicate. An architecture may just be a particular composition of subsystems. More often, it is a specific subsystem that other subsystems interface to. In this latter case, an architecture may have architectural properties that preserve certain guarantees (e.g., safety, scalability, fault tolerance, location transparency, …) for systems built using the architecture. Architectural properties may sometimes be specified via rules (e.g., load-bearing walls) or conventions (e.g., must be written in C++) or constraints (e.g., use C++ or Java).
Architecture description languages (ADLs) exist but are immature at present. Some consist of structural and sequencing relationships and properties. Finally, subsystems may have internal architectures (e.g., an O/S, a compiler, or a DBMS). In general, one man's floor is another man's ceiling. That is, there may not be a good way to distinguish architecture from the rest of design, though one can still describe architectural abstractions and prove their properties.
() An architecture is an abstraction, and one wants it to be something one can reason about. That is, an architecture can have provable properties (a kind of requirement), and one should be able to ask questions and answer them based on a specific architecture. To date, most software architectures are immature, ad hoc, one-of-a-kind, and their properties are not well understood. Many people feel, based on experience developing software, that Next Generation software architectures need to be more flexible, safe, evolvable, scalable, open, survivable, high performance, etc. It would be especially nice if new and useful architectures could be derived from an architecture template. The OMG architecture principles are one list of properties; it would be worth reviewing the items in the list since not all might be found to be sound and clearly stated. Below is a list of hypothesized desirable and achievable OSA architecture properties (see Table 1):
() method wrappers that side-effect the behavior of some method. Supported explicitly in Common Lisp. Not supported in C++ or IDL via explicit constructs.
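A minimal sketch of a method wrapper in Python (chosen here only for concision; the names `with_logging` and `Account` are hypothetical, not from the original). A decorator plays the role of a wrapper that side-effects a method's behavior without changing its result:

```python
def with_logging(method):
    """Wrap a method so it records a note before and after execution,
    leaving the method's own result untouched (a pure side effect)."""
    def wrapper(self, *args, **kwargs):
        self.log.append("before " + method.__name__)
        result = method(self, *args, **kwargs)
        self.log.append("after " + method.__name__)
        return result
    return wrapper

class Account:
    def __init__(self):
        self.balance = 0
        self.log = []

    @with_logging
    def deposit(self, amount):
        self.balance += amount
        return self.balance
```

The wrapping is declared at class-definition time here; languages differ chiefly in when and how such wrappers can be attached.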
(General) the time at which decisions are made. In software, binding times vary from conceptual, to design, to coding, to compile time, to execution. Static binding happens at compile time, and certain type information is used and sometimes then thrown away; dynamic binding happens at run time. C++ throws away much type information at compile time unless Run-Time Type Information (RTTI), similar in spirit to the CORBA Interface Repository, is requested as a compiler option. CORBA supports both a static invocation interface and a Dynamic Invocation Interface (DII), which requires explicit low-level argument marshaling and unmarshaling as defined in CORBA. UNIX shared object libraries and Microsoft Dynamic Link Libraries (DLLs) provide link-time bindings. Where OMG uses inheritance (specified at design time), OLE uses a form of delegation in COM that permits run-time binding.
Build time/Run time
() a distinction in binding time where many configuration and deployment decisions are made at build time and fewer at run time. Where systems must continuously evolve or be available 24x7, they must be incrementally upgraded in place.
Class libraries are collections of class definitions and implementations. Companies like Rogue Wave, Microsoft, Borland, and JavaSoft vend class libraries; these are reusable economic units. One could deliver an OMG implementation as a class library or a collection of class libraries. OLE DB can be viewed as such. Class libraries and toolkits have the reputation of being open but too-much-assembly-required. A best-of-both-worlds approach is to deliver a useful application composed from a toolkit, where disassembly and reassembly for evolution is supported. Open OODB did this to some extent but was immature. OLE DB promises this.
COE, that is, DoD Defense Information Infrastructure (DII), DoD Technical Architecture Framework for Information Management (TAFIM), DoD Joint Technical Architecture (JTA), DoD Common Operating Environment (COE) -
The Defense Information Infrastructure (DII) Master Plan reflects DoD's collective strategy for providing the Warfighter with information capabilities to achieve mission success. DII is a web of communications, computers, software, databases, applications, data, security, services, and other capabilities to meet DoD information processing needs in peacetime and in crises. It provides a profile of recommended information infrastructure software standards. See Enterprise Architectures, DoD DISA homepage, JTA homepage (JTA was published 22 August 1996), and COE homepage. TAFIM provides general guidance and documents the processes and framework for defining the JTA and other technical architectures. The JTA focuses on interoperability requirements in C4I and future versions will cover weapon systems, sensors, and models and simulations. The DII COE is a specific implementation of the technical architecture. To a developer, the COE is a plug-and-play, open architecture (TAFIM-compliant description of how a collection of reusable system components fit together), a runtime environment, software, and APIs. The COE is not a system but a foundation for building open systems. Functionality is added or removed in manageable units called segments, which are configured remotely using a graphical user interface and downloaded to the field from a COE Software Repository. COE is being used as the underpinning to implement Global Combat Support System (GCSS) which provides integration of crisis planning, near real-time combat execution, intelligence, logistics, transportation, personnel, medical, and procurement applications. COE provides interoperability, integration, and reuse.
(OMG organizational unit) the name of an OMG Task Force working on common object services that are horizontal in nature and that would be commonly useful to many applications. That is, this distinction between OMG object services and common facilities is based on an organizational division of labor.
(OMG architectural distinction) not defined by OMG but commonly assumed, the OMG common facilities are "higher level" (an undefined term) than the basic object services. One reasonable distinction might be that the common facilities are compositions that are generic, like an RDBMS (which composes query, persistence, transactions, etc.) or workflow (which composes a domain-generic model of work with distribution, persistence, etc.) or a KBMS (which is like an RDBMS with rules added). But there are common facilities (so called), like rules and scripting, that may be primitives like the basic object services. Note that the examples of compositions given above might be statically bound compositions of services. It would be a question of binding time whether one could later add other services like security or rules to a running composed system.
(General) any software (sub)system that can be factored out and has a potentially standardizable or reusable exposed interface. Components in a SW architecture can be identified at different levels of abstraction, and the components identified at these different levels may not be in one-to-one correspondence. For example, viewing an architecture at one level of abstraction, object services may be identified as components. Viewing the same architecture at a more detailed level, a given service may be implemented by several distinct software modules, which may be individually identified as components.
(issues with components)
The promise that application development can be done using larger building blocks than lines of code. DARPA used to call this mega-programming. The additional promise of rapid application assembly from components. Lego-like reuse to build large systems from known components. Components themselves do not have to be tested and re-tested. It may be possible to derive properties of configurations of components from the properties of the component parts and the glue holding the components together.
(NIIIP) the interface (specified in IDL) of components needed by NIIIP in the specification of industrial VEs. Some of these are borrowed from OMG or elsewhere and NIIIP is an early adopter of these. Some are identified by NIIIP and NIIIP will share these with standards groups outside NIIIP.
(General) putting parts together into a larger whole, usually without changing the parts themselves and often via some rules or mechanisms of composition. The result might yield a special purpose composition or a general purpose one. In either case, it might be observed that the derived system depends in some way on the component parts (or at least on their functionality, possibly their interface and some implementation). In that sense, it could be said to be higher level. If a general purpose composition, then one could talk about a protocol stack in that higher level protocols (abstractions) depend (in some sense) on lower level ones. Another dimension of composition is binding time: some compositions are static and some dynamic. The following might be composable from primitive object services: DBMS, workflow, KBMS, repository. Notions of component, glue mechanism, and isolation principle are relevant. Care should be taken in the use of terms like higher level general purpose compositions, since these may often be special purpose compositions, and other interesting compositions may also be productive and useful. On the other hand, it is likely not the case that all components compose freely with all others (even if syntactic types match, e.g., "colorless green ideas sleep furiously") to produce useful systems, so higher level compositions of even the same component set may yield interesting systems with differing semantics.
A dependency graph showing how components are dependent on other components. Users of a component may only see the component API and I/O characteristics and its environmental dependencies; developers may see some or all exposed internal interfaces.
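A dependency graph of this kind can be represented as a simple adjacency map; the sketch below (in Python, with a hypothetical graph of an app depending on a query service, which depends on persistence, which depends on the O/S) computes the transitive dependencies a user or packager would care about:

```python
def transitive_deps(graph, component):
    """Return all direct and indirect (transitive) dependencies of a
    component, given a map from each component to its direct dependencies."""
    seen = set()
    stack = [component]
    while stack:
        c = stack.pop()
        for dep in graph.get(c, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Hypothetical component dependency graph.
graph = {
    "app": ["query"],
    "query": ["persistence"],
    "persistence": ["os"],
}
```

So `transitive_deps(graph, "app")` yields the full closure {"query", "persistence", "os"}, even though the app's exposed interface mentions only the query service.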
(in OMG) A set of preferences, expressed as (name, value) pairs (properties); something akin to an environment in Unix.
() Like inheritance in that a class definition is defined in terms of other class definitions but not necessarily via a static class hierarchy and often dynamically at runtime so that new dependencies can be added. Microsoft COM does not support inheritance but does support a kind of delegation allowing new behaviors to be added to running systems (an advantage).
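Delegation in roughly the COM-like sense can be sketched in Python (the `Component`/`Logger` names are hypothetical): behavior is acquired from other objects chosen at run time rather than from a static class hierarchy, so new dependencies can be added to a live object:

```python
class Logger:
    def log(self, msg):
        return "logged:" + msg

class Component:
    """Acquires behavior by delegating unknown operations to helper
    objects registered at run time (no static inheritance involved)."""
    def __init__(self):
        self._delegates = []

    def add_behavior(self, obj):
        self._delegates.append(obj)   # a new dependency, added dynamically

    def __getattr__(self, name):
        # Called only when normal lookup fails: forward to a delegate.
        for d in self._delegates:
            if hasattr(d, name):
                return getattr(d, name)
        raise AttributeError(name)
```

Unlike inheritance, the set of delegates (and hence the component's behavior) can change while the system runs, which is the advantage the entry notes for COM.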
() system A depends on system B if B is required for A to perform some or all of its functions (in some logical or abstract sense). Dependency may be mandatory/required or optional. It may be direct or indirect (transitive). Many relationships are dependency relationships: human parent to child, whole to part in product data, source to binary. Sometimes the relationship is manually maintained; sometimes it is computable. In an optional dependency, A may be capable of performing without B, but only in some degraded mode. Often A may depend on the function of B but not necessarily on B itself (an important distinction). If A and B are components, A may depend on any component implementation with B's interface or even any that can be coerced to B or a trader function may find a match. The binding time may be early or late. Often we say that systems that depend on other systems are higher level and this notion leads to protocol stacks where higher level abstractions are built on and may hide lower level ones. Sometimes a user of a higher level abstraction can drill down to access a "hidden" lower level abstraction.
In the future (as well as today, with reengineering tool suites), tools may be built that can locate patterns in legacy code, enabling manual or semi-automated architectural specifications to be used to augment legacy code and recover the design rationale, architectural patterns, structure, behavior, and constraints, that is, the design record.
() Domain models in the sense of OMG or PDES/STEP are industry specific but still generic class libraries or object models that encode important entities from that domain in a standard interchangeable way so data can be shared across system or organizational boundaries. See domain-generic objects.
Domain Specific Software Architectures
() missile systems, C4I systems, logistics systems, GIS systems, DBMS systems, compilers, query engines, and security kernels have internal patterns specific to that class of software. DBMS, workflow and KBMS systems have patterns in common. The DoD JTF/ATD is an application architecture for command and control that layers C4I domain information (e.g., weather, threat assessment) on a generic middleware backplane that consists of CORBA and basic services at a lower level and other higher level query, situation assessment, and planning services at a middle level. Some might view JTF as having an application architecture layered on a technical architecture.
() Large companies sometimes develop enterprise architectures. Similar to reference architectures but broader in scope, enterprise architectures account for the entire host of software and machine types strategic to the enterprise. Critical is a description of standards for various environments and also the company's recommended product choices for given standards. Also, an attempt is made to state that some kinds of software or implementations are only available in some environments. Purposes of an enterprise architecture are to provide guidance on purchasing, center information on evaluations into one place, preserve investments on licensing and training, etc. Many small or midsize companies do not have the resources to develop their own Enterprise Architecture and only have an implicit enterprise architecture distributed in heads in the MIS department or across departmental functions. Enterprise Architectures are living documents. Some Enterprise Architectures take legacy code evolution into account by describing grandfathered systems (no longer recommended) as well as emerging standards. Enterprise architectures are sometimes not well aligned to departmental computing needs. The Web may actually shift the balance here since it provides downloading from a central repository and remote maintenance. See Technical Architecture and Application Architecture as well as DoD Common Operating Environment.
(Conceptual term) when parts of a system are designed so additional functionality can be added at some later date. An example might be adding distribution, replication, security, or versioning to a distributed system via modular additions. You could put an add-in in an expansion joint. This extension may be seamless or seamy.
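One common shape for such an "expansion joint" is an exposed registration interface; the Python sketch below is hypothetical (the `System.register`/`invoke` names are illustrative, not from the original) but shows capability being added after the system is built:

```python
class System:
    """A minimal extension point: capabilities are added after
    deployment via an exposed registration interface."""
    def __init__(self):
        self._extensions = {}

    def register(self, name, func):
        self._extensions[name] = func   # add-in installed at run time

    def invoke(self, name, *args):
        return self._extensions[name](*args)

s = System()
# A later, modular addition: limited HTML authoring, in the spirit of
# the add-in example earlier in the glossary.
s.register("html", lambda text: "<p>" + text + "</p>")
```

Whether such an extension is seamless or seamy depends on how well the exposed interface anticipated the addition.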
Extended Finite State Machines
() A finite state machine is an abstract machine formalism composed of nodes (states) and arcs (state transitions or operations). Normally there is a start state and one or more stop states. A generalization is a pushdown automaton, which permits "pushing down" to other finite state machines when crossing an arc. A close analogy is a subroutine call, though in an EFSM the nature of binding time and glue is left unspecified. In an extended finite state machine, attributes can be specified at nodes, making the system much more useful for modeling. Finite state machines are used in hardware design, modeling protocols, compilers, and even natural language. Finite state machines may also be a good abstract model for protocol definitions of component software. They neatly separate caller from callee (isolation principle).
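An EFSM can be sketched compactly in Python (the toy connection protocol and the `timeout` attribute are hypothetical): transitions map (state, event) pairs to next states, and the "extended" part attaches attributes to states:

```python
class EFSM:
    """Extended finite state machine: transitions are
    (state, event) -> next_state, with optional per-state attributes."""
    def __init__(self, start, transitions, attrs=None):
        self.state = start
        self.transitions = transitions   # {(state, event): next_state}
        self.attrs = attrs or {}         # {state: attribute dict}

    def fire(self, event):
        self.state = self.transitions[(self.state, event)]
        return self.state

# A toy connection protocol with an attribute on the "open" state.
fsm = EFSM(
    "closed",
    {("closed", "open"): "open", ("open", "close"): "closed"},
    attrs={"open": {"timeout": 30}},
)
```

Note the isolation the entry mentions: the caller only presents events; how the machine reacts is entirely the machine's business.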
(General) synonymous with open. Mainly refers to the architectural property aspect of openness with intent to be able to add at a later date new or custom behaviors.
(General) how a problem or system is broken into parts. There is generally no one unique factoring for any complex problem. A basis set is a collection of reusable library parts that can be composed together to build various systems. There may be no unique basis sets, but nevertheless experience has shown that there are useful basis components like DBMS, OMG, O/S, compilers, … Standards evolve to define the interfaces to these parts. Class library vendors are selling a basis set which developers can add to in order to solve problems. Toolkit vendors sell a collection of useful parts. System vendors sell much larger generic subsystems. Turnkey vendors sell a complete application-specific solution. Some toolkits are simultaneously systems (OLE DB, Open OODB).
(deconstructing sense) There is a trend to deconstruct once monolithic stovepipe systems, both special and general purpose, into frameworks or components that have exposed interfaces and can separately be serviced (extended, replaced with best-of-class parts, customized, or alternatively made minimal in function and footprint). Perceived advantages are mix-and-match, best-of-class, and customizability. Component software is one name for this trend, popularized by the Microsoft Component Object Model (COM), often loosely referred to as OLE, with a growing number of OLE-centric specs, e.g., OLE DB, OLE Transactions. OMG is a competing complex of component specifications, where specifications are not proprietary to a vendor but open to all to implement (though generally closed to any but OMG to evolve, with all current implementations proprietary).
Family of Systems
(General) a conceptual category like DBMS systems or RDBMS systems. Oracle is an instance of relational DBMS. Smalltalk and C++ are examples of OOPLs (a family). Families share a common design space (set of features, dimensions, aspects, properties) and typically a common architecture but differ in details such as which exact collection of properties are in a specific system. Inheritance hierarchies can be viewed as a family of general to specific/custom variants.
(system composition sense) if A1 and A2 are members of the same family of systems and if A1 and A2 can interoperate (federation sense) in such a way that they work together to accomplish the function of A as if they were one system, then they are said to be federated. If several CORBA implementations are gatewayed together, then a message sent to an object in a remote CORBA arrives as if there were just one logical CORBA. The purpose of the CORBA interoperability specification is to accomplish this. But note: many CORBA services must also have the federation property: naming services are often federated, event services may need to call other event services, transaction services must compose (see the X/Open XA protocol for a transaction commit coordinator), query services must compose, DBMS and workflow systems must federate, and so on. Typically, interfaces to a system for the purpose of federation need to be exposed. This has not been done in a consistent manner by OMG; there is a need for a Federation Design Principle. Federation, in this sense, composes like systems into a larger like system, or alternatively it permits a system to be replicated in function but still provide a uniform image (interface). A major consideration of federated systems is efficiency.
(DBMS sense) DBMS systems that are interoperable in the federation sense. This may mean that a parent DBMS with a federated schema provides a uniform interface to new applications. Internally, it may process queries itself and store data but it may also partition some queries to call other leaf (or federated) DBMSs. The DBMS literature often uses the term federation with the connotation that the systems involved are relatively autonomous; they participate in the federation to an extent agreed upon by the federation, but retain autonomy in other respects.
(bad connotation) federated systems may be inefficient.
(General) the part of an overall system architecture into which variable components can fit.
(Object Technology) a set of cooperating object classes that make up a reusable design for a specific class of software or function. Such a framework is typically customized to a particular application by creating application-specific subclasses of the abstract classes in the framework. Apple's MacApp is an example of a framework for a complete Macintosh application. Taligent's CommonPoint Application System is a framework consisting of a collection of frameworks. Each of these frameworks is a group of C++ classes that operate together to implement some function.
A problem Taligent had is that different frameworks were implementationally dependent, so that using one pulled in many others, creating a large footprint. Another problem was that certain interfaces were hard-wired for distribution; that is, the distribution glue was specified at coding time (see binding time). The same footprint problem arose with Common Lisp, where applications that reused all Lisp features were hard to port to or re-implement in other less functional environments.
A looser use of framework in object technology is to refer to a toolkit. A toolkit is a set of related and reusable classes designed to provide useful, general-purpose functionality. An example of a toolkit is a set of collection classes for lists, stacks, etc. In this sense, a toolkit is the object-oriented version of a subroutine library. Unlike a framework (object technology sense), a toolkit does not impose a particular design on the application; it just provides useful functionality which can be used as needed. Apple's Macintosh User Interface Toolbox is a variant on this idea.
By providing a common user interface toolkit, all Mac applications tended to have a common look-and-feel (e.g., pull-down menus). End users benefited since new applications were more quickly understandable; developers benefited since it was cheaper to build user interfaces since it required much less specification and they could reuse trusted parts.
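The framework-versus-toolkit distinction can be made concrete with a small Python sketch (the `Application`/`TextApp` names are hypothetical): the framework owns the overall control flow in `run`, and an application customizes it only by subclassing, exactly as in the object-technology sense above:

```python
from abc import ABC, abstractmethod

class Application(ABC):
    """Framework (object technology sense): the framework fixes the
    control flow; applications fill in the abstract steps."""
    def run(self, doc):
        loaded = self.load(doc)        # framework decides the order
        return self.render(loaded)

    @abstractmethod
    def load(self, doc): ...

    @abstractmethod
    def render(self, data): ...

class TextApp(Application):
    """Application-specific subclass customizing the framework."""
    def load(self, doc):
        return doc.upper()

    def render(self, data):
        return "[" + data + "]"
```

A toolkit would instead offer `load`-like and `render`-like helpers for the application to call in whatever order it chose; here the inversion of control is the framework's defining trait.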
Side note: this framework like the Mentor Falcon or OLE DB framework is proprietary. The alternative is open framework or architecture specifications like those of OMG or NIIIP. An issue for open frameworks is whether there exists a common reference implementation from which all commercial implementations are derived (ex. DCE) or each vendor implements a specification (as in OMG or RDBMSs) and conformance testing or interoperability testing is separate.
(OMG) the term framework, not formerly used by OMG, has recently (at the Madrid meeting in August 1996) been added to the OMA Guide to refer to the CORBA message-passing bus or backplane, the presumption that any clients and services use IDL bindings, various architectural principles about isolation of services, and the presumption that application objects can build on domain objects, which can build on common services, which can build on basic object services.
(General cont.) Within the community of people who adopt framework F, the mentioned benefits accrue (common semantics for end-users and reuse for developers). But if a system must be ported to multiple frameworks F and G, designers may create a floor interface to insulate their system from the multiple frameworks (environments) they may need to depend on. Another point: the design space of a family of systems may be an architecture framework which permits selection of architecture variants. Similarly, FW1 may be used to encapsulate FW2, FW3, etc. by providing specialized wrappers. An example is JavaBeans that encapsulate OLE controls and OpenDoc.
() a kind of glueware that provides the federation interface between two systems composed in the federation sense, especially those systems that transport information or messages across possibly heterogeneous networks. A kind of glue mechanism.
(General) any of several mechanisms that permit composition in a software architecture. Glue can be special purpose (connotations: hard-coded, bad, expensive, idiosyncratic, not reusable, necessary for functionality or efficiency) or general purpose (connotations: reusable, good, elegant; inefficient). Composition can occur at various binding times. General purpose glue mechanisms include: subroutines binding arguments via call by reference or call by value; remote procedure calls to other address spaces (involving marshaling arguments manually or automatically); templates and macros; wrappers that wrap f(x) with additional side effects; static inheritance hierarchies with single or multiple inheritance; before and after methods and other variations on method composition as in Common Lisp; contexts and argument stacks; delegation (dynamic or runtime binding) playing a similar role as inheritance (which is a design-time/compile-time mechanism); dynamic and interpreted definitions; predicate-based or rules-based schemes where conditions that become true activate actions or imply other conditions; unification schemes like Prolog; extended finite state machines. A few systems support multiple glue mechanisms and allow throttling from one to another. COMMENT: ELEGANT GLUEWARE IS WORTH ITS WEIGHT IN GOLD. NIRVANA IS THE ABILITY TO SEPARATE FUNCTION FROM GLUEWARE AND THEN LATE BIND TO DIFFERENT GLUEWARE TO ACHIEVE DIFFERENT RESULTS. A FAIR AMOUNT MORE WORK IS NEEDED TO DESCRIBE THE RANGE OF GLUEWARE.
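The "separate function from glueware, then late bind to different glueware" ideal can be sketched in a few lines of Python (the function and glue names are hypothetical): the same function is composed through interchangeable glue chosen at call time:

```python
def add(x, y):
    """The function proper: knows nothing about how it is invoked."""
    return x + y

def local_glue(func, *args):
    """The cheapest glue: a direct in-process call."""
    return func(*args)

def traced_glue(func, *args):
    """Alternative glue: same function, with a tracing side effect
    (a stand-in for logging, marshaling, or remote invocation)."""
    trace.append(func.__name__)
    return func(*args)

trace = []
```

Because `add` never mentions its glue, swapping `local_glue` for `traced_glue` (or, in principle, an RPC-style glue) changes the system's non-functional behavior without touching the function.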
(General) The ends of the spectrum are coarse grained and fine grained. RDBMSs and OODBs operate on fine-grained data like records and instances. File systems operate on coarse-grained data like files. Compound document systems are typically thought of as coarse grained. WAN operations are often coarse grained; intra-processor operations fine grained. There is no clear-cut dividing line. Separate systems often provide similar functionality for coarse- or fine-grained information, but then developers must choose among competing alternatives in representing problems. Many applications store some persistent data in file systems since an OODB might be too heavyweight for their purpose; that is, it provides extra bundled functionality they do not need, has a different interface than files, might require a license, and makes the application footprint larger. If file systems and OODBs had the same interface and OODBs were deconstructed to be modular and provide just the functions needed by applications, then the distinction might evaporate (a long-term 5-year goal, implying OO file systems).
() relative terms - two systems are homogeneous if they have the same X where X is some or all aspects like the same interface, the same implementation, can interoperate, can share data, etc. Two systems in a family are heterogeneous to the extent that they are incompatible in some way. One may represent information differently or not include certain functionality or adopt different security policies. Federating homogeneous systems is presumably simpler than federating heterogeneous systems. See also glue and wrapper. Some form of mediation (adding the needed functionality to one or resolving the differences) must account for the differences.
The characteristic that causes something to be distinguishable. Identity has a rich philosophical underpinning. Some view identity as universal (UID); others view it as relative to a system, abstraction, or namespace. Thus v27 and v28 are different versions of the same entity (e.g., entity(v27) = entity(v28) but version(v27) ≠ version(v28)).
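The entity/version distinction can be made concrete in a Python sketch (the `Version` class and `doc-1` identifier are hypothetical): two version objects share an entity identity while remaining distinguishable as versions:

```python
class Version:
    """Identity relative to a namespace: versions of one entity share
    an entity id but carry distinct version numbers."""
    def __init__(self, entity_id, version_no):
        self.entity_id = entity_id
        self.version_no = version_no

v27 = Version("doc-1", 27)
v28 = Version("doc-1", 28)
```

Here entity(v27) = entity(v28) holds via `entity_id`, while version(v27) ≠ version(v28) holds via `version_no`, matching the equation in the entry.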
(Objects) Components can import and export objects via I/O operations including via call by value operations.
(Interfaces) A component can export an interface by making it available for use by other components. A component can import an interface by indicating that it uses that interface.
A generalization of the importing and exporting of interfaces is the importing and exporting of services. This concept is just starting to be studied. For instance, can one think of exporting the query service of a relational DBMS to become a network service? Can one import a rules service into a DBMS to make it an active DBMS?
(General) to put parts together into a whole somehow. "The goal of integration is to combine the required elements into a needed capability" (from recent ARPA DTII Meeting).
(positive connotation) systems are integrated when they are, to a lesser or greater extent, seamlessly combined to support similar conventions or styles.
(negative connotation) when integration is done via coding and the result is hardwired or when integration requires changes to existing subsystems. Limited exposed interfaces and evolution of subsystems are problems that make integration harder. See interoperability.
(Repository, General) a collection of meta data, or a system for collecting meta data. There are many variations on the kinds of functionality a repository should have and how it is related to a DBMS or KBMS. The X3H4 IRDS community and many companies have spent years unsuccessfully trying to define the notion of a repository. OMG recently issued an RFI and three responses were received.
(OMG Interface Repository) a specific OMG specification defining a collection of IDL classes that can be used to store specifications of IDL classes. The specification does not say how to query the information stored in the IDL repository, make it persistent, or operate on it transactionally. One could use OMG services like Persistence, Query, and Transactions, or some other mechanism like a DBMS or KBMS.
(NIIIP VE Repository) a collection of interfaces, including IDL but also including STEP Express SDAI, possibly NCL, and possibly other interface specifications. Issues are, how is it stored (in IDL or multiple meta data formats) and accessed or operated on (KBMS, OODBMS, RDBMS, object services, … any of the above). Details TBD.
Interface, Interface Specification
() an interface is a defined means for a system to communicate with other systems: a boundary between a system and its environment providing ways of supplying the system with inputs and receiving its outputs. In OO programming, class definitions and method signatures provide interfaces. Application program interfaces (APIs) form the interface of a system to applications and often consist of collections of functions or commands in a scripting language. Often we say that an interface encapsulates an implementation in that the implementation can be changed without changing the interface. Interfaces may be hidden (available only to the system developer) or exposed (available to others).
In general, explicitly defined interfaces are a good thing, the source of modularity in systems. But they do set up walls that partition off parts of systems. Optimization is a dual to modularity in that it tends to break down walls to take advantage of specific opportunities for efficiency. When this can be done automatically it is a good idea. Hand optimizations should be used with more care.
The term API, or application programming interface, refers to an interface to a system that allows the system to be controlled programmatically. The term floor interface is sometimes used to refer to an interface that defines the system's dependency on another system. In general, a given software system may have multiple interfaces: an install/uninstall interface, a user interface, an API, a floor interface (to allow for porting), a federation interface (for gatewaying to other similar services), and possibly a system management/administration interface (to allow for tuning and controlling the system). There is typically not much support for these interfaces in terms of specific constructions, though there may be conventions for some.
(Object Interface) a specification in some OO language describing a class, its component elements, its operations (methods), and any superclasses its definition depends on. Tomes have been written on variant object modeling languages (see X3H7 Final Report).
(IDL Interface) a specification written using OMG IDL as the object modeling language
(General) when systems work together, they are said to be interoperating.
(unplanned reuse sense) When an existing legacy system needs to be composed with some other systems for some purpose, we say they are interoperable if the integration is easily accomplished and changes to component subsystems are not needed. In this sense, if A and B can interoperate then they can work together (the sense is, without changing A or B).
(substitutability sense) if a component in a mediation pattern can be replaced by others in its family of systems we sometimes say the replacement parts are interoperable, that is, interchangeable. For instance, if RDBMS1 can be replaced by RDBMS2 they are interoperable.
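Substitutability in this sense can be sketched as two backends sharing one call signature (a toy example; `SQLiteLike`, `PostgresLike`, and `run_report` are invented names, not real database drivers):

```python
class SQLiteLike:
    """Stand-in for RDBMS1."""
    def query(self, sql: str) -> list:
        return [f"sqlite:{sql}"]

class PostgresLike:
    """Stand-in for RDBMS2, exporting the same query() signature."""
    def query(self, sql: str) -> list:
        return [f"postgres:{sql}"]

def run_report(db) -> list:
    # The caller depends only on the shared query() interface, so either
    # backend is interchangeable (interoperable in the substitutability sense).
    return db.query("SELECT 1")
```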
(federation sense) if A1 and A2 are members of the same family of systems and if A1 and A2 can be federated then A1 and A2 are said to be able to interoperate with each other.
If two software systems (abstractions) can be separately specified then they are orthogonal and independent of each other and can be isolated. References in one to the other are indirect (not hardcoded).
() Software systems that exist now. We are creating tomorrow's legacy systems today. The key puzzle is, what strategy can an organization (e.g., the DoD or industry) use to maintain its software investment over time. This breaks down to problems of evolving and maintaining existing software systems, replacing old ones and adding new ones that can interoperate with old ones. Part of the trick to designing long-lasting legacy systems is understanding the requirements of the system as it is likely to evolve and scale up. Wrapping is one way the OO community (e.g., OMG) proposes to interface to legacy systems. While it is likely DoD and industry will migrate some legacy systems like C4I systems toward new architectures, it is just as likely it will replace the existing with the new over a several year period.
(grandfathered sense) some use the term legacy system to mean one that is already grandfathered.
(General) fitting parts together typically without changing the parts; the glue that does this may be a framework into which framework friendly parts are made to fit or special purpose glue to accommodate foreign parts not originally designed for this framework. An example of a foreign tool might be a CAD system that does not use Motif when the rest of an environment does.
(positive connotation) less special purpose glue required; can add more functionality later by other than the developer (e.g., extensible).
() this pattern describes the situation where two or more component subsystems are composed into a higher level system in such a way that neither is changed (or aware of the other). The new higher level pattern accounts for the particular sequencing (in a broad sense) of calls that transmit information among the components. In addition, the new higher level pattern depends on the lower level patterns.
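The mediation idea above can be sketched as a higher-level object that sequences calls between two unchanged components (all three class names are hypothetical, chosen only for this illustration):

```python
class Sensor:
    """Existing component, unaware of Logger."""
    def read(self) -> int:
        return 42

class Logger:
    """Existing component, unaware of Sensor."""
    def __init__(self):
        self.lines = []
    def log(self, msg: str):
        self.lines.append(msg)

class Mediator:
    """The higher-level pattern: it owns the sequencing of calls that
    transmit information between the components, changing neither."""
    def __init__(self, sensor: Sensor, logger: Logger):
        self.sensor, self.logger = sensor, logger
    def poll(self):
        value = self.sensor.read()            # 1. pull from one component
        self.logger.log(f"reading={value}")   # 2. push to the other
```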
(General) when any of a collection of subsystems can be composed without programming (possibly at build/compile time or sometimes at runtime)
(Microsoft) MS uses the term plug and play to describe configuring a system with varieties of monitor, disk, tape, Ethernet, and modem.
() an encapsulated software unit consisting of both state (data) and behavior (code). In some object models, an object is an instance of some class as specified in some OO object modeling language (e.g., IDL).
(business object) The OMG BOM Task Force seems to use this term to refer to domain-generic business objects that could be standardized and that describe a library of useful business entities (e.g., org charts, purchase orders, …).
(domain-generic objects) OMG has identified several application domains: manufacturing, healthcare, telecom, financial, GIS (coming soon), and these groups are isolating reusable domain objects in IDL in much the same way as the STEP community has been doing for ten years with product data objects using EXPRESS. These efforts are fairly immature at present but are all now promoted to OMG Task Forces and so can adopt technology from industry. DoD has a C4I Object Modeling Working Group and a group meets on C4I at OMG meetings.
(General) an object modeling formalism. Examples include C++, Smalltalk, Object COBOL, CLOS, … An object model is a subkind of data model with primitive concepts identity, state, encapsulation, operations/methods, messages, inheritance, polymorphism/overloading. The ANSI X3H7 committee recently published a Final Report on the large number of variations among different object models. For instance, an object model could be extended to add rules, relationships, and other constructs (as in NCL). Small variations in object model yield large variations in the factoring of problems. (Thus, it is a sad state of affairs that SQL3 ADTs constitute a different object model than OMG IDL though both are meant for sharing across environments.) Mapping between object models may be possible but may not preserve understandability (and hence may not make it easy for humans to operate on the result of the mapping). Mappings in general are not bi-directional. This is a giant source of interoperability problems, and one that object modeling (due to the variety of object models) exacerbates rather than helps.
(General) a specific collection of classes in some object modeling formalism.
Object Request Broker (ORB)
(OMG sense) a software framework or architecture consisting of a message passing backplane into which component object services can be plugged to allow them to intercommunicate. OMG defines a specific ORB called Common Object Request Broker Architecture (CORBA) consisting of the OO interface description language IDL, a mechanism for dispatch which may or may not be distributed, and mechanisms for static and dynamic dispatch. The current message-passing semantics are synchronous, "at most once", and blocking. Some other middleware alternatives that could be supported later are queuing, asynchronous, isochronous, group dispatch, and call by value. Other ORBs could exist for other OO languages, e.g., C++, Java, Smalltalk. OMG additionally defines standard mappings from IDL to C++, Smalltalk, Ada, COBOL, …. Express maps to IDL. ODMG ODL is a superset of OMG IDL. Note that OMG does not define standard mappings from host languages to IDL. Also note that DCE IDL is a non-OO interface description language and is not the same as OMG IDL though they play a similar role.
(OMG organizational unit) the (now historic) name of an OMG Task Force (now called ORBOS TF) working on basic object services that are horizontal in nature. They are basic in that many applications are likely to need them and so were scheduled for earlier adoption than the OMG common facilities.
(OMG architectural distinction) one or more objects that together perform some abstraction or function. Services are reusable, have a well defined object interface, have cleanly specified interfaces to other services they must or may depend on, are composable with other services into higher level systems, and are often federate-able with other copies of the same or similar service. The intent is to separate abstractions like persistence and versioning as separable services that can be recombined and/or specialized by some general glue mechanisms like inheritance. Note that separation of interfaces does not necessarily imply separation of implementations. So a legacy system providing several services may simply provide a compliant interface to OMG services to be compliant. It may still be difficult or impossible for a third party to augment the functionality of the black box implementation by providing other services. On the other hand, separate implementations of each service may be standalone and not be composable themselves (even within a single vendor's product offering). OMG and the entire component software community still need to wrestle with implementation composability as well as interface composability. Until this is done, several of the mix-and-match benefits of the component approach will go unrealized.
(Object Services Architecture) An OMG document (which I edited in 1991-92) which lists the basic object services that OMG eventually adopted most of.
(from the proposal) "Interoperation theory for OSA architectures: We will provide a semi-formal basis for better understanding scaleability of OSA architectures by developing an "algebraic" OSA architecture description language to describe components of an OSA by their typed interfaces; constellations, which are monolithic compositions of components (legacy systems); composition rules, which include abstractions like wrapper, gateway, and around-method; constraints including dependencies between services; and properties including performance, and licensing restrictions. We will use the algebra to describe how services combine and also service multiplicity in OSA-OSA federations to better understand recursive service calls (e.g., distributed queries, nested transactions, federated namespaces, CORBA-CORBA interoperation). We will initially address homogeneous services and later extend the model to heterogeneous collections of services. We will be trying to better understand both static architectures, which are fixed compositions of federated OSAs and dynamic architectures which model how architectures change over a lifetime of use and how to plan for change. The emphasis on this work will be pragmatic, to transfer abstracted descriptions of real systems and examples to a formalism that helps in understanding and planning for architectural change.
Developing Internet OSA: To gain experience, we will build a scaleable OSA. We will use existing encapsulated components where possible, and build or modify others as required by the driving electronic commerce application. Problems we will address are: how to accommodate large systems that require crossing object model boundaries, how to handle situations where data in OSA1 needs a service in OSA2, how to combine the rich Internet environment and all its services with the explicitly object-based environment of CORBA (and OLE2.0).
[Note: vendors are defining CORBA-CORBA interoperability but not service level interoperability.]
Demonstrating OSA-OSA Interoperability: Corresponding to the three scenarios of section D.1, there will be three OSA interoperability demonstrations. The first will just demonstrate encapsulation of existing tools to form an initial, working OSA system; the second and third will enrich the OSA but also begin to demonstrate OSA-OSA federation. Experience we gain from building the demonstrations will be recorded descriptively in the OSA architecture. We hope to enrich the algebra formalism to explain our observations. Some observations are informally known already, e.g., modifying g++ to add, say, persistence is risky unless the g++ development team adds your modifications since you will be responsible for upgrades whenever they upgrade. Other observations may be subject to experimental verification, e.g., that some object services are object model independent (OQL[C++], OQL[Express], or OQL[Relations]). The ability to reason about system compositions may lead to new systems (e.g., object-file systems that unify file systems, relational DBMS, and OODBs) and will better explain reuse and architecture migration in existing systems."
(architectural property) System A is open if it has (a) configuration or runtime parameters, (b) exposed interfaces, or (c) programmability that permit (i) varying its functionality, (ii) augmenting its functionality with add-ins, (iii) replacing or removing some subsystems, or (iv) doing any of the above to evolve the system. The antonym of open is closed: a blackbox system with no exposed internal interfaces. Of course, a system is more or less open (that is, closed-open is a spectrum) and the question is, how open should a system be? Ultimate openness is delivery of source code. While that may be desirable for research communities, too much openness is not necessarily a good thing for all customers since it can often lead to customers evolving system variants that are difficult to merge or maintain.
(economic aspect) Systems may be architected open but sold closed (permitting only their developers to take advantage of openness) and then the owners can additionally sell openness. Developer licenses are an example of exposing additional interfaces for economic reasons. That is, openness is both an architectural concept and also has economic/legal aspects used for competitive advantage.
(Conceptual) Patterns are reusable abstractions that can be documented and logged in a pattern repository. They range from detailed algorithms (e.g., Knuth) to OMG OMA to enterprise architectures and to other fields like conventional architecture. Patterns can have provable properties. They are useful because they represent a common library that experienced developers are likely to see over and over again. They can be instantiated into many different programming languages at the expense of adding details that specialize them in some way. Template and macro facilities in some programming languages help to provide some support for specifying some kinds of patterns so that details like which concrete class they are governing can be added at a later stage in defining a system. Glue mechanisms and object services are subkinds of patterns. An architecture itself is a pattern. (As far as I know, there is no precise definition of pattern relative to these related terms.) Some believe that composition patterns can knit together other patterns. Some believe there might be some design patterns that can be used to explain OSA composition. There is not much in the area of explicit libraries or repositories of design patterns or clichés (though books on patterns, OMG specifications, IETF protocols, Java Beans, class libraries, and various standards surely qualify). Gamma et al. use the term design pattern to refer to an abstract description of a particular design problem which may occur numerous times in a given architecture. Each of their design patterns is expressed as a small collection of abstract object classes, emphasizing the roles of the various classes in solving the design problem.
(General) when new components can be added to a (running) environment without an extensive system configuration effort or integration effort.
(Netscape) An Add-in in Netscape (or certain other Web browsers). Plug-Ins are programs that are downloaded to the client; they are not guaranteed to be portable across environments. They obey a plug-in interface in that they are launched from Netscape and may use the Netscape browser as a user interface. There are no environmental guarantees that plug-ins are safe, whereas there are such guarantees (or reasonable claims) with Java applets.
(General) If system A dependent on environment E1 can be re-hosted to environment E2, A is said to be portable. A is oftentimes given a floor interface specifically to make porting easier. In that case, separate porting packages A-Ei can be written to interface A to environment Ei for each i. This can be done by the vendor of A or possibly by a third party if the floor interface is published or exposed.
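The floor-interface approach to porting can be sketched as follows (a hypothetical illustration: `FloorInterface`, `UnixPort`, `WindowsPort`, and `SystemA` are invented names standing for A, its floor, and the porting packages A-Ei):

```python
class FloorInterface:
    """The floor interface: everything system A needs from its environment."""
    def read_file(self, name: str) -> str:
        raise NotImplementedError

class UnixPort(FloorInterface):
    """Porting package A-E1: binds the floor to environment E1."""
    def read_file(self, name: str) -> str:
        return f"unix:{name}"

class WindowsPort(FloorInterface):
    """Porting package A-E2: binds the floor to environment E2."""
    def read_file(self, name: str) -> str:
        return f"windows:{name}"

class SystemA:
    """System A touches its environment only through the floor interface,
    so re-hosting means supplying a different porting package."""
    def __init__(self, floor: FloorInterface):
        self.floor = floor
    def load_config(self) -> str:
        return self.floor.read_file("config")
```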
One other architectural idea to add here is that of profiles, or collections, of standards or systems that can be used compatibly (also see environments and Enterprise Architectures).
(diplomacy) rules of order or conventions of behavior in communication among parties. "Thank you. You are welcome."
(General) a specification that is general purpose and may become subject to standardization. Includes OO and non OO interface specifications, wire formats, APIs, as well as generic (communications sense) sequencing between communicating objects.
(Narrow communications sense) in this sense, protocols are only the communications-sequencing notion; that is, a protocol defines specific generic sequencing rules for communication. It does not include APIs or OO interfaces.
(NIIIP Art's Working Paper p1) the specification of the behavior between objects towards a defined end. (which sounds like the communications sense)
(NIIIP Art's Working Paper p2) protocols are general and potentially standardizable. Objects, components, systems, and technologies have protocols. This definition is intended to subsume earlier definitions, which include:
() In EFSM work, complex protocols can be modeled with collections of EFSMs. Many times only a subset of the functionality of a general purpose EFSM is needed in some deployed system. Protocol pruning is a general mechanism for reducing the size of an EFSM and customizing it to a given need. It could be used at definition time or even to permit demand loading of seldom used features of a system.
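Protocol pruning can be illustrated with a toy state machine represented as a transition table (this is a sketch under invented names, not any particular EFSM formalism):

```python
# A toy finite state machine as a transition table: (state, event) -> state.
FULL_FSM = {
    ("idle", "open"): "open",
    ("open", "read"): "open",
    ("open", "close"): "idle",
    ("open", "lock"): "locked",    # seldom-used feature
    ("locked", "unlock"): "open",  # seldom-used feature
}

def prune(fsm: dict, used_events: set) -> dict:
    """Protocol pruning: keep only transitions for events a deployment uses,
    shrinking the machine to the subset of functionality actually needed."""
    return {k: v for k, v in fsm.items() if k[1] in used_events}

def run(fsm: dict, events: list, start: str = "idle") -> str:
    state = start
    for e in events:
        state = fsm[(state, e)]
    return state
```

A deployment that never locks can run on `prune(FULL_FSM, {"open", "read", "close"})`, and the pruned-out transitions could in principle be demand-loaded later.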
reference architectures, reference implementations
The standards communities talk about reference architectures (aka reference models) and NIST, consortia, companies, or interested others sometimes provide reference implementations. Reference implementations can be public domain or proprietary. Reference Architectures and Reference Implementations are commonly stages in the standardization process for some class of systems. The existence of a usable reference implementation gives assurance that a reference architecture is viable. IETF requires implemented specs. OMG requires commercial availability at some future time. X3's OODB Task Group (c. 1991, my work) did some pioneering work on a Reference Model for OODBs (a progenitor of the OMG OMA Reference Model, also co-authored by us in c. 1991). The reference model was cast as an AND/OR tree of selections called a design space. One could select an object model and features from a list including persistence, transactions, queries, etc. to describe a real or hypothetical OODB. The reference model provided a framework for evaluating various commercial OODBs. Paul Pazandak's current work on Multimedia DBMS evaluations is similar. That is, at a level of abstraction, the reference model descriptively accounts for the range of variations among a collection of similar software systems. But it does not by itself account for the detailed composition of the features nor guarantee properties of interoperation even for similarly constructed systems, let alone systems where even one design choice differed (e.g., what if two OODBs are similar in all regards except that one does allocation-time persistence and the other transitive-closure persistence, or they vary in security policy: how interoperable will they be?).
(General) invariant true statements about a software system or subsystem. A collection of requirements true of some software system is called a set of requirements. Requirements can be implicit (I'll know it when I see it) or explicit (recorded, written down); qualitative or quantitative (measurable); true of a whole system or just true of parts of a system; end-user requirements of what is wanted or technical requirements of what a system is supposed to do. A given set of requirements may be incomplete (underspecified) or contradictory (overspecified) or both. For any realized system, it may meet its explicit set of requirements but there are additional unstated things true of it that may be implicit requirements, don't-cares, or may violate explicit or implicit requirements. In the life cycle of a software system, requirements evolve (change) as the uses and needs for the software system change. If r is the set of requirements that a system s obeys and if we can identify R, a superset of r, which includes future requirements, then it is beneficial to design s not only to meet r but also to not preclude R-r. Systems that are evolving typically meet increasing numbers of requirements in the future and evolve upwards compatibly. Unfortunately, there is always a time t-future where some new requirement n is inconsistent with r-future and hence s-future, so the system may not be strictly upwards compatible. In fact, there is often an unforeseen future requirement that cuts deeply enough across the current design so as to require substantial refactoring (redesign) of s. Systems or features that are supported for an interim time before being discontinued are said to be grandfathered. Similarly, there may be tradeoffs in satisfying r1 and r2 so neither can be completely satisfied. It is an (unproved) thesis of open systems that they provide better expansion joints than closed systems for withstanding evolutionary change in requirements.
(NEW) A related definition of requirements is: requirements are "implementation-free statements of what is wanted". This captures the application view that requirements are statements in the application domain of the problem to be solved, whereas the original definition, "invariant true statements about a software system or subsystem", captures the view that requirements are statements true about a system in the design space of what is wanted.
(Requirements Specification Language) (Semi) formal language and process for capturing and representing requirements in English, semi-structured English, or a specification language and refining them to operationally quantified requirements.
(Requirements-to-Code) In specialized domains, a high level specification language may be viewed as an assertional language for stating requirements that are then translated to procedural code (semi) automatically. Examples might be queries in a relational DBMS viewed as requirements or goals in a Prolog program. The latter demonstrates the relationship of requirements-to-implementation and goal-based problem solving in AI.
Event-condition-action rules that can be used to define functionality as in an expert system or could be used to sequence components as NIIIP is experimenting with at UFl. These rules compile down to lower level events or sentries (Open OODB term) that a distributed event monitor can detect and dispatch. This is lower level glue that may be used to sequence components. Both UFl and Open OODB supported both rules and event monitoring. The NIIIP VE Monitor is a generic event monitor, not particular to NIIIP or virtual enterprises.
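A minimal sketch of event-condition-action dispatch (a toy stand-in for an event monitor; `Rule`, `Monitor`, and the sample rule are invented for this illustration, not the NIIIP VE Monitor's actual API):

```python
class Rule:
    """An event-condition-action rule: fires action when its event occurs
    and its condition holds on the event data."""
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class Monitor:
    """Toy event monitor: detects events and dispatches matching rules."""
    def __init__(self):
        self.rules = []
        self.fired = []
    def add(self, rule):
        self.rules.append(rule)
    def raise_event(self, event, data):
        for r in self.rules:
            if r.event == event and r.condition(data):
                self.fired.append(r.action(data))

m = Monitor()
m.add(Rule("stock_low", lambda d: d["qty"] < 10,
           lambda d: f"reorder {d['item']}"))
m.raise_event("stock_low", {"item": "bolts", "qty": 3})
m.raise_event("stock_low", {"item": "nuts", "qty": 50})  # condition fails, no action
```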
(Business Rules) The OMG BOM Task Force identifies the need for business rules. By this they seem to mean that these are high level assertional specifications that can be understood and changed to reconfigure the way a business operates.
Run time (see Build time/Run time)
(General) fitting parts together so that the seams do not show. For example, program P may be seamlessly extended with new behaviors like persistence, versioning, replication, caching, distribution, and parallelism, among others. The operation to extend P with new behavior B may be manual or automated, and may or may not require re-coding, re-compiling, or re-linking.
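One way to sketch such an extension: a class gains persistence without any change to the original program (a hedged toy example; `Counter` plays the role of P, JSON round-tripping stands in for a real persistence service):

```python
import json

class Counter:
    """Program P, unchanged."""
    def __init__(self):
        self.value = 0
    def bump(self):
        self.value += 1

class PersistentCounter(Counter):
    """Seamless extension: same interface as Counter, plus state that
    survives a save()/load() round trip (JSON as a persistence stand-in)."""
    def save(self) -> str:
        return json.dumps({"value": self.value})
    @classmethod
    def load(cls, blob: str) -> "PersistentCounter":
        c = cls()
        c.value = json.loads(blob)["value"]
        return c
```

Clients that expect a `Counter` can use a `PersistentCounter` unchanged, which is the sense in which the seam does not show.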
(communications sense) the Narrow Communications Sense of Protocol
(broader NIIIP sense) the necessity to account at some higher level (that is, specify) how components compose to form higher level systems. Cycle 2 dependency tables are part of the specification.
(General) See near synonym component. A software system is a collection of software for some purpose or purposes. It is a unit of functionality and may be an economic unit. The software system meets some set of requirements. A software system has one or more interfaces. The software system can typically operate in some environment, which defines a set of environmental dependencies or environmental requirements and may provide some environmental properties like safety. Example environments are PCs, UNIX, MS Windows, CORBA, WWW, Netscape, SQL, RDBMS, Oracle, OLE, OpenDoc, … A software system may have a distinguishable separately specifiable software architecture. A software system may decompose into separable subsystems, each with a well-defined interface. Such interfaces may or may not be exposed. Such interfaces might be called floor interfaces if they interface the system to its environment.
A specification in some precise language of the format or functionality of a software artifact. Also see interface, protocol, and profile. De jure standards are those developed by accredited standards bodies like ANSI X3, IEEE, and ISO. De facto standards are those developed by industry, including industry consortia. Standards are only as important as they are effective and as broadly as they are accepted by the user community. One can also distinguish standards that organize existing practice from standards that specify future interfaces for which there are no current implementations.
() DoD uses the informal term stovepipe system for closed systems that embody idiosyncratic compositions of thin and thick layers of various functions/services and that are inflexible to evolve, expensive to maintain, interoperate poorly, and make sharing data hard. There are many such systems, built for a specific C4I, intelligence analyst, or other need, or at a time when the U.S. had one monolithic enemy; but now conditions have changed and these systems must interoperate in new ways under new conditions (regional conflicts, operations other than war (OOTW), depending on allies with heterogeneous computing environments), and there are architecture problems to solve.
A stovepipe system is a closed legacy system containing special purpose variants of many functions that could individually be genericized and made into reusable patterns or components. A stovepipe system may have a minimal versioning capability, a custom query capability, a variant of a persistence capability, etc. The layers or functions may be thinner or thicker (have fewer or more features) than a standard layer, service, or function. The architectural glue that binds is special purpose code. Large stovepipe systems of systems replicate many functions in different layers in non-interoperable ways and are difficult to evolve. Functions available in one may not be available in others.
System of Systems
Software systems rarely operate in a vacuum. Generally, they accept inputs and generate outputs and so are connected to other software systems. A problem is that related systems (e.g., separately constructed C4I systems) may need at some later time to interoperate. Another problem is that data generated in one system needs functional analysis provided only in another system, so they must somehow share data or functionality. It is desirable that this connectivity happen in an organized manner. It is equally desirable that it be easily reconfigurable since conditions change. A flexible architecture for connecting together a system of systems is desirable. Note: the term system of systems and the definition of systems as possibly composed of subsystems denote the same concept, except that the former term emphasizes that the systems being combined were built at different times for different purposes in different environments, and the requirement to compose them evolved after the systems themselves.
(NIIIP) the interface of mediation patterns that depend (in some way) on other system or component protocols.
This is more a categorizing term for helping to identify some of the higher level composite systems needed in defining a VE. It must account for the sequencing or rules of composition of lower level components. There is not a sense that components contain no sequencing and system protocols contain all control. There may be a sense in which one could call component protocols the leaf protocols and system protocols compositional. There is not a sense that every generic system protocol depends on all dependent protocols - some might be (or become) missing and the result is a somewhat less functional (survivable) system protocol. For instance, in general or in the abstract, an RDBMS must support persistence, query and transactions (by definition) but may though usually does not support versioning. An instantiated, concrete, particular RDBMS will either support versioning or it won't. But if it does, it must account for how versioning is composed with other functions.
() Technical architectures are the infrastructure portion of enterprise architectures. They cover DBMS, middleware, network management, communications, message passing/queueing, and email. The Technical Architecture comprises the building codes and rules that select complementary components and arrange them so applications built on this base will better port and interoperate. See Enterprise Architecture.
Three Schema Architecture
The three schema architecture of DBMS systems (application view, community view, storage view) is an example of view mappings from one interface layer to another. There are not many examples of higher-level constructs for composing view interfaces to systems in general. (Note that the three-layer DBMS scheme is really two separate uses of encapsulation: between application and community, and between community and storage.)
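The two encapsulations noted above can be sketched as two independent mapping functions. This is a toy illustration; the record layout and field names are invented, not drawn from any particular DBMS.

```python
# Illustrative sketch of the three-schema idea as two separate
# encapsulations: storage<->community and community<->application.
# Data and names are invented for this example.

STORAGE = {"emp:1": "Ada|Engineering"}   # storage view: raw encoded record

def storage_to_community(raw):
    """First encapsulation: hide the physical encoding behind a
    community (conceptual) view with named fields."""
    name, dept = raw.split("|")
    return {"name": name, "dept": dept}

def community_to_application(record):
    """Second encapsulation: project the community view down to only
    the fields one application needs."""
    return record["name"]

app_view = community_to_application(storage_to_community(STORAGE["emp:1"]))
print(app_view)  # -> Ada
```

Because the two mappings are independent, the storage encoding can change without disturbing applications, and each application can define its own projection of the community view.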
Three tier architecture
(Web) Many Web tools use this phrase to indicate the three layers of web architecture characterized by web client - web server - CGI-gatewayed backend server. The backend server (for instance, a DBMS) takes parameters typically embedded in URLs and returns HTML pages that it constructs on the fly. Plug-ins and Java provide other interesting Web extension architectures.
(Enterprise) A client/server architecture consisting of three layers: "thin clients" primarily implementing presentation services (e.g., graphical interfaces); "application servers" implementing business functions and business logic; and "database servers" managing persistent data. In some variants, interfaces to legacy systems may also be included in the database server layer.
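The enterprise variant above can be illustrated with an in-process sketch of the three tiers. All class names, the deposit operation, and the business rule are invented for illustration; real tiers would of course communicate across processes and machines.

```python
# Minimal in-process sketch of a three-tier client/server architecture.
# Tier names follow the entry above; everything else is hypothetical.

class DatabaseServer:                      # tier 3: manages persistent data
    def __init__(self):
        self._accounts = {"alice": 100}
    def get(self, who):
        return self._accounts[who]
    def put(self, who, amount):
        self._accounts[who] = amount

class ApplicationServer:                   # tier 2: business functions/logic
    def __init__(self, db):
        self.db = db
    def deposit(self, who, amount):
        if amount <= 0:                    # business rule lives here,
            raise ValueError("deposit must be positive")  # not in the client
        self.db.put(who, self.db.get(who) + amount)
        return self.db.get(who)

class ThinClient:                          # tier 1: presentation only
    def __init__(self, app):
        self.app = app
    def show_deposit(self, who, amount):
        return f"{who}'s balance: {self.app.deposit(who, amount)}"

client = ThinClient(ApplicationServer(DatabaseServer()))
print(client.show_deposit("alice", 50))    # -> alice's balance: 150
```

The point of the layering is that the thin client holds no business logic and the application server holds no storage details, so each tier can be replaced or scaled independently.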
(General) fitting parts together, typically in a tightly integrated manner; may require recoding the parts (a negative connotation) or exposing their internal interfaces (sometimes desirable, sometimes not), thus permitting more efficient coupling (desirable).
(from DARPA I3 workshop) a kind of glueware that is used to attach together other software components. A wrapper may encapsulate a single system, often a data source, to make it usable in some new way that the unwrapped system was not. Wrappers are often assumed to be simple, but in general they can be arbitrarily complex.
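A simple wrapper in the sense of this entry can be sketched as follows. The legacy source, its comma-separated record format, and the field names are all invented for illustration.

```python
# Hedged sketch of a wrapper around a data source. LegacySource and
# its record format are hypothetical.

class LegacySource:
    """Unwrapped system: exposes rows only as comma-separated strings."""
    def fetch(self):
        return ["1,widget,9.99", "2,gadget,4.50"]

class DictWrapper:
    """Wrapper (glueware): makes the legacy source usable as a stream
    of named-field records, a use the unwrapped system did not support."""
    FIELDS = ("id", "name", "price")

    def __init__(self, source):
        self._source = source

    def records(self):
        for row in self._source.fetch():
            yield dict(zip(self.FIELDS, row.split(",")))

wrapped = DictWrapper(LegacySource())
names = [r["name"] for r in wrapped.records()]
print(names)  # -> ['widget', 'gadget']
```

Here the wrapper is a thin translation layer, but the same structure could grow to cache results, merge several sources, or enforce access control, which is why wrappers are not in general simple.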
This research is sponsored by the Defense Advanced Research Projects Agency and managed by the U.S. Army Research Laboratory under contract DAAL01-95-C-0112. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied of the Defense Advanced Research Projects Agency, U.S. Army Research Laboratory, or the United States Government.
© Copyright 1996 Object Services and Consulting, Inc. Permission is granted to copy this document provided this copyright statement is retained in all copies. Disclaimer: OBJS does not warrant the accuracy or completeness of the information on this page.
This page was written by Craig Thompson and Frank Manola. Send questions and comments about this page to firstname.lastname@example.org or email@example.com.
Last updated: 1/3/97 sjf