Workshop on Compositional Software Architectures

Workshop Report
Monterey, California
January 6-8, 1998
Editor:  Craig Thompson
February 15, 1998

[Workshop homepage:]


Sponsors and Organizers


Workshop Committee

Objectives of the Workshop

The workshop addressed the fundamental concerns facing organizations that develop and maintain large, enterprise-critical, distributed applications. Component software did not exactly set the world on fire five years ago. Now we have new languages, maturing visions of compositional architectures (CORBA, WWW, ActiveX, ...), the web as a distributed system with a low entry barrier, and emerging middleware service architectures. Do we have the critical mass to jump-start the component software cottage industry? Even if the technology enablers are there, what is needed to establish an effective component software market?  What are the remaining barriers?

The objective of the workshop was to bring together a mix of leading industry, government, and university software architects, component software framework developers, researchers, standards developers, vendors, and large application customers to do the following:

Workshop Focus

The workshop consisted of a set of invited presentations and topic-centered breakout sessions.  Topics of interest listed in the Call for Participation included (but were not limited to):

Planned Outcomes and Benefits

The explicit planned outcomes of the workshop included position papers and this workshop report, which summarizes the breakout sessions.  Implicit benefits of the workshop were:

Position Papers

Position papers (mostly around three pages long) on a topic related to the workshop theme were solicited by November 21, 1997.   Generally, an accepted position paper was a prerequisite for attending the workshop except for a small number of invited talks. The position papers were made web-accessible by December 7, 1997, in various widely used formats (.html, .doc, .ps, .pdf, .txt) -- see the List of Position Papers arranged in the order received.

We originally expected around 40 position papers but received 112 and accepted 93.  This was a first indication that the workshop theme was of broader interest than we originally expected.  We decided to scale up the workshop rather than severely restrict participation.  We solved the scaling problem by adding extra parallel breakout sessions.

Workshop Structure - Presentations and Breakout Sessions

The workshop consisted of presentations and breakout sessions.

Most presentations were based on position papers, but a few were invited talks (so there is no corresponding position paper for these).  Several invited talks were scheduled the first morning to expose many of the workshop ideas early.  Other talks were scheduled in relevant breakout sessions to get the conversation going.  Because of time limitations, we could not schedule talks for all position papers.  We did our best at matchmaking.

Breakout sessions were two-to-three-hour working sessions focused on a topic, led by a moderator and recorded by a scribe.  Most breakout sessions started with summaries of a few relevant position papers chosen to help introduce the session's topic.  Following a breakout session, in a plenary session, the moderator presented a summary of the breakout session and some discussion occurred.  The scribes were responsible for sending a summary of their breakout sessions to the editor of the workshop report by January 16, 1998, for assembly into a draft report.

There were four sets of half-day breakout sessions (I-IV), each containing four parallel breakout sessions (1-4).  In order to partition the workshop into breakout sessions, we completed a poor man's topic analysis on the workshop papers.  This really just consisted of keeping track of several topics per position paper and then making a large outline of all the topics.  The topic analysis was useful for several purposes.  As an outline of the many topics covered by the position papers, it provided a way to scope and structure the topics covered.  To a lesser extent, it provided a way to locate (some) papers based on topics (though it is quite incomplete if used this way).  Finally, it provided the basis for partitioning the workshop into a collection of breakout sessions.

The last step was to pre-plan the breakout session topics.  This involved identifying, for each breakout session, the title, topic description, moderator, and relevant presentations (this was done before the workshop).  This information appears in the breakout session summaries below.  In addition, we did late binding in selecting scribes at the beginning of each breakout session.  The scribes then authored the breakout session summaries, which form the main body of the descriptions below.

Opening Session:  Problem Domain and Workshop Goals

[Thanks to Robert Seacord (SEI/CMU) for notes on these presentations.]

This half-hour session consisted of presentations by


Breakout Sessions

Breakout Session Rationale

Breakout sessions were organized to encourage effective discussions in a cross-disciplinary workshop where the attendees are coming in with very different backgrounds, viewpoints, terminology and interests. Seen from another vantage point, there were collections of sometimes sequenced sessions that covered:

Breakout Session Structure

Each breakout session description has the format:

I-1 Problem Definition by Application Architects

Moderator:  Craig Thompson, Object Services and Consulting (OBJS)

Scribe: [notetaker, please contact report editor]

Topics:  From the large application builder's perspective, component software is an enticing vision but there are roadblocks in the way of realizing the benefits.  Large application architects and enterprise software architects will identify the critical shortcomings they see in current technology, develop a vision for future component-based development of enterprise-wide applications, and identify key architectural concepts, tools, and processes needed to realize their vision.



The purpose of this session was to look at the world of middleware choices from the point of view of large application architects and to understand their requirements and experiences to date with using component-based middleware.

There were three presentations.

Louis Coker - DARPA AITS Architecture

Louis Coker talked about the DARPA AITS architecture, especially the experiences of the JTF-ATD command and control team in being early ORB adopters who have evolved their application in parallel with CORBA and the World Wide Web.  Their problem involves users collaborating in developing situation representations and in planning courses of action.  Some limitations of today's ORBs are:

These lessons learned are not getting out widely to the OMG community.  Vendors do not seem to be addressing the needs of the Wide Area Net (WAN) community.

Colin Ashford - The TINA Service Composition Architecture

TINA is a consortium of telecommunication providers, manufacturers, and researchers.  Their mission is to develop an architecture for distributed telco applications that is bandwidth sensitive and supports multimedia.  Colin Ashford talked about service composition in TINA.  You can create a service by combining service components or by a temporary merger in a session, e.g., a teleconferencing session.  Service composition may be static or dynamic.  Services can be composed in parallel or in series (he did not mention weaving).  The TINA architecture is based on OMG's but has richer ORB+ capabilities, administration and management services, and network resources, and it has a session model.  The business model is to allow retailers to re-sell to customers services that the retailer has composed from more primitive services supplied by third-party providers, running on an infrastructure provided by a communication infrastructure provider, possibly with an RM-ODP-like broker in the picture to locate services.  They need algebras for specifying composition, scripting languages, and toolkits to make composition easier to perform.
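The parallel-versus-series distinction can be sketched with a toy composition model. This is only an illustration of the two composition shapes; the function names and pipeline semantics are invented, not part of the TINA architecture:

```python
# Hypothetical sketch: two basic ways to compose services.
# A "service" here is just a function; real TINA services would be
# distributed objects bound within a session.

def compose_series(*services):
    """Series composition: each service's output feeds the next (a pipeline)."""
    def composed(x):
        for s in services:
            x = s(x)
        return x
    return composed

def compose_parallel(*services):
    """Parallel composition: every service receives the same input;
    results are collected for a later merge step."""
    def composed(x):
        return [s(x) for s in services]
    return composed

double = lambda x: 2 * x
inc = lambda x: x + 1

print(compose_series(double, inc)(3))    # 7
print(compose_parallel(double, inc)(3))  # [6, 4]
```

An algebra for composition, as the session called for, would add laws over these operators (e.g., associativity of series composition) so that tools could rewrite and optimize compositions.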

Gabor Seymour - Compositional Software Architecture "ilities" for Wireless Networks

Motorola's key concerns include mobility and wireless communication.  Mobility impacts the ilities.  Cellular topologies require smarter cell-sites.  The physical environment and geography force quality concerns.  There is a need for object migration at runtime and a desire to migrate functionality to cell-sites rather than keep it central.  With respect to reliability, availability is location-dependent, and replication varies by site, driven by cost.  With respect to scalability, they need upward scalability (LAN to WAN) and downward scalability, to move objects from large central sites to less capable distributed sites.  Network management must work in the presence of external events and providers.  They need change management and performance.  They need graceful degradation.

Discussion covered the following topics:

We asked why there are so many technology floors to stand on -- OMG, ActiveX, the web, Java, other middleware, etc.  One reason is that there are so many dimensions of requirements, for instance, the need for static versus dynamic solutions, the need for LAN-based and WAN-based solutions, the wide variety of needs for thin or thick security solutions, the degree of support provided by COTS technologies, their openness, granularity, time scales (relative speed of change), and many more.  Some felt that the variety is needed because the solutions for the different combinations require different mechanisms to be integrated.  Others felt that maybe over time we will be able to see how to untangle this so that not every system is some unique manually coded combination of functions and non-functional qualities (what DoD calls stovepipes).  That is the promise of open middleware, after all.

We briefly considered the distinction between applications and infrastructure.  This line is gray today because applications must reach down to meet the middleware floors they are built on, and there is a gap in the form of missing middleware (missing standards and missing implementations).  Also, even when the gap is smaller, applications often encode controls over middleware policies; and sometimes quality considerations reach up into applications.  The move by OMG (and PDES and SEMATECH) to standardize some domains means standards that reach into traditional application areas.  We still do not have a good way to insulate applications from their middleware choices.

We discussed designing with evolution and adaptability in mind.  Craig Thompson mentioned that when one designs to meet requirements, it is a good idea to distinguish different kinds of requirements:

What would be nice is to somehow guard against this final category of requirements.  Perhaps we do this by modularizing designs so cross-cutting requirements only affect some portions of the design.  In addition, maybe we can learn how to add or change the binding glue and insert new capabilities into systems via wrappers or other mechanisms that can side-effect the communication paths.  This would leave expansion slots for various kinds of adapters.  This would take us from a view of systems-as-designed to a view of systems-as-continuously-evolving.  In some sense, expansion joints mean looser coupling, though performance might be optimized back in.  To some extent, market conditions drive ility needs and change rates.  This also argues for avoiding monolithic middleware, though there is a tendency in the vendor community to produce just that -- today's ORB vendor services lock you to a particular vendor; you often cannot port services to other ORB implementations.  Rather, we want to compose middleware components for a specific problem and then evolve the solution.  There is unlikely to be a "one size fits all" architecture, at least as concretely implemented (though there might be an abstract model like OMA, possibly augmented with an abstract ility architecture framework, which might be used to generate and evolve specific concrete architectures).
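The idea of leaving expansion slots in the binding glue can be sketched as an interceptor chain on a communication path. All names here (Channel, the interceptor classes) are hypothetical and not drawn from any particular ORB:

```python
# Hypothetical sketch: an "expansion slot" on the communication path.
# New capabilities are inserted into the binding glue after deployment,
# without changing the client or the service.

class Channel:
    """Binding glue between a client and a service; interceptors can be
    added later to side-effect the communication path."""
    def __init__(self, service):
        self.service = service
        self.interceptors = []

    def add_interceptor(self, interceptor):
        self.interceptors.append(interceptor)

    def invoke(self, request):
        # Each interceptor may transform the request on the way in...
        for i in self.interceptors:
            request = i.before(request)
        result = self.service(request)
        # ...and the result on the way out, in reverse order.
        for i in reversed(self.interceptors):
            result = i.after(result)
        return result

class UppercaseInterceptor:
    """Toy stand-in for a real adapter (e.g., encryption or auditing)."""
    def before(self, request):
        return request.upper()
    def after(self, result):
        return result

channel = Channel(lambda req: "handled:" + req)
channel.add_interceptor(UppercaseInterceptor())
print(channel.invoke("ping"))  # handled:PING
```

The point of the sketch is structural: the service and client are untouched when a new interceptor is slotted in, which is what makes systems-as-continuously-evolving plausible.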

Todd Carrico showed a slide of a fat application containing many encapsulated services on the left and a thin application dependent on many explicit middleware services on the right.  One way to interpret the picture is to imagine that we are evolving toward thin applications and richer available middleware so it is easier to build applications based on tried-and-true middleware components and to mix-and-match lighter or heavier weight services and ilities.  Another interpretation of the picture is that it would be nicest to be able to move up and down the spectrum of thick and thin applications without redesigning systems.

We discussed some roadblocks.

I-2 Extending Current Middleware Architectures

Moderator:  Ted Linden, Microelectronics and Computer Technology Corporation

Scribe:  Diana Lee, Microelectronics and Computer Technology Corporation

Topics:  From the viewpoint of middleware architects, where are the critical shortcomings in current technology, what kinds of component-based development can be supported in the future, and what are the additional key architectural concepts, tools, and processes needed to realize this vision? Current middleware architectures like CORBA and COM+ are a step toward compositional architectures, but they do not fully support component-based development and maintenance of large applications. Are current middleware architectures from OMG and Microsoft steps in the right directions? What are the roadblocks in the way of realizing greater benefits?  Problem areas for discussion may include:



This session identified requirements for middleware architectures capable of fully supporting component-based development.  The three introductory papers approached the problem from complementary viewpoints and envisioned similarly strong requirements for middleware architectures:

These presentations and the discussions argued that support for component-based development requires more than methods for developing, exchanging, marketing, and composing components. We also need well worked out methods to:

Relation between Architecture and Components

Which comes first, architecture or components? Currently components fit within an architecture such as those defined by Java Beans, COM+, or a browser or other product for its plug-ins. Architecture first is consistent with the traditional approach of architecting a system before writing components. However, component technology will be more economical if components can be developed and used in multiple architectures. The ability to wrap a CORBA object, Java Bean, or COM+ object so that it can appear in another of these architectures means that a component does not have to be totally dependent on an architecture. There was a surprising amount of consensus that components should not have to be strongly dependent on a specific architecture. Components are written first; then architectures tie them together. An application that uses components will have an architecture, especially to the extent that the application must support ilities, dynamic debugging, and dynamic reconfiguration. Specific components may interoperate more or less easily within a given architecture; i.e., the wrapping necessary to make a component work within an architecture may be more or less easy.
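A minimal sketch of such wrapping, with invented "COM-style" and "Bean-style" calling conventions standing in for the real architectures (neither class reflects an actual COM or JavaBeans API):

```python
# Hypothetical sketch: adapting a component written against one
# architecture's conventions so it can be used under another's.

class ComStyleComponent:
    """Pretend COM-style component: methods return (status, value) pairs,
    with 0 standing in for S_OK."""
    def get_price(self, item):
        return (0, {"widget": 10}.get(item))

class BeanStyleAdapter:
    """Wraps the component so it looks like a Bean-style accessor:
    plain return values, exceptions on failure."""
    def __init__(self, inner):
        self.inner = inner

    def price(self, item):
        status, value = self.inner.get_price(item)
        if status != 0 or value is None:
            raise LookupError(item)   # convert error codes to exceptions
        return value

adapter = BeanStyleAdapter(ComStyleComponent())
print(adapter.price("widget"))  # 10
```

The wrapping cost here is one thin class; in practice the cost varies with how far apart the two architectures' conventions are, which is exactly the "more or less easy" point above.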

We asked whether there is a minimum common architecture that can be developed as a way to facilitate reuse of components. Components developed to this minimum common architecture could then be used in a variety of specific architectures. We concluded that it is unrealistic to search for a "minimum common architecture." There are multiple dimensions involved in interoperation, and no one dimension is always most significant. One increases interoperability by increasing architectural specifications. The question "what is the minimal architecture?" is better phrased as "how interoperable do you want the component to be?" and "how much wrapping or rewriting will be needed to make it interoperate within a specific architecture?"

Levels of Interoperability:

Compositional Software Architectures must deal with component interoperability at several levels. Interoperability at all levels is especially important for developers of large, long-lived applications that grow incrementally. Development of new products and technologies may, over time, necessitate:

While there are many interoperability requirements, the answer is not in the direction of complex middleware architectures. In fact, there is a desire to make the middleware as transparent to the application as possible. A paper at the workshop, Middleware as Underwear: Toward a More Mature Approach to Compositional Software Development [Wileden and Kaplan], states that middleware "... should be kept hidden from public view ... It should never dictate, limit or prevent changes in what is publicly visible... In most circumstances, it should be as simple as possible." But how does this apply to different middleware products being interchangeable? Is it possible to change middleware in a transparent fashion? Using the underwear analogy, one attendee rephrased the problem "transparent, yes, but it is awfully hard to change your underwear without removing your clothes."

Solutions Toward Interoperability proposed and discussed include:

Obstacles to Component Technology:

Other Relevant Issues:

I-3 Challenging Problems in Middleware Development

Moderator:  Bob Balzer, USC/ISI

Scribe:  Kevin Sullivan, University of Virginia

Topics:  This session views component composition from the point of view of middleware developers and system programmers.  The approach is to select one or two interesting system software component composition challenge problems that can be used to identify component software strengths and weaknesses.   Hopefully the challenge problem can be reconsidered from other perspectives in later sessions of the workshop.  Sample challenge problems:



This session focused on the use of composition enablers and inhibitors in the design of middleware systems.  The questions that we addressed included the following: What distributed middleware would be useful for component-based system development?  What information and mechanisms are necessary to enable composition of components into systems, the automation of such composition, and reasoning about such systems?

At a more detailed level, the questions we addressed included the following:

Much of the discussion centered on the issue of metadata as a composition enabler.  Metadata is machine-readable, descriptive information associated with components, connectors, and systems.  A simple example of metadata is the type library information that is often associated with COM components. Such metadata describes component interfaces at a syntactic level. An extension of that kind of metadata might include descriptions of what interfaces are required and provided by a component, and how it expects to interact with its environment. Metadata can be used by programs to reason about composition, properties of components, and even about middleware itself.  What kinds of reasoning and manipulation are supported by various metadata types?

In that dimension, we discussed the following specific issues.  First, the position was taken that we need precise semantics for metadata.  Second, we might use metadata to describe component and system provisions and requirements, e.g., what a component needs in the security area, and what a system needs in the reliability area.  It was noted that security, reliability, etc. cannot be bound to individual objects.  One reason is that desired properties often change as knowledge is acquired over time.  Another reason is that opinions might differ as to when given qualities are good enough.  Third, it was observed that metadata can be attached at many levels of a system.  There is no particular place where metadata annotations necessarily go.  However, there has to be a mechanism to propagate information, so as to enable desired or required levels of control.  Fourth, it was suggested that metadata can be organized through views of complex systems, e.g., a security view, a reliability view, etc.  Fifth, it was suggested that automated "composers" (e.g., class factories, the Software Dock of Heimbigner and Wolf) might use metadata such as figures of merit to compose components to meet given specifications.  Sixth, we discussed the need for type and constraint systems to enable automated reasoning about systems and compositions from parts.  For example, it can be necessary to reason about which combinations of actions and components are acceptable, and to have ways to name them.  For example, there are cryptographically insecure combinations of secure algorithms and techniques.  Another example is that adverse interactions at the level of mechanism can have unintended semantic consequences, e.g., pinging for fault detection can interfere with aging-based garbage collection in Internet-scale distributed systems.
We might also want to enable the automatic selection of alternative abstractions and implementations, e.g., in the context of management of quality of service. Finally, it was observed that it is critical to detect mismatches between components and systems and their environments and that metadata might facilitate detection and reasoning about such mismatches.
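The provides/requires style of metadata discussed above can be sketched concretely. The metadata schema below is invented for illustration; real systems (e.g., COM type libraries or CORBA interface repositories) carry much richer descriptions:

```python
# Hypothetical sketch: component metadata declaring provided and required
# interfaces, plus a toy "composer" check over a proposed composition.

from dataclasses import dataclass, field

@dataclass
class ComponentMeta:
    name: str
    provides: set = field(default_factory=set)
    requires: set = field(default_factory=set)

def unmet_requirements(components):
    """Return interfaces required by some component but provided by none.
    An automated composer could use this to reject or repair a composition."""
    provided = set().union(*(c.provides for c in components))
    needed = set().union(*(c.requires for c in components))
    return needed - provided

app = ComponentMeta("Planner", provides={"IPlan"},
                    requires={"INaming", "ITrader"})
naming = ComponentMeta("NamingService", provides={"INaming"})

print(unmet_requirements([app, naming]))  # {'ITrader'}
```

This syntactic check is the easy part; the session's harder questions, such as detecting semantically bad combinations, would require metadata with precise semantics and constraint systems layered on top of declarations like these.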

We also discussed relevant properties of both middleware and systems based on it, although the discussion remained abstract in this area.  One middleware property that we discussed was usability. What can we do to make the middleware itself easier to use?  For example, what middleware metadata would make it easier for developers or tools to understand and evolve systems?

We also discussed inhibitors of composition in component-based systems.  First, it was suggested that we lack fundamental knowledge of what is useful, not just in terms of the metadata descriptions of systems, but even in what basic system properties are important.  For example, what are the key, distinct levels of security in a system?  One person said we have almost no engineering knowledge of what parameters and parameter values are important.  We lack clear definitions of key terms.  Much work in this area is vague and general, and people have inconsistent views of what terms mean.  Second, it was said that most engineering advances are made when failures are analyzed and understood, but that in software engineering failures tend to be hidden and not analyzed.  Third, one person noted that there is little discussion of analysis and formal foundations in this area.  It was noted that Sullivan's work on formal modeling and analysis of compositionality problems in Microsoft's COM represents progress in the areas of failure analysis (of sorts) and the formal foundations of component-based software development.  Fourth, it was noted that although the notion of decomposing systems into functional components and ility aspects is seductive, it might not be possible to effect desired behavioral enhancements without changes to functional components of a system.  Fifth, the competency of people using components and middleware is (it was said) often questionable or poor.  That idea led to the suggestion that competency requirements for component use might be attached to components as metadata.  Sixth, computational complexity is an inherent impediment whenever semantically rich metadata have to be processed.  Seventh, it was noted that computers are so much more capable than they used to be, and they're going to get even more powerful, so we need to find ways to control complexity growth.  A key strategy, it was said, is to make systems simple so that they work.  Finally, the issue of the diversity of approaches in practice was raised as a practical impediment to the use of metadata.

The participants also discussed the issue of where standards (e.g., for components and metadata) will come from: de jure or de facto standardization?  We also discussed some key ways in which software engineering is similar to or different from more traditional engineering disciplines, such as bridge building.  In particular, we discussed whether the notion of tolerances (close enough), which is critical in the design of physical artifacts, has an analog in the software realm.  Good points were made on both sides of this issue.

We ended the session with a discussion of quality of service.  First, it was noted that discovery of key properties happens at both design and run time.  Second, it was observed that it's important to avoid a combinatorial explosion in interface types; thus, in Microsoft's COM, interface types are used to discriminate objects down to a certain level of granularity, below which properties are used as a mechanism for manipulating quality of service parameters (in OLE DB).  Third, QoS specifications can differ, even for a single component.  Fourth, there need to be generic services for invoking components that provide services.

I-4 Software Architecture and Composition

Moderator:  Gul Agha, University of Illinois at Urbana Champaign

Scribe: Adam Rifkin, CALTECH

Topics:  What is the vision of component composition from the software architecture perspective?  What does software architecture explain and what does it not yet address?



Many fundamental challenges exist when developing software applications from components, among them:

Many members of the software community have been researching solutions to address these challenges.  Among these efforts:

What these systems and others have in common is the meta-model: customizable components as actors running in given contexts (such as environments with real-time scheduling and fairness constraints), interacting via connectors, which themselves are also first-class actors.  Constraints can be imposed on components, on connectors, and on contexts as well.  As system designers, we can specify protocols over the connectors' interactions with components, and we can specify policies managing the deployment of resources.

As first-class actors, components and connectors are dynamically reconfigurable, and they manifest observable behaviors (such as replication or encryption); a component in isolation in a certain context has an observable behavior that may differ from its behavior when it is composed into a new environment.  This may have a significant impact on the software "ilities" such as quality of service (performance), reliability, and survivability.  However, the lesson of aspect-oriented programming is that some ilities cannot be encapsulated entirely within connectors because they by nature cut across components [6].

It would be ideal to have modular ility first-class connectors that transparently provide appropriate behaviors for interacting components assembled by application developers.  In some cases, this is feasible (for example, the addition of a transaction server to a system designed to accommodate transactions); in other cases, it is not (for example, building recovery into a system not designed to accommodate fault tolerance).

Ultimately, the software component architecture vision is to build a notion of "compliance" on a component so it can work with arbitrary connectors and behave as promised.  Then, compliant components can be plugged together using connectors to achieve a desired (feasible) ility.  For example, under compliance assumptions, a connector can provide a property like passive replication.
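A toy sketch of such a connector-provided ility follows. The classes are invented for illustration; "compliance" here is simply the assumption that every component exposes the same put/get interface the connector expects:

```python
# Hypothetical sketch: passive replication supplied by a first-class
# connector rather than coded into the components themselves.

class Replica:
    """A compliant component: any object exposing put/get over state."""
    def __init__(self):
        self.state = {}
    def put(self, key, value):
        self.state[key] = value
    def get(self, key):
        return self.state.get(key)

class PassiveReplicationConnector:
    """Routes updates to the primary and mirrors them to backups, so
    clients get replication without the components knowing about it."""
    def __init__(self, primary, backups):
        self.primary = primary
        self.backups = backups

    def put(self, key, value):
        self.primary.put(key, value)
        for b in self.backups:      # mirror the update to every backup
            b.put(key, value)

    def get(self, key):
        return self.primary.get(key)

    def failover(self):
        """Promote the first backup when the primary is lost."""
        self.primary = self.backups.pop(0)

primary, backup = Replica(), Replica()
store = PassiveReplicationConnector(primary, [backup])
store.put("plan", "A")
store.failover()                    # primary lost; backup takes over
print(store.get("plan"))  # A
```

The sketch also shows the limit noted above: the connector can replicate state it routes, but it cannot retrofit recovery into a component whose internal state it never sees.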

The research challenge, then, is to provide formal methods and reasoning models for making clear the semantics of the components both in isolation and interacting through connectors, and for making clear the properties of the aggregate system.  In addition, developers can use tools for performance modeling, runtime monitoring, system configuration, and component feedback cycle tweaking.

We have already witnessed the utility of reasoning models in furnishing specific ilities to specific applications.  For databases, performance is the desired ility (for example, "What will be the size of the query result?"), and research has led to reasoning models for concurrency control and transaction management to address that ility.  On the other hand, for Mathlab, accuracy is a desirable ility (for example, "How much error exists in the answer?"), and research has led to reasoning models of composable algebraic matrix operations for predicting the accuracy of results.

These models and others -- such as atomicity precedence constraints [7] and modular interaction specifications [8] -- demonstrate that research can provide useful models for distributed component software architects. They also indicate that much more research needs to be done -- for example, in the automatic checking of compliance for component validation [9].  When CORBA-compliance alone is not enough to guarantee an ility, solutions can be custom-made; for example, a consortium is working on an object framework for payments [10].  Furthermore, distributed object communities continue to work on the problem of common ontologies to pave the way toward common solutions [11].

In short, the challenges to software component architectures have no generic solutions; however, the abstractions and models developed by specific efforts have led to considerable gains in understanding and guaranteeing properties of systems and their components.
[7] Svend Frolund, Coordinating Distributed Objects, MIT Press, 1997.
[8] Daniel Sturman, Modular Specification of Interaction in Distributed Computing, available at
[9] Paolo Sivilotti, Specification, Composition, and Validation of Distributed Components, abstracts available at

II-1 Economy of Component Software

Moderator:  Jay M. Tenenbaum, CommerceNet

Scribe:  Catherine Tornabene, Stanford

Topics:  The purpose of this session is to develop a framework for thinking about what it will take to build an economy of component software.  Why?  To accelerate the coming economy of components and possibly to level the playing field for multiple vendors [avoiding CORBA vs. DCOM as the central debate].



See session summary slides (.ppt4).

In this workshop session, the participants examined the issues surrounding the development of an economy of component software.

The session had three presentations:

Robert Seacord - Duke ORB Walker

Robert Seacord's work on the Duke ORB Walker is based on a model of the component software economy that was widely accepted by session participants; namely, a marketplace that will house many heterogeneous components, some publicly available and some behind corporate firewalls. The Duke ORB Walker is an automated Internet search engine that walks through this marketplace to collect information about software components that are ORB-compliant, analogous to how current Internet search engines collect information about web pages and web content. Open questions regarding the Duke ORB Walker revolve around mechanisms for searching other component technologies such as COM, JavaBeans, etc., as well as whether the ORB Walker will need to rely on an implementation repository to find ORB-compliant components.

This talk led to further discussion regarding the essential infrastructure of a software component market. Session participants considered a simple method of finding usable components as a necessary part of the infrastructure of a component-based software market.

Anup Ghosh - Certifying Security of Components used in Electronic Commerce

Anup Ghosh's work examines a method of providing software certification for software components. The underlying assumption of his work with regard to a software component marketplace is that consumer confidence in the -ilities of a software component is necessary for the widespread adoption of the component marketplace. He examines the use of certification as a viable business model for providing security assurance for software components. Under this model, a candidate component is put through a series of pre-planned tests. If the component passes these tests, then it is considered security-certified and thus ready for the component marketplace.
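The certification model can be sketched as a pre-planned test battery. The checks below are toy stand-ins invented for illustration; a real security certification suite would be far more extensive:

```python
# Hypothetical sketch: certification-by-testing. A component is
# certified only if it passes every test in a pre-planned battery.

def check_rejects_oversized_input(component):
    """A security-flavored check: huge inputs must be refused."""
    try:
        component("x" * 10_000)
        return False            # accepted oversized input: fail
    except ValueError:
        return True

def check_handles_empty_input(component):
    """A robustness check: empty input must round-trip cleanly."""
    return component("") == ""

BATTERY = [check_rejects_oversized_input, check_handles_empty_input]

def certify(component):
    return all(check(component) for check in BATTERY)

def echo(data):
    """A candidate component that enforces an input-size limit."""
    if len(data) > 1024:
        raise ValueError("input too large")
    return data

print(certify(echo))  # True
```

Note that a battery like this certifies only the behaviors it tests; the session's later point about semantic certification is precisely that passing fixed checks does not establish that a component does the right thing.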

This talk led to a further discussion of what sort of certification services might exist in a component-based software market. It was largely agreed that not only would certification of a component's -ilities be essential, but that further semantic certification would be necessary as well. There was great interest in services that might test a component's semantic correctness.

Martin Bichler - Object Frameworks for Electronic Commerce Using Distributed Objects for Brokerage on the Web

Martin Bichler discussed the OFFER project, which is a CORBA-based object framework for electronic (e) commerce. One of OFFER's key components is the e-broker, which acts as an intermediary for e-procurement. The OFFER e-broker assists consumers as they peruse heterogeneous e-catalogs and also acts as a price negotiator using auction mechanisms. The OFFER group is also studying what other functionality might be needed in an e-broker.

This talk led to a discussion about the research question the OFFER group is studying regarding what features might be necessary in an e-broker. Since OFFER is CORBA and Java compliant, there was discussion as to whether the e-broker should be extended to COM, and whether that would be feasible.

Discussion covered these topics:

Market Issues

The fundamental issues surrounding a component software market were effectively reduced to one question: what will people pay for? This question established a framework for our discussion of market issues:

Conceptual Issues & Barriers

After the discussion about the market issues, we turned to broader issues in the development of a component market that are not purely marketing issues. (Some of the marketing issues were subsumed under these conceptual issues.) We had originally intended to discuss barriers to the component market as a separate discussion, but that discussion ended up merging with conceptual issues, so they are listed here together.

Good quote:  "Forget REUSE.  Just aim for USE." -- Marty Tenenbaum, CommerceNet

II-2 Component Model Representation and Composition

Moderator: Umesh Bellur, Oracle

Scribe:  Kevin Sullivan, University of Virginia

Topics:  Component models expose information about how a thing is constructed, which is needed for replacing parts and for providing higher-order services.  What information should be exposed?  What is the nature of the glue? We are not there yet with respect to mix-and-match, plug-and-play, or best-of-breed.  What are the technical reasons why not?  One is the tension between encapsulation, which hides implementations (as components must), and component models, which expose some information about how a thing is constructed so that parts can be replaced.  Component models are just now coming on the scene.



This session focused on what components are, how they are changed, and how they can be designed to ease the evolution of component-based systems.  It was proposed that we discuss what a component is in terms of a programming model (how it is used in implementations), meta-information model (how it is described in machine-readable form), and rules of composition.  It was emphasized that meta-information should be "reified" so as to be machine-usable at runtime.  It was also suggested that we consider the issue in terms of the model of system evolution implicit in the definition of a component.

By way of scoping, it was also noted that composition is not always merely a question of gluing components together; it requires reasoning about system configurations.  We distinguished between design and execution time.  We also asked whether substitutability fits under the component model.

A significant part of the discussion focused on the question, What is a component? Many useful definitions were offered.  It was clear that the question is one for which no single answer (other than "it depends") could suffice.  The notion was offered that a component is something that satisfies a given component standard, that there is no single universally useful standard, that the purpose of a standard is to provide certain defined assurances about the properties of components that actually conform to it, and that the right question is: what assurances do you want components, and the systems built from them, to have, and what rules must a standard impose in order to ensure that such properties are obtained?

Among the answers to the question What is a component? were the following: a reusable package of functionality; a unit of manufacture; a unit of distribution; a unit of standardization; a reusable specification/implementation pair (emphasizing the provision of a semantically rich specification along with an implementation); and a unit of software obtained elsewhere (emphasizing that a central purpose of component approaches is to enable construction of systems from parts whose designs are not themselves controlled by the system designer).

The discussion then turned to the categorization (in terms of programming model, meta-model and composition model, primarily) of concrete aspects of the modern concept of what a component is.

Next we discussed the connection between components and objects.  Is every component an object?  Is every object a component?  If not, then what is it that you have to add to an object to make it a component?  Finally, if a component has to do with packaging, then what is it that’s inside the packaging—an object, or something else, e.g., something more complex than an object?

II-3 System-wide Properties or Ilities

Moderator:  Robert Filman, Microelectronics and Computer Technology Corporation

Scribe:  Diana Lee, Microelectronics and Computer Technology Corporation

Topics:  How does one achieve system-wide properties when composing components?  How can we separate application logic from -ility implementation?



The Problem: How does one achieve system-wide properties (also known as: ilities, system qualities, qualities of service, and non-functional requirements) when composing components? How can we separate application logic from ility implementation and then weave them together to make a complete system?



What makes an ility?

Ilities have in common the property that they are not achievable by any individual component; an ility cannot be achieved internally within a single component.

A Partial List of Ilities:

Reliability, security, manageability, administrability, evolvability, flexibility, affordability, usability, understandability, availability, scalability, performance, deployability, configurability, adaptability, mobility, responsiveness, interoperability, maintainability, degradability, durability, accessibility, accountability, accuracy, demonstrability, footprint, simplicity, stability, fault tolerance, timeliness, schedulability.  See also Workshop Topics - Ilities.

Which ilities can be achieved in a component system? What makes some harder to achieve than others?

There are some properties of ilities that affect how easily the ility is achieved in a composed system. For example, a security policy for mandatory access control (which is transitive) is easy to compose. However, security based on discretionary access control uses a user or group id (which is not transitive) and is more difficult to compose.
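A toy illustration of this transitivity argument (the labels, levels, and grants below are invented): label-based mandatory access control composes because pairwise-legal flows chain end to end, while identity-based discretionary grants do not chain:

```python
# Sketch of why transitive (label-based) policies compose more easily than
# identity-based ones. Labels and the ordering are invented for illustration.

LEVELS = {"unclassified": 0, "secret": 1, "top-secret": 2}

def mac_can_flow(src, dst):
    # Mandatory access control: data may flow only to an equal-or-higher label.
    return LEVELS[src] <= LEVELS[dst]

# Transitivity means pairwise checks compose: if A->B and B->C are legal,
# A->C is automatically legal, so a chain of components needs no global check.
chain = ["unclassified", "secret", "top-secret"]
pairwise_ok = all(mac_can_flow(a, b) for a, b in zip(chain, chain[1:]))
end_to_end_ok = mac_can_flow(chain[0], chain[-1])
print(pairwise_ok and end_to_end_ok)  # True: pairwise legality implies end-to-end

# Discretionary access control keys on user/group identity and is NOT
# transitive: Bob may read Alice's data and Carol may read Bob's data,
# yet Carol has no grant for Alice's data.
dac = {("alice", "bob"), ("bob", "carol")}   # (owner, reader) grants
print(("alice", "carol") in dac)             # False: composition broke the policy
```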

Composability of policy, where implementation and system architecture are dependent on one another, will also determine whether ilities are composable.

How do we go from the customer’s high-level description of an ility to specifications that can be mapped to code? Are we really achieving the ilities? For example, how do we go from a requirement that a system be "secure" to a code specification for 64-bit encryption? And is 64-bit encryption really what is meant by "security"?  Conclusions:

Managing compatibility: Can we support hard real-time Quality of Service?

There seem to be two communities interested in hard real time:

Enabling hard real time:  Tools such as Doug Schmidt’s TAO and OMG’s Real Time RFP.  There is a need to guarantee that components obey certain rules that allow for things such as scheduling of services and resources.

Some Other Related Papers:

II-4 How do these fit in?

Moderator:  Gio Wiederhold, Stanford

Scribe:  Craig Thompson, OBJS

Topics:  Few position papers covered the following topics directly but they are challenging, and we'll need to understand them to fully understand architectures and component technology.



We only discussed the first two topics listed above:  modularity and views.

Aloysius Mok - The Objectization Problem

The presentation by Al Mok covered what he called the objectization problem: objects effectively set up encapsulation boundaries, but different ilities trade off differently, which can lead to different object decompositions; there is not always one best decomposition, or object factoring.

Mok described a problem in real-time scheduling of Boeing 777 AIMS data, which includes process, rate, duration, and latency bounds. There were 155 application tasks and 950 communication tasks. The hardware architecture is a bus plus 4 task processors and a spare, and 4 I/O processors and a spare. Such systems are built using repetitive Cyclic Executive processes. The FAA likes this proven technique for building certified systems. Process timings are rounded off. The problem is represented as inequalities and many disjunctions representing many scenarios or choices.  A paper in the RTSS'96 proceedings describes the work.  We noted similar problems in other domains:  in A-7 aircraft, every unit is different and can require constant reprogramming. Bill Janssen mentioned that the largest Xerox copier has 23 processors and 3 ethernets with many paper paths to coordinate.

Why show this? You can't solve these sorts of problems by adding more processors. To add a new function to the above system, we need to recompute schedules and possibly even rework the hardware architecture. We want to make it much easier to build, maintain, and validate such systems! Object technology is attractive because of the small address spaces per object, with limited interactions walled off by encapsulation and methods. But tasks interact in ways other than sending messages; they share resources to ensure performance.

Mok's work indicates that requirements precede design and are not themselves objectified, though the design is.  There seems to be a different view for each ility.

Mok showed a use of objects as a unit for resource allocation, integrity management, and task requirements capture. These three roles require different granularities. We need the system to provide these views and automate consistency maintenance among them. The three types of ilities conflict: resource vs. tasks, integrity vs. task, resource vs. integrity.

Favoring different ilities leads to different object decomposition schemes.  One scheme is more amenable to incremental modification, another is more resource-efficient, another is more robust against object failures. There is no best scheme, due to tradeoffs. Once one objectifies a certain solution, a change in system decomposition is needed to optimize for another purpose. A conclusion is that OO technology makes ility tradeoff choices up front. Sometimes we take objects too seriously.

Discussion followed the talk.

We seem to agree that the right way to view a system is from the vantage of multiple views (aspects), not a single view.  The power is being able to separate the views when you want to.  A key advantage is separation of concerns.  This leads to a series of puzzles:

In practice today, much of this is manual, but it can be automated by UML-like notations, compiler-like technology, and other mechanisms.  One of the promises of middleware today is to use object wrappers around legacy code to turn legacy code into an object.  There are several problems: it is difficult to make sure there are no hidden interactions; internal modularity that may be present is not exposed; it is difficult to know the metadata properties of the opaque legacy code if that code was not built to expose such information (which, almost by definition, it was not); and the legacy code does not make or expose explicit guarantees. Still, there can be value in walling off the parts of the system in a standard way, even if only to provide a place to hang metadata.

We are searching for ways to glue parts together that will result in systems that are easier to maintain, more fault tolerant, etc. But the search will likely require going beyond a black-box components view of compositional middleware to say more about aspects of the glue, that is, treating the connectors and constraints as first class so we can reason about them.  Today this gluing is mostly done with scripting languages.  Applying this to middleware, what concrete things do we need to do to augment the OMG OMA architecture? One suggestion was to add ilities via implicit invocation, though the end result is a fairly tightly coupled system. Another comment was to treat views as a constraint satisfaction system, and let the ultimate compiler put them together.  Another comment was that reflection is important. Connectors are meta objects. An architecture is more than nodes and links; it should also be reflective, and connectors have state. Andrew Barry mentioned he is working on a connector language where connectors enforce these constraints and might reify the events into methods.
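A minimal sketch of a first-class connector (all names invented): the connector is an object with its own state that mediates every call, so an ility such as call tracing can be injected by implicit invocation without modifying caller or callee:

```python
# Sketch of a connector as a first-class object. The connector carries state
# (a trace of mediated calls) and implicitly invokes a logging aspect on each
# interaction, leaving both client and server component code unchanged.

class Connector:
    def __init__(self, target):
        self.target = target
        self.log = []            # connector state: a trace of mediated calls

    def __call__(self, *args):
        self.log.append(args)    # implicit invocation of the tracing aspect
        return self.target(*args)

def pricing_service(item):       # an unmodified server component
    return {"book": 10, "pen": 2}[item]

connect = Connector(pricing_service)
print(connect("book"))   # 10 -- client code calls through the connector unchanged
print(connect.log)       # [('book',)] -- the connector reified the interaction
```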

We discussed static versus dynamic aspects of systems.  It is easier to build static systems than dynamic ones, but the latter is the goal. In the firm real-time approach, you budget everything you do and know where you are in the budget. Failure in hard real-time systems does not mean the whole system collapses but that some constraints are missed. You want to ensure traceability. If you have done analysis at compile time, you don't have to make decisions at run time.

But in several kinds of systems, we also need to accommodate evolution.  In the automobile industry, design lifetime is relatively well understood.  Car designers put in well-defined change points for upgrading to the next model year. Malleability should be a design view. You construct your software so that what must be changed is visible. This is predicated on the assumption that subsystems will be stable. Gio Wiederhold noted that universities do not teach design for maintenance, yet maintenance is 70% of the military software budget and 85% of industry's. Thompson referenced his point made in session I-1 that system design is protected against known and foreseen requirements but not against some unforeseen ones, which can cause radical redesign.  One hope is that modularizing a system's functionality and ility aspects can often make future systems more evolvable and adaptable.

III-1 Scaling component software architectures

Moderator: Stephen Milligan, GTE/BBN Technologies

Scribe: John Sebes, Trusted Information Systems

Topics:  What family of design patterns and mechanisms can be used to scale component software -- federation, decentralized or autonomous control, metadata, caching, traders, repositories, negotiation, etc.?  What will break as we move from ORBs in LANs to ORBs on 40M desktops?  We'll have to replicate services, connect thousands of data sources, cache at intermediate nodes, federate services, federate schemas, and worry about end-to-end QoS.  And don't forget that when we scale across organization boundaries, we have security and firewalls to contend with.



We started by having each person describe their definition of and interest in scalability, which included design scalability, operational scalability, number of components (as software grows in the maintenance cycle), amount of capacity (as number of transactions grow at runtime), control scalability, network management, and survivability.

There were two presentations.

John Sebes - Collaborative Computing, Virtual Enterprises, and Component Software

John Sebes addressed the combination of security and scalability in the context of distributed applications operating between multiple enterprises. He asserted that the scaling factor of number of enterprises has a significant potential for breaking other system properties. He described a technique for cross-enterprise distributed object security, and asserted that component software techniques (especially composability) are required for ordinary mortal programmers to be able to integrate distributed security functions into applications that require them. Scale factors include: number of applications with security requirements, number of users with access privileges to applications, number of rules relating users and privileges to applications, and number of system elements (application servers, firewalls, etc.) that must enforce security constraints.

In discussion, various respondents described related scalability factors:

Electronic commerce was identified as an area of new software development where rapid scale-up will occur early in the software lifecycle.

Pitfalls include bad individual components and bad integration of components: you can break the system at any level. We also discussed that in scaling a component, one must also address the interrelationships (reuse) of that component by other components, the effect scaling will have on components using the scaled component, resource utilization requirements, and the impact on the configuration of the system. There is a general issue of whether components themselves should be responsible for scaling.

Venu Vasudevan, Trading-Based Composition for Component-Based Systems

Venu Vasudevan addressed scalability in the context of dynamic service location:  scalability of number of service instances, where there are multiple instances in order to provide service distribution, availability and reliability.  His example was annotation of World Wide Web documents.  Service discovery in this example is discovering services that store and provide annotations to URLs. Multiple repositories might publish information about the annotations that each has, so that other repositories and/or users can find them.  Federations of annotation repositories would be related by a "trading service" (in the CORBA sense of the term, which itself is based on the trader standard from Reference Model for Open Distributed Processing (RM-ODP)) that allows users to find annotations throughout the federation of annotation services.  There are multiple possible approaches to implementing such a trader.  Some solutions are so simple as to be scalable, but also not useful in large-scale systems.  For example, a pure registry-based trader is stateless (and hence does not have to store progressively larger state as transaction scale grows) but can't refine previous request/results because it doesn't remember them.
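A sketch of the pure registry-based trader mentioned above (the offer format and properties are invented): it stores only the offer table, so every query is matched from scratch, which is what makes it stateless but unable to refine previous request/results:

```python
# Illustrative registry-based trader: services export (service_ref, properties)
# offers; clients import by matching property constraints. The trader keeps no
# per-client query state, so it scales simply but cannot refine prior queries.

class RegistryTrader:
    def __init__(self):
        self.offers = []   # list of (service_ref, properties) tuples

    def export(self, service_ref, properties):
        self.offers.append((service_ref, properties))

    def query(self, **constraints):
        # Stateless matching: every query is evaluated from the full offer table.
        return [ref for ref, props in self.offers
                if all(props.get(k) == v for k, v in constraints.items())]

trader = RegistryTrader()
trader.export("annot-repo-1", {"kind": "annotation", "region": "us"})
trader.export("annot-repo-2", {"kind": "annotation", "region": "eu"})
trader.export("print-svc", {"kind": "printing"})

print(trader.query(kind="annotation", region="eu"))  # ['annot-repo-2']
```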

Discussion followed.

Object composition/delegation was also identified as a scale factor in terms of performance. If an object is composed of multiple other first-class objects, this is inherently higher overhead than an object being composed of components that are "mixed in" to form the object but are not objects themselves. Hence, component composition (within one object, rather than componentified objects calling on componentified objects) may be a promising way to increase the scale of software reuse without an exponential growth in objects.
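The contrast can be sketched as follows (class names invented): mix-in composition folds the parts into a single runtime object, while delegation builds the object out of other first-class objects and pays a call-forwarding cost per part:

```python
# Sketch of the two composition styles. The "components" here are invented.

class Persistence:                 # a component that is not itself an object
    def save(self):
        return f"saved {self.name}"

class Logging:
    def log(self, msg):
        return f"[{self.name}] {msg}"

class Document(Persistence, Logging):   # mix-in composition: one runtime object
    def __init__(self, name):
        self.name = name

class PersistOn:                   # delegation: the part is a first-class object
    def __init__(self, owner):
        self.owner = owner
    def save(self):
        return f"saved {self.owner.name}"

class DelegatingDocument:          # two objects at runtime, plus forwarding
    def __init__(self, name):
        self.name = name
        self.persister = PersistOn(self)
    def save(self):
        return self.persister.save()    # forwarded call: extra indirection

print(Document("report").save())             # saved report -- no forwarding
print(DelegatingDocument("report").save())   # saved report -- via a second object
```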

We also discussed scaling issues related to federation. One sense of federation was explored in more detail than some others -- that of linking together organizations/domains so that resources are shared while still retaining the autonomy of the domains (without relinquishing control over the resources). A definition was "A community of autonomous, freely cooperating sub-communities. Within a federation, one sub-community is not forced to perform an activity for another. Each community in a federation determines resources it will share with its peers, resources offered by its peers, how to combine information from other communities with its own, and resources it will keep private." Autonomous control implies fairly fine-grained control over what is shared and what is not, and with whom, e.g., dynamically enabling a domain to offer resources for sharing and to withdraw those resources from sharing, as well as to enter or leave the federation.  Federations allow interfacing for resource sharing among organizations in a way that can slow the combinatorial explosion of the N-squared bilateral relationships that would otherwise apply (a more brittle, complex system of systems). The work currently being done by the Defense Modeling and Simulation Office (DMSO) on their High Level Architecture addresses federation among environments of models; the work of NIIIP addresses federation among Virtual Enterprises; the work of the Advanced Logistics Program on clusters addresses federation of logistics domains.

One of the challenges of trading/brokering/warehousing is the desire to centralize (or to create central repositories) in order to "scale up" the amount of data that can be consistently stored and analyzed. However, this should be virtualized and the ability to migrate (one more ility) from a central repository to a distributed repository should be transparent to the accessing component. When composing components, you don't want to be faced with the choice of relying on centralized control or implementing scalable consistency. Perhaps some part of the glue between components can provide some of the scalability so that component composers don't have to.

Metadata is critical to writing evolvable/scalable components. A component should describe the services it wants to use (from other components) rather than naming the specific providers it needs. This seems counter-intuitive (why would I look up my friend's phone number every time I call?) but it is an isolation principle that is needed as systems grow and change. Connectors and their resulting interfacing mechanisms, along with the metadata about components, are the key to enabling scalability. But in order to do this effectively, the connectors between components must be defined well enough to include not only syntax, parameters, and naming, but also semantics, reusability, side effects, and the other components upon which they depend. Further, connectors must be efficient enough to find what is described very quickly in the 99 times out of 100 when the answer is the same. Hence, component connectors seem to be critical in making components general enough to be scalable. Here we arrive at the idea of connectors as first-class objects. Caching and related techniques/issues (consistency, cache invalidation, etc.) then become critical infrastructure parts of the glue between components.
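A sketch of this describe-rather-than-hard-wire principle combined with a caching connector (directory contents and names invented): the component asks for a described service on every use, and the connector absorbs the repeated lookups -- the "99 times out of 100 the answer is the same" case:

```python
# Illustrative sketch: a component resolves a *description* of what it needs
# instead of hard-wiring a provider; a caching connector makes the repeated
# lookups cheap. The directory and service names are invented.

DIRECTORY = {("spellcheck", "en"): "spellcheck-svc-A"}   # authoritative lookup
LOOKUPS = {"count": 0}

def resolve(service, lang):
    LOOKUPS["count"] += 1            # counts expensive directory consultations
    return DIRECTORY[(service, lang)]

class CachingConnector:
    def __init__(self):
        self.cache = {}

    def lookup(self, service, lang):
        key = (service, lang)
        if key not in self.cache:                # miss: consult the directory
            self.cache[key] = resolve(service, lang)
        return self.cache[key]                   # hit: answer from cache

conn = CachingConnector()
for _ in range(100):
    provider = conn.lookup("spellcheck", "en")   # described, not hard-wired
print(provider, LOOKUPS["count"])    # spellcheck-svc-A 1 -- one real lookup
```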

III-2 Adaptivity of component software architectures

Moderator: Bob Balzer, USC/ISI

Scribe:  Kevin Sullivan, University of Virginia

Topics:  Is there a common component software framework that allows us to build software that is



There were two presentations:

David Wells -  Survivability in Object Services Architectures

The project objective is to make OSA applications far more robust than currently possible in order to survive software, hardware, and network failures, enable physical and logical program reorganization, support graceful degradation of application capabilities, and mediate amongst competing resource demands.  Survivability is added, not built in, requiring minimal modifications to OSA application development and execution, because survivability is orthogonal to conventional OSA application semantics.

Louise Moser - The Eternal System

The Eternal System, based on CORBA, exploits replication to build systems that are dependable, adaptable, and evolvable.  Consistency of replicated objects is maintained with multicast messages.  All of the processors receive the multicast messages in the same total order and perform the same operations on the replicas in the same total order.  The replicas of an object are presented as a single object to the application programmer.
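The consistency argument can be sketched as state-machine replication (the multicast layer is simulated here by a shared ordered list; all names are invented). Because the operations below do not commute, the shared total order is what keeps the replicas identical:

```python
# Sketch: if every replica applies the same operations in the same total
# order, all replicas reach the same state, so the set can be presented to
# the programmer as a single object.

class CounterReplica:
    def __init__(self):
        self.value = 0
    def apply(self, op, amount):
        if op == "add":
            self.value += amount
        elif op == "mul":
            self.value *= amount

# The totally ordered multicast delivers this sequence to every replica.
# Note that "add" and "mul" do not commute, so order matters.
total_order = [("add", 2), ("mul", 5), ("add", 1)]

replicas = [CounterReplica() for _ in range(3)]
for op in total_order:
    for r in replicas:               # same ops, same order, on every replica
        r.apply(*op)

print([r.value for r in replicas])   # [11, 11, 11] -- replicas stay identical
```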

Discussion followed.

Discussion centered on identifying and characterizing various mechanisms and approaches to adapting systems.  By adaptivity in this context we meant changing systems in ways that were not anticipated by their designers, and for which the right "hooks" are not present in the design.

The discussion focused on changes involving augmentation of systems with desired non-functional properties (ilities).  The approach that the group took was roughly analogous to the taxonomic style of the Object-Oriented Design Patterns work of Gamma et al.  More specifically, the group identified different adaptation mechanisms, and then, for each, developed a description of it by giving it a name; referring to one or more systems in which it is used; identifying the basic technique involved; and listing key benefits and limitations.  The following mechanisms were identified during the session.  Some mechanisms are special cases of others.

Mechanism:  Callback
Instance:  Message/Event Handlers
Technique:  User supplied handler
Benefits:  Late binding
Limitations:  Not composable, nestable; Long user callbacks can starve event loop

Mechanism:  Type Extender
Instance:  OLE DB
Technique:  Enriched set of interfaces for supplied component
Benefits:  Expanded standardized interface for alternative implementations; Existing interfaces pass through
Limitations:  Fixed extension

Mechanism:  Binary Transformations
Instance:  EEL from Wisconsin, Purify, Quantify
Technique:  Rewriting
Benefits:  Source not needed; Fault isolation guarantees (Sandbox)
Limitations:  Low level; Representation not adaptable (composability hard)

Mechanism:  Source Transformations
Instance:  Refine, Polyspin
Technique:  Rewrite
Benefits:  Composable; Application level behavior mix-ins; Simpler analysis
Limitations:  Require access to source; Confounds debugging;

Mechanism:  Target Intermediation
Instance:   Proxy, Wrapper

Mechanism:  Communication Intermediation
Instance:  Firewall, Proxy Server, CORBA interceptors
Technique:  Communication Mediator
Benefits:  Client and server unchanged; Overhead amortized in high cost operation; Facilitates playback
Limitations:  Only applies to explicit communications channel.

Mechanism:  Aspect Oriented Programming
Instance:  AspectJ
Technique:  Weaver (transformation)
Benefits:  Maintain modularity; Separate aspect specification
Limitations:  Possible interactions between aspects; Complex interactions debugging

Mechanism:  Instrumented Connector
Instance:  PowerPoint architecture editor, virtual file system
Technique:  Shared Library (DLL) mediator
Benefits:  Finer granularity (by called, by interface)
Limitations:  Platform dependent; Composability is difficult

Mechanism:  Reified Communication
Instance:  Higher Order Connectors; ORB Interceptors
Technique:  Connector modification
Benefits:  Locus is independent of participants; Mediation at application level

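To make the catalog concrete, here is a minimal sketch of the Target Intermediation entry (class names and the interposed policy are invented): a proxy wraps the target and interposes on each call, adding an ility without changing the target's source or the client's view of the interface:

```python
# Illustrative proxy/wrapper: the unmodified target component gains an access
# check because the intermediary interposes on every call.

class Account:                       # the unmodified target component
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        self.balance -= amount
        return self.balance

class GuardedProxy:                  # the intermediary
    def __init__(self, target, limit):
        self._target, self._limit = target, limit
    def withdraw(self, amount):
        if amount > self._limit:     # interposed policy, invisible to the target
            raise PermissionError("over per-call limit")
        return self._target.withdraw(amount)

acct = GuardedProxy(Account(100), limit=50)
print(acct.withdraw(30))             # 70 -- passes through to the target
try:
    acct.withdraw(60)
except PermissionError as e:
    print(e)                         # over per-call limit
```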
The group found the discussion useful enough that it was decided to continue the effort after the workshop.  To that end, we decided to develop a World Wide Web repository of these adaptation patterns.  See (contact:  Peyman Oreizy <>) which contains more detail on the above mechanisms.  You are invited to contribute to the discussion represented at that site.

III-3 Quality of Service

Moderator:  Richard  Schantz, BBN

Scribe:  Joseph Loyall, BBN Technologies

Topics:  What are the main concepts?  How do you insert QoS into an environment?



This breakout session focused on Quality of Service (QoS) as an organizing concept for integrated resource management, especially as it relates to development by composition.

The session had four presentations:

Gul Agha - Composable QoS-Based Distributed Resource Management

Gul presented a research direction for addressing the composition and management of QoS policies and mechanisms. His idea proposes enforcing QoS separately from application functionality, and enforcing QoS mechanisms separately from one another. After presenting an overview of the Actor model, a formal model of distributed objects, he proposed several ideas for managing and enforcing QoS using actors: connectors, objects representing component interfaces; a two-level actor model for managing and enforcing QoS, with base-level actors that are simply functional objects and meta-level actors that watch system activities and resources; a set of core resource-management services, i.e., basic system services, chosen by looking at patterns of system activities where interactions between an application and its system can occur; and QoS brokers for coordinating multiple ilities.

Gul proposed that the set of services be made components themselves with a compliance model. Then the actor model can be used to formally prove properties, such as liveness. It would also enable services to be reused and composed in managed ways.

Peter Krupp - Real-time ORB services in AWACS

Peter is co-chair of the OMG Real-time SIG. His talk discussed the work that is in progress to include QoS in CORBA. This work came out of work in evolvable real-time systems being developed for AWACS 4-5 years ago. In the last year of that project, the team decided to use CORBA, but they needed scalability, predictability, real-time response, and fault-tolerance. OMG currently is developing a specification for real-time CORBA and is soliciting a real-time ORB, i.e., one that does not get in the way of a real-time operating system, has predictable QoS, real-time performance, and fault-tolerance. In addition, OMG wants an ORB with an open middleware architecture that is customizable.

Peter described some current real-time ORB work. A real-time ORB has been developed for the AWACS program, but when everything was put together, the database became a bottleneck. MITRE, the University of Rhode Island (Victor Wolf), Sun, TINA, and Washington University (Doug Schmidt) have been doing work in the area of real-time ORBs.

Michael Melliar-Smith - Fault-tolerance in the Eternal System

The objective of the Eternal system is to provide fault-tolerance with replicated objects without using a custom ORB. Eternal uses a replication manager sitting on the IIOP interface, soft real-time scheduling, and extensions to QuO's QDL languages. The replication manager is a set of CORBA objects, with a profiler that feeds it information about replicated objects. Eternal tries to hide replication, distribution, consistency, and handling of faults.

Rick Schantz - QoS descriptions and contracts in QuO

QuO is a framework for incorporating QoS in distributed applications. QuO provides a set of quality description languages (QDL) for describing possible regions of desired and actual QoS, mechanisms for monitoring and controlling QoS, and alternate behaviors for adapting to changing regions of QoS. QDL can be used to describe aspects of an application's QoS and a code generator (similar to the weaver in aspect-oriented programming) creates a single application from all the description files and application code.
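The region idea can be sketched as follows (the region boundaries, measured quantity, and alternate behaviors below are invented for illustration, not taken from QDL): a contract names regions of expected QoS, a monitor classifies the measured value into a region, and the application selects an alternate behavior when the region changes:

```python
# Hedged sketch of region-based QoS adaptation. All thresholds and behaviors
# are hypothetical; a real QuO contract would be written in QDL.

REGIONS = [("low", 0, 100), ("normal", 100, 1000), ("high", 1000, float("inf"))]

def region_of(bandwidth_kbps):
    """Classify a measured bandwidth value into a named QoS region."""
    for name, lo, hi in REGIONS:
        if lo <= bandwidth_kbps < hi:
            return name

def adapt(region):
    # Alternate behaviors keyed by the current QoS region.
    return {"low": "send text only",
            "normal": "send images",
            "high": "send video"}[region]

print(region_of(50), adapt(region_of(50)))      # low send text only
print(region_of(2000), adapt(region_of(2000)))  # high send video
```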

The moderator posed the following questions to the group for discussion:

What is and isn't meant by QoS? What do we mean to cover with the term? Is it defined broadly or narrowly? What is the relation of QoS to other ilities like security and system management: same, different, integrated?

We discussed whether QoS should be defined in the traditional network-centric, narrow way as network throughput and bandwidth; or if it should be defined in the broader sense, including QoS from the network level up to the application level. As a group, we unanimously (or nearly so) agreed that QoS should be defined to include "ilities", as well as network capacity. Thus, QoS includes security, timeliness, accuracy, precision, availability, dependability, survivability, etc.

Security and system management needs change over the lifecycle of systems and coordinating these changes is a part of providing QoS in the system. Specifically, mediating the need for security, varying degrees and levels of security, at different times and situations, is analogous to providing, negotiating, and adapting to changing bandwidth needs in a system.

What are useful concepts toward introducing, achieving, integrating, and composing QoS?

The moderator offered the following candidate concepts: adaptation, reservation, scheduling, trading, control, policy, region-based, specification, components, abstractions, and aspects. Gul's talk stressed that composition of QoS is necessary since more than one "ility" might be needed in an application. It also stressed formal analysis of QoS mechanisms, since some might interfere. The AWACS work relies on QoS being enforceable. The QuO work, however, doesn't rely on QoS being enforced, but relies on adaptability, i.e., the ability of mechanisms, objects, ORBs, managers, and applications to adapt to changes in QoS.

How is QoS introduced into an environment and into which environments? How aware should client applications be of QoS:  unaware, awareness without pain, immersion? How do we specify or measure it? Is there a common framework? Mechanisms vs. Policies vs. Tools vs. Interfaces vs. Application specific solutions?

Several of the speakers and participants mentioned specific examples of ways in which QoS had been inserted into applications. AWACS embedded QoS into the application and environment, i.e., the application provided interfaces for specifying QoS parameters while the mechanisms and enforcement were provided by the OS. Eternal placed QoS (i.e., replication) at the IIOP interface, effectively hiding it from the application. Gul's approach uses wrappers so that objects believe that they are actors and exhibit actor properties. QuO uses a combination of wrappers (i.e., object delegates), QoS contracts separated from the functional application code, and interfaces to system resources and mechanisms; it supports insertion of QoS at many different levels.
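The wrapper (delegate) style of QoS insertion mentioned for QuO can be illustrated with a small sketch. Everything here -- the `Delegate` class, the monitor callback, the `Stock` target -- is invented for the example; QuO's actual delegates are generated from QDL descriptions, but the shape is the same: the client calls the delegate as if it were the object, and QoS measurement happens as a side effect.

```python
# Illustrative sketch of QoS insertion via a wrapping delegate: the client
# calls the delegate as if it were the target object; the delegate times
# each call and reports it to a monitor, leaving the functional code unchanged.

import time

class Delegate:
    def __init__(self, target, monitor):
        self._target = target
        self._monitor = monitor

    def __getattr__(self, name):
        method = getattr(self._target, name)
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            result = method(*args, **kwargs)
            self._monitor(name, time.perf_counter() - start)
            return result
        return wrapped

class Stock:
    def price(self, symbol):
        return 42.0

observed = []
proxy = Delegate(Stock(), monitor=lambda op, secs: observed.append(op))
proxy.price("ACME")   # functional result unchanged; latency reported as a side effect
```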

Most session participants agreed that QoS concerns should be kept as separate from functional concerns as possible. However, while some believed that QoS could be provided by wrappers and middleware, others believed that QoS could not be in middleware. Instead it needs to be somewhere, like the OS, where it can be enforced. Others believed that, in many cases, enforcement is not as important as notification and adaptation. That is, instead of trying to guarantee QoS, the system does its best to provide it and tries to adapt (or allow the application to adapt) when it is not provided. It was mentioned that there are situations in which enforcing QoS requirements is more important than other situations (hard vs. soft QoS requirements).

Many session participants also agreed that QoS in distributed applications creates the need for another role in software development, that of the QoS engineer. In many cases the lines between the roles will be blurred, and it's possible that one person or set of persons will develop both the functional and the QoS parts of an application. However, in many cases someone will require an ility, e.g., availability, and someone else will decide what policies and mechanisms are needed to provide it, e.g., the number of replicas and the type of distribution.

The session participants disagreed on the idea of awareness of QoS. In some situations, applications and users might want complete awareness of QoS and many believed that some techniques, such as wrappers and embedding QoS into the application (e.g., AWACS), provided it. In other situations, applications and users want to be completely unaware of QoS. One person argued that complete unawareness is seldom, if ever, wanted. He offered the analogy that airline passengers didn't want to worry about QoS, but they want someone (e.g., the pilot, the mechanics) to worry about it. Someone else offered the opinion that there are two kinds of being unaware: not caring and not knowing. In some cases, one doesn't care how QoS is provided, as long as it is provided. This might fit into awareness without pain.

Composition was a major concern when providing QoS. Everyone agreed that many applications will need more than one ility at a time. However, we believe that some will compose better than others, while some will not compose at all. The concern was expressed that retrofitting applications with QoS might lead to interoperability and composition problems. It might not be possible to separate ilities in many cases, even though it is desirable to change one without affecting the other. Designing QoS concerns or ilities in so that they are maintainable and controllable might be all that we can accomplish. The speakers provided different ideas about composition. The actor model enables a certain amount of formal reasoning about the composition of ilities. AWACS provided interfaces to QoS mechanisms so that a trained expert could make tradeoff decisions. QuO recognizes the need for composition of QoS contracts and needs to address it.

Where are we now, in which directions might we head, what are the hard problems to overcome?

As the last part of the session, the moderator asked each participant to summarize a major point, concern, problem or direction with relation to QoS and the session discussions. The answers follow:

We need a system-level notion of QoS, and we need to build adaptive applications that are aware of the quality they need and can adapt to changes in it.

Providing QoS means striking a balance between conflicting non-functional requirements and providing the tools to make tradeoffs. This creates a new engineering role, that of the quality engineer with the expertise to make these tradeoffs.

Building systems will include building QoS contracts and developing code compliant with them.

Composition of ilities, contracts, and mechanisms is a key issue that will need to be addressed.

There is no single definition of QoS yet, but examples suggest that it can be addressed by a common framework. There is also no well-established notation for describing QoS yet.

Another key issue is bridging the gap between the high-level notion of QoS that applications need, i.e., ilities, and low-level QoS that mechanisms and resources can provide.

III-4 ORB and Web Integration Architectures

Moderator:  Rohit Khare, University of California at Irvine

Scribe:  Adam Rifkin, CALTECH

Topics:  ORB and web architectures will increasingly overlap in function.  How are ORB and web architectures alike, and how are they different?  Are the differences accidental and historical?  How can we avoid recreating the same set of services (like versioning) for both architectures?  Can we find ways to neatly splice them together?

Papers Discussion

Distributed object enterprise views were converging nicely in the early 1990s, until the Web came along.  Tim Berners-Lee succeeded in modularizing systems, making information truly accessible to the masses by combining universal thin clients with flexible back-end servers, with interaction through a gateway with third-tier applications such as databases.

In the late 1990s, the question remains how to use the best of both the Object and Web worlds when developing applications.  Object Services and Consulting, Inc., (OBJS) is investigating the scaling of ORB-like object service architectures (for behaviors such as persistence, transactions, and other middleware services) to the Web (for behaviors such as firewall security and document caching, as well as rich structuring) by using intermediary architectures [1].  They are also exploring data models that converge the benefits of emerging Web structuring mechanisms and distributed object service architectures [2].
Ultimately, the Holy Grail of application development using "Components Plus Internet" can be realized in many different ways, including:

In prototyping an annotation service [6] and a personal network weather service [7], OBJS is developing an intermediary architecture that exploits the benefits of both ORBs and the Web [8].  Web engines provide universal clients and web servers provide global access to rich data streams; ORBs furnish middleware object services and open the door to enterprise computing.  In integrated, hybrid systems, these roles are leveraged.

Fundamentally, is a Web server all that different from an ORB?  Perhaps. HTTP as a protocol makes provisions for error recovery, latency, platform heterogeneity, cross cultural issues, caching, and security, for a specific type of application (the transport of documents), whereas ORBs have a generic object architecture that allows for the selection of services piecemeal as needed.

The Web has focused on providing a rich typing system, whereas ORBs have focused on providing a rich set of APIs.  To that end, the Web is useful for describing data aspects (as opposed to operations), whereas CORBA focuses on procedural aspects (as opposed to types).  This is manifested in the Web's document-centric nature -- and in CORBA's loss of its compound document architecture.

It is also manifested in the Web's approach to heterogeneity: a single common type indirection system, MIME, allowing new data types to be added to the system as needed.  By contrast, ORBs define data types strongly, so that the IDLs know exactly what is going to hit them.
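The contrast can be made concrete with a small sketch of MIME-style type indirection. The registry and handler functions below are invented for illustration; the point is that new data types can be added to a running system by registering a handler against a content-type string, and unknown types degrade gracefully instead of failing a static type check -- the opposite of IDL's stance, where every type crossing the wire is known in advance.

```python
# Illustrative sketch (invented API) of MIME-style type indirection:
# handlers are registered against content-type strings, so new data
# types can be added without changing the transport or existing code.

handlers = {}

def register(mime_type, handler):
    handlers[mime_type] = handler

def dispatch(mime_type, payload):
    # Unknown types fall through to a default instead of failing statically.
    handler = handlers.get(mime_type, lambda p: ("unhandled", mime_type))
    return handler(payload)

register("text/plain", lambda p: ("text", p.upper()))
register("application/json", lambda p: ("json", len(p)))

dispatch("text/plain", "hello")     # handled by the text handler
dispatch("image/png", b"\x89PNG")   # no handler registered; degrades gracefully
```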

MIME is one example of how the Web was adopted using a strategy of incremental deployment, starting with file system semantics and building up from there.  As a result, the Web in its nascent stage has not yet deployed "services" (in the traditional "object" sense), but they are forthcoming shortly (WebDAV for versioning, XML for querying, and so on).

One limitation of HTTP is that although in theory HTTP can be run for communication in both directions (through proxies), in practice HTTP 1.x must be initiated by a client, so two peers would need two open channels (that is, two HTTP pipes).  IIOP has hooks for avoiding this through back and forth polling.

On the other hand, the Web has several strengths:

These strengths could be applied to CORBA, adding real attributes to CORBA interfaces.  Perhaps some combination of Object and Web technologies may ultimately prove useful (for example, IDL for interface specifications and XML for data exchange and storage).  The Web might assimilate CORBA, and CORBA might assimilate the Web, and W3C's HTTP-NG might do both [3].  And although Web technology as commercially available is insufficient presently for many object developers' needs, it is, as Marty Tenenbaum of CommerceNet would say, simple enough that a fifth grader can use it.

For now, we continue to attempt to understand the commonalities and the differences of ORBs and the Web in our application analyses and designs. As of January 1998, CORBA remains a generic framework for developing many applications, whereas the Web is a generic framework that has deployed many applications.

IV-1 Working Toward Common Solutions

Moderator: Dave Curtis, Object Management Group

Scribe:  Craig Thompson, OBJS

Topics:  The database, middleware, Web, and other communities are expanding their territories and coming up with their own unique solutions to problems already addressed in other communities. How do we prevent a proliferation of incompatible standards from developing out of these separate communities?



Dave Curtis began this session by renaming it "Can't We Just Get Along?" He translated this to several questions: Where should the OMG Object Management Architecture go from here?  Craig Thompson presented some ideas for the next generation of the Object Management Group's Object Management Architecture (OMA) -- see full paper.  The OMA Reference Architecture is the familiar ORB bus with basic object services, common facilities, domain objects, and application objects accessible via the bus (see OMG OMA tutorial).  This was a radical experiment in 1991, and OMG has since populated the architecture with a fair number of middleware services; in fact, OMG is working on a record number of RFPs for services, facilities, and mechanisms at present.  The OMA has been a serviceable and successful middleware architecture that has provided a design pattern for the middleware community.  One strength is that it has provided both technical expansion joints and parallel community organizations: for instance, the ORB, basic object services, and common facilities have their own sub-architecture documents that expand finally into leaf RFPs (and there were organizational subdivisions of OMG along these lines until a recent consolidation).  Thompson pointed out that the OMA neither explains nor precludes several things, and perhaps it is time to give them (and others) substantial attention.

OMG-DARPA Bidirectional Technology Transition Opportunities.  The discussion turned to cross-community opportunities for technology transfer.  We focused on identifying specific opportunities between DARPA and OMG.  DARPA is developing an overarching architecture called the Advanced Information Technology Services (AITS) architecture, which covers command and control, logistics, planning, crisis management, and data dissemination.
Todd Carrico showed a complex foil covering one view of the AITS architecture, which shows object services in the bottom right, and asked for a mapping to OMG, that is, where are the technology transition opportunities?

Complexity.  How can newcomers take part in middleware? One would think that since we are dealing with components, it might be easier for small groups to contribute new components to middleware.  This may become true over time, but there are still barriers.  For instance, right now most services vended by middleware vendors are not portable across different vendors' ORBs.  We spent some time discussing complexity and the ility "understandability". Where does the complexity come from? ("Is it Satan?" asked the Church Lady.) Or is it having to know about OO, OMG, W3C, and all the little standards and subsystems -- hundreds of details like when to use public virtual inheritance? There is perceived complexity in telling the healthcare community what CORBA is. There are other roadblocks: OMG not having a programming metaphor, the OMG community not providing many development tools, the need for training and the difficulty of teaching students about CORBA, even the ready availability of specifications in convenient formats. We need better ways to facilitate the widespread use of middleware. Others noted in OMG's defense that it is a fallacy to compare what VB is trying to do with what CORBA is trying to do.

Interlanguage interoperability.  Another strand of discussion covered interlanguage interoperability.  One comment:  OMG language bindings are a straitjacket; if you commit to CORBA, the IDL type system pervades everything you do. IDL provides common ground over some domains of heterogeneity. A counterargument:  there is a tradeoff between programming convenience and language independence. If you are dealing with a multilingual system, your type mismatches go up without IDL, so you are hedging against an unknown future. There seems to be a presumption that choosing CORBA is the right medicine to ward off later sickness -- pay me now rather than pay me later.  But history has often sided with those who would pay later (Java).  So a challenge for the OMG community (and us all) is how to have our cake and eat it too -- get both the immediate gratification of a single-language solution (simplicity and tools) and the flexibility of language independence.

Summary of suggestions:

IV-2 Towards Web Object Models

Moderator: Ora Lassila, Nokia Research Center and W3C

Scribe: Andre Goforth, NASA Ames Research Center




Frank Manola presented his paper, "Towards a Web Object Model". His central point was there is a need to increase the web's information structuring power. The current web object model is "weak" because it is difficult to extract object state out of HTML and to express "deeper" semantics (behavior).  He discussed how current efforts such as XML, RDF and DOM are addressing this need.  This led to a discussion of how well these standards provide enhanced structuring power and of a comparison of the Web's technologies with OMG's CORBA. The session ended with a summary of what the Web can learn from OMG and OMG from the Web.

Here are some of the salient discussion points:

Ora commented that XML's contribution to strengthening the web's object model is overblown; it is more of a transfer mechanism that addresses syntax, not semantics. There was no major disagreement with this point of view. Frank commented that users with large document repositories want a markup language that will outlive today's web technology.

There were several comments that DOM provides sufficient features to support object discovery and visualization. DOM provides a generic API so a client can recreate an object and push it to the server and let the server use it; this gives a "window" on the web. It was questioned why the client/server distinction is important: you could support a peer-to-peer interaction model.

There was a question about ways to reference objects on the web. Ora replied that he thinks that RDF will be sufficient to provide object discovery. Also, the issue of different type systems was raised; for example, DOM has its own type hierarchy. Ora pointed out that RDF does not provide a large built-in type hierarchy but gives the user the ability to create one; he went on to point out that RDF does not care about DTDs. Someone commented that DTDs may serve as XML schemas.

Someone commented that it is feasible to represent CORBA IDL in XML. Response was that one might be able to define a DTD for OMG's IDL description.
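A minimal sketch of that idea follows. The element and attribute names are invented for illustration; no standard DTD or mapping is implied, and a real mapping would need to cover the full IDL type system (modules, inheritance, exceptions, and so on).

```python
# Sketch (invented vocabulary) of rendering an IDL-like interface
# description as XML, along the lines suggested in the session.

import xml.etree.ElementTree as ET

def interface_to_xml(name, operations):
    iface = ET.Element("interface", name=name)
    for op_name, result, params in operations:
        op = ET.SubElement(iface, "operation", name=op_name, result=result)
        for p_name, p_type in params:
            ET.SubElement(op, "param", name=p_name, type=p_type)
    return ET.tostring(iface, encoding="unicode")

# Roughly: interface Quoter { double get_quote(in string symbol); };
xml_text = interface_to_xml(
    "Quoter",
    [("get_quote", "double", [("symbol", "string")])],
)
```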

Ora commented that he had suggested that RDF transfer syntax be done in terms of S-expressions and was met with a lot of resistance. There were some cries of derision and comments about religious syntax wars in the breakout session, all in good jest.

Bill Janssen (Xerox PARC) asked why CORBA doesn't have a human-representation "get" method for objects. There was discussion of what made the web so popular. The consensus was that it provides instant object visualization.  It was pointed out that the web provides only one instance of an information system, whereas CORBA has the generality to support a broad range of systems.

At this point, the discussion turned to whether W3C was going to overtake OMG. The rate of growth of the web is phenomenal, whereas it seems to take OMG forever to come out with a new standard, and even when it does, it takes a long time for it to be implemented. It was pointed out, though, that membership in OMG is growing briskly and shows no signs of slowing down. It was also pointed out that if working with CORBA were as "visual" as working with the Web, then OMG would experience the same popularity and widespread growth that the Web is experiencing.  One suggestion was that CORBA provide a visual component for all objects. Current CORBA APIs are designed for program visibility instead of programmer visibility.

The discussion returned to W3C versus OMG. Somebody was of the opinion that OMG will eventually be overwhelmed by W3C. Even with the limited object model of the web, a large number of enterprising souls are building distributed systems (using web technology) that typically would be considered material for a CORBA implementation. Users are pushing the application of Web technology harder and further than CORBA has ever been pushed.

The discussion then turned to the shortcomings of the Web. What do you "show" as an object using the Web? Get and Put provide you with a sea of documents, i.e. pages uniquely identified by their URLs. Someone pointed out that the Web has limited ability to do reflection; HTML's "header" does not cut it. It was pointed out that the web has to better address the intersection of object management, object introspection and object streaming.

To move the discussion along, Ora posed one of the breakout session's suggested discussion topics: Is the Web's metadata useful for other uses such as component models? The discussion was limited due to time. It was pointed out that the combination of different metadata standards may cause needless overhead and round trip messaging.

At this point, the consensus of the participants was to list what the Web and Corba could learn from each other. Here is the list that resulted:

Improvements for CORBA Inspired by Web Experience:

Improvements for the Web Inspired by the Distributed Object Experience:

The final discussion topic was agents. What are agents? There was a consensus that the term means such different things to different people that any feature in an information system could be called an agent or an artifact of an agent; nobody really knows what they are. When is something an agent and not just a smart object? It was noted that agents keep popping up all over the place and that there appears to be a good deal of research funding for them. Ora commented that agents are postulated when there is a need to fill the gap of "...and then some magic happens" in describing the functionality or behavior of a system.

Finally, Ora presented this additional summary in the final plenary session of the workshop:

Lack of Semantics

Procedural Semantics


IV-3 Standardized Domain Object Models

Moderator: Fred Waskiewicz, SEMATECH

Scribe: Gio Wiederhold, Stanford

Topics:  What criteria should be used to partition and package domain spaces?



This session did not end up attracting a large enough crowd for a full length discussion.

IV-4 Reifying Communication Paths

Moderator:  David Wells, OBJS

Scribe:  Kevin Sullivan, University of Virginia

Topics:  Several workshop papers indicate that one inserts ilities into architectures by reifying communication paths and inserting ility behaviors there. But many questions remain.



This session focused on the use of event (implicit invocation) mechanisms to extend the behaviors of systems after their initial design is complete, so as to satisfy new functional and non-functional requirements.  The discussion ranged from basic design philosophy, through articulating the design space for implicit invocation mechanisms, to tactics for exploiting observable events within specific middleware infrastructures.  We also surveyed participants for examples of successful uses of the approach.  A hypothesis was offered (Balzer): "Systems comprise communicating components, and the only way to add ilities is with implicit invocation; moreover, in many cases it has to be synchronous."
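The basic mechanism under discussion can be sketched in a few lines. All names below are invented for illustration: a component announces events at an observable point, and an ility is added later by registering a handler, without modifying the component itself. Dispatch here is synchronous (handlers run inside the announcing call), matching Balzer's point that some extensions require it.

```python
# Minimal sketch (invented names) of implicit invocation: components
# announce events, and new behaviors are attached post hoc by subscribing,
# without changing the announcing component.

class EventBus:
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, event, handler):
        self._subscribers.setdefault(event, []).append(handler)

    def announce(self, event, **data):
        # Synchronous dispatch: every handler completes before the
        # announcer continues.
        for handler in self._subscribers.get(event, []):
            handler(**data)

bus = EventBus()

class Account:
    def __init__(self, bus):
        self.bus, self.balance = bus, 0
    def deposit(self, amount):
        self.balance += amount
        self.bus.announce("deposited", amount=amount, balance=self.balance)

# An "ility" (here, an audit trail) added after the fact via the event
# mechanism; Account knows nothing about auditing.
audit = []
bus.subscribe("deposited", lambda amount, balance: audit.append((amount, balance)))

acct = Account(bus)
acct.deposit(10)
```

The intentional-vs-accidental distinction discussed below maps directly onto this sketch: `announce` calls placed by the architect are intentional events, while intercepting calls that merely happen to cross an ORB boundary corresponds to exploiting accidental ones.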

At the philosophical (design principles) level, the discussion focused on whether engineers should exploit only intentional events, i.e., those that are observable in the system as a result of intentional architectural decisions, or whether it is acceptable to depend on accidental events, i.e., those that happen to be observable as a result of decisions left to detailed designers and implementers.

Sullivan's Abstract Behavioral Types (ABTs) [cf. TOSEM 94, TSE 96] elevate events to the level of architectural design abstractions, making them dual to, and as fully general as, operations (e.g., having names, type signatures, and semantics).  By contrast, procedure-call activities that can be intercepted within ORBs are accidentally observable events. The following positions were taken: first, architectural boundaries are natural places to monitor event occurrences and should always be monitorable; second, some techniques provide full visibility of events occurring within the underlying program interpreter.

Both sides of the discussion agreed that the set of events observable in a system limits the space of feasible transparent behavioral modifications.  The crux of the matter was whether system architects can anticipate all the events that a subsequent maintainer might need in order to effect desired post facto behavioral extensions.  No one disagreed that the answer is no: architects can't anticipate all future requirements.  On the other hand, it was clear that neither can a maintainer depend on accidental events being sufficient to enable desired extensions.  For example, the set of events visible as local function calls is not the same as the set of events visible as "ORB crossings," and it's possible that either of these sets, both, or neither is sufficient to enable a given extension.

The dual view is that a given extension requirement implies the need for observability of certain events.  Then the question is how do you get it?  I.e., what architectural features or design details can you exploit to gain necessary visibility to key event occurrences? One design principle was offered: that you should go back and change the architecture to incorporate the required events as architectural abstractions; the other was that you can exploit accidentally observable  events directly, if they suffice to enable satisfaction of the new extension requirements. The scribe has attached a post facto analysis of this issue below.

A key point was that attaching behavior to an event might compromise the underlying system: in synchronization, real time, security, etc.

Another point was made that generally there are many useful views of complex systems, e.g., one for base functionality, another for management and operations, and that different views might have different observable event sets permitting different kinds of extensions.

We devoted considerable time to elaborating the design space for detailed aspects of event mechanisms, especially what parameters are passed with event notifications.  Suggestions included the following: a fixed set of arguments obtained from the probed event (e.g., the caller and parameter list for a procedure invocation); a user-specified subset of that fixed set; a subset picked by a dynamically evaluated predicate; a set of parameters generated by a program registered with events; events as dual to operations (abstract behavioral types).  It was noted that the duality between operations and events doesn’t hold up in real-time systems because "it’s all events" in such systems.
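One of those design points -- a subscriber-supplied, dynamically evaluated predicate combined with a user-specified subset of the event's parameters -- can be sketched as follows. The API is invented for illustration.

```python
# Sketch (invented API) of predicate-filtered event delivery: a subscriber
# registers a predicate over event occurrences and names the subset of
# parameters it wants delivered.

class FilteredBus:
    def __init__(self):
        self._subs = []

    def subscribe(self, predicate, fields, handler):
        self._subs.append((predicate, fields, handler))

    def announce(self, **event):
        for predicate, fields, handler in self._subs:
            if predicate(event):
                # Deliver only the requested subset of the event's parameters.
                handler({f: event[f] for f in fields if f in event})

bus = FilteredBus()
seen = []
bus.subscribe(predicate=lambda e: e.get("latency_ms", 0) > 100,
              fields=["op", "latency_ms"],
              handler=seen.append)

bus.announce(op="read", caller="client-7", latency_ms=30)    # filtered out
bus.announce(op="write", caller="client-9", latency_ms=250)  # delivered, trimmed
```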

It was noted that events are usually used to effect behavioral extension leaving the probed behavior undisturbed, but that sometimes it is useful for a triggered procedure to change the state of the triggering entity.

Next we turned to the question of experience reports.  To get the discussion going, the following questions were posed: What should we not try to do with these mechanisms? What ilities have you added using such mechanisms, and how seamlessly? How much effort did it take? What is the status of the resulting system?

Responses included the following:

Sullivan Commentary on Essential vs. Accidental Events

As to whether designers should depend only on events for which architects anticipate needs, or on events that are observable owing to more or less arbitrary design decisions, it appears to the scribe that the following observations can be made.  Exploiting accidental events, as for any implementation details, provides opportunities for immediate gain but with two costs: increased complexity owing to the breaking of abstraction boundaries; and difficulty in long-term evolution, owing to increased complexity, but also because system integrity comes to depend on implementation decisions that, being the prerogative of the implementor, are subject to change without notice.  Yet often, the maintainer/user of a system has no way to make architectural changes, and so can be left with the exploitation of accidental events as the only real opportunity to effect desired behavioral extensions.

The decision to exploit accidental events ends up as an engineering decision that must account for both short- and long-term evolutionary benefits and costs.  The exploitation of accidental events procures a present benefit with uncertain future costs.  On the other hand, exploiting only those events that are observable as a result of architectural design reflects an assumption that it's better to pay more now to avoid greater future costs.  Again, though, sometimes -- perhaps especially in the worlds of legacy systems and commercial off-the-shelf componentry -- architectural changes just might not be feasible.

Finally, it is possible to elevate what are today accidentally observable events to the level of architecturally sanctioned events through standardization (de jure or de facto).  For example, if a standard stipulates that all function invocations that pass through an ORB shall be observable, then system architects who choose to use that standard are forced to accept the observability of such events as part of their system architectures, and to reason about the implications.  One implication in the given example is that maintainers have a right to use procedure-call events with architectural authority.  This approach imposes interesting constraints and obligations on system architects; for example, the use of remote procedure calls comes to imply architectural commitments.

Closing Remarks

Summary Statement from Dave Curtis, OMG

Dave Curtis commented that lots of OMG members participated in the workshop and many have influence over OMG direction so we can expect some changes from their actions. He told workshop participants that one specific and timely way to be involved is to review the OMG Component Model RFPs and send feedback to RFP authors including Umesh Bellur from Oracle.  The next OMG meeting is in Salt Lake City on February 9-13 1998.

Summary Statement from Todd Carrico, DARPA

Todd Carrico thanked all for coming. He stated that this workshop was a "first of a kind" in pulling DARPA researchers and other industry researchers and practitioners together. He cited as workshop benefits the wide community represented by the participants, consensus building across communities, and a consequent increased understanding of fundamental issues -- in the area of achieving system-wide ilities, we now know more about what, specifically, we can do. From the DARPA perspective, the workshop helps get DARPA more involved in industry and helps transfer DARPA technologies to industry.  There are a number of ways DARPA and OMG might interact, some covered in session IV-1.

Closing Remarks from Craig Thompson, OBJS

Craig Thompson thanked everyone for coming.  He stated that just as ilities cut across the functional decomposition of a system, so too has the workshop attracted several communities that have not traditionally talked enough to each other -- and the workshop may have helped to form a new community with some common understanding of the workshop problem domains, ilities and web-object integration.  At the very least the workshop has been educational, serving to alert everyone to a number of other interesting projects related to their work.  In fact, there seems to have been surprising consensus in some areas, leading to the hope that a common framework for ilities might be possible, and that some forms of object-web integration might happen a little sooner.  Several workshop participants have asked about follow-on workshops -- it might make sense to do this again in a year or so, or to choose a related theme.  Next time, we'll need to find a good way to drill down on some specific architectural approaches and understand them in much more detail.  We'll also have to provide more hallway time between sessions -- the sessions were pretty densely packed together.

Craig wished all a safe trip home -- and reminded any who had not yet taken advantage of Monterey in January that it is the beginning of whale watching season, the weather's nice, and Pt. Lobos is close by and beautiful.

Next Steps

Send ideas on next steps and concrete ways to accelerate progress in workshop focus areas to