The meeting followed this agenda:
Minutes were taken by David Wells, Craig Thompson, and Dennis Finn.
Thompson introduced the presentations. At this early stage of the Internet SIG (ISIG), we will have a variety of presentations on various projects that provide "experience reports" showing ways to combine OMG and Internet/Web technologies, the goal being to identify a few standard ways to do this.
Slides are OMG document internet/96-01-02.
Also see a recent trip report on the December IETF meeting sent to the OMG TC and ISIG (also available as OMG document internet/96-01-06). The next IETF meeting will be in Los Angeles on March 4-8, 1996.
IETF is a large organization; its meetings attract as many or more attendees than OMG's. Only three people in the ISIG meeting are up on IETF. Thompson commented that one of the roles of ISIG is to provide liaison reports to other relevant communities like IETF. Still, it is interesting that so few people are following both OMG and IETF.
Q: How many are familiar with Myrinet? No one was.
Q: What is IETF's policy on standards commercialization? IETF requires multiple interoperating commercial implementations before adopting a standard.
Q: How does IETF help if working groups do not agree? IETF has conflict-resolution procedures; SNMP is an example where more is needed. In general, IETF allows competing standards and lets the marketplace decide.
Q: Should ISIG research where IETF and OMG intersect, that is, do research for the rest of OMG? Thompson replied yes, if ISIG members do the work to identify the intersection and how to align the two organizations. He noted that the recent IETF trip report identifies a number of areas where OMG and IETF potentially intersect, but work would be needed in each area to say how. At present, this is not an ISIG work item. Volunteers?
Q: IETF favors wire specs vs. OMG's APIs. Why? Demour thinks this is not an issue; the two are complementary approaches. Dan Connolly said he would address this in his talk. Thompson said wire formats, often based on some sort of BNF, provide a way to share information; they allow multiple APIs on either end of a wire, say in different programming languages. On the other hand, it might make sense for there to be a standard MIME or HTML API/class library that programmers could use. That is, are wire formats and APIs duals? Not sure; it seems so.
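The wire-format/API duality discussed above can be illustrated with a small sketch. This is a hypothetical example, not an OMG or IETF artifact: a tiny line-based wire format in the spirit of MIME headers, with two different local APIs layered over the same bytes. The wire spec fixes interoperability; the APIs remain a local choice.

```python
def encode(headers):
    """Serialize a list of (name, value) pairs to the wire format."""
    return "".join(f"{n}: {v}\r\n" for n, v in headers) + "\r\n"

def decode(wire):
    """Parse the wire format back into (name, value) pairs."""
    pairs = []
    for line in wire.split("\r\n"):
        if not line:
            break                       # blank line terminates the headers
        name, _, value = line.partition(": ")
        pairs.append((name, value))
    return pairs

# API style 1: dictionary-like access to the decoded bytes.
class HeaderDict:
    def __init__(self, wire):
        self._pairs = decode(wire)
    def __getitem__(self, name):
        return next(v for n, v in self._pairs if n == name)

# API style 2: attribute-style access over the very same bytes.
class HeaderObject:
    def __init__(self, wire):
        for n, v in decode(wire):
            setattr(self, n.lower().replace("-", "_"), v)

wire = encode([("Content-Type", "text/html"), ("Content-Length", "42")])
assert HeaderDict(wire)["Content-Type"] == "text/html"
assert HeaderObject(wire).content_type == "text/html"
```

Both APIs interoperate because they agree only on the wire format, which is the sense in which a wire spec permits multiple APIs on either end.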
Q: What weaknesses of OMG specs do IETF people see? One, IIOP is not sufficiently compact. Two, CORBA needs to support asynchronous message sends. PostModern is not experiencing a problem using IIOP as a native protocol. Dennis Finn asked whether that will hold true for very busy CORBA servers; the Netscape site gets 10**6 hits a day. Someone commented that IIOP is a start, not meant as a native protocol, and OMG will need to get more sophisticated; OMG does need to address this. IIOP is OMG's binary communications protocol. Dan Connolly said IIOP could be the whole focus for ISIG. Thompson asked if this may be a job for the ORB Task Force (ORBTF). This needs more discussion on firstname.lastname@example.org with the ORB Task Force.
Q: On the notion of competing protocols, should OMG allow competing protocols?
Q: Does OMG need different protocols for palmtop, LAN, and WAN environments?
Introduction to W3C. W3C was founded in September 1994. Tim Berners-Lee (MIT), who invented WWW while at CERN, subsequently formed W3C, which now has 100 members, including Netscape, Sun, Digital, IBM, and members from the publishing community (Elsevier, ...) and electronic commerce (CyberCash, ...), that is, anyone with an interest in which way the web goes. Their focus is on interoperability and evolvability, and their mode of operation is to develop precompetitive technologies, freely available implementations, and eventually a suite of standards. W3C staff is currently 12-15 people at MIT, with a similar number at INRIA.
Connolly's IETF Experience. Connolly organized and shepherded the standardization of the HTML spec through IETF, starting in July 1994 and running through 1995, when it was published. He stated that HTML is not necessarily a clear fit in IETF; W3C may be the better home. W3C has a formal standards process, while IETF's is "rough consensus and working code." W3C is based on member payment, like OMG's. Connolly concurs that IETF does not have expertise in APIs.
What OMG and W3C Bring to Each Other. As a recent development, W3C is an OMG member, and vice versa. OMG has staff and facilities to discuss and standardize both platform technology (CORBA: IDL, ORB, ...) and domain technology (manufacturing, medical imaging, finance, ...). W3C's focus is on web infrastructure. We would like the Web infrastructure to be interoperable with OMG platform technology, so that we can share domain technology. W3C staff expertise is in evolvable systems and web formats. For instance, the amount of agreement needed to play the web game is very small: parties need only agree on URLs. The base facilities in CORBA are richer, more enabling, but at the same time more constraining. We feel there's a synergy somewhere in between.
Distributed Objects on the Internet. W3C is working in the OOP area, on WWW and CORBA. It's clear that we want to be able to take services specified in OMG IDL and access them via the ubiquitous base of web clients. Further, we want to increase the amount of shared technology in order to facilitate applications development. Finally, we want to integrate emerging mobile code technologies like Java, Safe-Tcl, and Telescript with this distributed object infrastructure.
Why not take CORBA IIOP as is? Jim Gettys is a W3C principal, came from the X Window System, and has done protocol stubs many times. He wants asynchrony (global mobile queued support), disconnected operation, and performance. Bill Janssen (Xerox, ILU) commented he hopes the Arch Board will fix this. HTTP-NG will probably include an ILU protocol/transport, tuned for wide-area usage, that can be used as a CORBA transport.
What will W3C focus on? In addition to his protocol-optimization experience, Gettys has experience with firewalls. Firewall administrators want publicly reviewed source code, and they need source to insert local policies. It is helpful to have network communities. This is not a deficiency in IIOP, just a reason to use ILU. Another possibility is a standard runtime: ILU has a rich API for sliding in new protocols and transports, and the implementation has no licensing restrictions. The interfaces/services that W3C will focus on are HTTP (obviously), plus CCI and CGI.
Related concerns. Mobile code comes to you and negotiates a set of services (threads, ...), policies (can it read and write files for persistence?), and a GUI API (OpenDoc? MFC? AWT?). Whoever controls these APIs will have immense power in the Internet marketplace. Licensing restrictions affect the technical evolvability of a system. W3C is positioning itself as a neutral convenor.
What can ISIG do to focus this? IIOP is one natural focus. What is missing? Microsoft's Internet strategy: both server and client are extensible and component-based. Should OMG do this? Should OMG standardize an object API to web services?
Slides are OMG document internet/96-01-03.
[Slide 1] Talk Cover Slide. This talk overviews some of the work currently being conducted by the ComponentWare(R) Consortium (CWC). The CWC was started in June 1994 as a child of a strategic partnership between I-Kinetics, IONA, and Sun; Sun has since dropped out. See the Appendix A slides for an overview of the CWC.
[Slide 2] Outline. The consortium focus is migrating legacy systems to object-based systems. Key CWC strategic user sites for the technology and products of the CWC are: NAVSEA (computer-aided logistics); JPL (small mission operations); Pratt & Whitney (2D->3D turbine design); Siemens, which is reengineering all its systems with a budget estimated at $1B+; and some capital-market firms that can't be disclosed because of the competitive nature of their business.
[Slide 3] Internet Today. Bruce stated that intranetting, connecting components within a corporation, is happening NOW on a massive scale, and it only takes a company a few months to get up and running. Internetting, that is, sharing information across organization/company boundaries, is going to happen more slowly over the next five years due to the need for security; intranets are not inhibited this way because security within the firewall is not an issue. Bruce noted that, like it or not, many customer sites see web technology as competitive with CORBA and web browsers as the next OS, not understanding that the two technologies are complementary (a risk to OMG).
[Slide 4] Internet + CORBA Working Together. LAN and client-server installations are often 5-50 workstations huddled around a server; NAVSEA has 100,000 nodes, so solutions must scale. They must also accommodate versions: eventually, we must remotely debug thousands of $5 components and continuously upgrade them to version y. We must assemble systems on demand and build ephemeral and virtual enterprises.
[Slide 5] Technical Requirements for Internet Components. Internet ComponentWare consists of autonomous, loosely coupled, shrink-wrapped objects that can roam across machines and live on networks. ComponentWare requires significant, advanced capabilities: migration, security, composability, groups, and life-cycle management. Each of these intranet requirements will cause a paradigm shift for CORBA. Bruce examined the needs for Internet ComponentWare based on CORBA.
[Slide 6] Component Migration. Performance and robustness are the key quality measures of components for distributed systems. Efficient load balancing is especially important for systems in which multiple users with different workload requirements share resources with long-lived distributed applications. Migration: a component must be able to migrate part or all of itself without requiring code or functional modification of the component. Installation management: a component must be able to manage its installation as well as its complete removal (uninstall). Persistence: a component must be able to save its state in a persistent store and later restore it.
[Slide 7] CORBA DB Component. The interface is exported to the desktop client, while the object executable (process) resides on a remote server. Methods are invoked by the client on the server. In this example, retrieving the value of an array cell executes a remote call. This is unacceptable on today's Internet; thus the need for migration.
[Slide 8] Internet ComponentWare. The CWDynaset is migrated to the client, so object method calls and property accesses execute locally, while the CWDatabase object stays local to the legacy database it wraps.
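The payoff of migration described in slides 7-8 can be sketched with a toy model. This is a hypothetical Python illustration (the class and method names are invented, not part of the CWC products): it contrasts per-cell remote access with migrating the data to the client, counting simulated network round trips.

```python
class RemoteArray:
    """Server-side array; every cell read is a simulated remote call."""
    def __init__(self, data):
        self._data = data
        self.round_trips = 0

    def get(self, i):
        self.round_trips += 1       # one network round trip per cell
        return self._data[i]

    def migrate(self):
        self.round_trips += 1       # one round trip ships the whole array
        return list(self._data)     # now a local copy on the client

server = RemoteArray(list(range(100)))

# Naive style (slide 7): 100 remote calls just to sum the array.
total_remote = sum(server.get(i) for i in range(100))
naive_trips = server.round_trips

# Migrated style (slide 8): one remote call, then purely local execution.
local = server.migrate()
total_local = sum(local)

assert total_remote == total_local == 4950
assert naive_trips == 100           # vs. a single trip after migration
```

The same arithmetic applies on a WAN: the naive design pays network latency per cell, while the migrated component pays it once.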
[Slide 9] Security. Authentication: a component must protect itself and its resources from outside threats; it must authenticate itself to its clients, and vice versa, and it must provide access controls. Licensing: a component must be able to support a wide range of external licensing policies as well as its own internal licensing policies. Audit and diagnostic notification: a component must supply audit events for external audit management and diagnostics for event logging. Versioning: a component must supply sufficient version and capability data for the client to determine whether it can use the component safely. Internal verification (diagnostics): a component should supply a minimum set of self-diagnostic tests for isolating functional faults or performance degradation. External verification: a component should supply a complete functional specification and a verification test suite that ensures the component is behaving properly. In the area of security, the key immediate need is user authentication; just as important for software vendors is licensing.
[Slide 10] Composability. Components can be assembled in unpredictable combinations; a component can be used in ways totally unanticipated by the original developer. Composability is the ability of components to be combined with other components: plug-and-play, combining components without coding, achieves complete functional composability. Control integration: a component must support a range of common control protocols. Data integration: a component must support a range of common data formats. Metadata: a component must be self-describing in any environment in which it finds itself; the component metadata should include a specification of its object model and the method and property signatures of each object.
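The metadata requirement above, a component that is self-describing wherever it lands, can be sketched as follows. This is a hypothetical Python illustration (the component and helper names are invented); CORBA achieves the same thing with the Interface Repository, and OLE with type libraries.

```python
import inspect

class SpreadsheetCell:
    """Toy component whose interface is discoverable at runtime."""
    def get_value(self) -> float:
        return 0.0
    def set_value(self, v: float) -> None:
        self._v = v

def describe(component_cls):
    """Extract {method_name: signature_string} metadata for the
    component's public methods, with no prior knowledge of the class."""
    return {
        name: str(inspect.signature(fn))
        for name, fn in inspect.getmembers(component_cls, inspect.isfunction)
        if not name.startswith("_")
    }

# An assembler can now check compatibility before wiring components.
meta = describe(SpreadsheetCell)
assert set(meta) == {"get_value", "set_value"}
assert meta["set_value"] == "(self, v: float) -> None"
```

The point is that the assembler consults published metadata rather than source code, which is what lets components combine in ways their authors never anticipated.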
[Slide 11] Component Groups. Bruce mentioned technology the consortium is building on: Orbix + ISIS, which provides fault-tolerant component groups, enabling data warehouses at different sites.
[Slide 12] Survey of Internet + CORBA Activity.
[Slides 13-18] CWC Virtual Enterprise Testbed. The CWC has commissioned an Internet-based Virtual Enterprise CORBA/component testbed in which technology providers interact with strategic partners (early adopters) across multiple autonomous, independent centers. In the virtual enterprise, recovering a machine or resource is the limiting factor, so they want to replicate critical components and automate workflow. This view causes a complete change in all cost estimates and leads to a virtual warehouse of components. The Virtual Application Warehouse outlines the technology approach behind the key points of the business plan.
[Slide 19] Summary. The 90's will be the decade of infrastructures. Internet + CORBA is (yet another) emerging infrastructure. Internet + Components is a "no brainer".
[Slides 20-21] Further Reading
[Slides 24-28] Appendix A: Overview of the CWC. The ComponentWare Consortium (CWC), comprising eight leading companies in object-based distributed computing, won $4 million in TRP funding to facilitate greater software reuse in the 90's. The Technology Reinvestment Project (TRP) is a federal initiative to integrate the commercial and defense sectors through cooperative R&D and commercialization of critical high technology. I-Kinetics will manage the CWC, whose membership includes BBN, Interactive Objects, IONA Technologies, NetLinks Technology, Siemens, and Pratt & Whitney. CWC plans to do for information systems what chip standardization did for computer manufacturing. The goal of the CWC is to package data and applications as standard, reusable software components (ComponentWare(R)). It will do this by taking advantage of emerging object frameworks such as OMG's CORBA and Microsoft's OLE/COM, allowing end users to assemble custom applications using components from anywhere on the network. This approach requires interoperability between different DOM frameworks, a key development goal of the CWC. CWC will submit its developments to the OMG for possible adoption. CWC is working with the two other TRP-funded consortia, led by Andersen Consulting and Template Software, to advance object infrastructure interoperability. Total research and development by members will exceed $30,000,000 over the next two years. CWC will advance CORBA in the areas of legacy integration tools, concurrency control, security, and support for high-availability systems. The biggest problem early adopters face is that they have to redevelop their current systems; CWC will provide component-ware that enables organizations to use their existing applications rather than tearing everything down and building from scratch. The CWC's goal is to be a catalyst for the emerging CORBA market.
[Slides 29-48] Appendix B: ComponentWare(R) for CORBA+OLE. Nirvana for all these customers, given that systems change as fast as organizations, is how to reconfigure to get "faster, better, cheaper." So the approach is to leverage existing systems. So what is componentware? Developers develop and users assemble on the fly. Compared to a class library, where you can mostly only reuse code, components deliver on the promise to also reuse design, analysis, and verification in a platform-independent way. Bruce identified some classes of reusable components that can be packaged as plug-and-play, shrink-wrapped units that can be shipped and installed on CORBA or OLE frameworks or both: DB Component(TM) for Oracle(R), Sybase(R), Informix(R), OODBMSs, PowerBuilder, OCX, and bridges for OLE/CORBA, corbaX, corbaY. (See the ComponentWare Architecture White Paper.)
Then he focused on ObjectPump, an input/output wrapper that captures the I/O of production systems that themselves cannot be perturbed. It is interoperable with OLE/COM, so a legacy application wrapped with CORBA can now talk to Excel on a PC.
Slides are OMG document internet/96-01-04. Also see the Teknowledge URL and navigate to the JTF/ATD web server specification.
The Web Server project is part of the JTF/ATD reference architecture for command and control (C2). Many ARPA military projects have adopted this architecture or are extending it for use in domains such as modeling and simulation. In the architecture, various domain-oriented collections of applications for the military task force commander, called "anchor desks" (e.g., planning, logistics, weather, and others), depend on an infrastructure of services described in IDL and implemented in C++. Collections of services (called servers) provide capabilities for maintaining communications in a bandwidth-limited environment, access to heterogeneous data sources, web management, map creation, display and manipulation, plan management, situation assessment, etc. A C2 schema (several thousand object classes) is described in IDL and represents the description of shared command and control entities.
The JTF/ATD project is using Iona's Orbix and a GOTS (government off-the-shelf) CORBA product, CORBUS, developed by MITRE. The two are being integrated, and others may be considered later. A dedicated Iona representative in their Boston office supports the project as problems are encountered, since this is one of the largest Orbix development projects currently in progress.
The JTF ATD Web Server project has defined a number of structured web types for plans, situations, maps, and models, but arbitrary webs can be constructed on the fly. The Web Server is actually a set of IDL and C++ libraries that are compiled into other servers and applications, which inherit data structures and methods for creating, accessing, and managing webs. Webs represent arbitrary graphs consisting of specialized node and link components which can be named, typed, and navigated bi-directionally (from any node/web to any other node/web connected by a link). The Web Server does not have any knowledge of the internal contents of nodes; it is the responsibility of the servers and applications to extend the general data structures and methods contained in the Web Server for capabilities such as display, evaluation, or manipulation of contents. Web Server webs and nodes are not WWW objects directly, but can be published in HTML or other representations that can be accessed by WWW browsers through the Web Server WWW Gateway. Military requirements for the Web Server include security, robustness, reliability, and replication of webs and their constituent collections of nodes and links in a bandwidth-adaptive manner. Webs and nodes are versionable to help provide consistency among different views of information and what-iffing. Monitors can be set on any specified nodes and webs to detect specific kinds of changes, and triggers start particular actions based on the changes detected. There is a web editing tool (the Webber) which can operate on the specialized JTF ATD webs or WWW webs, using a point-and-click GUI for establishing and editing links between any nodes and webs to form larger webs. The JTF ATD project is using CORBA 1.2 services and implementing others as needed until these are addressed by CORBA 2.0 (or later) compliant products.
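The web data model described above, named, typed nodes joined by typed links that can be navigated in both directions, can be sketched in a few lines. This is a hypothetical Python illustration only; the actual Web Server is a set of IDL and C++ libraries, and these class names are invented.

```python
class Node:
    """A named, typed node; link lists support bidirectional navigation."""
    def __init__(self, name, node_type):
        self.name, self.node_type = name, node_type
        self.out_links = []     # links where this node is the source
        self.in_links = []      # links where this node is the destination

class Link:
    """A typed link; registering with both endpoints makes the graph
    navigable from either end."""
    def __init__(self, link_type, src, dst):
        self.link_type, self.src, self.dst = link_type, src, dst
        src.out_links.append(self)
        dst.in_links.append(self)

# A tiny web: a situation node linked to a plan node.
plan = Node("op-plan-1", "plan")
sitrep = Node("sitrep-7", "situation")
Link("assesses", sitrep, plan)

# Navigate the same link forward and backward.
assert sitrep.out_links[0].dst is plan
assert plan.in_links[0].src is sitrep
assert plan.in_links[0].link_type == "assesses"
```

Note that, as in the Web Server, the graph machinery knows nothing about node contents; applications would subclass `Node` to add display or evaluation behavior.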
JTF ATD developers found it necessary to divide the collection of object class definitions into broad, high-level classes defined in IDL and more specific, fine-grained classes defined in C++, due to IDL compiler limitations (at around 2,000 classes) in the Sun Solaris and HP/UX environments.
While the webs which the Web Server manages are not directly WWW compatible, a bi-directional gateway between the JTF ATD and WWW environments has been implemented to allow conversion of webs in either direction. Version 1.0 was recently released within ARPA and DoD and access to the prototype software and documentation can be requested by contacting Lee Erman at Teknowledge (email@example.com, (415) 424-0500 x 422). This work is being sponsored by ARPA at the Navy Command, Control and Ocean Surveillance Center (NCCOSC) Research, Development, Test and Evaluation Division (NRaD) on Point Loma in San Diego. The project is currently funded through FY96 and its deliverables are expected to continue to be enhanced in the future within the JTF ATD and other ARPA/DoD programs, including the Leading Edge Services (LES) for the Global Command and Control System (GCCS). The project is on a fast path to deployment and some parts may be delivered to the CJTF and subordinates in Bosnia by April 96.
Slides are OMG document internet/96-01-05.
Rick Goodwin showed a slide of the F-22 with lots of information systems that must be interconnected. The command and control problem is similar but geographically distributed: the goal is to put easy-to-use information systems in place to keep the top brass off their knees working on maps during battles. The JTF/ATD vision is to bring lots of sites together in a WAN in a manner that is faster (years to months), better, and cheaper. Technically, the problem involves linking together many data sources and getting information to flow between them. CORBA is the backplane hooking information sources together. This includes wrappers for SQL-based relational DBMS systems, OSQL-based OODBs (Objectivity sql++ queries and getfirstobject, getnextobject, and getallobjects), and flat files, as well as "mediators" which federate multiple data sources. They have developed login, data location, and schema services, including a shared C2 schema in IDL. Legacy DBMS systems do not know about objects, so the IDL-based schema contains mappings to data-source objects. Queries result in in-memory collections of fine-grained C++ objects, which are needed for applications like simulations. Refinements include query scheduling, notification, caching, triggers, and communications bandwidth adaptivity.
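The wrapper/mediator pattern above can be sketched briefly. This is a hypothetical Python illustration (all names invented; the JTF/ATD implementation is IDL/C++): two heterogeneous sources are hidden behind wrappers with a common query interface, and a mediator federates them into one in-memory collection of shared-schema objects.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    """A shared-schema object, standing in for a C2 schema entity."""
    name: str
    location: str

class SqlWrapper:
    """Wraps a relational source; rows are tuples."""
    rows = [("1st Bn", "grid-A3")]
    def query(self):
        return [Unit(n, loc) for n, loc in self.rows]

class FlatFileWrapper:
    """Wraps a flat file; records are 'name|location' lines."""
    lines = ["2nd Bn|grid-B7"]
    def query(self):
        return [Unit(*ln.split("|")) for ln in self.lines]

class Mediator:
    """Federates multiple wrapped sources behind one query interface."""
    def __init__(self, sources):
        self.sources = sources
    def query(self):
        return [u for s in self.sources for u in s.query()]

units = Mediator([SqlWrapper(), FlatFileWrapper()]).query()
assert [u.name for u in units] == ["1st Bn", "2nd Bn"]
```

The client sees only `Unit` objects in the shared schema; the fact that one source speaks SQL rows and the other flat-file records is confined to the wrappers.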
JTF is an early adopter of ORB technology -- "before ORB technology was ready." They are working on moving query and collection classes from Orbix to ObjectBroker to HP ORB Plus. Now they want a global yellow pages. They also developed their own object service implementations for naming and transactions; some of these will be replaced when service providers provide them. They are working with others in the ARPA community on distributed query optimization and query relaxation.
Steps for the next year are to focus on unstructured information sources, including images, intelligent distribution and positioning of objects in video (working with CMU), natural language interfaces, information retrieval, GIS (working with Open GIS), and updates. Goodwin introduced Mike Dean (BBN), who is working on the JTF map server. They need object views so that different C2 roles (e.g., staff, field) see different views of situations. They are working on CGI queries and have not yet moved to Java.
They are working on a CASE tool to generate IDL and then map to OOPLs.
We reviewed the OMG ISIG Charter and Mission Statements.
OMG and Internet technologies are already meeting and joining together to provide the architectural basis for enterprise integration. Both are pervasive technologies that lie at the heart of industry plans for better next-generation application and information integration. OMG is the key industrial organization developing open, interoperable, component-based interface standards based on object technology. The Internet Society provides a ubiquitous base for distributed networking as well as tool suites that are increasingly linking global information sources. The Internet is quickly becoming the preferred medium for the electronic exchange of information, and, as a medium for the exchange of messages among distributed objects, it has vast potential.
Internet and OMG technologies are complementary: the Internet provides tools for unstructured and semi-structured applications; OMG provides tools for semi-structured and structured applications. A union may provide a unification of information sources, making it considerably easier to access and operate on the wide range of data, information and knowledge. The OMG CORBA 2.0 specification (e.g., IIOP) provides one way that OMG and the Internet combine but we can identify others as well: use of OMG services to locate, query, and share Internet information sources; use of Internet tools like Mosaic to view structured and semi-structured OMG information bases; additions to OMG and Internet architectures for supporting business rules and agent scripting; additions to subsume repositories, workflow, CASE, DBMS, KBMS, and simulations; and more. It is clear that these pervasive technologies could gracefully interoperate at several architectural levels.
The mission of the OMG Internet Special Interest Group (ISIG) is to identify development work needed to better align the OMG Object Management Architecture with the Internet, World Wide Web, and various Internet tools and facilities. It is intended to bring enhanced interoperability, reusability, application portability, etc. to the Internet based on OMG technologies. At the same time, the ISIG will bring challenges to OMG from the Internet community on scalability of OMG architectures and mechanisms to make OMG technology pervasive. This potential can best be realized through the cooperative efforts of the OMG, the Internet Society, the W3 Consortium (W3C), and others.
The Internet SIG shall:
ISIG has several decisions pending regarding its future. We discussed the following:
Thompson reviewed the draft Call for Participation for a planned Joint W3C/OMG Workshop On Distributed Objects And Mobile Code. The workshop will be held in Boston, Massachusetts on June 24-25, 1996. A 1-page position paper is due to firstname.lastname@example.org by March 11, 1996. There will be a 50 person limit on participants. See the forthcoming separate email to email@example.com for the official workshop announcement.
See OMG document internet/96-01-07.
Thompson handed out a skeletal draft of an OMG Internet SIG White paper intended to capture requirements, architecture, issues, and experience reports for ISIG. The idea is to use this paper to scope the ISIG area and identify an initial list of services, facilities, or interfaces that might make sense as a program of work for ISIG.
A follow-on action item is for ISIG members to review the white paper and determine if it is the right next step for ISIG. An alternative is for ISIG to become a Task Force and issue an RFI to receive similar information in more depth from a few responders. Our plan for now is to work on the white paper and simultaneously draft an RFI aimed at the Washington, D.C. ISIG meeting. The White Paper provides a good start toward an Internet Architecture document to augment the OMG Object Management Architecture in the same way the OSA and CFA augment it.