[Attached is a draft of the Minutes of Meeting #5 for the OMG Internet SIG. Speakers are invited to revise the notes on their presentations by sending revisions to Craig Thompson, and also to provide Shel Sutton with an online presentation. These should be available in a revision of these minutes.]
These are notes taken during the meeting and may contain some inaccuracies. Online presentations will be available soon for most or all of the talks.
Shel Sutton (OMG Internet SIG co-chair, MITRE) opened the meeting by stating that this is a joint meeting with the DoD Intelligence community's Intelink Engineering Board. Shel introduced speakers during the presentational part of the meeting and Craig Thompson (OMG Internet SIG co-chair, OBJS) took minutes.
In 1993, work began to apply Internet technology to Intelligence community problems. The Deputy Director of Defense co-authored a memo stating that Intelink would be the key strategic backbone for military intelligence. Secretary Perry's memo in 1993 and the Woolsey/Deutsch memo in 1994 established Intelink as strategic for the intelligence community (in effect, "go use it"). Today, the Intelligence Systems Board and Intelligence Systems S.. (IISB/ISS) derive their authority from the Chief of the Intelligence Community and DEPSECDEF/ASD (C3I). Panels do the work via member organizations. The IEB makes policies for Internet technology standards within the intelligence community. Intelink is not yet the DoD's service of first choice.
The larger context is: DoD has lots of standalone (isolated) and stove pipe systems with diverse interfaces. There are need-to-know, cultural, mission-specific, geographic, quality of service, and downsizing issues. Interoperability is the key need and the approach is to use Internet and Web technologies and maybe object technologies if they fit. Requirements are any-to-any connectivity, subject to the rules of security, single interface to the community (producer and consumer), with the goal of information sharing within these constraints.
There are four Intelink systems:
Intelink is a service (an application on top of the JWICS stack). In implementation it uses existing networks (JWICS): they did not have to build new infrastructure, just add on to what was there. Intelink is one of the front ends for the Global Command and Control System (GCCS).
A main goal is information transfer. Statistics: 45 gigabytes per year, 150,000 accesses per week. 80% of NSA production available within 2 hours. Intelink-S is growing 50% a month. From initial 19 sites there are now 70 sites and 109 servers with 50,000 users.
Intelink uses a subset of normal Internet tools (web, ftp, gopher, ...). It uses a common set or profile or stack of (COTS where possible and GOTS) standards:
Q: Do your standards integrate security, or wrap it? A: The vision is flow of information of various security classifications down and up. Right now, however, we use standard COTS tools not specially augmented with multilevel security (since these tools do not provide the right hooks). Different levels of security are provided by different physical levels of security, so upgrading and downgrading information between security levels is an issue.
As an introduction to OMG for the Intelink community, Thompson explained the overall OMG OMA architecture including the ORB and ORB interoperability, Object Services, Common Facilities, Domain Objects, Application Objects, and how we might add Internet Services to the horizontal OMG services. He described the progress of the Internet SIG to date as well as the responsibilities of OMG SIGs and Task Forces. Finally, he requested attendees to read and provide feedback on a draft OMG Internet Task Force Mission Statement and RFI that we will be proposing to the OMG Platform Technical Committee to reconstitute the OMG Internet SIG as a Task Force. [See the After the Meeting section below; we did issue the RFI but deferred indefinitely the request to become a task force.]
Internet SIG hosted presentations by the DoD Command and Control JTF/ATD program at Meeting #3 in San Diego in January 1996. JTF is addressing command and control in a new era - the problem areas are smaller, come-as-you-are situations, both military and humanitarian.
Many DARPA and Rome projects funnel their technology into JTF. In general, R&D moves to Advanced Technology Demonstrations then to Advanced Concept Technology Demonstrations then to the ARPA/DISA program office then to Global Command and Control Systems (GCCS).
Mike reviewed the overall JTF architecture (overall emphasis is planning).
Their work is WAN based. They use CORBA, C++, Web, and now Java. They note a progression: Stove Pipe --> frontend + app + backend data access --> component based. In their environment, bandwidth is a scarce resource -- applications become aware of QoS and negotiate this in a circuit oriented or datagram oriented way. [Note: OMG needs a QoS service along this line. -cwt]
Mike described their experiences with OMG technology:
JTF/ATD is influencing DII/GCCS and is on the bleeding edge. Emergence of Java is making them re-think some of their architecture and separating hype from reality is an issue.
Q: What is the pain threshold for applet size? A: It is bandwidth limited.
Web* is a $14M R&D project to develop a patient information environment. Goals of the project are to leverage heterogeneous information systems, connecting them via CORBA and the Web. The first-generation architecture and implementation are deployed in two clinics in two hospitals. Now they are on the second round.
They use the Web as a front end and CORBA on the server. They started using Orbix when it came out in June 1993. Web*'s architecture is based on CGI scripts. But healthcare is session oriented, while HTTP is stateless, so they developed a State Survival feature; now Netscape has cookies. They also developed TclDii: they intersperse Tcl into HTML, which a CGI interpreter expands. The backend communicates with DBMSs. TclDii interfaces to the Tcl library, Scheme, and CORBA libraries.
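The state-survival idea above can be sketched roughly as follows: the server keeps per-session state keyed by a token that each generated page threads back to the server (e.g., as a hidden form field). This is a minimal illustrative sketch, not Web*'s actual implementation; the class and method names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of "state survival" over stateless HTTP/CGI, assuming a
// server-side session table keyed by a token carried in each generated page.
class StateSurvival {
    private final Map<String, Map<String, String>> sessions = new HashMap<>();
    private int nextId = 0;

    // Open a session and hand back the token that generated pages must carry.
    String openSession() {
        String token = "s" + (nextId++);
        sessions.put(token, new HashMap<String, String>());
        return token;
    }

    void put(String token, String key, String value) {
        sessions.get(token).put(key, value);
    }

    String get(String token, String key) {
        return sessions.get(token).get(key);
    }

    // The HTML fragment a CGI script would emit so that the next request
    // carries the session identity back to the server.
    String hiddenField(String token) {
        return "<input type=\"hidden\" name=\"session\" value=\"" + token + "\">";
    }
}
```

Netscape's cookies later provided the same token-threading automatically, which is why the minutes note the feature was overtaken by them.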
The approach they took was successful and provides a uniform interface to the entire clinic via forms.
Juggy recommends burying CGI.
The competition: a doctor can scan an entire paper patient record in 10 seconds, versus the 5 seconds per page that Web pages take. So they prefetched patient records by hacking Mosaic. But saving dictation still takes too long. The current implementation saves dictation to Oracle directly, but CORBA sets up the negotiation.
Not every ORB claiming IIOP actually supports it at present. Juggy suggests making IIOP another protocol supported by the Web and the major vendors, and suggests an IIOP URL as an object reference. So how does a client-side proxy make use of this?
Java IDL plugins are part of JOE. There are many IDL-to-Java mappings, including OrbixWeb's, and all are slightly different. The problems are getting locked into a client process, and portability. JavaSoft has distanced itself from SunSoft; the two are quite different. Being a beta site for JOE is what you are looking for.
The DISCUS project does R&D into OO software infrastructure (4-5 years, 1-2 M per year). They have several Government users and do tech transfer into NSA, NIMA, others, as well as injecting government requirements into commercial use.
The 1993 view of DISCUS was as an interoperability backbone. The system was demoed in 1994. DISCUS provides three special services, for imagery, maps, and text. 1995 work ties the Intelink web browser in to access CORBA via DISCUS wrappers. Now they are beginning to define applets for functionality. They define DWOs (DISCUS Web Objects).
They commissioned a study on DISCUS and OLE/COM and concluded that investment in CORBA is worthwhile for the next 5 years. Work in progress is on DISCUS/Microsoft integration.
With respect to Tech Transfer, many contractors have the DISCUS framework. All source is given to tech transfer partners. New versions appear in a 1-1.5 year cycle. The next version of DISCUS will appear in October 96. It uses Orbix 2.0 and ObjectBroker.
Challenges: can't share object services across ORB implementations; OMG is not interoperable with Microsoft; must solve DCE integration issue; Java binding, availability of commercial products with IDL.
The ILU project uses a central object model and generates stubs to it. They took this approach to define their own IDL and then support many RPCs. They also support many transport protocols. They decided not to make new standards of their own; they wanted to take existing services and talk to them via ILU (e.g., FrameMaker, a printer, ...), with each service preserving semantics and bit patterns. So ILU should work efficiently within the same environment and/or be optimized across different cases. Result: they have the only ORB that works across multiple languages. ILU supports POSIX, Windows, and MacOS (decayed), and provides a free ORB.
ILU ISL is IDL plus some other things.
ILU's CORBA IIOP protocol works well and can find bugs in other IIOP implementations. ILU has run-time registration for new RPC protocols.
Transport protocols use a streams layering approach and layers can be composed, e.g., rfc1831rm | security | rfc1831rm | tcp. New transport protocols can be added at runtime.
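The layered composition described above (e.g., rfc1831rm | security | tcp) can be sketched as a stack of wrappers, each transforming data on the way down and undoing the transform on the way back up. This is an illustrative sketch only; the layer names and the toy "security" transform are assumptions, not ILU's API.

```java
import java.util.Arrays;

// Illustrative sketch of composable transport layers: a message passes
// down through each layer when sent and back up in reverse order when
// received. Real layers (record marking, security, TCP) are simulated here.
class Layers {
    interface Layer {
        byte[] down(byte[] data); // applied when sending
        byte[] up(byte[] data);   // applied when receiving
    }

    // Toy record-marking layer: prepend a one-byte length.
    static Layer recordMark = new Layer() {
        public byte[] down(byte[] d) {
            byte[] out = new byte[d.length + 1];
            out[0] = (byte) d.length;
            System.arraycopy(d, 0, out, 1, d.length);
            return out;
        }
        public byte[] up(byte[] d) {
            return Arrays.copyOfRange(d, 1, 1 + (d[0] & 0xff));
        }
    };

    // Toy "security" layer: XOR with a fixed key byte (illustration only).
    static Layer security = new Layer() {
        byte[] xor(byte[] d) {
            byte[] out = d.clone();
            for (int i = 0; i < out.length; i++) out[i] ^= 0x5a;
            return out;
        }
        public byte[] down(byte[] d) { return xor(d); }
        public byte[] up(byte[] d) { return xor(d); }
    };

    // Compose a stack, outermost layer listed first, as in "rm | security".
    static byte[] send(byte[] data, Layer... stack) {
        for (Layer l : stack) data = l.down(data);
        return data;
    }
    static byte[] receive(byte[] data, Layer... stack) {
        for (int i = stack.length - 1; i >= 0; i--) data = stack[i].up(data);
        return data;
    }
}
```

Because each layer only sees bytes, new layers can be registered and composed at runtime, which matches the runtime-registration point above.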
The ILU team has written seven name services; their conclusion is that name services are application specific. So they provided a simple binding to find a name service object. And people use that as a name service!
CORBA IDL has deficiencies due to its C++ heritage, e.g., optionality, and #include is a bad idea.
ILU extends CORBA with ... long list including different languages in same address space.
The ILU team is planning to add real attributes and client-side caching. They want to support call-by-value. They don't want to support the any type (a bad idea).
ILU uses standard C due to portability and standard compilers.
Many different projects use ILU, including the W3C's (Jim Gettys) super-efficient RPC in HTTP-NG.
Three Digital Libraries projects are using ILU. Some places to start on the web are
The key problem in Digital Libraries is large scale decentralized management. The architecture they have settled on distinguishes digital library objects, how to name Internet resources, and how to talk to them in repositories (repository access protocol).
DLOs (digital library objects) involve a name + metadata + content blob(s) + signature + security model. Naming is discussed at IETF meetings (location independence, global uniqueness, persistence of names over long time spans, fast resolution, decentralized administration and control). A client interacts with the handle and repository systems independently. Mirroring, caching, etc. is the difference between working and not working. The Internet security model is important, so negotiated access, combining access and security, is a main idea.
Design work was completed during the summer of 1995, and a demo system was completed in December 1995. They used ILU 1.8 (and are moving to 2.0), Shore (from Wisconsin), C++, Netscape CGI, and Python scripts. The system has been bug free and productivity gains are apparent.
Q: How are addresses resolved for known items? A: First with DLS, then CORBA.
Q: How does this integrate into a web crawler? How is metadata made available to a web crawler? A: Much of this work is in D-Lib Magazine (www.dlib.org).
Q: (about metadata) A: The Warwick framework was the conclusion from a recent workshop. It suggested decentralized metadata.
The master plan for modeling and simulation has yielded Common Technical Framework, which applies to Weapons simulation as well as engineering simulation.
The process started in 1994 to identify a next generation simulation architecture for DoD. It is folded with several other ARPA architectures. Now the work is handed to the DDR&E architecture group to go through a set of prototypes, so it is tested and usable across domains.
In the recent past, simulations that were made interoperable were made so idiosyncratically. Now the move is toward a distributed operating environment run-time infrastructure consisting of
The object model template documents two kinds of object models. A federation of simulations has its own federation object model, which includes a contract for shared runtime information. The simulation object model (a different sort of object model) is a brochure of metadata describing a given simulation. These become available via libraries on the Internet.
There are various federations -- one includes an engineering federation with people and machines in the simulation loop; another is faster than real time. A goal is a common run time infrastructure across broad DoD.
[Richard Weatherly takes over for the rest of the presentation.]
They are using IDL; DMSO is neutral on CORBA. There is one universal RTI Executive, from which a federation execution object is created; this is used to create or destroy federations. There are five categories of service. They have several QoS attributes, many outside today's CORBA, so they have a hybrid: CORBA sets up and manages the simulation, but in the communications layer the goal is fast communication, so the simulation runs fast and not in CORBA (so far). They are doing many inter-ORB experiments; experience has been mixed. ANY is bad. They must go outside the CORBA framework for the visualization community.
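The RTI Executive pattern described above, one universal executive from which federation executions are created and destroyed, is essentially a factory over a registry. A minimal sketch, with illustrative names rather than the actual HLA/RTI interfaces:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the RTI Executive pattern: a single executive object
// creates, tracks, and destroys federation executions by name.
class RtiExec {
    static class FederationExecution {
        final String name;
        FederationExecution(String name) { this.name = name; }
    }

    private final Map<String, FederationExecution> running = new HashMap<>();

    // Create a named federation execution; refuse duplicates.
    FederationExecution create(String name) {
        if (running.containsKey(name))
            throw new IllegalStateException("federation already exists: " + name);
        FederationExecution fed = new FederationExecution(name);
        running.put(name, fed);
        return fed;
    }

    void destroy(String name) { running.remove(name); }

    boolean exists(String name) { return running.containsKey(name); }
}
```

In the real design the executive is the one CORBA-visible bootstrap point, while the high-rate simulation traffic bypasses CORBA, which matches the hybrid described above.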
Q: Could DMSO simulation work result in a new OMG SIG? A: There is interest in using OMG as a tech-transfer vehicle.
Q: Is there Fortran support in ILU for large simulations? And ModSim, C++, and others? A: Technical specifications are at http://www.dmso.mil, where there is a wealth of specifications: one-stop shopping for DoD DMSO.
Check out: http://corbanet.dstc.edu.au
The purpose of CORBAnet is to demonstrate the interoperability of ORBs. The application models many companies; each has a company meeting room with a set of bookings, and each booking is a single meeting. Each company models company, room, and meeting objects. The demo shows many ORBs working together; you select whichever one you want for your client.
The architectures supported:
The lessons learned are:
DSTC has been using Black Widow and Sun's JOE recently. The Java language binding will go through OMG as an RFC, but this must also happen in Netscape and the IETF. Last week, Netscape killed their internal ORB and is moving toward Black Widow's Java binding.
Free software -- students need to have free software. CORBA must be free.
Q: Does CORBAnet use IIOP? A: They are waiting on software for outside the firewall; it still has an Orbix-only implementation at the moment.
Q: How rich is the IDL used in the demo? A: Not incredibly rich -- no unions or anys, just enums and sequences. It would be interesting to extend it.
Q: What is next? A: Interoperability testing on a number of services. Many ORBs ship with naming. There is no sense in doing this for persistence, but yes for naming, transactions, queries, and especially security.
Bhavani's talk identified a lot of research issues for the traditional DBMS community based on recent meetings in Milan and Hong Kong. They want to set up an IEEE conference on DBMS and the Internet and are working on a White Paper for September and a conference by April 1997.
There is increasing demand for accessing DBMSs over the Internet -- text to date, now relational and OO. But there are outstanding issues, so they are looking at the implications of DBMSs and ORBs. The picture is the Internet connecting many DBMSs. Issues are object management, query management, transaction management, storage management, security management, integrity management, and metadata management. A reference is the June 1996 CACM issue on electronic commerce.
What is the impact of the Internet on these issues:
CORBA provides ways to interoperate. But CORBA does not solve the DBMS heterogeneity of different data models. Current work is on coarse-grained encapsulation of whole data sources, but work on fine-grained encapsulation is still needed.
Q: You need to add QoS to the list if you will play back streaming data.
Comment: CORBA + services = more than a DBMS. So we will see richer architectures.
Rainer's collaborator is V. Ivonnikov (Institute of System Programming, Moscow).
Rainer presented a theory view of how to add behaviors to object technology.
Object covers provide hierarchical nesting of objects. Identity and location are meta-information shared by objects in an object space. "Cover" is the word they use for a context, so a location cover fault moves objects across boundaries. Objects are also covered by a class (cover), orthogonal to the other two covers. C++ -> Smalltalk is a class cover fault. Other covers are security covers, persistence, activity, etc.; a Java thread is a possibility. A question was: should type equivalency (e.g., of Fahrenheit and Centigrade) be considered a cover? [Other covers not mentioned in the talk: versions, replication, transactions/concurrency, distribution. --cwt]
Q: What happens if we transport objects across cover boundaries? Repositories raise the issue: what happens when we ask a query that crosses cover boundaries? Are repositories themselves the same as object engines? They are working on understanding this better.
Q: How do I plan when converting between representations? A: One binding-time issue is whether to bind at design time, statically, dynamically, or even via traders. Do we move the object to the class or the class to the object?
Q: Is this related to transactions? A: It is related to optimistic transactions and lazy update.
How do you add a new system into an existing environment? As a CIO, when do you commit $50M to a technology; when do you make that transition? "God created the world in seven days but had the advantage of no installed base when he did so." The DoD Command and Control System of Systems is like a multinational company that is continuously making various subsystems interoperate; there are many stovepipe systems and funding fiefdoms. The interoperability problem is: can a person get the information he needs even though the mix and match of systems changes with Bosnia, Haiti, and so on? One solution is a Common Infrastructure approach: you mandate standards before they are true standards (like the CIO who bet his company on a technology). Plus, you need to evolve the infrastructure. So how do we acquire systems? The old view was: you write a spec, but by the time the software is ready the environment has evolved, so you spend 60% on maintenance. DoD software expenditure was $33B in '94. A second view is evolutionary, or chaotic, system acquisition.
What you want is interoperability and diversity. It must reduce training and solve user functionality problems. You need a roadmap or plan to predict future spending, and you need a way to evolve the functionality and replace parts. One idea is communities of interest: within a group, do what you want, but between groups, communicate in an agreed-upon way.
The Web, with a common interface via CGI or Java to any backend DBMS, gives us a real leg up on interoperability. CORBA encapsulation converts a complex-of-boxes view into a simpler view of a backplane and a collection of modules (a much simpler picture). So the transition for a system-of-systems is (a) Web-like and (b) CORBA-like, with the system of systems becoming a collection of services.
Advantages of OMG architecture are:
Advantages of Web
Q: CGI scripts are minutes of investment; CORBA is much, much more. What is the answer?
See http://www.amsinc.com or contact Bob via email at firstname.lastname@example.org
People with problems currently have many pieces of solutions to choose or fit together: apps, 4GL, Web Browser, Web Server - ORB client - middleware - ORB server - legacy systems, etc. Few will architect the solution, many will develop Perl scripts. So the winner so far is the Web and Mobile code -- rapid implementation. We can expect to wind up with tons of legacy systems.
Bob talked about the upcoming W3C-OMG workshop and about the future of CGI including Netscape extensions, Microsoft extensions, and Oracle (which just crashed its web group).
Bob talked about various Web architectures:
Bob provided simple definitions for service, server, client, broker (middleware SW that facilitates communication across distributed clients and services), two-tier, three-tier. Things can be brokers and servers at once.
Examples of middleware are the Web, ORBs, transactions, messaging, data access, agents, and traders. These are all things you can put in the middle of the architecture; Orbix with ISIS is one combination. MQSeries defers messages until later (e.g., at night). Almost all groups want more than a few of these, so a challenge in architecture is how to put together and manage various middlewares.
The Web talks to a Java/Web client and downloads a transaction client or DBMS client via a federation of brokers. Now maintenance occurs from the desktop. Use of CORBA in the middle requires a higher level of training. Right now, no one is saying the only way to do this is with objects.
The objective is to define a series of APIs specific to the Imagery community (a domain). Standard APIs will enhance interoperability and portability, facilitate sharing imagery and services, and facilitate insertion of low-cost commercial technology. They are avoiding tie to any one technology, even OMG, though they are using IDL and would use OMG services.
They are aiming for open systems but note that no matter how open you think you are, you are a stove pipe to someone else.
They have defined a Reference Architecture. They draw the OMG picture but interpret the Distributed Computing Infrastructure to include CORBA, DCE, Network OLE, ... In their viewgraphs, they use inheritance so Common Facilities inherit from Common Services and Imagery services inherit from both, as appropriate.
Services in the imagery area include collection, processing, dissemination, exploitation, library element (which contains metadata catalog), video imagery and access, profiling of standing queries that are populated as new items appear in the library, mensuration, zoom, histograms, ATR, registration, image understanding, geopositioning, fly through 3D areas, and more.
The plan is to submit these standards to the appropriate standards bodies.
For more information, see http://www.itsi.disa.mil/ismc/ciiwg/ciiwg.html. You can also subscribe to email@example.com with message body subscribe nitf name and you can email comments to firstname.lastname@example.org.
Q: How about a SIG in this area? A: We tried this once before by requesting a GIS SIG and were unsuccessful, but the time may be right now. One camp says it is too late to standardize APIs and OO will be too inefficient. But doing nothing about interoperability is definitely not the answer.
Sankar traced trends - from centralized computing to decentralized mini-computers with the next wave to be agile and web oriented. Agile organizations are project oriented; they cross distance, time, and computing infrastructure boundaries; they are cross functional, cross organizational, intelligent, collaborating communities.
He traced the WWW evolution: apps become WWW-capable, WWW publishing first, then comes collaboration.
Crystaliz is developing a product called LogicWare. Today's focus is information discovery -- to find partners. Companies provide products and agents hold "conversation" about products. In their Virtual Enterprise Transactions, simple agents (autonomous business entities) negotiate costs. Think of an enhancement of EDI. They view task management as change management in a distributed communication environment with agents traveling from server to server. Agents subscribe to other agents that watch for change events.
Sankar identified two approaches for mobile agents:
OMG has mobile agent facility whose requirements are:
Many of these services must be on the server side. The vision involves mobile code, agents and mediation. The vision involves domain specific generic libraries usable by communities.
Sankar suggests the need for various services:
A mobile agent is like a mail message: it is asynchronous and requires queuing and the ability to restart a process.
They have developed the Java ORB (JOE). Several other groups are doing this too. The mix of a Java environment plus CORBA fits pretty well.
Scenario: a little travel company uses IDL to encapsulate travel services. They have Java ORB classes and stubs on their HTTP server (any stock variety). A "document" containing an applet is downloaded to your client. Now your Java stub objects are talking to the travel app on the server side, directly, not via CGI.
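The scenario above amounts to a generated client-side stub standing in for the server object. A minimal sketch of that stub/proxy shape, with the wire protocol simulated by a direct call; the names (TravelService, TravelStub) are illustrative assumptions, not Sun's JOE API:

```java
// Hedged sketch of the applet-plus-stub scenario: the downloaded applet
// holds a generated stub that implements the same interface as the server
// object and forwards calls to it (here, directly; in JOE, over the wire).
class TravelDemo {
    interface TravelService {
        String bookFlight(String from, String to);
    }

    // Server-side implementation (would live behind the ORB).
    static class TravelServer implements TravelService {
        public String bookFlight(String from, String to) {
            return "booked " + from + "->" + to;
        }
    }

    // Generated client stub: in a real ORB this marshals the request,
    // sends it over the ORB protocol, and unmarshals the reply.
    static class TravelStub implements TravelService {
        private final TravelService remote;
        TravelStub(TravelService remote) { this.remote = remote; }
        public String bookFlight(String from, String to) {
            return remote.bookFlight(from, to); // simulated remote invocation
        }
    }
}
```

Because client code sees only the TravelService interface, the applet neither knows nor cares that the implementation is remote, which is the point of skipping CGI.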
The current version of JOE uses the NEO protocol, in which what you have downloaded is a CORBA client.
Why a Java ORB? because it extends your CORBA clients to all the machines on the WWW. So this means central maintenance and distributed delivery along with platform transparency.
This decreases the requirement of getting the IDL or application right the first time since it is downloaded over and over.
JOE is a Java ORB. Also provided is a set of productivity classes and advanced development tools. For instance, simple access to naming.
The idea is to make Java simple for programmers, with no overhead of thinking in IDL. An IDL module maps to a Java package. Out parameters map to Java classes called holders. Q: Will OMG and the commercial vendors (Sun, Orbix, PostModern) endorse the bindings? A: The mappings are close now.
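The holder idiom mentioned above exists because Java has no out parameters: the IDL-to-Java mapping wraps each out/inout argument in a small mutable holder object the callee writes into. A standalone sketch of the shape (this IntHolder mirrors the standard mapping's holders in form, but is written here for illustration, not taken from any vendor's binding):

```java
// Sketch of the holder idiom: an IDL operation like
//   void next_id(out long id);
// maps to a Java method taking a holder whose value field the callee sets.
class HolderDemo {
    static class IntHolder {
        int value;
        IntHolder() {}
        IntHolder(int initial) { value = initial; }
    }

    static int counter = 0;

    // The "out" result is written into the holder rather than returned.
    static void nextId(IntHolder id) {
        id.value = ++counter;
    }
}
```

The caller allocates the holder, passes it in, and reads the value field afterward; that is the only visible difference from a true out parameter.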
Q: As Java ORBs grow in use, should we have a mapping of the Java VM to OMG? E.g., do you want the mapping to go to the CORBA backbone or directly through the Java VM? A: They are not working on this now, but instead on stubs and IIOP. The JOE development environment includes an IDL-to-Java mapping.
Sun is not making money on Java ORB but on Sun hardware and servers.
Q: What should OMG be standardizing in the next year or two? A: Not helper classes, until we have more experience. Yes for firewalls.
We considered three items of business during this half day session:
We considered the issue of whether to escalate the Internet SIG to become an OMG task force.
Other considerations are:
We asked Richard Soley if he felt the TF would run into opposition. He was not sure, indicated it would not if its charter was sufficiently distinct from ORBOS and CFTF, but recommended that the group lobby with ORBOS and CFTF to expose as many people as possible to the possibilities of a TF and/or RFI before making a motion on Thursday for these.
We reviewed a draft strawman mission statement for the proposed OMG Internet TF and spent some time wordsmithing the short document. The result is here.
We reviewed a draft strawman Request for Information (RFI) for the proposed OMG Internet TF and spent some time wordsmithing the document.
Subsequent discussions with members of the OMG Architecture Board and others led us to withdraw the idea of recommending an Internet Task Force at this meeting and instead focus on just the Internet Services RFI. It was felt we should make sure there is sufficient interest in the RFI, and that the results are sufficiently non-overlapping with ORBOS and CFTF, before starting up yet another Task Force.
Based on revisions to the RFI made during the ISIG meeting, Thompson and Sutton revised it, made copies, and developed a short informational briefing on the RFI for presentation to the CF and ORBOS Task Forces.
On Wednesday, Thompson made a presentation to CF Task Force on the Internet RFI. That group agreed by a white ballot vote to recommend the RFI to the OMG TC. A number of people felt that the RFI might best be issued by ORBOS since it is at least partly related to plumbing. Following this meeting, Sutton and Thompson made another round of recommended changes that resulted from the meeting (e.g., remove references to the Internet TF).
On Thursday morning, Thompson presented the RFI to ORBOS Task Force. They recommended a few more changes (e.g., take out "quality of service," reference the RFP template) and then voted by white ballot to recommend to the TC that the RFI be issued by ORBOS.
On Thursday at the TC, Thompson made a short report on the Internet SIG meeting. Later, Richard Soley, who chairs the Platform TC, called the question on a vote to issue the Internet Services RFI. Bill Janssen (Xerox PARC) requested some final changes to remove remaining language indicating that an Internet Task Force would subsequently be formed and RFPs issued by it. Bill Cox (Novell) agreed to help us make the changes, and the motion to issue the RFI was passed by white ballot.