(from DOE 2000 Workshop)
Many DOE projects want QoS yesterday. There is a decade of research on QoS and we know many ways to do it. Low-level mechanisms live in routers but do not give end-to-end QoS. RSVP is a protocol for low-level reservations; it is still at the packet level, of limited value, and not a good fit for upper-level QoS needs. Van Jacobson is working on QoS for ESnet that gives you end-to-end QoS and works in an administratively heterogeneous environment. If there is a single point of control then QoS is not as hard a problem. Routers cannot make decisions based on organizational hierarchy, a different topology than network topology. You need management to decide bandwidth questions - for instance, X and Y both need bandwidth Z at overlapping times, so you need that much bandwidth. So Van Jacobson is turning on class-based queueing in Cisco routers and writing a bandwidth broker that controls access to the lower-level machinery. Premium service up to some bandwidth vs. best effort. The physical network must always have the premium amount available to guarantee those high-priority applications never max out. You can set the bandwidth broker to more than the physical bandwidth - that is a policy decision. Policing is pushed out to the edges of an administrative boundary; ESnet expects LBL's boundary to ensure the right priorities.
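The bandwidth broker's admission decision can be sketched as follows. This is a hypothetical illustration of the idea only, not ESnet's or Van Jacobson's actual implementation; the class and method names are invented, and the overlap check is deliberately simplified.

```java
// Hypothetical bandwidth-broker admission check. The broker grants premium
// reservations only while total granted premium bandwidth stays within a
// policy limit (which, as noted above, may be set above physical capacity).
import java.util.ArrayList;
import java.util.List;

class BandwidthBroker {
    private final double premiumLimitMbps;            // policy limit, not physical capacity
    private final List<double[]> grants = new ArrayList<>(); // {start, end, mbps}

    BandwidthBroker(double premiumLimitMbps) {
        this.premiumLimitMbps = premiumLimitMbps;
    }

    // Admit a premium reservation only if the sum of all overlapping grants
    // plus this request stays within the policy limit. (Simplification: we
    // sum every overlapping grant rather than computing the true per-instant
    // maximum, so this is conservative.)
    boolean requestPremium(double start, double end, double mbps) {
        double worstCase = mbps;
        for (double[] g : grants) {
            boolean overlaps = g[0] < end && start < g[1];
            if (overlaps) worstCase += g[2];
        }
        if (worstCase > premiumLimitMbps) return false; // reject: would exceed policy
        grants.add(new double[]{start, end, mbps});
        return true;
    }
}
```

Overbooking, as mentioned above, would just mean constructing the broker with a limit larger than the physical link rate.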
(from Minutes of Internet SIG Meeting #10)
Reference: InfoSleuth: Semantic Integration of Information in Open and Dynamic Environments, SIGMOD '97. Also see the MCC InfoSleuth project web page.
Overview: InfoSleuth, a project in its third year at MCC, completing in June 1997 with an InfoSleuth II on the horizon, provides an agent-based framework for accessing heterogeneous data sources over the Internet. Most of InfoSleuth is implemented in Java, so think of an agent as a stylized Java process that follows certain protocols.
The problem and approach: most DBMS people have tried to solve the multi-database problem by mapping each database DBi to an Integrated_Schema (a data-centric approach). This is not a very scalable approach - it gets unwieldy as you add DBn+1. InfoSleuth uses an ontology in place of the integrated schema. You first define the entities of interest (stocks, portfolio, …). To connect DBi, you define a resource agent, after first defining the ontology (a user-centric approach). Note: the architecture looks the same except the ontology replaces the integrated schema.
Ontologies: ontologies represent semantic concepts and are defined independently of the actual data. InfoSleuth uses a frame-slot data model with standard data types: integer, float, string, date, frames, and relationships. They have defined a healthcare ontology, a stock market ontology, a politics ontology, an ontology ontology, … but not overlapping ontologies.
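The frame-slot model can be sketched roughly as follows; the class, enum, and method names are invented for illustration and are not InfoSleuth's actual data structures.

```java
// Minimal sketch of a frame-slot ontology model as described above:
// a frame has a name and a set of typed slots. Relationships and
// constraints are omitted for brevity.
import java.util.LinkedHashMap;
import java.util.Map;

class Frame {
    enum SlotType { INTEGER, FLOAT, STRING, DATE, FRAME }

    final String name;
    final Map<String, SlotType> slots = new LinkedHashMap<>();

    Frame(String name) { this.name = name; }

    // Add a typed slot; returns this so frames can be built fluently.
    Frame slot(String slotName, SlotType type) {
        slots.put(slotName, type);
        return this;
    }
}
```

A stock-market ontology might then define a Stock frame with ticker, exchange, and closingPrice slots.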
Agents: What is an agent? An "object with an attitude." Agents are independent processes; each is a specialist; they exist in a community. None exist solo. There are agents you know about that are in your community and others you do not know about.
InfoSleuth's architecture: a web client interacts with a web server via HTTP, an RMI registry, and RMI-based user agents. The user agents use KQML to send messages to various kinds of agents (ontology agent, broker agent, task planning and execution agent, query decomposition agent, and data mining agent). These in turn talk to each other and to resource agents that effectively wrap data sources (SQL, LDL++, and WAIS). A monitor agent can be turned on to log messages.
More on each sort of agent:
Q: What can cause a resource agent to go away? A: autonomy, at the whim of the person who puts them there.
Q: Do query agents do query optimization? A: No. There are lots of issues, it is hard to do, and they do not have a stable underlying DBMS. They try to process joins in some good order, but it is very slow. The broker is good at pruning off irrelevant parts of a query using semantic query optimization.
Example to give a feel for some of the issues, illustrating a periodic query: every evening at 5 PM, select name, exchange, closing price from stocks where the stock is international and the closing price is up at least 2. Run it in London but not NYSE (since NYSE is not international), and in Warsaw if it is up or it is already tomorrow there.
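The example can be sketched as follows; the query text, the predicate, and the scheduling code are paraphrased stand-ins, not InfoSleuth's real interfaces, and the time-zone handling is elided.

```java
// Sketch of the periodic stock query. The SQL text and predicate are
// paraphrased from the example above; none of this is InfoSleuth's API.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class PeriodicStockQuery {
    static final String QUERY =
        "SELECT name, exchange, closing_price FROM stocks "
        + "WHERE international = true AND price_change >= 2";

    // The selection predicate a resource agent would apply per stock:
    // keep only international stocks whose price is up at least 2.
    static boolean matches(boolean international, double priceChange) {
        return international && priceChange >= 2.0;
    }

    // Fire the query once a day; computing the exact delay until the
    // next 5 PM local time is elided for brevity.
    static void schedule(Runnable askAgent) {
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(askAgent, 0, 24, TimeUnit.HOURS);
    }
}
```

Under this predicate NYSE stocks are excluded because NYSE is not international, which is exactly the inference the broker makes below.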
Advertising information: domain information: name, host, port, and protocol; agent type, like broker, execution, resource; agent capabilities, e.g., ask, update, subscribe. All is described in LDL (MCC's Prolog-like language). Agents talk to each other via KQML. Each agent specifies what languages it talks (SQL, KIF, …) and what ontologies it understands (the frames it knows about, the slots it knows about, and constraints on frames and slots, like NYSE is not international, and closing prices are after January 1970).
Brokering: a broker's job is to find resources that contain information relevant to a query. The broker makes some kinds of inferences, like NYSE is not international but London is. It looks at what the agents advertise: does it respond to queries in SQL? the stock exchange ontology? the closing price frame? It returns the names of all agents whose advertisements intersect the query constraints.
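The matching step can be sketched as a simple covering test over advertisements. Everything here (class names, fields, the covering rule) is an invented simplification of what the minutes describe; the real broker also does semantic inference over constraints, which is omitted.

```java
// Sketch of broker matching: an advertisement lists the languages,
// ontologies, and frames an agent handles; the broker returns every
// agent whose advertisement covers the query's requirements.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

class Broker {
    static class Advertisement {
        final String agentName;
        final Set<String> languages, ontologies, frames;
        Advertisement(String agentName, Set<String> languages,
                      Set<String> ontologies, Set<String> frames) {
            this.agentName = agentName;
            this.languages = languages;
            this.ontologies = ontologies;
            this.frames = frames;
        }
    }

    private final List<Advertisement> ads = new ArrayList<>();

    void advertise(Advertisement ad) { ads.add(ad); }

    // Return the names of all agents whose advertisements cover the
    // query's language, ontology, and frames.
    List<String> match(String language, String ontology, Set<String> frames) {
        List<String> result = new ArrayList<>();
        for (Advertisement ad : ads) {
            if (ad.languages.contains(language)
                    && ad.ontologies.contains(ontology)
                    && ad.frames.containsAll(frames)) {
                result.add(ad.agentName);
            }
        }
        return result;
    }
}
```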
So in our query: the applet periodically communicates with the user agent, which talks to the execution agent, which talks to the multi-resource query agent, which "asks" the broker and "asks" the London stock market agent.
Agent interaction standards: standardize individual messages, what they say, what they mean. Standardize the flow of messages between agents (conversations). Standardize how communities of agents cooperate.
Layered architecture: (1) agent application layer talks via conversations, requests and replies to (2) conversation layer talks via KQML performatives to (3) comm/KQML layer talks via TCP/IP or HTTP to (4) remote agent.
The idea behind conversations: agents send messages to each other, e.g., ask_all or other KQML performatives. There are legal and illegal sequences of performatives pertaining to a specific task. Conversations define and enforce legal performative sequences. The conversation layer defines a standard set of conversations used by all agents. Each conversation is a state machine with messages sent and received on transitions. Each conversation is implemented as an out-thread (initiator) and an in-thread (responder) in Java.
OutConversation: initiated by a call to initConversationOut(…); the remote agent responds using a call to addNewReply(CNVRemoteResponse), and the application can alter the conversation using interrupt(CNVRequest).
InConversation: initiated by the remote agent. The request comes to the application in the form of a process(CNVRequest) message; the application responds with a sendApplReply(CNVReply) message and cannot interrupt the conversation.
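The conversation-as-state-machine idea can be sketched as follows. The states, performative names, and transition table are invented for illustration - InfoSleuth's actual conversation layer uses the out-thread/in-thread structure described above, which this sketch omits.

```java
// Toy state machine for one conversation: legal performative sequences
// are encoded as transitions; anything else is rejected without changing
// state, which is how conversations "enforce" legality.
import java.util.HashMap;
import java.util.Map;

class Conversation {
    private String state = "START";
    // transition table: "state|performative" -> next state
    private final Map<String, String> transitions = new HashMap<>();

    Conversation() {
        transitions.put("START|ask-all", "WAITING");
        transitions.put("WAITING|reply", "DONE");
        transitions.put("WAITING|sorry", "DONE");
    }

    // Apply a performative; return false (and stay put) if it is
    // illegal in the current state.
    boolean send(String performative) {
        String next = transitions.get(state + "|" + performative);
        if (next == null) return false;
        state = next;
        return true;
    }

    String state() { return state; }
}
```

A standard set of such machines, shared by all agents, is what the conversation layer provides.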
Generic InfoSleuth Interfaces (in progress): every agent contains the broker interface (to handle generic advertising and queries to the broker), the ontology interface (to handle queries to ontologies; it parses and caches ontologies), and the monitor interface, all operating on top of the conversation layer.
Interconnecting Agent Communities (in progress): multiple peer brokers need an inter-broker protocol (as with IIOP). Today agents advertise their meta-information to their broker; in the future, brokers will advertise to other brokers.
System is in pre-alpha state. Sponsoring companies all have copies of Infosleuth. No cases where it has spread across firewalls. Not used outside a firewall.
Q: Are you connected to OMG? A: No, instead active in KQML community. CORBA was never semantically rich enough.
Q: Sun uses Java RMI and Java ORB, why not use IIOP? A: They never go out of the Java world in the prototype so RMI is convenient.
Q: Why not use cron jobs and objects - why agents? A: Agent flexibility is good since it is closer to the way we think, but it is also more complex, as is true with all higher-level abstractions.
(from Minutes of Internet SIG Meeting #10)
Shel presented on OTAM and led the discussion. OTAM is a proposed Information Access Facility that would provide uniform access to information sources like file systems and database records. It is a facility because it bundles several OMG services. It is called OTAM (Object Transfer and Management) by analogy to the ISO standard FTAM (File Transfer, Access and Management). FTAM is an ISO specification, in five volumes. Uyless Black's book on OSI has a section on FTAM.
The architecture of OTAM consists of
You do not have to know beforehand the schemata of the file store or DBMS. Metadata is fundamental, you are operating on the metadata. There are four categories of metadata
Another concept is service regimes: a period of time in which a common state is valid for the client and server. Regimes provide protocols for object discovery, object selection, object access, data transfer, and recovery. Q: Is this similar to the idea of contexts to maintain state? A: These are regimes within an invocation.
Built on all this is the concept of OTAM services:
How to pursue this idea?
Q: Will meta objects (MOF) feed this? A: Hopefully.
Q: Is this the semantic or object file system? A: Yes; it can make changes to a file or DBMS in place, addressed at the record level, without having to download information explicitly.
Q: Is this related to the Persistent Service? A: Probably builds on persistent service, trader, lifecycle, externalize, concurrency and transactions, query, security, naming, maybe more.
Q: Is FTAM exporting just the file abstraction or also the object-collection-queryable collection abstraction? Seems like it is the former only.
Q: is this similar to thinking about the web where the object is by analogy a page on the web, however created. A: similar but the object is a blob (file) though it might have types (maybe MIME types or IDL types)
Q: how is it related to an OODB? might be similar, not so monolithic, more distributed.
After some discussion, we made the decision to draft a white paper to collect together what we know about OTAM. We outlined the white paper, selected an editor and section authors, encoded as DC = Dave Chi, SS = Shel Sutton, CT = Craig Thompson.
White Paper (Editor - DC)
Issues with OTAM
(from OMG ISIG Minutes of Meeting #9)
The distributed simulation community - distributed both geographically and across platforms - is taking off, especially in training (including across allied forces) and in FAA air traffic management training, with wider use by industry expected as tools become cheaper. The time is ripe for standards. The U.S. DoD has adopted/mandated the High Level Architecture (HLA) for simulation. HLA is a prime candidate for adoption of an object environment. There is interest in OMG and WWW-NG unification as a base infrastructure.
A Distributed Simulation SIG could contribute. HLA is defined as a set of services. An OMG DSIM SIG would coordinate with Real-time, Internet, and C4I. The SIG could add requirements on the CORBA architecture. Some OA&D proposals cover some distributed simulation - their context is executable models. There will be an organizational meeting Wed 1-5, room 515, and Thurs 2:30-5. Captain Hollenbach, Director of DMSO, speaks at 1:30 Thursday. He reports to Dr. Anita Jones. Fred has a briefing on HLA with him, also a mission statement and next moves. There have been two implementations of HLA to date: RTI (Run Time Infrastructure) 0.1 used Orbix. In DMSO Familiarization F0, the decision was made not to use CORBA. This was because some federates are only single-threaded. Also, they wanted to be more efficient than CORBA by using asynchronous messaging.
some additional notes on Security