Using OBJS Prototypes in ALP-CoABS TIEs

Frank Manola
Object Services and Consulting, Inc. (OBJS)
July 7, 1999



Abstract

This report provides an overview of how three OBJS prototype software packages, developed under the CoABS program, could be used in head-start TIEs with the ALP program.


Contents


Introduction

A draft report ALP-CoABS Initial Technical Exchange Areas <http://www.objs.com/agility/tech-reports/990430-ALP-CoABS-TXA1-report.html> described a proposed initial set of Technical Exchange Areas (TXAs) between ALP and CoABS.  These were referred to as "Technology Exchange Areas" rather than "TIEs" because they did not really correspond to, e.g., the current set of CoABS TIEs.  Instead, they were areas within which specific TIEs would be defined.  The present report provides an overview of how several current OBJS prototype software packages, developed under the CoABS program, could be used in one or more specific TIEs within these TXAs.

The idea behind defining TIEs with ALP using only OBJS software from the CoABS side is twofold:

The TIE ideas presented here represent fairly straightforward mappings between OBJS prototypes and specific roles identified in the TXAs.  They will need further work to flesh out specific details.  There is a separate section for each OBJS prototype.  In each section, there is a brief description of the prototype, followed by a description of potential TIE activities using it.


WebTrader prototype

prototype description:

One of the more interesting middleware capabilities of many agent systems is the ability to provide a run-time matchmaking or discovery capability. Traders (or matchmakers) mediate between clients and services, thus providing a loosely coupled architecture in which the binding between service and client can be dynamically established or changed. Services publish their capabilities and availability by registering service advertisements with the trader (a.k.a. exporting to the trader). Service advertisements contain one or more service handles (multiple handles for multi-portal services) and an offer descriptor.  The offer descriptor describes a service offer in terms of the service type description (including access methods), and a set of named properties that capture unique characteristics of the service instance, typically as (name, value) pairs. Service characteristics that are described in instance properties may include the geographic location of a network gateway service, the fidelity and accuracy of an interactive map service, the service cost and payment options of a stock advisory service, and the languages supported by a text-to-speech translation service.  A client queries the trader by supplying a service template (a.k.a. importing service descriptors from the trader). In response, it receives a collection of matching service descriptors. The service template specifies the desired service type, and property predicates representing the selection criteria for service instances.
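The canonical trader interaction described above can be sketched as follows. This is an illustrative toy, not WebTrader's actual API: the data layout and the function name match_offers are assumptions for illustration only.

```python
# Hypothetical sketch of trader matchmaking: offers carry a service type
# and instance properties as (name, value) pairs; a client template
# supplies the desired type plus property predicates.

def match_offers(offers, service_type, predicates):
    """Return offers of the requested type whose properties satisfy
    every predicate; predicates map a property name to a test function."""
    matches = []
    for offer in offers:
        if offer["type"] != service_type:
            continue
        props = offer["properties"]
        if all(name in props and test(props[name])
               for name, test in predicates.items()):
            matches.append(offer)
    return matches

# Two advertised map services with different instance properties.
offers = [
    {"type": "map-service", "handle": "http://maps.example.com/",
     "properties": {"fidelity": "high", "cost": 0}},
    {"type": "map-service", "handle": "http://cheapmaps.example.com/",
     "properties": {"fidelity": "low", "cost": 0}},
]

# Client template: desired type plus property predicates (selection criteria).
hits = match_offers(offers, "map-service",
                    {"fidelity": lambda v: v == "high",
                     "cost": lambda v: v == 0})
```

Only the first offer satisfies both predicates, so the client receives a single matching service descriptor and can bind to its handle.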

OBJS WebTrader is a trader that uses Web search engines to locate trader ads for services and information sources that can dynamically be bound and used by a client.  The WebTrader follows the canonical trader architecture in providing an interchange format for services to advertise their capabilities, an offer repository to store collections of service advertisements, and a matchmaker to match client queries to service advertisements. Client queries and service advertisements are represented in XML, and hence are human-readable and editable, and can be embedded in ordinary HTML pages.  WebTrader uses Web search engines as scalable, widely available, and industrial-strength offer repositories and search mechanisms.

Web-based services (and information sources) describe their capabilities using an XML service advertisement. Services make themselves known to WebTrader instances by publishing an HTML service advertisement page (SAP) containing one or more embedded service advertisements to search engines (or by making those pages accessible to search engine crawlers). WebTraders query search engines to retrieve SAPs, and provide matchmaking services to their clients using the service advertisements contained in these SAPs.  The XML Document Type Definition (DTD) for a service advertisement has sections for describing the interface, metadata (properties) and searchKeywords.  The interface section describes the operational interface of the service, including its access protocol(s) and access methods. The metadata section allows sets of instance-specific service properties to be specified as name-value pairs.  The searchKeywords section provides keywords that are to be used by the search engine to access the SAP. In cases where the crawlers support HTML META tags, the terms in the searchKeywords section are replicated as META tags in the SAP.
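A service advertisement with the three sections described above might look like the sketch below. The element and attribute names here are invented for illustration and do not reproduce the actual WebTrader DTD; the point is only the interface/metadata/searchKeywords structure.

```python
# Parse an illustrative XML service advertisement (hypothetical element
# names, not the real WebTrader DTD) using the standard library.
import xml.etree.ElementTree as ET

sap = """
<serviceAdvertisement>
  <interface protocol="http">
    <method name="getMap" url="http://maps.example.com/getMap"/>
  </interface>
  <metadata>
    <property name="fidelity" value="high"/>
    <property name="cost" value="0"/>
  </metadata>
  <searchKeywords>map service interactive</searchKeywords>
</serviceAdvertisement>
"""

ad = ET.fromstring(sap)
# Instance-specific properties as name-value pairs (the metadata section).
props = {p.get("name"): p.get("value") for p in ad.find("metadata")}
# Keywords the search engine would index (the searchKeywords section).
keywords = ad.findtext("searchKeywords").split()
```

In the scheme described above, the keywords would also be replicated as HTML META tags in the SAP for crawlers that support them.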

A client queries the WebTrader by advertising its needs in a client advertisement. The XML structure of a client advertisement is very similar to that of a service advertisement, with the difference being in the interpretation. Metadata in the client advertisement states what the client needs as type and property predicates. Type predicates indicate the interface requirements that a matching service instance must satisfy, and property predicates specify hard and soft requirements on the properties of a matching service instance. A client queries the WebTrader by publishing the client advertisement to the WebTrader, and receiving a page of matching bindings in return.  This binding data provides all the information a client needs to bind to a matching service.

Multiple traders can cooperate by sharing service advertisements via a search engine (indirect federation), or by advertising themselves as services that can be used by other WebTraders (direct federation). In indirect trader federation, a search engine operates as the common advertisement repository that is shared by multiple traders. In direct federation, a trader that is unable to find a satisfactory match for the client uses the trading mechanism to find (and bind to) other WebTraders.  Since a WebTrader is itself a service, direct trader federation uses standard trading mechanisms. In the case of globally accessible search engines, this offers an approach to global trading of services. Local search engines that operate on a web site or over an intranet provide service instances with a way to limit the scope of their advertisements. The multiplicity of search engines, however, creates a fragmentation problem in that the services a trader may want to trade with may be advertised over a mix of search engines varying from portals to web sites. DeepSearch is an application of WebTrader in which trader ads are for other search services that can be recursively searched.  DeepSearch provides a mechanism for a WebTrader to view this collection of search engines as a single logical advertisement repository. It does so by allowing search engines to advertise themselves as services to other search engines. A WebTrader can then collect advertisements from a single search engine (including ads for other search engines) and choose to search recursively through as many search engines as its matchmaking algorithm chooses to propagate.
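The recursive DeepSearch idea can be sketched as below. The data model, the depth limit, and the function name deep_search are assumptions for illustration; the essential point is that a search engine's ad collection may itself contain ads for other search engines, which are searched in turn.

```python
# Sketch of DeepSearch-style recursive federation: ads for other search
# engines are followed recursively, so a mix of portals and site-local
# engines appears to the trader as one logical advertisement repository.

def deep_search(engine, keyword, depth=2, seen=None):
    """Collect matching service ads from `engine` and, recursively,
    from any search engines it advertises, up to `depth` hops."""
    seen = seen if seen is not None else set()
    if depth < 0 or engine["name"] in seen:
        return []
    seen.add(engine["name"])
    hits = [ad for ad in engine["ads"]
            if ad["type"] != "search-engine" and keyword in ad["keywords"]]
    for ad in engine["ads"]:
        if ad["type"] == "search-engine":
            hits += deep_search(ad["engine"], keyword, depth - 1, seen)
    return hits

# A site-local engine advertised as a service on a portal engine.
local = {"name": "intranet", "ads": [
    {"type": "map-service", "keywords": ["map"],
     "handle": "http://maps.example.com/"}]}
portal = {"name": "portal", "ads": [
    {"type": "shipping", "keywords": ["shipping"],
     "handle": "http://ship.example.com/"},
    {"type": "search-engine", "keywords": [], "engine": local}]}

ads = deep_search(portal, "map")
```

The `seen` set guards against cycles, which matter here because engines can advertise each other mutually.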

An advantage of WebTrader's use of Web technology is that the Web provides a built-in infrastructure for implementing much of the required functionality.  For example, existing search engines provide the data storage and retrieval functionality, while COTS Web browsers provide the basic user interface.  The use of Web technology also provides the basis for integrating new Web technology (based on commercial products) as it matures, such as technology being developed in the Web context for describing and accessing Web services (see, e.g., Webmethods' B2B white paper http://www.webmethods.com/products/b2b/b2b_wp.html).  Use of the Web also provides scalability.  For example, the work of providing ads (in the ALP context, descriptions of clusters, plugins, and services) would be distributed among cluster and plugin implementors, rather than falling on one or a few organizations.

WebTrader is being used in DARPA's Control of Agent-Based Systems (CoABS) program, as part of an evacuation scenario demonstration. Efficiently evacuating civilians from a country in regional conflict requires tracking down and contacting the individuals, and executing their evacuation in a planned manner. Much of the information for such an operation, such as the consulate "white pages" of civilians in the country, the addresses of the hotels they are staying in, and a map of the city and the locations of these hotels, is accessible as web-based services. The WebTrader is used by the agent computation to locate and access these services in a coordinated manner.  For more details, see Venu Vasudevan and Tom Bannon, WebTrader:  Discovery and Programmed Access to Web-Based Services <http://www.objs.com/agility/tech-reports/9812-web-trader-paper/WebTraderPaper.html>.

potential WebTrader-ALP TIE activities:

The most straightforward TIE activities for WebTrader involve adding it as an ALP component and using it as a trader or matchmaker in investigating dynamic configuration ideas described within TXA2.  The WebTrader could initially function within a small set of clusters (i.e., a small society, or in a community within a larger society).  A possible scenario might be one involving a deployment to an unanticipated locality, and a need to find local resources (e.g., shippers) via Web interfaces they have published for EDI (based on XML-EDI scenarios now being widely discussed in the trade press).

It will be necessary to determine exactly how to integrate the WebTrader into the cluster architecture.  There is more than one option.  One approach would be to wrap the WebTrader as a cluster using the "Service Manager" model discussed in one of the ALP presentations (WebTrader has already been wrapped to function within one agent architecture:  WebTrader Agent is an SRI Open Agent Architecture (OAA)-wrapped WebTrader).  Several other clusters would be set up to have the WebTrader cluster as one of their resources, which their allocators would be set up to access to dynamically find other resources.  Alternatively, the WebTrader could be used as part of an Allocator plugin.  In this case, when the Allocator plugin executed, it would use the WebTrader to expand the set of resources available to assign to tasks to include those accessible via WebTrader.  It would presumably also be possible to expand the normal WebTrader access pattern (access at bind time) to have the Allocator describe the characteristics of resources it wants to WebTrader, have the WebTrader search for them asynchronously, and add discovered resources to the cluster's resources in the LogPlan.

As noted in the discussion of TXA2, providing dynamic configuration capabilities in ALP requires changes in addition to the presence of WebTrader itself, specifically:

Since ALP clusters correspond most closely to agents in other agent architectures, initially it might be most straightforward to consider clusters as the only resources to be represented in WebTrader.  However, in the long run it would make sense to describe plugins, as well as Web-accessible resources/services, in WebTrader as well (and provide for ALP to make use of WebTrader in accessing them).

For ALP-specific purposes, it might make sense to provide a specialized repository for ALP service ads with its own search engine (perhaps a COTS one) for use in this TIE.  This could evolve into a specialized logistics search engine for use with ALP that only indexes "logistics-relevant" information;  this would reduce the amount of information that might have to be otherwise filtered for ALP's application.

A related piece of work would be to use WebTrader's XML-based service language to investigate representation of cluster and plugin capabilities as XML-based "ads" (develop a language for describing cluster and plugin capabilities).   This is necessary anyway in order to use the WebTrader for ALP trading, but could also be a somewhat independent activity.  The language could be used by other traders as a work product independently of the WebTrader software, and at the same time, service description languages other than the WebTrader's could also be investigated (and potentially used by an enhanced WebTrader).    This work potentially interacts with language issues arising in connection with eGent (see below).

Another related piece of work would be investigating more dynamic interoperability between the ALP architecture and non-cluster or plugin resources that might be found on the Web using WebTrader (such as B2B, WIDL, or XML-RPC interfaces).  This is important since potentially useful services on the Web would not necessarily directly support cluster or plugin interfaces.  Since plugins serve to wrap external resources in ALP, a possible approach to doing this might be to look at a form of dynamically-tailorable plugin.  Such a plugin would make use of XML's self-description capabilities to dynamically map between these forms of external interface and the plugin/LogPlan interfaces (e.g., there might be a mapping defined for each type of interface:  WIDL, XML-RPC, etc.).

A TIE involving both WebTrader and eGent could be developed to deal with downloadable components.  Initially, the WebTrader TIE might describe only interfaces of external resources (clusters, etc.) in the repositories accessed by WebTrader.  The resource descriptions could be extended to indicate whether the service was a remote service (interface) or a downloadable component (such as a plugin that could be remotely installed).  The client's request could indicate whether it wanted its selected resource downloaded or not.  Transmission of the resource for downloading could use straight Java technology, or use eGent to do the actual downloading.  Providing for downloading involves addressing a number of the issues described in connection with TIEs involving eGent (mobility) below.
 


eGent prototype

prototype description:

OBJS eGent is a lightweight, scalable agent framework that supports agent communication using a subset of FIPA's Agent Communication Language (ACL) encoded in XML. eGent ACL wrappers wrap JavaBeans, which serve as eGent agents. The eGent transport is two-tier, using different mechanisms for ACL transport across machines (and therefore across Java virtual machines) and within a single Java virtual machine. The wide-area message transport uses XML and SMTP to transport ACL "documents" across machines. On a single machine, ACL performative "objects" are transmitted between agents using JavaBean mechanisms.  Each machine, corresponding to a person or agent, has a running agent station (a Java virtual machine running eGent), which runs multiple agents. The station receives (and forwards) emailed performatives to its agents. The agent station also behaves like a lifecycle service, starting up (and potentially shutting down) agents on demand. So a message can be sent to "agent A on station Z", where stations have unique email addresses. If there is no agent A running on station Z, Z will spawn such an agent and hand it the performative ("agent faulting").
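The station's dispatch behavior, including agent faulting, can be sketched as follows. The class and method names (Station, deliver, handle) are invented for illustration and are not eGent's actual Java interfaces.

```python
# Minimal sketch of an agent station's dispatch loop with "agent
# faulting": if the addressed agent is not running, the station spawns
# it on demand and hands it the performative.

class Station:
    def __init__(self, factories):
        self.factories = factories   # agent name -> constructor
        self.running = {}            # agent name -> live agent

    def deliver(self, agent_name, performative):
        agent = self.running.get(agent_name)
        if agent is None:
            # Agent faulting: spawn the addressed agent on demand.
            agent = self.factories[agent_name]()
            self.running[agent_name] = agent
        return agent.handle(performative)

class EchoAgent:
    """Trivial agent that replies with an inform echoing the content."""
    def handle(self, performative):
        return ("inform", performative[1])

station = Station({"echo": EchoAgent})
reply = station.deliver("echo", ("request", "status?"))
```

A sender addressing "agent echo on this station" need not know whether the agent was already running or was created in response to the message, which is the point of the faulting metaphor.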

The idea is similar in spirit to JATLite, which provides an agent communication channel for Java applet agents over TCP/IP connections.  However, the use of e-mail as a computation infrastructure provides a simpler and more general infrastructure for the following reasons:

Other projects support this approach: AT&T's VisitorBot provides a good real-life application for which e-mail is a suitable computational infrastructure, and MIT's SodaBot provides an e-mail-based agent system.

eGent can be ACL-interoperable with any other agent system. eGent supports a "performative as document" metaphor. From the point of view of any agent, anything that reads and writes XML-ACL is an agent, including humans ("on the Internet, no one knows you're an agent"). This supports mixed-mode systems in which some of the agents are humans who read and compose ACL messages.  In an agent system consisting of multiple interacting agents, one could transparently swap a human with an agent (or vice-versa), e.g., to get either greater automation or greater debuggability, or support human mediation.

In the current eGent prototype, the agent station (the place where one or more agents are located and execute) is a POP3 email client (eventually, IMAP could also be supported). This client is a separate component from the agents on the platform, simply providing a way to get the mail from the server onto the platform.  An eGent agent URL is its (email address + agent-id), e.g., fmanola@objs.com#fman_agent1. The email address is the address of the POP3 mailbox to which messages to that agent are sent, not the TCP/IP address of a platform where the receiving agent has to be situated. Hence the address is somewhat location-independent, in the sense that the agent (or its station) could potentially access the email from anywhere.  The client periodically connects to its mail server (at the ISP) to check for any new mail (which would have been sent to a mailbox managed by that server) intended for its agents.  For example, instead of Netscape's email client polling the mail server, the eGent platform (mail client) polls it (potentially this could be done on demand). The current prototype assumes that each user has two email addresses, one for him/her, and one for his/her agents. Email addresses are seen as a cheap resource. However, alternatives are possible, e.g., the user and his/her agents share a mailbox. In this case, either they have to coordinate access or eGent can be made to access mail filtered to a client's "agent" mailbox.
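Since the agent URL format above is just a mailbox address and an agent-id joined by '#', splitting it is trivial. The function name below is an assumption for illustration.

```python
# Split an eGent-style agent URL (mailbox address + '#' + agent-id)
# into its two parts; the mailbox locates the station's POP3 account,
# and the agent-id names an agent at that station.

def parse_agent_url(url):
    mailbox, _, agent_id = url.partition("#")
    return mailbox, agent_id

mailbox, agent_id = parse_agent_url("fmanola@objs.com#fman_agent1")
```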

eGent agents are Java components that are ACL-enabled by a lightweight wrapper infrastructure. eGent wrappers are themselves JavaBeans that receive/send performatives from the eGent messaging infrastructure. The mapping between a performative and the implementation of the performative is maintained as a per-wrapper microscript. The microscripting language is a simple language which maps each performative (and its parameters) to a sequence of Java method calls to the agent implementation (the Java component).  When a wrapper receives a performative, it executes the microscript by making these method calls in sequence and returns the ACL-ized results to the sender agent.  Current eGent wrappers depend on Java reflection to dynamically assemble and invoke methods on the Java component. However, this is an engineering decision that could be changed. For example, it should be possible for an agent implementation to be a CORBA component, and for the wrapper to use DII (and the interface repository) to map ACL messages to CORBA calls.  One could conceivably plug different microscript interpreters into the wrapper that specialize in Java calls, CORBA calls, etc. Dealing with humans should be straightforward.
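The microscript idea can be sketched as below, using Python's getattr in place of the Java reflection described above. The microscript format (a performative mapped to an ordered list of method names) is an assumption for illustration, not eGent's actual microscripting language.

```python
# Sketch of a per-wrapper microscript: each performative maps to a
# sequence of method calls on the wrapped component, invoked via
# reflection; the last result stands in for the ACL-ized reply.

class Counter:
    """Stands in for the wrapped "agent implementation" component."""
    def __init__(self):
        self.n = 0
    def increment(self):
        self.n += 1
    def value(self):
        return self.n

microscript = {
    # performative -> ordered list of method names to invoke
    "request-tick": ["increment", "value"],
}

def execute(component, script, performative):
    """Run the method sequence for `performative`; return the last result."""
    result = None
    for method_name in script[performative]:
        result = getattr(component, method_name)()
    return result

counter = Counter()
reply = execute(counter, microscript, "request-tick")
```

Swapping the `execute` interpreter for one that issues CORBA DII calls, or prompts a human, would leave the performative-to-script mapping unchanged, which is the flexibility the paragraph above describes.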

The eGent prototype was motivated by the CoABS objective to quickly tie distributed software components together.  Both the encoding of FIPA ACL in XML and the JavaMail/MAPI API appear to be candidates for agent standards, and are the basis of an OBJS submission to FIPA.  See Venu Vasudevan, FIPA E-Gents: Agents over Computational E-mail <http://www.objs.com/agility/tech-reports/9812-FIPA-Comp-Email-Agents.html>.

potential eGent-ALP TIE activities:

One potential eGent-ALP TIE activity would be to use eGent as a lightweight cluster communications mechanism to support certain types of mobility and disconnected operations in TXA3, e.g., in a scenario involving one or more mobile or disconnected clusters (serving as either task sources or resources).  Another potential TIE activity would be to use eGent as a software distribution mechanism for some scenarios in TXA2, although this would require some extensions to the eGent prototype as it currently exists. Venu Vasudevan has noted that, in the medium/long run, eGent could support both mobility and software distribution. The mobility solution would likely be XML-based (perhaps integrating INRIA's Koala for this purpose): agents would move (or be moved) by transporting their state around as XML documents shipped as emails. The notion of "agent faulting" (similar to OODB object faulting) has also been considered in eGent as a way of distributing software on demand. For example, Marshall Brinn mentioned a scenario in which a plugin is downloaded automatically to support a new type of task (previously unknown to the cluster).  This could be considered a variant of eGent's "agent faulting", but at the plugin level.  In a straightforward implementation of agent faulting in eGent, an invocation of agent X on platform Y causes an instance of X to be dynamically installed on platform Y. As a result, the client need not know whether an agent was already there, or was put in place in response to the invocation. Implementing agent faulting requires a software distribution infrastructure, which in turn requires configuration information to be stored and shipped around, potentially based on an XML-based vocabulary (similar in spirit to Marimba's OSD specification).

Use of eGent in ALP for disconnected operations would involve changing the communications mechanism between components to use email, and changing the identity used for components to email addresses.  One issue would be whether to change the ALP infrastructure so that all components used email, or to arrange for only some components to use it (probably the latter).  There would also be an issue as to whether both normal ALP communications and email were to be available and, if so, whether the choice between the alternatives was to be transparently controlled by the infrastructure, or explicitly controlled by the component (in which case communications would not be transparent, and logic to control which communications mechanism was to be used would have to be added to selected components).  It should also be noted that ALP already supports communications with components that do not always receive messages (the sending cluster saves the messages and re-sends them).  There would be two variant types of scenarios for disconnected operations:

In the latter case, the communication is internal to the cluster.  Changing cluster-plugin communication would probably be harder, as it requires changing built-in interfaces;  changing plugin-resource communication would be easier, as this need not use the ALP infrastructure and can be whatever the plugin and resource agree on.

Supporting mobility would be somewhat trickier architecturally, as more issues are involved ("mobility" here means moving agents/clusters/plugins between platforms, not moving the platform).  There are two major variants here:

As noted in the TXA2 discussion, the idea of a plugin repository from which a cluster could retrieve plugins it needed to do new tasks, or downloading required plugins with the tasks, has been raised in discussions with Marshall Brinn.   Downloading a required plugin with a new task would potentially require no fundamental architectural changes;  e.g., a cluster could automatically load any plugin it received with a task, and continue to work as usual.  However, the logic required to decide to do this would have to be built into the tasking clusters.  Moreover, the changed capabilities of the tasked cluster due to the additional plugin would be unknown to other clusters.   The use of a plugin repository would require plugins to have service descriptions which described their capabilities, and clusters to contain logic (not currently present) that could determine when the cluster needed an additional plugin, and arrange to retrieve it from the repository.

It has also been suggested that, via a user interface, a user might be able to direct a cluster to, e.g., load a given plugin.  However, such manual changes to the configuration generally require other configuration changes as well.  For example, unless the added plugin provides no new functionality that other clusters would care about, changing the capability of a given cluster would require corresponding changes to other clusters so they would recognize and use the new capability in the changed cluster.

One aspect of dealing with more dynamic plugin loading in ALP involves providing additional plugin control mechanisms.  At the present time, ALP plugins somewhat resemble the rules in a rule-based system.  They define predicates which indicate which sorts of tasks they are interested in, and when the predicate is matched by a task, they can begin working on it.  Plugins are totally independent of each other, and there is no built-in mechanism to prevent two plugins from working on the same task, or which requires that a given task be worked on by some plugin. Instead, the plugins must be designed in such a way that they work together without such conflicts, and clusters designed so that they don't send tasks to a cluster that the receiving cluster cannot handle.  In a more dynamic architecture, where, e.g., the collection of plugins in a cluster might dynamically change, additional mechanisms are necessary to reduce semantic coupling between plugins, enable them to be developed more independently, and reused in more flexible combinations.  Explicit service descriptions for plugins would be part of addressing this problem.  In addition, for example, some form of "conflict resolution" might be provided to control cases where multiple plugins match the same task (or when no plugin wants to work on a task).

ALP currently assumes that plugins are local to their clusters.  Changing this assumption (to support mobility of plugins independently of the clusters with which they are associated) would require reworking the plugin interface definitions to support remote messaging between plugins and clusters.  It is not clear that this level of mobility is really necessary.  (However, this does not mean that, e.g., external systems wrapped by plugins cannot be remote from clusters, since the plugins can use whatever remote access capabilities they want in accessing systems they wrap).

A useful related piece of work would be translation of ALP's task/directive language to an XML representation used in eGent messages.  Another would be to see how effective mail-based logging via BCC would be (as a lightweight logging service);  this might be further developed into some form of notification service too.  Simple programs like "vacation" could be used to inform sending agents that the receiver was, e.g., disconnected or otherwise unavailable, or to forward messages to alternative mailboxes (in case multiple agents were not reading off the same mailbox to handle requests).

It is also interesting to note that the microscript in an eGent wrapper is architecturally similar to the workflow an ALP expander produces when it receives a task;  in the eGent case, an ACL performative is "expanded" into a sequence of method calls on the wrapped Java object.  An ALP expander expands a task it receives into a workflow of subtasks to be handled by "resources" it knows about, either physical resources like trucks, or other clusters (agents) that it knows can handle various specialized tasks.  (Of course, ALP can use large-scale components such as domain planners as task expanders, so while the architectural abstraction is the same, the mapping complexity is much greater.)

Both ALP and Agility are making extensive use of Java technology, and hence this could provide a vehicle for useful technology exchange.  For example, since ALP plugins are defined as JavaBeans, they have explicit introspective interfaces already, which could be used as the basis of further development of service interfaces for these components.  In particular, this could be the basis of using eGent's wrapping capabilities.
 


MBNLI/AgentGram prototype

prototype description:

Agents communicate with each other through sub-languages like agent communication languages, but there must be means for them to communicate with people as well.  This can be done in a variety of ways, including use of command languages, graphical user interfaces, and eventually agent-based multimodal user interface frameworks that integrate and coordinate a variety of ways for agents to interact with each other and people.  Since agents are often acting for humans, it would seem natural if people could communicate with agents using natural language.  For example, agents might receive commands from people in natural language and might explain their behavior to people in natural language.  They might even use natural language or restricted special languages to communicate with each other (e.g., toys interacting).

This does not, however, remove a long-standing problem with the use of natural language interface technology, namely habitability.  It remains difficult for people to understand the limitations (e.g., limited lexicons, grammars, user models, and domain models) of the natural language an agent (application) might use to communicate with people.   As a result, people are often frustrated in using unrestricted speech or type-in natural language interfaces.  As part of our research in agent coordination frameworks, we have been exploring a technology called Menu-Based Natural Language Interfaces (MBNLI).  In this approach, the user creates a sentence (query or command) by selecting words and phrases from menus.  The menus and menu options available at a given point in a user interaction are based on a grammar driven by a domain model (such as a collection of DBMS relations), which guides the user so that he/she is constrained to specify only sentences the system can understand.

The current language translation supported by MBNLI is from a restricted natural language (not limited to English) to SQL. The content of the restricted natural language is defined in a lexicon derived from an individual database schema's relations, attributes, and joins. An associated grammar defines the syntactic relationships between these parts and the translation rules required to produce SQL.  At runtime MBNLI imports the specific grammar and lexicon to be used, and therefore is not restricted to database-related languages or translations only to SQL. For example, a grammar could be produced to translate English-like sentences to ACL or XML. While language generation is not supported, it is conceivable that companion grammars could be written to enable that.
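The grammar-guided construction and translation described above can be sketched as follows. The grammar, lexicon, and SQL mapping below are invented toy examples, not MBNLI's actual grammar formalism; they show only how a grammar can constrain menu options so that every completed sentence is translatable.

```python
# Toy sketch of menu-based NL construction: at each step the grammar
# dictates which menu options are legal, so the user can only build
# sentences the system can translate (the habitability guarantee).

grammar = {
    # state -> {menu option: next state (None = sentence complete)}
    "start":  {"Show": "field"},
    "field":  {"the names": "source", "the ranks": "source"},
    "source": {"of all officers": None, "of all units": None},
}

lexicon_to_sql = {
    # phrase -> SQL fragment (columns and tables of a toy schema)
    "the names": "name", "the ranks": "rank",
    "of all officers": "officers", "of all units": "units",
}

def menu_options(state):
    """Options the user may pick next, as dictated by the grammar."""
    return sorted(grammar[state])

def translate(choices):
    """Map a completed menu sentence to SQL via the lexicon."""
    field = lexicon_to_sql[choices[1]]
    source = lexicon_to_sql[choices[2]]
    return f"SELECT {field} FROM {source}"

sql = translate(["Show", "the names", "of all officers"])
```

As the paragraph above notes, nothing restricts the target to SQL: substituting a different grammar and translation rules could produce ACL or XML output from the same menu-driven construction.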

AgentGram applies this idea to agents, allowing agents to communicate with humans and each other using MBNLI technology, by translating between (restricted) English and ACLs.  Our research is focusing on how to use MBNLI for human-agent and agent-agent interactions, especially how to attach MBNLI grammars and ontologies as metadata to agents so that agents that "do not know each other" can talk to each other (using the OBJS WebTrader to advertise and discover grammars supporting specific agents would be one mechanism for doing this). Use of ACLs, XML, and other human-readable formats has been touted as an important benefit for debugging, but not necessarily for general user consumption. MBNLI could be used to produce user-understandable (versus merely developer-understandable) messages, enabling the receiver, human or program, to understand them.  Another focus of AgentGram is to support the construction of sentences which involve segments of grammars and lexicons of multiple distributed agents.  A final sentence production may be the result of combining segments created using the lexicons of several different agents. During sentence construction, the partial construction of a sentence may be used to determine the legal next agents whose lexicons (and grammars) can be made available to further complete the sentence in progress.

An MBNLI component has been prototyped, and an MBNLI OAA-based agent developed, for use in the CoABS NEO TIE #2 as a test of whether agent-based grammars make sense.

potential MBNLI-ALP TIE activities:

The obvious place to use MBNLI technology in ALP TIEs would be in developing user interface TIEs in TXA4.  MBNLI could be used to allow users to access or query ALP data sources, the log plan itself, or to interact with clusters, possibly as an interface to ALP's tasking language.

Even though ALP clusters may have their own tailored UIs, MBNLI-based interfaces might provide an interesting additional mode of access.   In addition, using AgentGram, it might be possible to develop user interfaces to control multiple clusters.  That is, since clusters (or communities) have tailored vocabularies, it may be possible to use AgentGram ideas in integrating them in a common UI, and make it easier for users to control collections of clusters or communities as a single "entity", instead of potentially having to manually break what would ordinarily be a single message into separate messages (possibly using separate interfaces) because the message crosses a cluster or community vocabulary boundary.  These user interfaces would have to be capable of expressing policies ("don't overfly France") and possibly assumptions ("I'm assuming I can overfly France"), as well as commands.

A more speculative use of the technology might be in the control of plugins.  Discussion above noted that plugins resemble in some sense rules in a rule-based system, and that there is a possible need for "conflict resolution".  It would be interesting to see if the AgentGram technology, which supports constraining sentences involving multiple agents, could be driven in reverse, so that given a sentence (possibly constructed some other way), the sentence could be broken up into separate tasks for separate plugins (and so handle part of the conflict resolution problem).  It might also be possible to use the technology in specifying component configurations in TXA2.  Configuring a system can potentially be handled via a specialized grammar/lexicon in MBNLI that can be used to define a system's attributes and then cause that system to be instantiated and configured per the constructed description.  AgentGram could also be used to produce a system-independent English-like agent language to map between ACL and ALP's tasking language in TXA1. Inter-agent communication would be in English-like sentences which could be translated to ALP's tasking language or ACL using translation rules developed for MBNLI's translator.