[Etzioni, 1993] argues that software agents are an ideal ``foundation
for core AI research.'' While we agree with this conclusion, we do
not accept the arguments he uses to reach it (see [Coen, 1994]).
Regardless, Etzioni et al.'s work on Unix ``softbots''
([Etzioni et al., 1994], [Etzioni and Segal, 1992], [Etzioni et al., 1993], [Etzioni et al., 1992a]) provides a very
interesting foundation for exploring many central issues in
traditional core AI, particularly in planning. There are many
differences between this work and our own. Softbots are intended for
applications far more oriented toward system administration than
SodaBots are; accordingly, the softbot level of discourse is in terms of
(low-level) Unix primitives. Softbot agents do not appear to interact
with anything other than their owners, and thus their capabilities do
not extend to inter-agent communication.
Finally, the softbot system does not seem to have
any provisions for assisting with distribution of softbot agents or
their UWL plans.
The DARPA Knowledge Sharing Effort ([Neches et al., 1991]) has encouraged much agent-based research into knowledge representation and communication languages. This effort has led to the design of an agent communication language (ACL) intended as a universal medium for agent discourse. Genesereth et al. ([Genesereth and Ketchpel, 1994], [Genesereth and Singh, 1994]) present a ``federation'' agent architecture that employs this ACL, and [Genesereth, 1994] discusses agents obtaining arbitrary software programs from other agents by advertising the required specifications, written in ACL.
It is worth noting that work on the ACL has not yet been completed, so agent systems that communicate in ACL do not yet exist. We also remain highly skeptical of the ACL's ontological sufficiency and soundness. Furthermore, agents would have to ``know'' that a program existed before they could advertise for it; this type of distribution does not address how novel programs spread among networked agents. Finally, this work makes no mention of the practical consequences its type of distribution would entail, nor does it discuss the effort required to realize the hypothetical agents it describes.
The work of [Vere and Bickmore, 1990] is quite unusual. Their ``basic agent'' rests on a remarkably broad core AI foundation, drawing on a wider range of research areas than any other system with which we are familiar. However, their domain is so narrow and their application so involved that the system bears little resemblance to any current work in software agents.