Software agents are ontologically grounded in their role in the agent community. Agents have beliefs, commitments, obligations, intentions, and perhaps even confusion, stubbornness, etc. Exactly what these agents do with all their commitments, obligations, intentions, etc., has not always been made clear, but the important point is that we gain a motivated vocabulary for describing coordinated agent interaction, e.g. Agent1 sent Agent2 e-mail because it felt ``obligated,'' or perhaps Agent1 crashed the network because it was ``confused.''
[Shoham, 1993] has defined a formal language for describing agents' ``mental states'' in terms of epistemic logic. He also presents a corresponding agent programming language, AGENT-0 ([Torrance and Viola, 1991]), which is semantically grounded in this mental-state language. AGENT-0 strongly resembles Prolog, but it provides primitives well-suited to communicating obligations, beliefs, and capabilities between agents.
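To make the flavor of such primitives concrete, the following is a minimal sketch (our own illustration in Python, not AGENT-0 syntax; the agent structure and message forms here are assumptions for exposition) of a commitment rule: an agent commits to a requested action only if it believes itself capable of performing it, and informs the requester of the outcome.

```python
# Hypothetical illustration of an AGENT-0-style commitment rule.
# The class layout and message tuples are our own invention, not Shoham's syntax.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    capabilities: set = field(default_factory=set)   # actions the agent can do
    commitments: list = field(default_factory=list)  # (requester, action) pairs

    def receive_request(self, sender, action):
        # Commit only when the agent is capable of the requested action;
        # otherwise decline. Either way, reply to the sender.
        if action in self.capabilities:
            self.commitments.append((sender, action))
            return ("INFORM", self.name, sender, ("committed", action))
        return ("REFUSE", self.name, sender, action)


# Agent1 can send mail, so a request from Agent2 yields a commitment.
a1 = Agent("Agent1", capabilities={"send_mail"})
reply = a1.receive_request("Agent2", "send_mail")
```

In a real AGENT-0 program such rules are declarative and time-indexed; this sketch only captures the condition-then-commit shape of the mechanism.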
Whereas SodaBot is intended for assisting with practical, on-line tasks, AGENT-0 is suited for researching the interaction of coordinated, cognitively-based agents, i.e. agents that think, but don't do much else. It would seem that neither system would be particularly adept at handling the job of the other. Still, Shoham's approach does not necessarily conflict with our own. In fact, it would be very interesting to try combining aspects of both systems by providing BSAs with some type of formal intentional state.
We note that there is a good deal of other theoretical research into agent cognition, such as [Doyle et al., 1991]. Again, it would be very interesting to ground this work by implementing it in a realized system.