Agent Oriented Programming

Agent-oriented programming (AOP) is a relatively new programming paradigm that supports a societal view of computation. In AOP, objects known as agents interact to achieve individual goals. Agents can exist in a structure as complex as the global Internet or as simple as a module of an ordinary program. Agents can be autonomous entities, deciding their next step without user intervention, or they can be controllable, serving as an intermediary between the user and another agent.

An agent, according to Shoham (1993), is "an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments". This definition is cryptic at best and useless at worst. In his paper, Shoham narrows it by applying the criteria of Dennett and McCarthy. Dennett observed that many simple, inanimate objects can be viewed as falling into this framework (which he calls the intentional stance). McCarthy suggested that mental attributes should be ascribed to objects only when doing so is "useful".

Consider, for example, the agent 'gravity'. Gravity can be viewed as an entity that is capable of applying force to objects and chooses to do so to every object it encounters. Taking the example even further, we can say that gravity could choose to apply this force in any direction and with any strength; it merely chooses the 'down' direction, accelerating objects at 9.8 m/s² (on Earth, when the second object's mass is negligible compared with the first's). This is not too far from the belief of many individuals before Galileo's experiments. These days, however, this view gains us nothing; it is not useful.

Consider, now, an autonomous rover on the surface of Jupiter's moon Europa. The rover can be viewed as an agent that believes the areas with the roughest ice have the thinnest ice layers. It might desire to set up drilling in those areas. It might not be capable of navigating very rocky terrain, and might therefore want to avoid it. These are reasonable, and possibly useful, ascriptions of mental attributes. Indeed, it might be useful to write code with this perspective in mind. This is the reasoning behind agent-oriented programming.
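
To make this concrete, the following is a minimal Python sketch of such a rover modelled as an agent with beliefs, desires, and capabilities. The class, attribute, and action names are illustrative assumptions, not part of any actual AOP framework or mission software.

  # Hypothetical Europa rover viewed through the intentional stance.
  # All names and values here are illustrative assumptions only.
  class Rover:
      def __init__(self):
          # Beliefs: facts the agent currently holds about its world.
          self.beliefs = {"rough_ice_means_thin_ice": True}
          # Desires: goals the agent would like to achieve.
          self.desires = ["drill_through_thin_ice"]
          # Capabilities: actions the agent can (or cannot) perform.
          self.capabilities = {"navigate_rocky_terrain": False, "drill": True}

      def choose_action(self, terrain):
          """Select the next action from the agent's mental state."""
          if terrain == "rocky" and not self.capabilities["navigate_rocky_terrain"]:
              return "avoid"              # incapable of rocky terrain, so steer clear
          if terrain == "rough_ice" and self.beliefs["rough_ice_means_thin_ice"]:
              return "set_up_drilling"    # rough ice is believed to sit on thin ice
          return "keep_exploring"

  rover = Rover()
  print(rover.choose_action("rough_ice"))  # set_up_drilling
  print(rover.choose_action("rocky"))      # avoid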

Shoham suggests that a complete AOP system needs three elements:

  • A formal language with a clear syntax for describing mental state. This would likely include structures for stating beliefs (e.g. the predicate calculus), for passing messages, and so on. Most of the research in the field has focused on this portion and stresses the semantic issues related to artificial intelligence.
  • A programming language in which to define agents. The semantics of this language should be closely related to those of the formal language. In many situations, some functionality will be difficult or impossible to implement, and further limitations will be introduced; we will see this in more than one case. A rough sketch of what defining an agent might look like follows this list.
  • A method for converting neutral applications into agents. This sort of tool would allow an agent to communicate with a non-agent by attributing intentions to it, in much the same way as we did with the interplanetary rover above. This portion of the specification is not related to the language issues and will not be discussed here.
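
As a rough illustration of the second element, the Python sketch below defines agents by their mental state (beliefs and commitments) and lets them exchange typed messages. The performatives INFORM and REQUEST and the method names are assumptions made for illustration; they are not the syntax of Shoham's AGENT-0 or of any other specific AOP language.

  # Hypothetical agents whose state is a set of beliefs plus a list of
  # commitments; they interact only by exchanging messages.
  class Agent:
      def __init__(self, name):
          self.name = name
          self.beliefs = set()     # facts the agent currently holds true
          self.commitments = []    # actions the agent has agreed to perform

      def receive(self, sender, performative, content):
          """Update the mental state according to an incoming message."""
          if performative == "INFORM":
              self.beliefs.add(content)             # adopt the reported fact
          elif performative == "REQUEST":
              if content not in self.commitments:   # take on the requested action
                  self.commitments.append(content)

      def send(self, other, performative, content):
          other.receive(self.name, performative, content)

  orbiter = Agent("orbiter")
  rover = Agent("rover")
  orbiter.send(rover, "INFORM", "rough_ice_at_site_7")
  orbiter.send(rover, "REQUEST", "drill_at_site_7")
  print(rover.beliefs)       # {'rough_ice_at_site_7'}
  print(rover.commitments)   # ['drill_at_site_7']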