
What is an Intelligent Agent?

This article begins with a discussion of the different definitions proposed for agents and Intelligent Agents (IAs). An IA is an autonomous computational entity that can perceive its environment and plan a series of actions to achieve one or more goals. The main characteristics of an IA are autonomy, reactivity, proactivity, and social ability. IAs may also be equipped with further capabilities, including mobility, learning, and rationality. These characteristics are briefly explained below.

What is an Agent?

There are many definitions of what an agent is, largely because of the rapid growth in the diversity and functionality of agents across different domains. In general terms, the Oxford dictionary defines the word ‘agent’ as “a person or thing that takes an active role or produces a specified effect”.

An agent can be defined as an entity within a system that “takes sensory input from the environment and autonomously produces outcomes that affect its environment” [1]. For instance, a thermostat can be considered a hardware agent that senses the temperature in the environment and switches heating or cooling devices on or off to maintain a temperature close to a certain value. 
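To make the thermostat example concrete, here is a minimal sketch in Python. All names in it (`Thermostat`, `act`, the action strings) are illustrative, not taken from any real device API.

```python
# A minimal thermostat agent: it senses the temperature and produces an
# action that affects its environment. Purely illustrative.

class Thermostat:
    def __init__(self, target: float, tolerance: float = 0.5):
        self.target = target          # the temperature to maintain
        self.tolerance = tolerance    # acceptable deviation from target

    def act(self, temperature: float) -> str:
        """Map a sensed temperature to an action on the environment."""
        if temperature < self.target - self.tolerance:
            return "heating_on"       # too cold: switch the heater on
        if temperature > self.target + self.tolerance:
            return "cooling_on"       # too warm: switch the cooler on
        return "off"                  # close enough: do nothing

agent = Thermostat(target=21.0)
print(agent.act(18.9))  # -> heating_on
print(agent.act(21.2))  # -> off
```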

The concept of software agents can be traced back to the Concurrent Actor Model developed by Hewitt [2, 3]. This mathematical model introduced the concept of Self-contained Actors that execute functions concurrently and communicate with each other through messages. Nwana [4] describes agents as “software or hardware components within a system that are capable of accomplishing tasks on behalf of its source”, based on information received from the environment. 
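As a toy illustration of the Actor idea (self-contained actors that run concurrently and interact only through messages), consider the sketch below. It is meant for intuition only, not as the formal model of [2, 3], and every name in it is illustrative.

```python
# Toy actors: each actor runs in its own thread and can be influenced
# only by messages placed in its mailbox. Illustrative, not the formal
# Actor model of Hewitt et al. [2, 3].
import queue
import threading
import time

class Actor:
    def __init__(self, name: str):
        self.name = name
        self.mailbox: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, sender: "Actor", text: str) -> None:
        self.mailbox.put((sender, text))   # the only way in: a message

    def _run(self) -> None:
        while True:                        # process messages as they arrive
            sender, text = self.mailbox.get()
            print(f"{self.name} received {text!r} from {sender.name}")

a, b = Actor("a"), Actor("b")
a.send(b, "hello")       # b puts a "hello" message into a's mailbox
time.sleep(0.1)          # give a's thread time to process the message
```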

Characteristics of Intelligent Agents

There are distinctions between an agent and an Intelligent Agent (IA); the level of autonomy, together with other properties, is what separates them. Wooldridge and Jennings' distinction between weak and strong notions of agency currently dominates most of the literature [5]. The weaker notion defines an agent as an entity that exhibits autonomy, reactivity, proactivity, and social ability (a minimal code sketch of these four properties follows the list below).

  • Autonomy is the most important property of an IA and is defined as the ability of an agent to make decisions and control its actions and internal states without direct intervention from other entities (human or machine). In other words, an IA is independent and makes its own decisions. 
  • Reactivity refers to the ability of an agent to perceive and react to environmental changes in order to achieve the goal(s). 
  • Proactivity is the ability of an agent to plan and perform the required actions to achieve its goal(s). 
  • Social Ability enables agents to communicate and interact with each other and other entities in the environment. This interaction can be in the form of coordination, cooperation, negotiation, and even competition. 
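The sketch below shows one way these four properties might surface in code. The environment interface and every name in it are hypothetical; this is a sketch under those assumptions, not a standard agent API.

```python
# A minimal weak-agency sketch: autonomy, reactivity, proactivity, and
# social ability in one small class. All names are hypothetical.

class Environment:
    """A stub environment the agent can sense; illustrative only."""
    def sense(self) -> dict:
        return {"obstacle": False}

class WeakAgent:
    def __init__(self, goal: str):
        self.goal = goal                  # proactivity: a goal to pursue
        self.inbox: list = []             # social ability: messages from peers

    def perceive(self, env: Environment) -> dict:
        return env.sense()                # reactivity: observe the environment

    def decide(self, percept: dict) -> str:
        # Autonomy: the agent selects its own action, with no outside control.
        if percept.get("obstacle"):
            return "avoid"                # react to a change in the environment
        return "advance"                  # otherwise keep working toward the goal

    def tell(self, other: "WeakAgent", message: str) -> None:
        other.inbox.append(message)       # social ability: message another agent

env = Environment()
a, b = WeakAgent("reach_exit"), WeakAgent("reach_exit")
print(a.decide(a.perceive(env)))   # -> advance
a.tell(b, "exit is east")          # b now holds a message from a
```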

The Stronger Notion of Agency

The stronger notion of agency is more descriptive and refers to computer systems that extend the above properties with further, either abstract or personified, concepts. It is quite common to characterise an IA using cognitive notions such as knowledge, belief, intention, and obligation. In the strong notion of agency, agents are considered to have more human-like characteristics and mental attitudes, including rationality, learning, mobility, cooperation, and coordination.

  • Mobility is the agent’s ability to move from its origin to other machines across a network and carry out its design objectives locally on remote hosts [6]. Mobile agents can increase the processing speed of the system as a whole and reduce network traffic and communication costs.
  • Rationality is the ability of an agent to make decisions dynamically, based on the state of the environment [7]. A detailed analysis of what rationality means can be found in [8]; this analysis forms the basis of the Beliefs, Desires, and Intentions (BDI) model [9, 10] for software agents (a schematic BDI loop is sketched below).
  • Learning is the ability of an agent to learn from interactions and changes in the environment through experience in order to improve its performance over time [6]. With a learning ability, an agent is able to add and improve its features dynamically.
  • Cooperation is the ability to establish a voluntary relationship with another agent, so that the two agents adopt mutual goals and form a combined team.
  • Coordination is the ability to manage the interdependencies between humans or other agents and to form a team with them.

Depending on the application and the purpose of where and how agents are used, these properties can be desirable or undesirable. With the aforementioned characteristics, agents can socially interact with each other and collaborate to perform a task; with intelligence and interaction, agents can act as a bridge between humans and machines. A combination of these characteristics may be integrated into a single agent, which is then referred to as a Hybrid Agent. Further research on ‘agent teaming and learning’ is needed to develop more capable social IAs.
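As a rough illustration of how the cognitive notions above are often organised, here is a schematic belief-desire-intention loop. It follows the common textbook shape (revise beliefs, deliberate over desires, commit to an intention, act) rather than the exact architecture of Rao and Georgeff [9, 10], and every name in it is illustrative.

```python
# A schematic BDI loop: revise beliefs from a percept, deliberate over
# desires, commit to one as the current intention, then act on it.
# Illustrative only; not the exact architecture of [9, 10].

class BDIAgent:
    def __init__(self, desires: list):
        self.beliefs: dict = {}       # what the agent takes to be true
        self.desires = desires        # goals it would like to achieve
        self.intention = None         # the goal it has committed to

    def revise_beliefs(self, percept: dict) -> None:
        self.beliefs.update(percept)  # fold new observations into beliefs

    def deliberate(self) -> None:
        # Commit to the first desire that looks achievable given beliefs.
        for desire in self.desires:
            if self.beliefs.get(f"{desire}_possible", False):
                self.intention = desire
                return

    def step(self, percept: dict) -> str:
        self.revise_beliefs(percept)
        self.deliberate()
        return f"acting on {self.intention}" if self.intention else "idle"

agent = BDIAgent(desires=["deliver_parcel", "recharge"])
print(agent.step({"recharge_possible": True}))   # -> acting on recharge
```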

In the next article, we’ll look at the Multi-Agent System (MAS): a group of agents, or of humans and agents, that interact with each other and with the environment to achieve their goals.

References and Credits

  • [1] M. Wooldridge. An Introduction to MultiAgent Systems. John Wiley and Sons Ltd, Chichester, UK, 2002.
  • [2] C. Hewitt. Viewing control structures as patterns of passing messages. Artificial Intelligence, 8(3):323–364, 1977.
  • [3] C. Hewitt, P. Bishop, and R. Steiger. A universal modular actor formalism for artificial intelligence. In IJCAI, pages 235–245, 1973.
  • [4] H. S. Nwana. Software agents: An overview. The Knowledge Engineering Review, 11(3), 1996.
  • [5] M. Wooldridge and N. R. Jennings. Agent theories, architectures, and languages: A survey. In Intelligent Agents: ECAI-94 Workshop on Agent Theories, Architectures, and Languages, pages 1–39. Springer-Verlag, 1995.
  • [6] N. R. Jennings and M. Wooldridge. Software agents. IEE Review, 42(1):17–20, 1996.
  • [7] M. J. Wooldridge. Reasoning about Rational Agents. The MIT Press, Cambridge, Massachusetts, London, England, 2000.
  • [8] M. Bratman. Intention, Plans, and Practical Reason. Harvard University Press, 1987.
  • [9] A. S. Rao and M. P. Georgeff. An abstract architecture for rational agents. In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning (KR ’92), pages 439–449, 1992.
  • [10] A. S. Rao and M. P. Georgeff. BDI agents: From theory to practice. In Proceedings of the First International Conference on Multi-Agent Systems (ICMAS-95), pages 312–319, San Francisco, CA, 1995.
  • Thumbnail Photo by ThisIsEngineering from Pexels
