arxiv:2409.11393

LLM-Agent-UMF: LLM-based Agent Unified Modeling Framework for Seamless Integration of Multi Active/Passive Core-Agents

Published on Sep 17
· Submitted by amine-bh on Sep 23
Abstract

The integration of tools into LLM-based agents overcame the limited capabilities of standalone LLMs and of traditional agents. However, the combination of these technologies and the enhancements proposed in several state-of-the-art works followed no unified software architecture, resulting in a lack of modularity: these works focused mainly on functionality and overlooked the boundaries between components within the agent. The resulting terminological and architectural ambiguity among researchers is what we address in this paper by proposing a unified framework that establishes a clear foundation for LLM-based agent development from both functional and software-architectural perspectives. Our framework, LLM-Agent-UMF (LLM-based Agent Unified Modeling Framework), clearly distinguishes the components of an agent, setting LLMs and tools apart from a newly introduced element, the core-agent, which plays the role of central coordinator. The core-agent comprises five modules: planning, memory, profile, action, and security, the last of which is often neglected in previous works. Differences in the internal structure of core-agents led us to classify them into a taxonomy of passive and active types. On this basis, we proposed multi-core agent architectures combining the unique characteristics of various individual agents. For evaluation purposes, we applied this framework to a selection of state-of-the-art agents, demonstrating its alignment with their functionalities and clarifying previously overlooked architectural aspects. Moreover, we thoroughly assessed four of our proposed architectures by integrating distinctive agents into hybrid active/passive core-agent systems. This analysis provided clear insights into potential improvements and highlighted the challenges involved in combining specific agents.

Community

Paper author · Paper submitter

This paper proposes a unified framework that establishes a clear foundation for LLM-based agents' development from both functional and software architectural perspectives.
Compared with previous works, the five main contributions of this paper can be summarized as follows:

  1. Introducing new terminology, the core-agent, a structural sub-component of LLM-based agents, to improve modularity and promote more effective and precise communication among researchers and contributors in the field of agents and LLM technologies.
  2. Modeling the internal structure of a core-agent by adapting the framework suggested by [4], which was originally meant to describe the whole agent from an abstract functional perspective.
  3. Improving our modeling framework by augmenting the core-agent with a security module and introducing new methods within other modules.
  4. Classifying core-agents into active core-agents and passive core-agents, explaining their differences and similarities, and highlighting their unique advantages.
  5. Finally, introducing various multi-core agent architectures, emphasizing that the hybrid one-active-many-passive core-agent architecture is the optimal setup for LLM-based agents.
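The active/passive distinction above can be sketched in code. This is a minimal illustrative sketch, not the paper's implementation: all class, module, and function names (`CoreAgent`, `ActiveCoreAgent`, `split_steps`, etc.) are hypothetical. It shows a passive core-agent that bundles the five modules (planning, memory, profile, action, security) and acts only when invoked, and an active core-agent that proactively coordinates several passive ones, as in the one-active-many-passive setup.

```python
from typing import Callable, Dict, List, Optional


class CoreAgent:
    """Passive core-agent sketch: bundles the five modules, acts only when invoked.

    All names here are illustrative assumptions, not from the paper's code.
    """

    def __init__(self, name: str,
                 planning: Callable[[str], List[str]],
                 action: Callable[[str], str],
                 security: Optional[Callable[[str], bool]] = None,
                 profile: str = "generic assistant"):
        self.name = name
        self.planning = planning                          # planning: goal -> steps
        self.action = action                              # action: execute one step
        self.security = security or (lambda step: True)   # security: vet each step
        self.profile = profile                            # profile: role/persona
        self.memory: Dict[str, str] = {}                  # memory: past step results

    def run(self, goal: str) -> List[str]:
        results = []
        for step in self.planning(goal):
            if not self.security(step):   # security module filters unsafe steps
                continue
            out = self.action(step)       # action module executes the step
            self.memory[step] = out       # memory module records the outcome
            results.append(out)
        return results


class ActiveCoreAgent:
    """Active core-agent sketch: proactively coordinates passive core-agents
    (the hybrid one-active-many-passive architecture)."""

    def __init__(self, passives: List[CoreAgent]):
        self.passives = passives

    def dispatch(self, goal: str) -> Dict[str, List[str]]:
        # Naive broadcast; a real active core-agent would use its own planning
        # module to route sub-goals to the most suitable passive agent.
        return {p.name: p.run(goal) for p in self.passives}


# One-active-two-passive example
split_steps = lambda goal: goal.split(" then ")
searcher = CoreAgent("search", planning=split_steps,
                     action=lambda s: "searched " + s)
writer = CoreAgent("write", planning=split_steps,
                   action=lambda s: "wrote " + s,
                   security=lambda s: "draft" not in s)  # block drafting steps
orchestrator = ActiveCoreAgent([searcher, writer])
print(orchestrator.dispatch("fetch docs then draft report"))
```

The point of the split is visible in the last lines: the passive agents hold no control flow of their own, so all coordination policy lives in the single active core-agent, which is what makes the architecture modular.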


