
Large Language Models For Message Mediation In Multi-Agent Network Environments

This research investigates using large language models (LLMs) to facilitate communication among agents that utilize different protocols or languages in a multi-agent network environment. With their advanced natural language processing capabilities, LLMs offer a promising approach to interpreting, standardizing, and translating messages in real time, bridging gaps between disparate communication systems. Our experiment evaluates the performance of models like GPT-Neo 2.7B and DistilGPT-2, focusing on their ability to mediate effectively under varying network conditions and protocols. The results demonstrate that LLM-based mediation can improve agents' interoperability, precision, and transmission efficiency. Specifically, GPT-Neo 2.7B achieved a success rate of up to 95% and maintained a precision of 1.0, even under 5% packet loss and with 1000 agents. In contrast, DistilGPT-2's precision fell to 0.71 under the same conditions and to 0.62 with 10% packet loss. With optimized prompts and appropriate configuration, our results highlight the potential of LLMs to serve as universal mediators in multi-agent networks, ensuring smoother and more efficient interactions and improving information exchange in complex and heterogeneous network environments.
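The mediation loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt template, the protocol names, the message format, and the `fake_llm` stub are all assumptions; a real deployment would replace the stub with a model such as GPT-Neo 2.7B served through a text-generation pipeline.

```python
import json


def build_mediation_prompt(message: str, source_proto: str, target_proto: str) -> str:
    """Construct a prompt asking the LLM to translate a message between
    two agent protocols (template is an illustrative assumption)."""
    return (
        f"Translate the following {source_proto} message into an "
        f"equivalent {target_proto} message. Preserve all fields.\n"
        f"Message: {message}\nTranslation:"
    )


def mediate(message: str, source_proto: str, target_proto: str, llm_generate) -> str:
    """Route one inter-agent message through the LLM mediator.

    `llm_generate` is any callable taking a prompt string and returning
    generated text (e.g. a wrapped GPT-Neo 2.7B or DistilGPT-2 model).
    """
    prompt = build_mediation_prompt(message, source_proto, target_proto)
    return llm_generate(prompt).strip()


# Stub standing in for a real model so the sketch is runnable:
# it echoes the payload back wrapped in a JSON envelope.
def fake_llm(prompt: str) -> str:
    payload = prompt.split("Message: ")[1].split("\n")[0]
    return json.dumps({"translated": payload})


result = mediate('{"cmd": "PING"}', "Protocol-A", "Protocol-B", fake_llm)
```

In this design, the mediator sits between agents as a translation layer, so agents never need to implement each other's protocols; only the prompt and the model choice change per protocol pair.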

Myke Valadão
Federal University of Amazonas
Brazil

Celso Carvalho
Federal University of Amazonas
Brazil

Waldir Sabino
Federal University of Amazonas
Brazil