Large Language Models for Message Mediation in Multi-Agent Network Environments
This research investigates the use of large language models (LLMs) to facilitate communication among agents that use different protocols or languages in a multi-agent network environment. With their advanced natural language processing capabilities, LLMs offer a promising approach to interpreting, standardizing, and translating messages in real time, bridging gaps between disparate communication systems. Our experiments evaluate models such as GPT-Neo 2.7B and DistilGPT-2, focusing on their ability to mediate effectively under varying network conditions and protocols. The results demonstrate that LLM-based mediation can improve interoperability, precision, and transmission efficiency among agents. Specifically, GPT-Neo 2.7B achieved a success rate of up to 95% and maintained a precision of 1.0, even under 5% packet loss and with 1000 agents. In contrast, DistilGPT-2's precision dropped to 0.71 under the same conditions and to 0.62 with 10% packet loss. With optimized prompts and appropriate configuration, our approach highlights the potential of LLMs to serve as universal mediators in multi-agent networks, ensuring smoother, more efficient interactions and improving information exchange in complex, heterogeneous network environments.
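The sketch below illustrates the mediation idea in its simplest form, using the Hugging Face `transformers` library: a prompt asks a text-generation model to translate a message from one agent protocol to another. The paper evaluates GPT-Neo 2.7B and DistilGPT-2; `distilgpt2` is loaded here only because it is small enough to run locally. The prompt template, the two toy protocols (JSON vs. key=value), and the `mediate` helper are illustrative assumptions, not the authors' actual configuration or optimized prompts.

```python
# A minimal, assumed sketch of LLM-based message mediation between two
# agent protocols; not the paper's actual prompt or pipeline.
from transformers import pipeline

# DistilGPT-2 stands in for the larger GPT-Neo 2.7B used in the experiments.
generator = pipeline("text-generation", model="distilgpt2")

def mediate(message: str, source_protocol: str, target_protocol: str) -> str:
    """Ask the LLM to translate a message between agent protocols."""
    prompt = (
        f"Translate the following {source_protocol} message into "
        f"{target_protocol}.\n"
        f"Message: {message}\n"
        f"Translation:"
    )
    output = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    # The pipeline returns the prompt plus the continuation; keep only
    # the newly generated translation.
    return output[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    # Example: a JSON command from agent A, re-expressed as key=value
    # pairs for an agent that speaks a different protocol.
    print(mediate('{"cmd": "move", "x": 3, "y": 7}', "JSON", "key=value"))
```

In a full system, this translation step would sit between the network transport and each agent's parser, so that success rate and precision can be measured per message under injected packet loss, as the experiments above describe.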