arxiv:2505.23844

Enabling Flexible Multi-LLM Integration for Scalable Knowledge Aggregation

Published on May 28
· Submitted by TonyK on Jun 2
Abstract

A framework for adaptive selection and dynamic weighted fusion of knowledge from multiple LLMs reduces interference and improves scalability in knowledge aggregation.

AI-generated summary

Large language models (LLMs) have shown remarkable promise but remain challenging to continually improve through traditional finetuning, particularly when integrating capabilities from other specialized LLMs. Popular methods like ensemble and weight merging require substantial memory and struggle to adapt to changing data environments. Recent efforts have transferred knowledge from multiple LLMs into a single target model; however, they suffer from interference and degraded performance among tasks, largely due to limited flexibility in candidate selection and training pipelines. To address these issues, we propose a framework that adaptively selects and aggregates knowledge from diverse LLMs to build a single, stronger model, avoiding the high memory overhead of ensemble and inflexible weight merging. Specifically, we design an adaptive selection network that identifies the most relevant source LLMs based on their scores, thereby reducing knowledge interference. We further propose a dynamic weighted fusion strategy that accounts for the inherent strengths of candidate LLMs, along with a feedback-driven loss function that prevents the selector from converging on a single subset of sources. Experimental results demonstrate that our method can enable a more stable and scalable knowledge aggregation process while reducing knowledge interference by up to 50% compared to existing approaches. Code is available at https://github.com/ZLKong/LLM_Integration

Community

Paper submitter

The contributions are as follows:
• The paper finds that merely increasing the number of fusion candidates and expanding the source model pool does not necessarily enhance the fusion process; a selective strategy is more effective in minimizing knowledge interference.
• The paper proposes a novel dynamic integration framework that adaptively selects LLMs for integration, leveraging an adaptive selection network, a dynamic weighted fusion strategy, and a feedback-driven loss function to alleviate interference issues and enhance performance.
• The model shows improved accuracy across multiple benchmarks as more models are integrated, while reducing knowledge interference by up to 50% compared to previous methods.
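The core fusion idea described above can be sketched in a few lines: a selector scores the candidate source LLMs, the top-k are retained to limit interference, and their token distributions are combined with softmax-normalized weights. This is an illustrative sketch only — the function names, the use of a plain score vector in place of the paper's learned selection network, and the omission of the feedback-driven loss (which keeps the selector from collapsing onto one fixed subset of sources) are all assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_sources(source_logits, selector_scores, k=2):
    """Adaptive selection + dynamic weighted fusion (illustrative sketch).

    source_logits:   (n_sources, vocab) next-token logits from each source LLM
    selector_scores: (n_sources,) relevance scores; stands in for the paper's
                     learned adaptive selection network (assumption)
    k:               number of sources to keep, reducing knowledge interference
    """
    top = np.argsort(selector_scores)[-k:]        # indices of the top-k sources
    weights = softmax(selector_scores[top])       # dynamic fusion weights
    probs = softmax(source_logits[top], axis=-1)  # per-source token distributions
    return (weights[:, None] * probs).sum(axis=0) # fused target distribution
```

The fused distribution could then serve as a distillation target for the single student model; in the paper's full pipeline, the selector is trained jointly with a feedback-driven loss rather than fixed as here.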


