arxiv:2506.14824

FedNano: Toward Lightweight Federated Tuning for Pretrained Multimodal Large Language Models

Published on Jun 12
· Submitted by ShuoChen99 on Jun 19
Abstract

FedNano is a federated learning framework that keeps the large language model on the server and uses lightweight NanoEdge modules for client-specific adaptation, addressing scalability and privacy constraints.

AI-generated summary

Multimodal Large Language Models (MLLMs) excel at tasks such as multimodal reasoning and cross-modal retrieval, but face deployment challenges in real-world scenarios due to distributed multimodal data and strict privacy requirements. Federated Learning (FL) offers a solution by enabling collaborative model training without centralizing data. However, realizing FL for MLLMs presents significant challenges, including high computational demands, limited client capacity, substantial communication costs, and heterogeneous client data. Existing FL methods assume client-side deployment of full models, an assumption that breaks down for large-scale MLLMs due to their massive size and communication demands. To address these limitations, we propose FedNano, the first FL framework that centralizes the LLM on the server while introducing NanoEdge, a lightweight module for client-specific adaptation. NanoEdge employs modality-specific encoders, connectors, and trainable NanoAdapters with low-rank adaptation. This design eliminates the need to deploy the LLM on clients, reducing client-side storage by 95% and limiting communication overhead to only 0.01% of the model parameters. By transmitting only compact NanoAdapter updates, FedNano handles heterogeneous client data and resource constraints while preserving privacy. Experiments demonstrate that FedNano outperforms prior FL baselines, bridging the gap between MLLM scale and FL feasibility and enabling scalable, decentralized multimodal AI systems.
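To make the low-rank adaptation idea concrete, here is a minimal sketch of a LoRA-style adapter update of the kind NanoAdapters are described as using. All names, shapes, and the rank value are illustrative assumptions, not the paper's code; the point is only that the trainable adapter matrices are a tiny fraction of the frozen base weight, which is what keeps communication small.

```python
import numpy as np

# Hypothetical LoRA-style adapter sketch (not the paper's implementation).
rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4                  # rank r << d: low-rank bottleneck
W = rng.standard_normal((d_out, d_in))      # frozen base weight (stays on the server)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero-init,
                                            # so the adapter is a no-op at start)

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)                     # adapted forward pass

# Only A and B would be trained and communicated; the fraction of
# parameters they represent relative to W is small even at this toy scale.
frac = (A.size + B.size) / W.size           # 512 / 4096 = 0.125 here
```

At realistic MLLM scale, with `r` in the single digits and `d` in the thousands, this fraction shrinks by orders of magnitude, which is consistent with the abstract's claim of transmitting only compact adapter updates.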

Community

Paper submitter

FedNano is the first federated learning framework tailored for large multimodal LLMs that avoids deploying the LLM on clients. Instead, it centralizes the LLM on the server and introduces a lightweight, client-side module called NanoEdge, which uses low-rank NanoAdapters for modality-specific tuning. This reduces client storage by over 95% and communication costs by 99%, while maintaining strong performance under heterogeneous, non-IID data via Fisher-guided aggregation. FedNano enables practical, scalable, and privacy-preserving deployment of MLLMs across decentralized environments.
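The note mentions Fisher-guided aggregation for handling non-IID clients. As a rough illustration only, the sketch below weights each client's adapter update by a per-parameter diagonal Fisher information estimate, so clients that are more confident about a parameter contribute more to it. The function name and the exact weighting rule are assumptions; the paper's aggregation may differ in detail.

```python
import numpy as np

def fisher_aggregate(updates, fishers, eps=1e-8):
    """Illustrative Fisher-weighted averaging of client parameter updates.

    updates: list of np.ndarray, one update vector per client
    fishers: list of np.ndarray, per-parameter diagonal Fisher estimates
    Returns the element-wise Fisher-weighted mean of the updates.
    """
    num = sum(f * u for f, u in zip(fishers, updates))
    den = sum(fishers) + eps          # eps guards against all-zero Fisher entries
    return num / den

# Two clients, each confident about a different parameter:
u1, f1 = np.array([1.0, 0.0]), np.array([3.0, 1.0])
u2, f2 = np.array([0.0, 1.0]), np.array([1.0, 3.0])
agg = fisher_aggregate([u1, u2], [f1, f2])   # each param leans toward the
                                             # client with higher Fisher weight
```

Plain averaging would give `[0.5, 0.5]` here; the Fisher weighting pulls each coordinate toward the client that is more certain about it, which is the intuition behind using curvature information under heterogeneous data.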

