Reasoning Language Models for Root Cause Analysis in 5G Wireless Networks
Abstract
A lightweight framework that pairs Large Language Models (LLMs) with the TeleLogs dataset and a two-stage training methodology improves Root Cause Analysis (RCA) in mobile networks, enhancing interpretability and reasoning quality.
Root Cause Analysis (RCA) in mobile networks remains a challenging task due to the need for interpretability, domain expertise, and causal reasoning. In this work, we propose a lightweight framework that leverages Large Language Models (LLMs) for RCA. To do so, we introduce TeleLogs, a curated dataset of annotated troubleshooting problems designed to benchmark RCA capabilities. Our evaluation reveals that existing open-source reasoning LLMs struggle with these problems, underscoring the need for domain-specific adaptation. To address this issue, we propose a two-stage training methodology that combines supervised fine-tuning with reinforcement learning to improve the accuracy and reasoning quality of LLMs. The proposed approach fine-tunes a series of RCA models to integrate domain knowledge and generate structured, multi-step diagnostic explanations, improving both interpretability and effectiveness. Extensive experiments across multiple LLM sizes show significant performance gains over state-of-the-art reasoning and non-reasoning models, including strong generalization to randomized test variants. These results demonstrate the promise of domain-adapted, reasoning-enhanced LLMs for practical and explainable RCA in network operation and management.
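The abstract says the fine-tuned models emit structured, multi-step diagnostic explanations. As a minimal sketch of what consuming such an output might look like, the snippet below parses and validates a hypothetical RCA response; the field names (`observations`, `reasoning_steps`, `root_cause`) and the example scenario are illustrative assumptions, not the schema used in the paper.

```python
import json

# Hypothetical structured RCA output an adapted LLM might emit.
# Field names and content are illustrative assumptions only.
raw_output = """{
  "observations": [
    "UE attach failures spike on cell 12",
    "RRC setup success rate drops on the same cell"
  ],
  "reasoning_steps": [
    "Attach failures correlate with RRC setup drops, pointing to the radio access side rather than the core.",
    "Neighboring cells are unaffected, so the issue is local to cell 12."
  ],
  "root_cause": "misconfigured RRC parameters on cell 12"
}"""

def parse_rca(text: str) -> dict:
    """Parse and minimally validate a structured RCA explanation."""
    out = json.loads(text)
    for key in ("observations", "reasoning_steps", "root_cause"):
        if key not in out:
            raise ValueError(f"missing field: {key}")
    return out

rca = parse_rca(raw_output)
print(rca["root_cause"])  # -> misconfigured RRC parameters on cell 12
```

Enforcing a schema like this is one way a pipeline could check that a model's diagnosis is both machine-readable and accompanied by explicit reasoning steps.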
Community
We propose a lightweight root cause analysis framework based on reasoning LLMs, trained with a mix of supervised and reinforcement learning on the TeleLogs dataset. Our method achieves over 95% accuracy with Qwen2.5-32B-Instruct and outperforms strong baselines, showing promise for explainable, LLM-driven diagnostics in complex network scenarios.
The following papers were recommended by the Semantic Scholar API
- DeepForm: Reasoning Large Language Model for Communication System Formulation (2025)
- RCA Copilot: Transforming Network Data into Actionable Insights via Large Language Models (2025)
- Time-RA: Towards Time Series Reasoning for Anomaly with LLM Feedback (2025)
- TimeMaster: Training Time-Series Multimodal LLMs to Reason via Reinforcement Learning (2025)
- MMAT-1M: A Large Reasoning Dataset for Multimodal Agent Tuning (2025)
- A Wireless Foundation Model for Multi-Task Prediction (2025)
- SoundMind: RL-Incentivized Logic Reasoning for Audio-Language Models (2025)