Explainable AI for Network Anomaly Detection: Trust in Autonomous Networks
Dr. Thomas Bonald, Dr. Aline Carneiro Viana
Telecom Paris / Inria
Abstract
We address the critical need for explainability in AI-driven network anomaly detection systems. Our framework combines a high-accuracy anomaly detector with a post-hoc explanation module that provides human-understandable reasons for each alert. The explanation module uses SHAP values adapted for time-series network data; the full framework achieves 94% anomaly detection accuracy while producing explanations that network operators rate as helpful 87% of the time.
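The paper's own detector and code are not reproduced on this page. The sketch below only illustrates the general pattern the abstract describes: collapse each time window of network metrics into summary features, score windows with an anomaly detector, and attribute an alert's score to named features with model-agnostic SHAP values. The metric names, the windowing scheme, and the IsolationForest detector are all illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed setup, not the paper's released code) of SHAP-style
# explanations for a time-series network anomaly detector.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import IsolationForest

# Hypothetical per-second network KPIs; the column names are assumptions.
METRICS = ["throughput_mbps", "latency_ms", "packet_loss_pct", "active_flows"]

def windowed_features(ts: pd.DataFrame, window: int = 60) -> pd.DataFrame:
    """Collapse each sliding window of raw metrics into summary statistics,
    so SHAP attributes an alert to interpretable window-level features."""
    feats = {}
    for m in METRICS:
        roll = ts[m].rolling(window)
        feats[f"{m}_mean"] = roll.mean()
        feats[f"{m}_std"] = roll.std()
        feats[f"{m}_max"] = roll.max()
    return pd.DataFrame(feats).dropna()

rng = np.random.default_rng(0)
ts = pd.DataFrame(rng.normal(size=(3600, len(METRICS))), columns=METRICS)
ts.loc[3000:3060, "latency_ms"] += 8.0  # injected synthetic latency anomaly

X = windowed_features(ts)
detector = IsolationForest(random_state=0).fit(X)

# KernelExplainer is model-agnostic: it decomposes the detector's anomaly
# score into additive per-feature contributions (SHAP values).
background = shap.sample(X, 100)
explainer = shap.KernelExplainer(lambda a: detector.decision_function(a), background)

alert = X.iloc[[-1]]                      # the window that triggered the alert
phi = explainer.shap_values(alert, nsamples=200)[0]

# Lower decision_function means more anomalous, so the most negative SHAP
# values are the features that pushed the window toward "anomalous".
ranked = sorted(zip(X.columns, phi), key=lambda kv: kv[1])
for name, contrib in ranked[:3]:
    print(f"{name}: {contrib:+.3f}")
```

In this style of output, an alert arrives with a short ranked list such as "latency_ms_mean, latency_ms_max" rather than a bare anomaly score, which is the kind of human-readable reason the abstract refers to.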
AI Summary
- Explainable AI framework for network anomaly detection with human-readable explanations.
- 94% detection accuracy with explanations rated helpful 87% of the time by operators.
- SHAP-based explanation module adapted for time-series network data.
- Bridges the trust gap between AI systems and network operators.
Key Findings
1. Operators take action 3x faster when anomaly alerts include explanations.
2. Explanations reveal model biases that can be corrected through retraining (see the sketch after this list).
3. False positive rate decreases 25% when operators can verify AI reasoning.
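The page does not detail how explanations surface model bias (finding 2). One plausible mechanism, continuing the hypothetical sketch above (it reuses `explainer` and `X` from there), is to aggregate absolute SHAP values across many alerts and look for features the detector leans on indiscriminately.

```python
import numpy as np

# Continuation of the sketch above (assumes `explainer` and `X` exist).
# Averaging absolute SHAP values over a batch of recent alert windows gives
# a global view of what the detector relies on.
alerts = X.tail(50)
phis = explainer.shap_values(alerts, nsamples=200)

mean_abs = np.abs(phis).mean(axis=0)
for name, score in sorted(zip(X.columns, mean_abs), key=lambda kv: -kv[1]):
    print(f"{name}: mean |SHAP| = {score:.3f}")

# If one feature dominates every alert regardless of the underlying incident
# (e.g. a feature acting as a proxy for time of day), that is a candidate
# bias: drop or re-encode it and retrain the detector.
```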
Industry Implications
- Essential for operator trust in autonomous 6G network management.
- Regulatory compliance may require AI explainability in critical infrastructure.
- Enables continuous improvement of AI models through operator feedback.
Read the Original Paper
Access the full paper on arXiv for complete methodology, results, and references.
Open on arXiv
Related Papers
AI-Native Air Interface Design: End-to-End Learning for 6G Physical Layer
University of Stuttgart — 41 citations
Digital Twin Networks: AI-Driven Real-Time Network Simulation for 6G
Oulu University / Ruhr University Bochum — 29 citations
Intent-Based Network Management with Large Language Models
Universidad Carlos III de Madrid — 16 citations