AI + Network Papers · 13 min read · 13 citations

Explainable AI for Network Anomaly Detection: Trust in Autonomous Networks

Dr. Thomas Bonald, Dr. Aline Carneiro Viana

Telecom Paris / Inria

Jan 25, 2026

Abstract

We address the critical need for explainability in AI-driven network anomaly detection systems. Our framework combines a high-accuracy anomaly detector with a post-hoc explanation module that provides human-understandable reasons for each alert. The explanation module uses SHAP values adapted for time-series network data, achieving 94% anomaly detection accuracy while providing explanations that network operators rate as helpful 87% of the time.
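The paper's explanation module adapts SHAP values to time-series network data. As a minimal illustration of the underlying idea, the sketch below uses a simpler occlusion-style attribution (a stand-in for SHAP, not the authors' method): each timestep in a traffic window is replaced by its baseline value, and the resulting drop in the anomaly score is taken as that timestep's contribution. All names (`anomaly_score`, `explain`) and the toy detector are hypothetical.

```python
# Simplified post-hoc attribution for a time-series anomaly score.
# Occlusion-style stand-in for SHAP: replace one feature with its
# baseline and measure how much the anomaly score drops.

def anomaly_score(window):
    """Toy detector: mean absolute deviation from a nominal level of 1.0."""
    return sum(abs(x - 1.0) for x in window) / len(window)

def explain(window, baseline):
    """Attribute the score to each timestep by baseline substitution."""
    full = anomaly_score(window)
    contributions = []
    for i in range(len(window)):
        occluded = list(window)
        occluded[i] = baseline[i]  # occlude one timestep
        contributions.append(full - anomaly_score(occluded))
    return contributions

# Usage: a traffic window with one burst; the burst dominates the attribution.
baseline = [1.0, 1.0, 1.0, 1.0]
window = [1.0, 1.0, 5.0, 1.0]  # anomalous spike at index 2
attrib = explain(window, baseline)
print(max(range(len(attrib)), key=attrib.__getitem__))  # → 2
```

Real SHAP generalizes this by averaging the marginal contribution of each feature over all subsets of occluded features, which is what makes the attributions consistent; the single-feature occlusion above is the cheapest special case.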

AI Summary
  • Explainable AI framework for network anomaly detection with human-readable explanations.
  • 94% detection accuracy with explanations rated helpful 87% of the time by operators.
  • SHAP-based explanation module adapted for time-series network data.
  • Bridges the trust gap between AI systems and network operators.

Key Findings

  1. Operators take action 3x faster when anomaly alerts include explanations.
  2. Explanations reveal model biases that can be corrected through retraining.
  3. False positive rate decreases 25% when operators can verify AI reasoning.

Industry Implications

  • Essential for operator trust in autonomous 6G network management.
  • Regulatory compliance may require AI explainability in critical infrastructure.
  • Enables continuous improvement of AI models through operator feedback.

Explainable AI · Anomaly Detection · Trust · Network Security

Read the Original Paper

Access the full paper on arXiv for complete methodology, results, and references.

