Federated Split Learning for Privacy-Preserving AI in Multi-Operator Networks
Dr. Kaibin Huang, Dr. Deniz Gunduz
University of Hong Kong / Imperial College London
Abstract
We propose federated split learning (FSL) as a privacy-preserving AI framework for multi-operator 6G network optimization. FSL splits the neural network model between operator premises and a neutral aggregation server, with only intermediate representations (not raw data) shared. This provides stronger privacy than standard federated learning while reducing on-device computation. Applied to multi-operator spectrum sharing, FSL achieves 95% of the performance of centralized training while provably protecting each operator's proprietary data.
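The split described above, where only an intermediate representation crosses the trust boundary, can be sketched with a toy fully connected model. This is a minimal illustration, not the paper's architecture: all layer sizes, weight names, and the NumPy implementation are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw operator data (d_in) never leaves the
# operator; only the cut-layer activation (d_cut) reaches the server.
d_in, d_cut, d_out = 8, 4, 2

# Operator-side (client) weights: kept on operator premises.
W_client = rng.normal(size=(d_in, d_cut)) * 0.1
# Server-side weights: held by the neutral aggregation server.
W_server = rng.normal(size=(d_cut, d_out)) * 0.1

def client_forward(x):
    # Forward pass up to the split (cut) layer; the output is the
    # intermediate representation that is shared instead of raw data.
    return np.tanh(x @ W_client)

def server_forward(h):
    # The server completes the forward pass from the cut layer onward.
    return h @ W_server

x = rng.normal(size=(3, d_in))   # raw operator data, stays local
h = client_forward(x)            # only this crosses the trust boundary
y = server_forward(h)
print(h.shape, y.shape)          # (3, 4) (3, 2)
```

In training, the server would return only the gradient with respect to `h`, so the operator's raw inputs and client-side weights are never exposed.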
AI Summary
- Federated split learning for privacy-preserving multi-operator AI.
- Achieves 95% of centralized training performance with provable data protection.
- Stronger privacy than standard federated learning.
- Applied to multi-operator spectrum sharing optimization.
Key Findings
- Split point selection critically affects the privacy-accuracy tradeoff.
- FSL reduces operator-side computation by 60% compared to full federated learning.
- Privacy guarantees hold even against honest-but-curious aggregation servers.
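The split-point tradeoff in the findings above can be made concrete by counting parameters on each side of the cut. This is a hypothetical example with made-up layer widths; it does not reproduce the paper's 60% figure, only the qualitative effect that moving the cut changes how much computation stays on operator premises.

```python
# Hypothetical layer widths for a small fully connected model;
# widths[0] is the input dimension, widths[-1] the output.
widths = [64, 32, 16, 8, 4, 2]

def split_costs(widths, cut):
    """Parameter count on each side when the model is cut after the
    `cut`-th weight matrix (1 <= cut < len(widths) - 1)."""
    params = [widths[i] * widths[i + 1] for i in range(len(widths) - 1)]
    return sum(params[:cut]), sum(params[cut:])

for cut in range(1, len(widths) - 1):
    client, server = split_costs(widths, cut)
    print(f"cut={cut}: client params={client}, server params={server}")
```

A later cut keeps more layers (and compute) on the operator side but typically yields a more abstract, harder-to-invert representation; the privacy-accuracy tradeoff lives in that choice.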
Industry Implications
Enables AI collaboration between competing operators without data exposure.
Supports 6G spectrum sharing and interference management across operators.
Applicable to any multi-stakeholder network optimization scenario.
Read the Original Paper
Access the full paper on arXiv for complete methodology, results, and references.
Open on arXiv
Related Papers
Diffusion-Based Generative Models for Synthetic Network Traffic Generation
Michigan State University — 11 citations
Federated Reinforcement Learning for Distributed Network Optimization
Stanford University — 8 citations
AI-Native Air Interface Design: End-to-End Learning for 6G Physical Layer
University of Stuttgart — 41 citations