AI + Network Papers · 16 min read · 10 citations

Federated Split Learning for Privacy-Preserving AI in Multi-Operator Networks

Dr. Kaibin Huang, Dr. Deniz Gunduz

University of Hong Kong / Imperial College London

Jan 31, 2026

Abstract

We propose federated split learning (FSL) as a privacy-preserving AI framework for multi-operator 6G network optimization. FSL splits the neural network model between operator premises and a neutral aggregation server, with only intermediate representations (not raw data) shared. This provides stronger privacy than standard federated learning while reducing on-device computation. Applied to multi-operator spectrum sharing, FSL achieves 95% of the performance of centralized training while provably protecting each operator's proprietary data.
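The mechanism described above, splitting the model between operator premises and a neutral server and periodically averaging the operator-side halves, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the two-layer model, layer sizes, and class names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class OperatorClient:
    """Operator-side model half: raw data never leaves the premises."""
    def __init__(self, in_dim, hidden_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, hidden_dim))

    def forward(self, x):
        # Only this intermediate representation is shared, not x itself.
        return np.maximum(x @ self.W, 0.0)  # ReLU activation

class AggregationServer:
    """Neutral server half: completes the forward pass, never sees raw data."""
    def __init__(self, hidden_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(hidden_dim, out_dim))

    def forward(self, h):
        return h @ self.W

def fedavg(weight_list):
    """Federated averaging of the operators' client-side weights."""
    return np.mean(weight_list, axis=0)

# Three competing operators, each with private local data.
clients = [OperatorClient(in_dim=8, hidden_dim=4) for _ in range(3)]
server = AggregationServer(hidden_dim=4, out_dim=2)

for client in clients:
    x_private = rng.normal(size=(5, 8))  # stays on operator premises
    h = client.forward(x_private)        # intermediate representation, shared
    y = server.forward(h)                # server-side completion

# Periodically the client halves are averaged, as in federated learning,
# so operators benefit from each other without exchanging data.
avg_W = fedavg([c.W for c in clients])
for c in clients:
    c.W = avg_W.copy()
```

In a real deployment the backward pass would return gradients of the shared activations to each operator; the sketch shows only the forward/aggregation structure that distinguishes FSL from standard federated learning.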

AI Summary
  • Federated split learning for privacy-preserving multi-operator AI.
  • Achieves 95% of centralized training performance with provable data protection.
  • Stronger privacy than standard federated learning.
  • Applied to multi-operator spectrum sharing optimization.

Key Findings

  1. Split point selection critically affects the privacy-accuracy tradeoff.
  2. FSL reduces operator-side computation by 60% compared to full federated learning.
  3. Privacy guarantees hold even against honest-but-curious aggregation servers.
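The split-point tradeoff in the findings above can be made concrete with a toy calculation: cutting the model deeper pushes more of the forward-pass cost onto the operator, while sharing a more abstract (harder-to-invert) representation. The per-layer costs below are invented for illustration and are not taken from the paper.

```python
# Hypothetical per-layer forward-pass costs, front to back (arbitrary units).
layer_flops = [100, 80, 60, 40, 20]
total = sum(layer_flops)

def operator_share(split_point):
    """Fraction of forward-pass compute done on operator premises
    when the model is cut after `split_point` layers."""
    return sum(layer_flops[:split_point]) / total

for k in range(1, len(layer_flops) + 1):
    print(f"split after layer {k}: operator does {operator_share(k):.0%} of compute")
```

A shallow split (small `operator_share`) saves operator compute but exposes a representation closer to the raw data; the paper's 60% computation saving corresponds to choosing a split point that balances this against the privacy requirement.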

Industry Implications

Enables AI collaboration between competing operators without data exposure.

Supports 6G spectrum sharing and interference management across operators.

Applicable to any multi-stakeholder network optimization scenario.

Federated Learning · Split Learning · Privacy · Multi-Operator

Read the Original Paper

Access the full paper on arXiv for complete methodology, results, and references.