Efficient On-Device Training for Adaptive 6G Receivers
Dr. Junmo Kim, Prof. Youngchul Sung
KAIST
Abstract
We propose an efficient on-device training framework that enables 6G receiver neural networks to continuously adapt to changing channel conditions without cloud connectivity. Using a combination of pruned backpropagation and mixed-precision training, our approach reduces on-device training memory by 8x and energy by 5x compared to standard backpropagation, while maintaining adaptation quality. This enables continuous learning on mobile device chipsets with limited resources.
AI Summary
- On-device training framework for adaptive 6G receiver neural networks.
- 8x memory reduction and 5x energy reduction versus standard backpropagation.
- Maintains adaptation quality while fitting mobile device constraints.
- Enables continuous learning without cloud connectivity.
Key Findings
- 1. Pruned backpropagation skips gradient computation for 80% of parameters without accuracy loss.
- 2. Mixed-precision training at INT8 for most operations preserves model quality.
- 3. Adaptation converges within 100 iterations, taking less than 1 second on mobile hardware.
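The first two findings can be illustrated together in a small sketch: mask out 80% of the gradient (updating only the largest-magnitude entries) and run the forward pass through simulated INT8 weights. This is a toy NumPy illustration under assumed names and shapes (a single linear "receiver" layer tracking a target channel matrix), not the paper's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "receiver" layer y = W x, adapted to match a target channel W_true.
# W, W_true, keep_frac, lr are all illustrative choices, not from the paper.
W = rng.standard_normal((16, 16)).astype(np.float32)
W_true = rng.standard_normal((16, 16)).astype(np.float32)

def int8_quantize(x):
    """Symmetric INT8 fake-quantization: scale, round to [-127, 127], dequantize."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.round(x / scale).clip(-127, 127) * scale

keep_frac = 0.2   # update only the top 20% of gradient entries (finding 1)
lr = 0.2
mse0 = float(np.mean((W - W_true) ** 2))  # error before adaptation

for step in range(100):                    # "within 100 iterations" (finding 3)
    x = rng.standard_normal((16, 32)).astype(np.float32)
    y_hat = int8_quantize(W) @ x           # forward pass in simulated INT8 (finding 2)
    err = y_hat - W_true @ x
    grad = err @ x.T / x.shape[1]          # gradient of 0.5 * ||err||^2 w.r.t. W
    k = int(keep_frac * grad.size)
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    mask = np.abs(grad) >= thresh          # skip ~80% of parameter updates
    W -= lr * grad * mask

mse = float(np.mean((W - W_true) ** 2))
print(mse0, mse)
```

Skipping the masked entries means their gradients never need to be computed or stored, which is where the memory and energy savings come from; in the sketch the error still drops sharply because the largest-error weights are always the ones updated.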
Industry Implications
- Enables truly adaptive AI receivers for 6G without cloud dependency.
- Supports the 6G vision of device-level intelligence.
- Reduces latency and energy cost of AI model updates in the field.
Read the Original Paper
Access the full paper on arXiv for complete methodology, results, and references.
Related Papers
Transformer-Based Channel Estimation for Massive MIMO Systems
Tsinghua University — 12 citations
Federated Reinforcement Learning for Distributed Network Optimization
Stanford University — 8 citations
Neural Architecture Search for Efficient Edge AI in Wireless Networks
Samsung AI Center Seoul — 5 citations