Neural Architecture Search for Efficient Edge AI in Wireless Networks
Dr. Sunghoon Kim, Dr. Jihun Park
Samsung AI Center Seoul
Abstract
This work applies neural architecture search (NAS) to automatically discover compact yet high-performance AI models tailored for wireless network edge devices. Our hardware-aware NAS framework jointly optimizes model accuracy and inference latency on target edge hardware. The discovered architectures achieve comparable accuracy to manually designed models while requiring 5x less computation and fitting within 2MB memory constraints.
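The abstract's joint optimization of accuracy and inference latency can be illustrated with a common soft-constraint formulation from hardware-aware NAS (the MnasNet-style reward); this is a hedged sketch, not the paper's exact objective, and the target latency and exponent are invented for illustration.

```python
# Illustrative hardware-aware NAS objective (an assumption, not the paper's
# exact formulation): score a candidate architecture by its accuracy with a
# soft latency penalty, MnasNet-style. target_ms and w are invented values.

def nas_score(accuracy: float, latency_ms: float,
              target_ms: float = 10.0, w: float = -0.07) -> float:
    """Soft-constraint reward: accuracy scaled by (latency / target) ** w."""
    return accuracy * (latency_ms / target_ms) ** w

# Models under the latency budget are mildly rewarded; models over budget are
# penalized even when their raw accuracy is slightly higher.
fast = nas_score(0.74, latency_ms=8.0)    # under the 10 ms budget
slow = nas_score(0.76, latency_ms=20.0)   # over budget despite higher accuracy
```

A search controller would rank sampled architectures by this single scalar, which is how accuracy and on-device latency are traded off in one objective.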
AI Summary
- Applies hardware-aware NAS to find optimal AI models for wireless edge devices.
- Discovered models achieve parity with hand-designed networks at 5x less compute.
- All models fit within 2MB memory, suitable for resource-constrained base station controllers.
- Framework is open-sourced for the research community.
Key Findings
1. Automated architecture search outperforms manual design for edge deployment scenarios.
2. Joint optimization of accuracy and latency yields Pareto-optimal model families.
3. Transfer learning from NAS-discovered architectures accelerates adaptation to new network conditions.
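A Pareto-optimal model family, as in the second finding, is the subset of searched architectures that no other candidate beats on both accuracy and latency at once. A minimal sketch with invented candidate values:

```python
# Minimal sketch of extracting the Pareto-optimal family from a pool of
# searched architectures. A model is kept when no other candidate is at least
# as accurate AND at least as fast, with a strict improvement on one axis.
# The candidate names and numbers below are invented for illustration.

def pareto_front(models):
    """models: list of (name, accuracy, latency_ms); higher accuracy and
    lower latency are better. Returns the non-dominated subset."""
    front = []
    for name, acc, lat in models:
        dominated = any(a >= acc and l <= lat and (a > acc or l < lat)
                        for _, a, l in models)
        if not dominated:
            front.append((name, acc, lat))
    return front

candidates = [("A", 0.72, 5.0), ("B", 0.75, 9.0),
              ("C", 0.74, 12.0), ("D", 0.78, 15.0)]
# "C" is dominated by "B", which is both more accurate and faster.
```

Each point on the resulting front is a deployable trade-off, so one search run yields a family of models for different latency budgets rather than a single network.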
Industry Implications
Accelerates deployment of AI at the network edge for real-time decision making.
Reduces the need for AI expertise in designing models for telecom edge use cases.
Enables 6G vision of ubiquitous AI at every network node.
Read the Original Paper
Access the full paper on arXiv for complete methodology, results, and references.
Open on arXiv
Related Papers
Mixture-of-Experts Transformers for Scalable 6G Signal Processing
Imperial College London — 6 citations
Transformer-Based Channel Estimation for Massive MIMO Systems
Tsinghua University — 12 citations
Federated Reinforcement Learning for Distributed Network Optimization
Stanford University — 8 citations