Research

Latest academic paper digests, data charts, and research insights from the 6G and AI research community.

16 papers

AI/ML Papers · 15 min read

State Space Models for Wireless Channel Prediction

Dr. Lukas Mauch, Prof. Hans Schotten · RPTU Kaiserslautern / 6G Research Hub

We apply structured state space models (S4/Mamba) to wireless channel prediction, demonstrating that these architectures offer superior long-range sequence modeling compared to transformers and LSTMs for time-varying channels. Our S4-Channel model predicts channel state information 10–50 ms into the future with 40% lower prediction error than transformer baselines, with computation that scales linearly rather than quadratically in sequence length. This enables predictive beamforming and proactive resource allocation.
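The linear-time recurrence that distinguishes state space models from attention can be sketched with a toy discretized SSM scan. This is a minimal illustration with hypothetical parameters, not the paper's S4-Channel architecture (which uses structured parameterizations and parallel scans):

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Run a discretized linear state space model over a sequence.

    u: (T,) input sequence (e.g., past channel gains)
    A: (N, N) state transition; B, C: (N,) input/output maps.
    Cost is O(T) in sequence length, vs. O(T^2) for attention.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B * u_t   # linear state recurrence
        ys.append(C @ x)      # readout, usable as a one-step prediction
    return np.array(ys)

# Toy example: a 2-state system tracking a noisy sinusoidal channel gain
rng = np.random.default_rng(0)
t = np.arange(200)
u = np.sin(0.1 * t) + 0.05 * rng.standard_normal(200)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([0.5, 0.5])
C = np.array([1.0, 0.0])
y = ssm_scan(u, A, B, C)
print(y.shape)  # (200,)
```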

Feb 6, 2026
4 citations
State Space Model · Channel Prediction · Mamba
AI/ML Papers · 17 min read

Distributed Foundation Models for Heterogeneous Network Optimization

Dr. Mingzhe Chen, Prof. Walid Saad et al. · Virginia Tech / Beijing University of Posts and Telecommunications

We introduce a distributed foundation model framework for optimizing heterogeneous wireless networks comprising macro cells, small cells, and Wi-Fi access points. A pre-trained base model is split across network tiers with tier-specific adapter modules, enabling coordinated optimization without sharing raw data between tiers. The framework achieves 30% better network-wide throughput than independent per-tier optimization while reducing inter-tier interference by 45%.
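The split-model idea (one frozen backbone, per-tier adapter heads, no raw-data sharing between tiers) can be sketched as follows. All class and tier names here are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

class SharedBackbone:
    """Frozen pre-trained base: a single layer standing in for a foundation model."""
    def __init__(self, d_in, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_hidden)) * 0.1
    def __call__(self, x):
        return np.tanh(x @ self.W)

class TierAdapter:
    """Small tier-specific head (macro / small cell / Wi-Fi), trained locally."""
    def __init__(self, d_hidden, d_out, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_hidden, d_out)) * 0.1
    def __call__(self, h):
        return h @ self.W

backbone = SharedBackbone(d_in=16, d_hidden=32)
adapters = {tier: TierAdapter(32, 4, seed=i)
            for i, tier in enumerate(["macro", "small", "wifi"])}

# Each tier runs the shared backbone on its own observations; raw data never
# leaves the tier -- only adapter parameters would be coordinated.
obs = np.random.default_rng(1).standard_normal((8, 16))
actions = {tier: adapters[tier](backbone(obs)) for tier in adapters}
print({t: a.shape for t, a in actions.items()})
```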

Feb 4, 2026
9 citations
Foundation Model · HetNet · Distributed AI
AI/ML Papers · 16 min read

Physics-Informed Neural Networks for Radio Propagation Modeling

Dr. Andreas Molisch, Dr. Dawei Ying · University of Southern California

We develop physics-informed neural networks (PINNs) for radio propagation modeling that incorporate Maxwell's equations as soft constraints during training. By encoding electromagnetic wave physics directly into the loss function, our PINNs predict path loss and multipath characteristics with 5x less training data than purely data-driven approaches while maintaining equivalent accuracy. The model generalizes to unseen environments 3x better than standard neural network propagation models.
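The "physics as a soft constraint in the loss" pattern can be sketched in one dimension. The paper embeds Maxwell's equations; this simplified illustration instead penalizes a scalar Helmholtz residual (u'' + k²u ≈ 0) via finite differences, alongside a data-fit term on sparse measurements:

```python
import numpy as np

def pinn_loss(field, measured, k, h, lam=0.1):
    """Data loss + physics penalty for a 1-D scalar field on a uniform grid.

    field: (N,) predicted field samples, spacing h
    measured: (N,) noisy measurements, NaN where unobserved
    """
    # Data term: only where measurements exist
    mask = ~np.isnan(measured)
    data_loss = np.mean((field[mask] - measured[mask]) ** 2)
    # Physics term: discrete Helmholtz residual as a soft constraint
    lap = (field[2:] - 2 * field[1:-1] + field[:-2]) / h**2
    residual = lap + k**2 * field[1:-1]
    return data_loss + lam * np.mean(residual ** 2)

# An exact plane wave drives the physics residual to (nearly) zero
h, k = 0.01, 2 * np.pi
x = np.arange(0, 1, h)
u = np.sin(k * x)
meas = u.copy()
meas[::3] = np.nan  # hide every third sample to mimic sparse measurements
loss = pinn_loss(u, meas, k, h)
print(loss)
```

The physics term is what lets training succeed with far less data: field values at unobserved points are still constrained by the wave equation.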

Feb 1, 2026
14 citations
PINN · Propagation · Physics-Informed
AI/ML Papers · 14 min read

Token-Free Language Models for Efficient Telecom Log Analysis

Dr. Chen Li, Dr. Marco Fiore · IMDEA Networks / NEC Laboratories Europe

Traditional LLMs struggle with telecom network logs because their technical vocabulary and structured format align poorly with standard tokenization. We propose a byte-level token-free language model specifically designed for telecom log analysis. Our model processes raw byte sequences directly, avoiding out-of-vocabulary issues common with standard tokenizers on network log data. On a benchmark of 1M real operator logs, our approach achieves 91% fault classification accuracy and generates root cause explanations that experts rate as helpful 85% of the time.
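The byte-level input stage can be sketched in a few lines: every log line maps into a fixed vocabulary of 256 byte values, so identifiers like `gNB-DU` or hex cell IDs can never fall out of vocabulary. The log string and padding scheme below are invented for illustration:

```python
def encode_log_bytes(line, max_len=64):
    """Encode a log line as raw UTF-8 byte IDs (0-255), truncated/padded
    to max_len. No tokenizer, hence no out-of-vocabulary tokens."""
    ids = list(line.encode("utf-8"))[:max_len]
    ids += [0] * (max_len - len(ids))  # pad with 0 (NUL never appears in logs)
    return ids

log = "2026-01-29T10:42:07Z gNB-DU cell=0x1A2F RRC_CONN_FAIL cause=timer_expiry"
ids = encode_log_bytes(log)
print(len(ids), max(ids) < 256)
```

The trade-off is longer sequences than subword tokenization would produce, which the model architecture has to absorb.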

Jan 29, 2026
7 citations
LLM · Log Analysis · Fault Diagnosis
AI/ML Papers · 15 min read

Reward Shaping for Safe Reinforcement Learning in Network Control

Dr. Tianyu Wang, Prof. Robert Schober · University of Erlangen-Nuremberg

Deploying RL agents in live networks carries the risk of unsafe actions that degrade service. We propose a reward shaping framework that incorporates safety constraints from network SLAs directly into the RL training process. Our constrained RL approach guarantees that QoS violations remain below 0.1% while still achieving 90% of the throughput optimality of unconstrained agents. We validate on a commercial 5G testbed with 50 active users.
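One common way to fold SLA constraints into the reward, which may or may not match the paper's exact formulation, is Lagrangian shaping: subtract a violation penalty whose multiplier is adapted by dual ascent until violations sit under the budget (0.1% here). A minimal sketch with invented numbers:

```python
def shaped_reward(throughput, qos_violation, lam):
    """Shaped reward: throughput minus a weighted SLA-violation penalty."""
    return throughput - lam * qos_violation

def dual_update(lam, violation_rate, budget=0.001, lr=0.5):
    """Dual ascent on the multiplier: grow the penalty while measured
    violations exceed the SLA budget, shrink it otherwise; clip at zero."""
    return max(0.0, lam + lr * (violation_rate - budget))

lam = 1.0
for epoch in range(5):
    violation_rate = 0.01 / (epoch + 1)  # stand-in for measured QoS violations
    lam = dual_update(lam, violation_rate)
print(round(lam, 4))
```

Because the violation rate stays above the budget throughout this toy run, the multiplier only grows, pushing the agent toward safer actions.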

Jan 26, 2026
11 citations
Safe RL · Reward Shaping · Network Control
AI/ML Papers · 14 min read

Vision Transformers for RF Spectrum Monitoring and Classification

Dr. Tim O'Shea, Dr. Nathan West · DeepSig Inc.

We apply Vision Transformers (ViT) to RF spectrum monitoring by treating spectrograms as images. Our ViT-RF model classifies 24 modulation types with 98.5% accuracy at 10 dB SNR, outperforming CNN baselines by 3.2%. The attention mechanism provides interpretable visualization of which time-frequency regions drive classification decisions. The model runs at 2ms per spectrogram on edge GPU hardware, enabling real-time spectrum monitoring for 6G cognitive radio applications.
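Treating a spectrogram as an image means cutting it into fixed-size patches that become the transformer's input tokens, exactly as ViT does for photos. A minimal patchify sketch (dimensions are illustrative, not the paper's):

```python
import numpy as np

def patchify(spec, patch=16):
    """Split a (time, freq) spectrogram into flattened ViT patch tokens."""
    T, F = spec.shape
    T, F = T - T % patch, F - F % patch          # drop ragged edges
    spec = spec[:T, :F]
    tiles = spec.reshape(T // patch, patch, F // patch, patch)
    # Reorder so each (patch, patch) tile is contiguous, then flatten
    return tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)

spec = np.random.default_rng(0).standard_normal((128, 128))  # fake spectrogram
tokens = patchify(spec)
print(tokens.shape)  # (64, 256): an 8x8 grid of 16x16 patches
```

Attention over these tokens is what yields the interpretable time-frequency saliency maps the abstract mentions.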

Jan 23, 2026
8 citations
Vision Transformer · Spectrum Monitoring · Classification
AI/ML Papers · 15 min read

Efficient On-Device Training for Adaptive 6G Receivers

Dr. Junmo Kim, Prof. Youngchul Sung · KAIST

We propose an efficient on-device training framework that enables 6G receiver neural networks to continuously adapt to changing channel conditions without cloud connectivity. Using a combination of pruned backpropagation and mixed-precision training, our approach reduces on-device training memory by 8x and energy by 5x compared to standard backpropagation, while maintaining adaptation quality. This enables continuous learning on mobile device chipsets with limited resources.
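The flavor of "pruned backpropagation plus mixed precision" can be sketched as a sparse SGD step that applies only the largest-magnitude gradient entries and stores the update in half precision. This is a loose illustration of the idea, not the paper's training framework:

```python
import numpy as np

def pruned_sgd_step(w, grad, lr=0.01, keep=0.1):
    """Apply only the top `keep` fraction of gradient entries (by magnitude),
    with the update held in float16 to emulate mixed-precision training."""
    k = max(1, int(keep * grad.size))
    thresh = np.partition(np.abs(grad).ravel(), -k)[-k]
    mask = np.abs(grad) >= thresh                # keep top-k entries only
    update = (lr * grad * mask).astype(np.float16)
    return w - update.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(100).astype(np.float32)
g = rng.standard_normal(100).astype(np.float32)
w2 = pruned_sgd_step(w, g)
changed = np.count_nonzero(w2 != w)
print(changed)  # only the top-10% of weights are touched
```

Skipping 90% of the update (and the backward computation that would produce it) is where the memory and energy savings come from.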

Jan 20, 2026
5 citations
On-Device Training · Adaptive Receiver · Efficient Training