Towards Scalable O-RAN Resource Management: Graph-Augmented Proximal Policy Optimization

Source: arXiv AI Papers

The emergence of Open Radio Access Network (O-RAN) architectures offers improved flexibility and cost-effectiveness in mobile networks. However, this flexibility brings significant resource-management challenges that have traditionally been tackled in isolation. The proposed Graph-Augmented Proximal Policy Optimization (GPPO) framework addresses them jointly: it uses Graph Neural Networks (GNNs) for topology-aware feature extraction and applies action masking to prune infeasible choices, allowing the agent to explore the decision space efficiently.
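The summary does not spell out how GPPO's action masking is implemented, but the standard trick in masked policy-gradient methods is to drive the logits of infeasible actions to negative infinity before the softmax, so they receive exactly zero probability. A minimal, dependency-free sketch (the function name and plain-list representation are illustrative, not from the paper):

```python
import math

def masked_softmax(logits, mask):
    """Turn raw policy logits into a distribution over *valid* actions only.

    `mask[i]` is True when action i is feasible in the current O-RAN state;
    infeasible actions get probability exactly 0, so the policy never
    samples them and gradients never push toward them.
    """
    # Infeasible actions are sent to -inf; exp(-inf) == 0.0 in IEEE floats.
    masked = [l if m else float("-inf") for l, m in zip(logits, mask)]
    peak = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(l - peak) for l in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Example: action 1 is infeasible, so its probability is exactly 0.
probs = masked_softmax([2.0, 1.0, 0.5], [True, False, True])
```

In a full PPO implementation the same mask is applied when computing the log-probabilities for the surrogate loss, so old and new policies are compared over the identical feasible set.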

Extensive experiments on both small- and large-scale O-RAN scenarios demonstrate GPPO’s superiority over existing solutions, achieving up to 18% lower deployment costs and 25% higher rewards in generalization tests. These findings indicate that GPPO not only improves resource allocation efficiency but also enhances scalability for real-world deployments. However, the complex nature of O-RAN environments necessitates further exploration into the long-term reliability and adaptability of such frameworks.
