Abstract
This paper proposes an approach for learning call admission control (CAC) policies in a cellular network that handles several classes of traffic with different resource requirements. The performance measures of interest in cellular networks are long-term revenue, utility, call blocking rate (CBR), and handoff call dropping rate (CDR). Reinforcement Learning (RL) can be used to find the optimal solution; however, such methods fail when the state and action spaces are huge. We apply a form of NeuroEvolution (NE) algorithm to inductively learn the CAC policies; the resulting scheme is called CN (Call Admission Control scheme using NE). A comparison with a Q-learning-based CAC scheme under constant traffic load shows that CN not only approximates the optimal solution very well but also optimizes the CBR and CDR in a more flexible way. Additionally, the simulation results demonstrate that the proposed scheme keeps the handoff dropping rate below a pre-specified value while still maintaining an acceptable CBR in the presence of smoothly varying traffic arrival rates, a setting in which the state space is too large for practical deployment of the other learning scheme.
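The abstract only names the approach, so the sketch below is a rough illustration of the general idea: a CAC policy represented as a small neural network whose weights are tuned by a generic (mu + lambda) neuroevolution loop against a toy single-cell traffic simulator. The network shape, traffic model, per-class revenues, and the handoff-drop penalty are all assumptions made for this example and are not the scheme or parameters used in the paper.

```python
# Illustrative sketch only (not the paper's CN implementation): evolve the
# weights of a tiny admission-control network with (mu + lambda) selection.
import random

STATE_DIM = 3   # assumed state: [free-capacity fraction, call class, is-handoff flag]
HIDDEN = 5

def new_genome():
    """Random weights for a one-hidden-layer policy network."""
    return {
        "w1": [[random.gauss(0, 0.5) for _ in range(STATE_DIM)] for _ in range(HIDDEN)],
        "b1": [random.gauss(0, 0.5) for _ in range(HIDDEN)],
        "w2": [random.gauss(0, 0.5) for _ in range(HIDDEN)],
        "b2": random.gauss(0, 0.5),
    }

def accept(genome, state):
    """Admit the call iff the scalar network output is positive."""
    hidden = [max(0.0, sum(w * s for w, s in zip(row, state)) + b)
              for row, b in zip(genome["w1"], genome["b1"])]
    out = sum(w * h for w, h in zip(genome["w2"], hidden)) + genome["b2"]
    return out > 0.0

def mutate(genome, sigma=0.1):
    """Gaussian perturbation of every weight (simple NE mutation)."""
    return {
        "w1": [[w + random.gauss(0, sigma) for w in row] for row in genome["w1"]],
        "b1": [b + random.gauss(0, sigma) for b in genome["b1"]],
        "w2": [w + random.gauss(0, sigma) for w in genome["w2"]],
        "b2": genome["b2"] + random.gauss(0, sigma),
    }

def fitness(genome, episodes=30, capacity=10, drop_penalty=5.0):
    """Toy single-cell simulator: revenue for admitted calls, heavy penalty
    for rejecting a handoff (a crude stand-in for the CDR constraint)."""
    total = 0.0
    for _ in range(episodes):
        used = 0
        for _ in range(50):                          # 50 call arrivals per episode
            cls = random.randint(0, 1)               # two traffic classes
            handoff = random.random() < 0.3          # assume 30% of arrivals are handoffs
            need = 1 if cls == 0 else 2              # class-1 calls need more bandwidth
            state = [(capacity - used) / capacity, float(cls), 1.0 if handoff else 0.0]
            if accept(genome, state) and used + need <= capacity:
                used += need
                total += 1.0 if cls == 0 else 2.0    # assumed per-class revenue
            elif handoff:
                total -= drop_penalty                # dropped handoff call
            if used > 0 and random.random() < 0.4:   # crude call departures
                used -= 1
    return total / episodes

# (mu + lambda) evolution loop: keep the best policies, refill with mutants.
pop = [new_genome() for _ in range(12)]
for gen in range(20):
    parents = sorted(pop, key=fitness, reverse=True)[:4]
    pop = parents + [mutate(random.choice(parents)) for _ in range(8)]
print("best fitness estimate:", fitness(pop[0]))
```

In this kind of setup the CBR/CDR trade-off the abstract mentions would be controlled by the relative size of the drop penalty versus the per-class revenue terms in the fitness function; the paper's actual reward formulation may differ.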
Original language | English |
---|---|
Pages (from-to) | 186-191 |
Number of pages | 6 |
Journal | IJCAI International Joint Conference on Artificial Intelligence |
Publication status | Published - 2007 |
Event | 20th International Joint Conference on Artificial Intelligence, IJCAI 2007, Hyderabad, India. Duration: 6 Jan 2007 → 12 Jan 2007 |