Nampally Vijay Kumar¹·², Satarupa Mohanty¹, Prasant Kumar Pattnaik¹
¹ School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
² Department of Computer Science and Business Systems, B V Raju Institute of Technology, Narsapur 502313, Telangana, India
Abstract:
Maintaining high-quality service delivery and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. This study introduces a novel approach that reduces a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict incoming workloads and allocate them flexibly. The proposed methodology integrates workload prediction based on long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and actor-critic algorithms. By continuously analyzing current and historical data, the model allocates resources efficiently, prioritizing both speed and energy preservation. Experimental results demonstrate that the proposed DRL-based load-balancing system significantly reduces average response times and energy usage compared with traditional methods. The approach offers a scalable and adaptable strategy for enhancing cloud infrastructure performance, delivering reliable and consistent results across a range of dynamic workloads.
Key words: DRL; LSTM; cloud computing; load balancing; Q-learning
DOI: 10.11916/j.issn.1005-9113.2024053
CLC Number: TP3