Author Name | Affiliation | Postcode
Nampally Vijay Kumar* | 1. School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India; 2. Department of Computer Science and Business Systems, B V Raju Institute of Technology, Narsapur 502313, Telangana, India | 751024
Satarupa Mohanty | 1. School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India | 751024
Prasant Kumar Pattnaik | 1. School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India | 751024
Abstract:
Maintaining high-quality service delivery and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. This study introduces a novel approach that reduces a system's overall delay and energy consumption by utilising a deep reinforcement learning (DRL) model to predict incoming workloads and allocate them flexibly. The proposed methodology integrates workload prediction based on long short-term memory (LSTM) networks with load-balancing decisions guided by deep Q-learning and Actor-Critic algorithms. By continuously analysing current and historical data, the model allocates resources efficiently, prioritising both speed and energy preservation. Experimental findings demonstrate that the DRL-based load-balancing system significantly reduces average response times and energy usage compared with traditional methods. The approach offers a scalable and adaptable strategy for enhancing cloud infrastructure performance, delivering reliable and consistent results across a range of dynamic workloads.
Key words: DRL, LSTM, cloud computing, load balancing, Q-Learning
DOI: 10.11916/j.issn.1005-9113.2024053
CLC Number: TP3