
Dynamic Workload Prediction and Distribution in Cloud Computing Using Deep Reinforcement Learning and LSTM
Authors and Affiliations:
  • Nampally Vijay Kumar*: 1. School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India; 2. Department of Computer Science and Business Systems, B V Raju Institute of Technology, Narsapur 502313, Telangana, India
  • Satarupa Mohanty: School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
  • Prasant Kumar Pattnaik: School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
Abstract:
Maintaining high-quality service delivery and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. This study introduces a novel approach that reduces a system's overall delay and energy consumption by utilizing a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction using long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and Actor-Critic algorithms. By continuously analyzing current and historical data, the model allocates resources efficiently, prioritizing speed and energy preservation. Experimental findings demonstrate that the DRL-based load-balancing system significantly reduces average response times and energy usage compared with traditional methods. The approach offers a scalable and adaptable strategy for enhancing cloud infrastructure performance, delivering reliable and durable performance across a range of dynamic workloads.
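Since only the abstract is shown here, the sketch below is merely a plausible illustration of the pipeline it describes: an LSTM that forecasts the next workload value from a recent history window, feeding a deep Q-network that scores candidate servers for the incoming job. All class names, layer sizes, the state encoding (predicted load concatenated with per-server utilization), and the greedy action selection are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only. The abstract describes LSTM-based workload
# prediction combined with DRL (deep Q-learning / Actor-Critic) load
# balancing; everything below is an assumed, minimal realization of that idea.
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    """Predicts the next workload value from a history window (assumed design)."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)             # out: (batch, window, hidden)
        return self.head(out[:, -1, :])   # next-step workload: (batch, 1)

class QNetwork(nn.Module):
    """Maps a system state (predicted load + per-server utilization) to
    Q-values over candidate servers (assumed state/action encoding)."""
    def __init__(self, n_servers, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + n_servers, hidden), nn.ReLU(),
            nn.Linear(hidden, n_servers),
        )

    def forward(self, state):
        return self.net(state)

# One hypothetical decision step: forecast the load, then route the job to
# the server with the highest Q-value (greedy action for illustration).
n_servers = 4
predictor, agent = WorkloadLSTM(), QNetwork(n_servers)
history = torch.randn(1, 20, 1)            # stand-in for a recent workload trace
utilization = torch.rand(1, n_servers)     # stand-in for current server loads
predicted = predictor(history)             # (1, 1)
state = torch.cat([predicted, utilization], dim=1)
server = agent(state).argmax(dim=1).item()
print(f"Route incoming workload to server {server}")
```

A trained agent would of course learn the Q-network with epsilon-greedy exploration and experience replay (or, per the abstract, an Actor-Critic variant), with a reward shaped around response time and energy use; the snippet only shows the inference-time data flow.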
Key words: DRL, LSTM, cloud computing, load balancing, Q-learning
DOI: 10.11916/j.issn.1005-9113.2024053
CLC Number: TP3
