Document Type

Article

Publication Date

1-1-2019

Abstract

Cloud computing has emerged in recent years as one of the most interesting developments in technology. With the growing popularity of cloud-based solutions, more and more applications are migrating into the Cloud and thus place highly demanding, critical requirements on networking resources. A data center consists of servers, storage and network devices, power systems, cooling systems, and so on; virtualization allows the resources of these physical machines to be managed at a finer granularity and thus to support multiple virtual machines efficiently. The growing challenge, however, is how to provision these resources efficiently to meet the requirements of different quality-of-service levels. This paper formulates and investigates a general setting in which a data center sets the price of using its resources and a Cloud service user decides, for each incoming task, whether or not to pay that price. By establishing Continuous-Time Markov Decision Process models under both an average-reward criterion and a discounted expected-reward criterion, the optimal task-admission policy in each case is shown to be a state-dependent control-limit (threshold) policy. Upper bounds for such an optimal policy are stated and verified in detail, and a comprehensive set of experiments across various cases validates the proposed solution. In particular, a machine learning method based on a feed-forward neural network model is implemented to obtain the optimal threshold values, and several numerical examples illustrate how these values are derived. The results offered in this paper can readily help a data center operate in an economically optimal way when providing the application services that Cloud service users need.
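The abstract summarizes the approach without the model's details, so the following is only a minimal sketch of the kind of state-dependent control-limit (threshold) admission policy it describes. It builds a small uniformized continuous-time Markov decision process for task admission, with Poisson arrivals at rate lam, c parallel servers each serving at rate mu, a per-task reward R (the price paid by the user), a holding cost h per task per unit time, and capacity N; all parameter names and values here are hypothetical, not taken from the paper. Relative value iteration under the average-reward criterion then recovers the admission rule.

```python
import numpy as np

# Hypothetical parameters (illustration only, not from the paper):
# Poisson arrivals at rate lam, c identical servers each at rate mu,
# reward R per admitted task, holding cost h per task per unit time,
# at most N tasks in the system.
lam, mu, c, R, h, N = 4.0, 1.0, 3, 5.0, 1.0, 20

Lambda = lam + c * mu      # uniformization constant (max total event rate)
V = np.zeros(N + 1)        # relative value function over occupancy 0..N

for _ in range(5000):      # relative value iteration, average-reward criterion
    Vn = np.empty_like(V)
    for s in range(N + 1):
        dep = min(s, c) * mu                 # aggregate service-completion rate
        # On an arrival: admit (collect R, move to s+1) or reject; a full
        # system forces rejection.
        arr_val = max(V[s + 1] + R, V[s]) if s < N else V[s]
        down = V[s - 1] if s > 0 else 0.0    # dep == 0 when s == 0
        # Uniformized one-step update; (Lambda - lam - dep) is the
        # self-loop rate, and -h*s is the holding cost accrued per unit time.
        Vn[s] = (-h * s + lam * arr_val + dep * down
                 + (Lambda - lam - dep) * V[s]) / Lambda
    Vn -= Vn[0]            # normalize so the relative values stay bounded
    if np.max(np.abs(Vn - V)) < 1e-10:
        V = Vn
        break
    V = Vn

# The optimal policy admits in occupancy state s iff V[s+1] + R >= V[s];
# for this class of models the admit set comes out as {s : s < threshold}.
admit = np.array([V[s + 1] + R >= V[s] for s in range(N)])
threshold = N if admit.all() else int(np.argmin(admit))
print("admit in occupancy states:", np.where(admit)[0])
print("control-limit threshold:", threshold)
```

In models of this kind the relative value function is concave in the occupancy, so the comparison V[s+1] + R >= V[s] flips at most once; that monotonicity is what collapses the optimal policy to a single threshold. The paper's feed-forward neural network step can then be read as learning the map from system parameters to this threshold, rather than recomputing it by value iteration for every parameter setting.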
