Performance Analysis of Cloud Computing Centers Serving Parallelizable Rendering Jobs Using M/M/c/r Queuing Systems
Xiulin Li, Li Pan, Jiwei Huang, Shijun Liu, Yuliang Shi and Calton Pu
Shandong University, Shandong University, Beijing University of Posts and Telecommunications, Shandong University, Shandong University, Georgia Institute of Technology

Performance analysis is crucial to the successful development of the cloud computing paradigm, and it is especially important for cloud computing centers serving parallelizable application jobs, since choosing a proper degree of parallelism can reduce the mean service response time and thus noticeably improve the performance of the cloud system. In this paper, taking a cloud-based rendering service platform as an example application, we propose an approximate analytical model for cloud computing centers serving parallelizable jobs using M/M/c/r queuing systems, modeling the rendering service platform as a multi-station multi-server system. We solve the proposed analytical model to obtain the complete probability distribution of response time, the blocking probability, and other important performance metrics for given cloud system settings. This model can thus guide cloud operators in choosing proper settings, such as the number of servers, the buffer size, and the degree of parallelism, to achieve specific performance levels. Through extensive simulations based on both synthetic data and real-world workload traces, we show that our proposed analytical model provides accurate approximate performance predictions for cloud computing centers serving parallelizable jobs, even when job arrivals follow different distributions.
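To make the kind of metrics the abstract mentions concrete, the sketch below computes the steady-state distribution of a single M/M/c/r queue (c identical servers, total capacity r) and derives the blocking probability and mean response time. This is a minimal textbook calculation for illustration only, not the paper's multi-station model; the function and parameter names (`mmcr_metrics`, `lam`, `mu`) are our own, and the rates used in the example are arbitrary assumptions.

```python
from math import factorial

def mmcr_metrics(lam, mu, c, r):
    """Steady-state blocking probability and mean response time of an
    M/M/c/r queue: Poisson arrivals at rate lam, exponential service at
    rate mu per server, c servers, at most r jobs in the system.
    Illustrative sketch only; not the paper's multi-station model.
    """
    a = lam / mu  # offered load in Erlangs
    # Unnormalized state probabilities p_n / p_0 for n = 0..r
    terms = []
    for n in range(r + 1):
        if n <= c:
            terms.append(a ** n / factorial(n))
        else:
            terms.append(a ** n / (factorial(c) * c ** (n - c)))
    p0 = 1.0 / sum(terms)
    p = [p0 * t for t in terms]

    blocking = p[r]  # an arrival finding the system full is lost
    L = sum(n * pn for n, pn in enumerate(p))  # mean number in system
    # Little's law applied to the accepted (non-blocked) arrival stream
    W = L / (lam * (1.0 - blocking))
    return blocking, W

# Example with assumed rates: 8 jobs/s arriving, 1 job/s per server,
# 10 servers, room for 10 more jobs in the buffer.
b, w = mmcr_metrics(lam=8.0, mu=1.0, c=10, r=20)
```

Sweeping `c` or `r` in such a loop is how one would read off the trade-off the abstract describes between server count, buffer size, and achieved response time.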