
CloudRank: A QoS-Driven Component Ranking Framework for Cloud Computing

Research Problem
How can the quality of a cloud application be estimated? The goal is to design a QoS-driven component ranking framework that provides personalized cloud component rankings.

Challenges in QoS-Driven Component Quality Ranking


One user's component quality ranking cannot be transferred directly to another user: the quality of a cloud component perceived by one user differs from that perceived by another, because each user reaches the component over a different communication link.

[Illustration: two users invoke the same component c1. One suffers network congestion ("Too much waiting, quality is worst"), while the other rates it highly ("Nice component, 10 on 10").]

THE NEED FOR PERSONALIZED COMPONENT QUALITY RANKING

APPROACHES OF QUALITY RANKING


APPROACH 1: Evaluate all candidate components at the user side and rank them based on the observed QoS performance. Drawbacks: time-consuming and resource-consuming.
APPROACH 2: A collaborative QoS-driven ranking framework that predicts rankings from past usage experience.
CLOUD RANK

System Architecture

[Figure: On the client side, a customer and an application designer (component user) build Cloud Applications 1..n from cloud components. On the server side, the cloud offers sets of functionally equivalent components (a1, a2, ..., am and b1, b2, ..., bm), accessed over the Internet.]

Use of QoS

Identifying functionally equivalent components helps in deciding on the optimal component.
Identifying functionally non-equivalent components helps in detecting poorly performing components.

QoS can be measured at the server side and at the client side.
Server-side considerations: capacities of components.
Client-side considerations: response time, throughput, failure probability, and user ranking.
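The client-side metrics can be probed with a tiny timing wrapper. This is an illustrative sketch, not part of the CloudRank framework itself; `measure_qos` and `stub_component` are hypothetical names, and a real probe would wrap an actual network call to a component.

```python
import time

def measure_qos(invoke, request):
    """Client-side QoS probe: time one invocation of a cloud component.

    `invoke` is any callable standing in for a component call; it must
    return the response payload as bytes.
    Returns (response_time_s, throughput_kBps).
    """
    start = time.perf_counter()
    payload = invoke(request)
    elapsed = time.perf_counter() - start
    throughput = (len(payload) / 1000.0) / elapsed if elapsed > 0 else 0.0
    return elapsed, throughput

# Example: a stub component that "responds" with 2 kB after a short delay.
def stub_component(request):
    time.sleep(0.01)
    return b"x" * 2000

rt, tp = measure_qos(stub_component, "ping")
```

Repeating such probes per user and per component yields exactly the kind of per-user QoS observations (response time, throughput) that the ranking framework consumes.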

Input to the CloudRank algorithm: performance information about cloud components, collected from different cloud applications.

Collaborative Quality Ranking Framework

PHASE 1: RANKING SIMILARITY COMPUTATION
PHASE 2: IDENTIFYING THE SIMILAR USERS
PHASE 3: DEFINING A PREFERENCE FUNCTION
PHASE 4: GREEDY APPROACH TO RANK BOTH THE EMPLOYED AND THE NOT-YET-EMPLOYED COMPONENTS
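The four phases can be sketched in code. This is a minimal illustrative implementation under simplifying assumptions — a Kendall-tau-style ranking similarity for Phase 1 and a simple pairwise preference for Phase 3 — not the paper's exact formulas; `rank_similarity` and `cloudrank` are names chosen here for illustration. Larger QoS values are assumed to mean better quality (e.g. throughput).

```python
from itertools import combinations

def rank_similarity(qos_u, qos_v):
    """Phase 1: Kendall-style ranking similarity over commonly invoked
    components.  qos_u, qos_v map component id -> observed QoS value.
    Returns a value in [-1, 1]; 1 means identical orderings."""
    common = sorted(set(qos_u) & set(qos_v))
    pairs = list(combinations(common, 2))
    if not pairs:
        return 0.0
    concordant = sum(
        1 for i, j in pairs
        if (qos_u[i] - qos_u[j]) * (qos_v[i] - qos_v[j]) > 0
    )
    return 2.0 * concordant / len(pairs) - 1.0

def cloudrank(qos, user, top_k=2):
    """Phases 2-4: find similar users, build a pairwise preference
    function, then greedily emit components from most to least preferred."""
    # Phase 2: the top_k most rank-similar users (positive similarity only).
    sims = {v: rank_similarity(qos[user], qos[v]) for v in qos if v != user}
    neighbours = sorted((v for v in sims if sims[v] > 0),
                        key=sims.get, reverse=True)[:top_k]

    # Candidate set: components the user employed plus the neighbours'.
    items = set(qos[user])
    for v in neighbours:
        items |= set(qos[v])

    def preference(i, j):
        # Phase 3: prefer the user's own observations when both components
        # were invoked; otherwise fall back to similarity-weighted votes.
        def vote(q):
            if i in q and j in q:
                return (q[i] > q[j]) - (q[i] < q[j])
            return 0
        own = vote(qos[user])
        if own:
            return own
        return sum(sims[v] * vote(qos[v]) for v in neighbours)

    # Phase 4: greedy ranking -- repeatedly pick the component with the
    # largest aggregate preference over the remaining candidates.
    ranking, remaining = [], set(items)
    while remaining:
        best = max(remaining,
                   key=lambda i: sum(preference(i, j)
                                     for j in remaining if j != i))
        ranking.append(best)
        remaining.remove(best)
    return ranking
```

Note that the greedy phase ranks components the user never invoked (here, any component only the neighbours observed) alongside the employed ones, which is the point of Phase 4.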

EXPERIMENTS

Important Characteristics of the Dataset

Number of web services: 100
Number of service users: 150
QoS properties used: response time and throughput
Response time: the duration between a user sending out a request to a component and receiving the response
Throughput: the data transfer rate over the network

Representation of QoS Values

[Figure: a 150 x 100 user-item matrix. Rows correspond to service users 0-149, columns to web services 0-99; each entry stores the QoS value a user observed for a service.]

A large part of the response-time values lies between 0.2 s and 1.6 s.
A large part of the throughput values lies between 0.4 kBps and 3.2 kBps.
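In practice the user-item matrix is sparse: each user has invoked only a few of the 100 services, and the missing entries are exactly what collaborative filtering must work around. A minimal sketch of such a sparse representation (the QoS values shown are illustrative, not taken from the dataset):

```python
# Sparse user-item QoS matrix: rows are service users (0-149), columns are
# web services (0-99).  Missing entries are services a user never invoked.
n_users, n_services = 150, 100

observed = {
    0: {0: 0.35, 5: 1.20},   # user 0 invoked services 0 and 5 (response times, s)
    1: {0: 0.42, 7: 0.88},
    # ... remaining users
}

def qos(user, service):
    """Return the observed response time, or None if not yet observed."""
    return observed.get(user, {}).get(service)
```

A dense 150 x 100 array would also work for this dataset size; the dict-of-dicts form just makes the "missing observation" case explicit.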

Evaluation Metric
Normalized Discounted Cumulative Gain (NDCG). The NDCG of the top K ranked components is calculated as

NDCG_K = DCG_K / IDCG_K

where DCG_K is the discounted cumulative gain of the top K components of the predicted component ranking, and IDCG_K is the discounted cumulative gain of the top K components of the ideal component ranking. The NDCG_K value lies in the interval [0.0, 1.0]; a larger value stands for better ranking accuracy.

Discounted cumulative gain (DCG) is a measure of the effectiveness of a web search engine algorithm or related applications:

DCG_K = rel_1 + sum_{i=2..K} rel_i / log2(i)

where rel_i is the graded relevance of the component at position i in the ranking. The premise of DCG is that a high-quality component appearing lower in a ranking list should be penalized: the graded relevance value is reduced logarithmically in proportion to the position of the result.

Example
Suppose an algorithm ranks 6 documents as follows: D1, D2, D3, D4, D5, D6. Each document is judged on a scale of 0-3, with 0 meaning irrelevant, 3 meaning completely relevant, and 1 and 2 meaning "somewhere in between". Suppose the user provides the following relevance scores: 3, 2, 3, 0, 1, 2.

DCG_K (predicted order):

i              1    2    3      4    5      6
rel_i          3    2    3      0    1      2
log2(i)        -    1    1.585  2.0  2.322  2.585
rel_i/log2(i)  -    2    1.893  0    0.431  0.774

DCG_6 = 3 + 2 + 1.893 + 0 + 0.431 + 0.774 ≈ 8.10

IDCG_K (ideal order: the relevance judgments sorted in descending order, 3, 3, 2, 2, 1, 0):

i              1    2    3      4    5      6
rel_i          3    3    2      2    1      0
log2(i)        -    1    1.585  2.0  2.322  2.585
rel_i/log2(i)  -    3    1.262  1    0.431  0

IDCG_6 = 3 + 3 + 1.262 + 1 + 0.431 + 0 ≈ 8.69

NDCG_6 = DCG_6 / IDCG_6 ≈ 8.10 / 8.69 ≈ 0.93
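The worked example can be verified with a short script implementing the DCG formulation used above (rel_1 plus log2-discounted terms from position 2 onward):

```python
import math

def dcg(rels):
    """DCG_K = rel_1 + sum_{i=2..K} rel_i / log2(i)."""
    return rels[0] + sum(r / math.log2(i)
                         for i, r in enumerate(rels[1:], start=2))

def ndcg(rels):
    """NDCG_K = DCG_K / IDCG_K, where the ideal ranking is the same
    relevance scores sorted in descending order."""
    return dcg(rels) / dcg(sorted(rels, reverse=True))

scores = [3, 2, 3, 0, 1, 2]   # relevance of D1..D6 in the predicted order
# dcg(scores) ≈ 8.10, dcg of the ideal order ≈ 8.69, ndcg(scores) ≈ 0.93
```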

VARIOUS RANKING APPROACHES

USER-BASED MODEL
UVS: user-based collaborative filtering using vector similarity
UPCC: user-based collaborative filtering using the Pearson correlation coefficient

ITEM-BASED MODEL
IVS: item-based collaborative filtering using vector similarity
IPCC: item-based collaborative filtering using the Pearson correlation coefficient

USER- AND ITEM-BASED MODEL
UIVS: combined user- and item-based collaborative filtering using vector similarity
UIPCC: combined user- and item-based collaborative filtering using the Pearson correlation coefficient

GREEDY: the greedy ranking approach (CloudRank)
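The vector-similarity and Pearson-correlation baselines all rest on a similarity measure between two users' QoS vectors, computed over the components both have invoked. A minimal sketch of the two standard measures (function names chosen here for illustration):

```python
import math

def vector_similarity(u, v):
    """Cosine (vector) similarity between two users' QoS values over
    their co-invoked components.  u, v map component id -> QoS value."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv) if nu and nv else 0.0

def pearson_similarity(u, v):
    """Pearson correlation coefficient over co-invoked components;
    unlike cosine similarity, it centers each user's values first."""
    common = sorted(set(u) & set(v))
    n = len(common)
    if n < 2:
        return 0.0
    mu = sum(u[i] for i in common) / n
    mv = sum(v[i] for i in common) / n
    cov = sum((u[i] - mu) * (v[i] - mv) for i in common)
    su = math.sqrt(sum((u[i] - mu) ** 2 for i in common))
    sv = math.sqrt(sum((v[i] - mv) ** 2 for i in common))
    return cov / (su * sv) if su and sv else 0.0
```

The item-based variants (IVS, IPCC) apply the same two formulas column-wise, comparing components across users instead of users across components.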