
An Intelligent Agent-Based Framework for Privacy Payoff Negotiation in Virtual Environments

Abdulsalam Yassine, Shervin Shirmohammadi DISCOVER Laboratory School of Information Technology and Engineering University of Ottawa, Canada ayassine@discover.uottawa.ca, shervin@discover.uottawa.ca
Abstract: With the rapid development of applications in open distributed virtual environments, such as e-business and virtual games, privacy is becoming a critical issue. This paper presents an intelligent agent-based framework for privacy payoff in virtual environments, with special focus on capability-based negotiation services. These services take into consideration users' entitlement to benefit from revealing their personal information. In such a model, users of the virtual environment who share the same interests have the opportunity to form personal information lists that can be used later for group bargaining. The intelligent negotiation agent acts on their behalf to maximize their benefit. The overall framework is described, and a particular example is presented to show the interaction among the agents.

I. INTRODUCTION

In the past decade, there has been great interest in the potential of virtual environments. As the number of Internet users increases, so does interest in the development of distributed virtual environments. Consequently, system architects, designers, and administrators face the challenge of protecting users' privacy and preferences [2]. In virtual environment applications such as virtual games, virtual reality, and e-business, users' behavior is often tracked and recorded. For example, in virtual boutiques, an e-business application, users' tastes and preferences are particularly valuable to the growing commercial markets inside the virtual worlds [1] [7]. Another example is game controllers: owners of online virtual games and their business affiliates are privy to the entire business scope of gamers' profile construction. Even though the information derived from play in the virtual world might appear harmless, it can lead to annoying advertisements or to powerful inferences used to the player's detriment [3]. According to [3] and [4], the virtual world includes large amounts of private data about users, usually assembled in sophisticated databases, that change hands or ownership as part of online transactions and other strategic decisions, which often include selling users' information lists to other firms. The key commercial value of users' personal information derives from the ability of e-businesses to identify users and charge them personalized prices. A look at the present day reveals that users' profile data is now

considered among the most valuable assets owned by businesses in the virtual world [5]. As a result, virtual environments are experiencing a flourishing market of personal information. Users cannot participate in this unfair information market: first, they have no instrument to capture a share of their private data asset or to receive compensation, because current systems for processing transactions in the virtual world are designed to facilitate a one-time surrender of control over personal information. Second, under current law, the ownership right to personal information is given to the collector of that information and not to the individual to whom the information refers [6] [5]. This dilemma has motivated our research. We started the argument in [8] as follows: if users' private data is a valuable asset, should not users be entitled to capitalize on their own asset as well? This paper extends our previous work by proposing an intelligent agent-based framework for privacy payoff negotiation in virtual environments. Intelligent agents in our system work collaboratively on behalf of users to maximize their benefit and protect the use of their private data. In reality, individuals face enormous challenges in assessing how their personal information is being used in the virtual world. The problem is not whether users value online privacy; it is obvious that they do [9], but they cannot appreciate the magnitude of the privacy threat when revealing their private data, or its later impact on their lives [10]. Many researchers, for example [5] [6] and [11], believe that the solution to the privacy problem in the virtual world requires technologically mediated electronic markets in which an equilibrium can be reached based on the privacy practices that most users prefer.
However, to make this a concrete possibility, users require technical instruments to manage their data, define their privacy preferences, and track the use of their private data. In this paper, we fill this gap by proposing an intelligent agent-based framework in which agents work on behalf of users, collect their private data, categorize them, add privacy risk weights defined by users, and finally negotiate with e-businesses a tradeoff value in return for the dissemination of the information. We strongly believe that intelligent agent technology is a very promising design for brokering privacy concerns in distributed virtual environments. Here we view intelligent agent orientation as a metaphorical conceptualization tool at

978-1-4244-2772-7/09/$25.00 ©2009 IEEE

a high level of abstraction that captures, supports, and implements privacy-related features that are useful for distributed computation in open environments such as the virtual world. The rest of the paper is organized as follows: in the next section we state the intended contribution. In Section III, we present the framework architecture. In Section IV, a case study to evaluate our approach is presented. Finally, in Section V, we provide conclusions and discuss plans for future work.

II. CONTRIBUTION

The intended contribution of our work is to apply an intelligent agent-based solution to the privacy problem in virtual environments through private data valuation, privacy risk quantification, and negotiation. We also intend to show, through a simulated case study, that allowing users to participate in the market for private data in the virtual world will help make it a privacy-friendly environment. In particular we:
- Propose a novel intelligent agent-based framework where agents collaborate with each other toward one goal, i.e., privacy protection and users' payoff;
- Employ a game-theoretic negotiation model supported by a reasoning mechanism based on MADM (Multi-Attribute Decision Making).
To the best of our knowledge, this approach has not been considered in the open scientific literature.

III. FRAMEWORK ARCHITECTURE

Figure 1 depicts the high-level architecture of the framework. It is based on the following: 1) users open accounts that record their tastes, preferences, and personal data; 2) each agent is an independent entity with its own goal and information.
However, the information under the control of an individual agent is not sufficient to satisfy its goals, so the agent must interact with other agents; 3) the architecture allows interaction with external agents for information sharing. The proposed architecture consists of five core modules: 1) the facilitator agent, 2) the database agent, 3) the reputation agent, 4) the payoff agent, and 5) the negotiation agent. The facilitator agent manages the interaction of the agents, orchestrates the order of task execution, and acts as a single point of contact with agents inside and outside our system. The database agent receives data specifications from users, classifies them into categories, and derives privacy risk quantifications based on private data sensitivity. The reputation agent collaborates with other agents in the virtual environment, creating a network in which they exchange transaction ratings about the service providers their users have dealt with; this is called a users' coalition [13]. The payoff agent employs different payoff models that can be used to determine the value of private information according to the risk quantification and the privacy context weight. Finally, the negotiation agent negotiates with agents

representing e-businesses or service providers a tradeoff value in return for the dissemination of the information.
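As a minimal sketch of the five core modules and the facilitator's role as single point of contact, the skeleton below registers the other agents with a facilitator that routes tasks to them. All class and method names here are our own illustrative inventions, not part of the framework's implementation.

```python
# Hypothetical skeleton of the five-module architecture; names are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name

    def handle(self, message):
        # Each concrete agent (database, reputation, payoff, negotiation)
        # would override this with its own goal-directed behaviour.
        return f"{self.name} handled {message!r}"

class FacilitatorAgent(Agent):
    """Single point of contact: routes tasks to the other core agents."""
    def __init__(self):
        super().__init__("facilitator")
        self.registry = {}

    def register(self, agent):
        self.registry[agent.name] = agent

    def dispatch(self, target, message):
        return self.registry[target].handle(message)

facilitator = FacilitatorAgent()
for name in ("database", "reputation", "payoff", "negotiation"):
    facilitator.register(Agent(name))

print(facilitator.dispatch("payoff", "estimate value of contact info"))
```

The facilitator's registry is what lets each agent remain an independent entity while still reaching the information held by its peers.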
Fig.1. High-level framework architecture: the user (e.g., Alice) reaches the system through a web portal; the facilitator agent exchanges ACL messages with the database, payoff, reputation, and negotiation agents, and communicates with external agents over the Internet.

A. Agent communication

Cooperation between agents is carried out by means of a common message format through which a request for work can be made and the result returned. The communication paradigm is based on asynchronous message passing; thus, each agent has a message queue and a message parser. In our system, messages follow the ACL (Agent Communication Language) of the FIPA-ACL standard [17], which is similar to KQML (Knowledge Query and Manipulation Language). Each message includes the following fields:
- The sender of the message;
- The list of receivers;
- The communicative act, called the performative, indicating what the sender intends by sending the message. If the performative is REQUEST, the sender wants the receiver to do something; if it is INFORM, the sender wants the receiver to know something; if it is PROPOSE, the sender wants to enter into a negotiation;
- The content, containing the actual information to be exchanged by the message;
- The content language, indicating the syntax used to express the content. Both the sender and the receiver must be able to encode and decode this syntax.
All agents in our system have a communication layer for communication with the facilitator agent, and a message parser layer to convert ACL messages.

B. Facilitator Agent

The facilitator agent manages the interaction of the agents, orchestrates the order of task execution, and acts as a single point of contact for agents inside and outside our system. The facilitator agent, depicted in figure 2, consists of three main components (namely, the message manager, the task manager, and the decision manager), which run concurrently and intercommunicate by exchanging internal, i.e., intra-agent, messages. When a message is received at the message manager component, it is relayed to the task manager using a message queuing mechanism. All components implement the same behavior and remain inactive while no messages are available to be processed.
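The ACL message fields described in Section III-A can be sketched as a small data structure. The class and helper below are illustrative, not part of any FIPA library; "fipa-sl" is used as a default content-language identifier.

```python
# Minimal sketch of the FIPA-ACL message fields listed above.
from dataclasses import dataclass
from typing import List

@dataclass
class ACLMessage:
    performative: str          # REQUEST, INFORM, or PROPOSE
    sender: str
    receivers: List[str]
    content: str               # the actual information exchanged
    language: str = "fipa-sl"  # syntax both sides must encode/decode

    def validate(self):
        # The communicative act tells the receiver what the sender intends.
        assert self.performative in {"REQUEST", "INFORM", "PROPOSE"}

msg = ACLMessage("PROPOSE", "negotiation-agent",
                 ["provider-agent"], "(payoff 0.15)")
msg.validate()
print(msg.performative, "->", msg.receivers[0])
```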
The message manager of the facilitator agent is responsible for the agent's interaction with its environment, that is, the user, the agents in the system, and the agent representing the service provider. The task manager handles the parts of the interaction protocol between (i) the agents and the user, (ii) the agent and the online seller's agent, and (iii) the agents themselves (the order of task execution among the agents). In addition, the task manager helps construct the tasks that need to be handled by the agents and monitors the current tasks and the agents' finished tasks (that is, the history of task execution). As shown in figure 2, the task manager interacts with both the message manager and the decision manager.

build the reputation database of the services. Agents create a network in which they exchange transaction ratings about the service providers their users have dealt with; this is called a users' coalition. In this way, agents are involved in a joint recommendation process. Many different factors can be included in the reputation assessment, such as compliance with the privacy statement, transaction protection, reliability, security and privacy strategies, etc. The reader can refer to [13] and [19] for reputation options and metrics.

D. Database Agent

The database agent receives data specifications, classifies them into categories, derives privacy risk quantifications based on private data sensitivity, and processes the conversion of ACL messages and queries. Once users open their accounts and record their tastes, preferences, and personal data, the agent determines the private data objects that are perceived to be valuable, in order to capitalize on them and maximize their value once the user decides to reveal them. The agent follows these steps to address the problem of private data valuation [18]:
- Define data sets, taking into account the different overall sensitivity levels towards each private data item and the different importance a user may assign to specific data;
- Define context-dependent weights to fit different situations, as one user's rule about a single private data item may not fit all situations;
- Determine the privacy risk value under each context, which will later be used to value the user's payoff once he decides to reveal the data.
1) Data classification: The agent receives data specifications from users, reflecting their true personal information, and classifies them into M different categories (C1, ..., CM), such as personal identification, contact information, address, hobbies, tastes, and salary. In every category, private data are further divided into subsets (Sij, i = 1, ..., M); each subset may contain one or more private data items, as shown in the example below.
Example:
Category (contact) = {Subset (telephone number), Subset (email)}
Subset (telephone number) = {Private Data (work phone number), Private Data (home phone number), Private Data (cellular phone)}
Subset (email) = {Private Data (work email), Private Data (personal email)}
The privacy sensitivity of each private data item depends on the parameters (Wij, ρij) given in the following definitions:
Context-dependent weight Wij: a value specific to each user, representing the user's valuation of each context. This value is drawn from the knowledge the user has about the service provider, gathered by the reputation agent. It helps the user make an informed decision about the release strategy for his private data and the weight he wishes to give to the context of the transaction.
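The category/subset classification above, together with the normalized risk value that the paper later defines in equation (3), ρij = (fi / Ti) Wij, can be sketched as follows. The category and subset names mirror the contact example; the context weight value is purely illustrative.

```python
# Sketch of the database agent's classification and the normalized
# privacy-risk value of equation (3): rho_ij = (f_i / T_i) * W_ij.

categories = {
    "contact": {
        "telephone number": ["work phone", "home phone", "cellular phone"],
        "email": ["work email", "personal email"],
    },
}

def privacy_risk(category, revealed_subsets, context_weight):
    T_i = len(categories[category])      # number of subsets in category i
    f_i = len(revealed_subsets)          # subsets actually revealed
    return (f_i / T_i) * context_weight  # equation (3)

# Revealing one of the two contact subsets under a context weighted 0.6:
risk = privacy_risk("contact", ["telephone number"], 0.6)
print(round(risk, 2))  # 0.3
```

Because substitutable items live in the same subset, revealing a second phone number leaves fi, and hence the risk, unchanged.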

Fig.2. Facilitator agent architecture (message manager, task manager, and decision manager).

For instance, each time the decision manager needs to interact with the user, it first sends a message to the task manager which, in turn, attaches additional information (if required) and forwards it to the message manager. Similarly, the task manager may filter the content of a received message before forwarding the related data to the decision manager. Finally, the decision manager uses a knowledge base containing the preferences and rules that dictate the final decisions to be made. The main functionality of the decision-making component is ultimately to find the best offer (to be recommended to the user), according to the user's choices, sensitivity to privacy, and the information at hand.

C. Reputation Agent

The reputation of a service provider in the virtual world is a measure of trustworthiness. It is defined as the amount of trust inspired by the particular provider in the specific domain of interest, which in our case is handling private data. Let ti,j represent a transaction that agent i has with provider j. Let Q1(ti,j), ..., Qn(ti,j) be the associated n reputation factors assigned by agent i to provider j, related to agent i's experience with provider j, where each reputation factor is between 0 and 1. Then agent i assigns provider j a reputation component R(ti,j) as follows:

$$R(t_{i,j}) = \frac{1}{n}\sum_{k=1}^{n} Q_k(t_{i,j}) \qquad (1)$$

Over the course of m transactions ti,j, agent i assigns provider j a reputation Ri,j as follows:

$$R_{i,j} = \frac{1}{m}\sum_{t_{i,j}} R(t_{i,j}) \qquad (2)$$

Notice that $0 \le R(t_{i,j}), R_{i,j} \le 1$.
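The transaction rating of equation (1) and the aggregation of equation (2) can be sketched directly; the factor values below (e.g., compliance, protection, reliability scores) are illustrative.

```python
# Sketch of equations (1) and (2): a transaction's rating is the mean of
# its n factor scores, and the provider's reputation is the mean over m
# transactions. Factor values are illustrative.

def transaction_rating(factors):        # equation (1)
    return sum(factors) / len(factors)

def reputation(transactions):           # equation (2)
    return sum(transaction_rating(t) for t in transactions) / len(transactions)

history = [
    [0.9, 0.8, 1.0],   # Q1..Q3 for transaction 1
    [0.7, 0.6, 0.8],   # Q1..Q3 for transaction 2
]
r = reputation(history)
assert 0.0 <= r <= 1.0  # the bound noted above
print(round(r, 2))
```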

A number of parameters can be taken into consideration from which a reputation metric can be formed to assess the reputation of the service provider. In return, a user provides his agent with ratings about a transaction in order to

According to [12], users have various global valuation levels for each transaction that involves private data. The context-dependent weight represents the user's type. Each user provides a weight value for private information category i under context j.
Privacy risk ρij: the overall weighted privacy risk value of revealing private data of category i under context j.
The presented data classification has three interesting characteristics in the context of profile data subsets that are related to each other. First, data in different categories may have different context-dependent risk weights, since the value of private data may differ from one context to another, and its composition may have different implications for the level of revelation. Second, the substitution rate of private data within the same subset is constant and independent of the current level of revealed data; i.e., assuming one of the private data items has been revealed, revealing the rest of the data in the same subset will not increase the privacy disclosure risk. For instance, a user's age can be expressed by age, year of birth, or high-school graduation year; knowing all of them at the same time allows only marginal improvements. This allows us to consider each private data subset as one unit. Third, data in different subsets are not substitutable; revealing any one of them will increase the privacy risk.
2) Risk quantification: Let Ti be the private data size, i.e., the number of subsets in category i. If fi subsets have been revealed, then the weighted privacy risk of revealing fi private data subsets, normalized over the number of subsets in category i, is calculated as follows:

$$\rho_{ij} = \frac{f_i}{T_i} W_{ij} \qquad (3)$$

where the following properties apply: Wij is linear in the interval [0, 1], and the sum of the context privacy risk weights for all categories under each context is 1. To put the privacy risk of each category in the range [0, 1], the privacy risk in (3) is further normalized over the privacy risk value of each category under context j. The reader may refer to [8] for a complete example illustrating how the privacy value is calculated.

E. Payoff Agent

One approach to privacy protection is to impose costs (a risk premium) on the use of information, so that service providers in the virtual world are more conservative when handling users' personal information, as privacy risk penalties and reputation consequences for violators of users' presumed privacy rights are more likely to be costly. In [8] we presented a payoff module based on a linear correlation between the risk of using the private data and the compensation payoff. In this paper, we consider a model in which the information acquired today may have a learning effect in the future, so that the service provider may benefit from selling or using the information at different time intervals. The goal of this model is to find a payoff value for the private data at the current time. Let us consider a process π(t), 0 ≤ t ≤ T, representing the discounted payoff of benefiting from the data at time t, and a class $\mathcal{T}$ of admissible learning times τ with values in [0, T]. The problem, then, is to find the optimal expected discounted payoff

$$\sup_{\tau \in \mathcal{T}} E[\pi(\tau)] \qquad (4)$$

Our problem is similar to pricing a stock option [14]. The only difference is that in the case of private data the learning effect will almost always generate a benefit to the service provider, while in the case of the option the value of the underlying asset may go up or down. The general form of the learning process is as follows:

$$V_{ij} = \max\left\{ h_i(x_{ij}),\ \frac{1}{b}\sum_{k=1}^{b} \omega_{ijk} V_{i+1,k} \right\} \qquad (5)$$

$$V_0 = \frac{1}{b}\sum_{k=1}^{b} V_{1k} \qquad (6)$$

where Vij denotes the estimated value at each learning node (step) and hi(xij) denotes the payoff function for the provider at time ti. Xij follows a Markov process and denotes the jth node at the ith point, for i = 1, ..., m and j = 1, ..., b. The main issue that we need to address in this formulation is the selection of the weight ωjk. For similar problems related to option pricing [14], simulation methods are used to determine the value of the weight. In this paper, and for the sake of simplicity, we consider the privacy risk weight to be constant at each time interval.

F. Negotiation Agent

The negotiation agent, depicted in figure 3 below, includes three components: the negotiation strategy, the reasoning mechanism, and the offer construction graph. Below is a description of each component.

Fig.3. Negotiation agent architecture
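Returning briefly to the payoff agent: the backward-induction valuation of equations (5) and (6) can be sketched as below, assuming (as the paper does for simplicity) a constant weight ω at each interval. The payoff function h and the node values x are illustrative placeholders.

```python
# Sketch of the lattice valuation in equations (5) and (6) with a
# constant weight omega; h and x are illustrative.

def lattice_value(x, h, omega):
    """x[i][j]: jth node at the ith step; returns V_0 of equation (6)."""
    m, b = len(x), len(x[0])
    V = [h(m - 1, xv) for xv in x[-1]]       # terminal step: payoff only
    for i in range(m - 2, -1, -1):           # earlier steps, equation (5)
        cont = omega * sum(V) / b            # discounted continuation value
        V = [max(h(i, xv), cont) for xv in x[i]]
    return sum(V) / b                        # equation (6)

# Two steps, two nodes per step; h discounts the node value by step index.
x = [[1.0, 2.0], [1.5, 2.5]]
h = lambda i, xv: xv / (1 + i)
print(lattice_value(x, h, omega=1.0))  # 1.5
```

Raising ω raises the continuation value, capturing the provider's larger future learning benefit.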

1) Negotiation Strategy: The negotiation strategy is a means of analyzing the opponent's negotiation strategy, thus understanding his proposal, and offering a responding proposal. In our system, we follow a negotiation process based on a game-theoretic model. Game theory is a branch of applied mathematics that is often used in the context of economics [15]. To illustrate the negotiation strategy, consider that the agent is working on behalf of a list of users. The agent's objective is to negotiate a payoff paid to the users by the service provider in return for the revelation of their private information. The payoff can be equivalent to a price discount offered to the user once the negotiation process is completed with acceptance. In setting up the negotiation game between the agent and the seller, we state the following rules:
- The number of users N in the list is known to the agent, but not to the service provider. The agent uses his knowledge of N as bargaining power to press the provider for a better offer;
- The price per record in the consumer information list is known to the provider but not to the agent. The service provider thus has an upper limit on the offer, beyond which his profit becomes negative; the agent is unaware of this limit, otherwise the game would end in the first negotiation round;
- Provider t is the monopoly provider of service t, and is therefore free to set his expected payoff. However, the provider faces a limitation: if the offered discount is low, demand (equal to the number of users) will decrease, and the size of the consumer information list will decrease;
- The agent and the provider are two non-cooperative negotiators.
2) Reasoning Mechanism: The reasoning mechanism provides a decision-support criterion for comparing one's own negotiation proposal with the counterpart's proposal. The MADM (Multiple Attribute Decision Making) method presented in [16] is used to formalize each negotiation item on the basis of defined uniform criteria. Consider that the agent is negotiating with the service provider's agent over the following two items: the users' payoff and the retention time for holding the private data.
The MADM that applies to this scenario is defined below:
- N: the number of attributes (N = 2): the payoff to the users and the retention time;
- M: the number of proposals (M = 2);
- Ai: the ith negotiation proposal, i = 1, 2 (1: the provider's proposal, 2: the negotiation agent's proposal);
- Cj: the jth attribute, j = 1: users' payoff, j = 2: retention time;
- Xij: the value of proposal Ai with respect to Cj;
- Decision matrix D:

$$D = \begin{array}{c|cc} & C_1 & C_2 \\ \hline A_1 & X_{11} & X_{12} \\ A_2 & X_{21} & X_{22} \end{array}$$

- Pij: the normalized value of Xij by attribute, in the interval [0, 1], i = 1, 2, j = 1, 2;
- Ej: the entropy value of Pij with respect to Cj, 0 ≤ Ej ≤ 1, j = 1, 2;
- dj: the degree of diversity of the information provided by the evaluation values of Cj, dj = 1 - Ej, j = 1, 2;
- lj: the negotiator's subjective weight after considering the attributes, 0 ≤ lj ≤ 1, j = 1, 2;
- Oj: the normalized value determined by dj, 0 ≤ Oj ≤ 1, j = 1, 2;
- O*j: the weight of each attribute based on the entropy criterion, 0 ≤ O*j ≤ 1, j = 1, 2;
- Li: the total value of Pij multiplied by O*j, i = 1, 2, j = 1, 2;
- A*: the optimal solution (proposal) based on the Simple Additive Weighting (SAW) method.
In the MADM, the following formulas are used to calculate the weight conversion and entropy values:

$$E_j = K \sum_{i=1}^{M} P_{ij} \ln P_{ij} \qquad (7)$$

where K is a constant,

$$P_{ij} = \frac{X_{ij}}{\sum_{i=1}^{M} X_{ij}} \qquad (8)$$

$$O_j = \frac{d_j}{\sum_{j=1}^{N} d_j} \qquad (9)$$

$$O^*_j = \frac{l_j O_j}{\sum_{j=1}^{N} l_j O_j} \qquad (10)$$

$$d_j = 1 - E_j \qquad (11)$$

$$A^* = \left\{ A_i \ \middle|\ \max_i \frac{\sum_{j=1}^{N} O^*_j P_{ij}}{\sum_{j=1}^{N} O^*_j} \right\} \qquad (12)$$

3) Offer Construction Graph: The offer construction graph stores a relational graph of all the offers and the strategy for executing them; it forms a library of all offers.

IV. AGENT INTERACTION EXAMPLE

In figure 4, we present an interaction example between the negotiation agent in our system and another agent representing a service provider. First, the provider agent registers with the facilitator agent; this step is supported by the FIPA-Subscribe protocol. The user then asks the facilitator agent to provide information about the provider; this is done by internal communication based on method invocation. The facilitator agent informs the negotiation agent to start the negotiation process. The negotiation agent contacts the provider's agent, and the negotiation proceeds using the FIPA-Contract-Net protocol [17]. The negotiation agent may accept or refuse the offer and then informs the facilitator agent about the outcome. Finally, the facilitator agent informs the user.
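The entropy-weighted comparison of equations (7)-(12) used by the reasoning mechanism can be sketched as follows. We assume the conventional entropy-method constant K = 1/ln M with a negative sign so that 0 ≤ Ej ≤ 1 (the paper leaves K unspecified); the proposal values and subjective weights lj are illustrative.

```python
# Sketch of the entropy-based MADM/SAW comparison, equations (7)-(12),
# assuming K = 1/ln(M) with the conventional negative sign.
import math

def saw_choice(X, l):
    M, N = len(X), len(X[0])
    # (8) normalize each attribute column
    P = [[X[i][j] / sum(X[k][j] for k in range(M)) for j in range(N)]
         for i in range(M)]
    K = 1.0 / math.log(M)
    # (7) entropy per attribute, (11) degree of diversity
    E = [-K * sum(P[i][j] * math.log(P[i][j]) for i in range(M))
         for j in range(N)]
    d = [1.0 - Ej for Ej in E]
    # (9) objective weights, (10) blend with subjective weights l_j
    O = [dj / sum(d) for dj in d]
    Ostar = [l[j] * O[j] / sum(l[k] * O[k] for k in range(N))
             for j in range(N)]
    # (12) pick the proposal with the highest weighted score
    scores = [sum(Ostar[j] * P[i][j] for j in range(N)) for i in range(M)]
    return scores.index(max(scores)), scores

# Rows: provider's proposal A1, agent's proposal A2;
# columns: users' payoff, retention time (already oriented so higher is better).
best, scores = saw_choice([[0.3, 0.4], [0.7, 0.6]], l=[0.6, 0.4])
print(best)
```

Attributes on which the proposals differ most get larger objective weights, which the subjective weights lj then temper.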

Fig.4. Agent interaction example: the provider agent registers with the facilitator agent; the user asks for information about the provider and is informed; the negotiation agent and the provider agent then exchange CallForProposal (CFP), Refuse, Propose, Reject, and Accept messages over the payoff until agreement, after which inform messages propagate back through the facilitator agent to the user.
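A single round of the figure-4 exchange can be sketched with Contract-Net-style performatives. The threshold logic below (a private per-record ceiling for the provider, a payoff floor for the users' agent) is our own illustrative stand-in for the actual negotiation strategies.

```python
# Illustrative single round of a Contract-Net-style payoff negotiation.

def provider_agent(cfp_payoff, ceiling=0.2):
    # The provider never proposes above its (private) per-record ceiling.
    return ("PROPOSE", min(cfp_payoff, ceiling))

def negotiation_round(ask, floor):
    performative, offer = provider_agent(ask)
    if performative == "PROPOSE" and offer >= floor:
        return ("ACCEPT", offer)
    return ("REJECT", offer)

# First round asks for more than the users' floor allows; second succeeds.
print(negotiation_round(ask=0.30, floor=0.25))  # ('REJECT', 0.2)
print(negotiation_round(ask=0.18, floor=0.15))  # ('ACCEPT', 0.18)
```

Because the agent never learns the ceiling directly, repeated rounds with adjusted asks are what drive the game described in Section III-F.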

V. CONCLUSION AND FUTURE WORK

In this paper, we presented a novel framework architecture based on allowing users to capitalize on the value of their personal information and get something of value in return. In our model, not only will users benefit from the revelation of their private data, but it will also be beneficial for service providers in virtual environments. Since attention in the information market of the virtual world is one of the main reasons, if not the most important reason, behind the collection of personal information, it is essential for service providers in the virtual world to find ways to economize on attention. A future direction for our work is to investigate the optimality of the discounted payoff and its validity in a real-world application. Another direction is to extend the negotiation problem to multiple providers. In such a setting, not only are the reputation of the service provider and the privacy risk assessment considered, but also the dynamics of the negotiation process, which must account for the uncertain effect of other providers and their offers, which may arrive dynamically in the future.

REFERENCES
[1] E. Paquet, H. Victor, S. Peters, "The virtual boutique: a synergic approach to virtualization, content-based management of 3D information, and 3D data mining for e-commerce," Proc. First International Symposium on 3D Data Processing Visualization and Transmission, June 19-21, 2002, pp. 268-276.
[2] A. Masaud-Wahaisi, H. Ghenniwa, and W. Shen, "A privacy-based brokering architecture for collaboration in virtual environments," IFIP International Federation for Information Processing, Vol. 243, Establishing the Foundation of Collaborative Networks, Springer, 2007, pp. 283-290.
[3] T. Zarsky, "Privacy and Data Collection in Virtual Worlds," in The State of Play: Law, Games, and Virtual Worlds, NYU Press, 2006, pp. 217-223.
[4] J. Rendelman, "Customer data means money," InformationWeek, no. 851, Aug. 20, 2001, pp. 49-50.
[5] C. Prins, "When personal data, behavior and virtual identities become a commodity: Would a property rights approach matter?" SCRIPT-ed, Vol. 3, Issue 4, 2006.
[6] J. A. Deighton, "Market Solutions to Privacy Problems?" Chap. 6 in Digital Anonymity and the Law: Tensions and Dimensions, The Hague: T.M.C. Asser Press, 2003.
[7] I.-H. Hann, K. Hui, T. S. Lee, and I. P. L. Png, "The Value of Online Information Privacy: An Empirical Investigation," AEI-Brookings Joint Center for Regulatory Studies, October 2003.
[8] A. Yassine, S. Shirmohammadi, "Privacy and the Market for Private Data: A Negotiation Model to Capitalize on Private Data," AICCSA, Qatar, 2008, pp. 669-678.
[9] A. Acquisti, J. Grossklags, "Privacy and rationality in individual decision making," IEEE Security and Privacy, 2005.
[10] J. Turow, D. K. Mulligan, and C. J. Hoofnagle, "Users fundamentally misunderstand the online advertising marketplace," University of Pennsylvania Annenberg School for Communication and UC Berkeley Samuelson Law, Technology and Public Policy Clinic, 2007.
[11] M. Schwartz, "Property, Privacy, and Personal Data," Harvard Law Review, 117, pp. 2056-2127, 2004.
[12] S. Preibusch, "Implementing Privacy Negotiation Techniques in E-Commerce," Proc. Seventh IEEE International Conference on E-Commerce Technology (CEC'05), 2005.
[13] A. Gutowska, K. Buckly, "Computing Reputation Metric in Multi-Agent E-Commerce Reputation System," Proc. 28th International Conference on Distributed Computing Systems Workshops (ICDCS '08), June 17-20, 2008, pp. 255-260.
[14] P. Glasserman, Monte Carlo Methods in Financial Engineering, Springer, 2004.
[15] D. Fudenberg, J. Tirole, Game Theory, The MIT Press, Cambridge, Massachusetts, second printing, 1992.
[16] H. R. Choi, J. Park, H. S. Kim, Y. S. Park, Y. J. Park, "Multi-agent based negotiation support systems for order based manufacturers," Proc. 5th International Conference on Electronic Commerce (ICEC 2003).
[17] FIPA specifications, http://www.fipa.org/
[18] T. Yu, Y. Zhang, and K. J. Lin, "Modeling and Measuring Privacy Risks in QoS Web Services," Proc. 8th IEEE International Conference on E-Commerce Technology and 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services (CEC/EEE'06), 2006.
[19] A. Gutowska and K. Bechkoum, "The Issue of Online Trust and Its Impact on International Curriculum Design," Third China-Europe International Symposium on Software Industry-Oriented Education, Dublin, Feb. 6-7, 2007, pp. 134-140.