
Expert Systems With Applications 82 (2017) 115–127


Does order matter? Effect of order in group recommendation


Akshita Agarwal1, Manajit Chakraborty1,∗, C. Ravindranath Chowdary1
Department of Computer Science & Engineering, Indian Institute of Technology (BHU), Varanasi 221 005, India

ARTICLE INFO

Article history:
Received 11 November 2016
Revised 9 March 2017
Accepted 29 March 2017
Available online 5 April 2017

Keywords:
Recommender systems
Order in recommendation
Group recommendation
Hungarian method
Least Misery

ABSTRACT

Recommendation Systems (RS) are gaining popularity and are widely used for dealing with information on education, e-commerce, travel planning, entertainment etc. Recommender Systems recommend items to user(s) based on the ratings provided by other users as well as the past preferences of the user(s) under consideration. Given a set of items from a group of users, Group Recommender Systems generate a subset of those items within a given group budget (i.e. the number of items to have in the final recommendation). Recommending to a group of users based on the ordered preferences provided by each user is an open problem. By order, we mean that the user provides a set of items that he would like to see in the generated recommendation along with the order in which he would like those items to appear. We design and implement algorithms for computing such group recommendations efficiently. Our system recommends items based on modified versions of two popular recommendation strategies– Aggregated Voting and Least Misery. Although the existing versions of Aggregated Voting (i.e. Greedy Aggregated Method) and Least Misery perform fairly well in satisfying individuals in a group, they fail to gain significant group satisfaction. Our proposed Hungarian Aggregated Method and Least Misery with Priority improve the overall group satisfaction at the cost of a marginal increase in time complexity. We evaluated the scalability of our algorithms using a real-world dataset. Our experimental results, evaluated using a self-established metric, substantiate that our approach is significantly efficient.

© 2017 Elsevier Ltd. All rights reserved.

1. Introduction

Recommender Systems are a subclass of information filtering systems that predict items to a user or a group of users based on their prior preferences. This information can be obtained either explicitly by collecting users' ratings or implicitly by monitoring the users' behavior (Bobadilla, Ortega, Hernando, & Bernal, 2012). Recommender Systems are often classified based on their design or filtering technique, namely, Content-based (Lops, De Gemmis, & Semeraro, 2011), Context-based (Adomavicius & Tuzhilin, 2015; Chakraborty, Agrawal, Shekhar, & Chowdary, 2015), Collaborative (Ricci, Rokach, & Shapira, 2011), Demographic (Pazzani, 1999) and Knowledge-based (Trewin, 2000). Group Recommender Systems are a subclass of general Recommender Systems; here, the emphasis is on satisfying the needs of a group of users rather than individuals. While a number of techniques have been applied to group recommendation, ranging from Collaborative Filtering (Baltrunas, Makcinskas, & Ricci, 2010; Ghazarian & Nematbakhsh, 2015; O'connor, Cosley, Konstan, & Riedl, 2001) to Critiquing-based approaches (McCarthy et al., 2006), it still continues to attract the community because of its varied applications in Social Networks (Cantador & Castells, 2012; Gartrell et al., 2010), E-commerce websites (Sarwar, Karypis, Konstan, & Riedl, 2000; Schafer, Konstan, & Riedl, 2001) etc.

In this paper, we first model the user satisfaction measure and show that introducing order in group recommendation has a positive effect. Consequently, we propose a system which not only takes user preferences into account but also considers the order in which the user likes them to be presented to her. To the best of our knowledge, this is the first attempt that takes order into account for group recommendation. To prove the efficacy of our proposed system, we adopt two widely known consensus functions for recommendation– Aggregated Voting and Least Misery. Additionally, we provide approximations (Hungarian Aggregated Method and Least Misery Method with Priority) for the above two consensus functions to suit our purposes. While the GReedy Aggregated Method (GRAM) aims to select the best item to maximize the satisfaction of the group, it is still far from providing a perfect solution. Instead, the Hungarian Aggregated Method (HAM) offers a maximum-satisfaction assignment combinatorial solution. On the other hand, while the Least Misery Method (LMM) tends to maximize the minimum satisfaction of the group, we show with the help of experiments that the Least Misery Method with Priority (LMMP) provides better group satisfaction (as high as 30%).

∗ Corresponding author.
E-mail addresses: akshita.agarwal.cse13@iitbhu.ac.in (A. Agarwal), cmanajit.rs.cse14@iitbhu.ac.in (M. Chakraborty), rchowdary.cse@iitbhu.ac.in (C.R. Chowdary).
1 All authors contributed equally; names are listed in alphabetical order.

http://dx.doi.org/10.1016/j.eswa.2017.03.069
0957-4174/© 2017 Elsevier Ltd. All rights reserved.

We provide a detailed analysis of the effect of three essential variables, namely Group Budget, Group Size and Number of items, pertaining to the inclusion of order in group recommendation. Group budget (also known as space budget (Amer-Yahia, Roy, Chawlat, Das, & Yu, 2009)) is defined as a subset of the items from the total set of items available; it is the maximum number of items to be recommended. A comprehensive set of experiments was conducted on a real-world dataset, MovieLens (GroupLens, 2015), which corroborates our claim.

1.1. Motivation

Most Group Recommender Systems suggest a list of items to users based on some consensus function. While generating items relevant to user preferences in the recommended list is a prime requirement, the order in which they appear in the list is important as well. It has been observed through experimentation (as explained later in Section 3.2) that when the satisfaction measure is modeled as a similarity function between a user's preference list and the generated recommendation list, there exists a positive correlation between the two. We studied this correlation between users' preferences and the generated recommendation list and found that order plays a major role in determining user satisfaction when there is a strict group budget. By order we mean that the user provides the set of items that he would like to see in the generated recommendation along with the order in which he would like those items to appear. As such, we propose a system which incorporates the notion of order in addition to the existing functionality of recommending to users of a group. This is particularly useful in Recommender Systems like travel planning, where along with providing their preferences the users would also like to visit places in a particular order depending on convenience, weather, transportation etc. The idea can also be used for planning schedules in educational institutions, where the students and teachers would like to suggest an order in which their classes are held or the order in which the examinations are conducted.

Example 1. Suppose there are six subjects in the curriculum– Computer Science, Mathematics, Physics, Chemistry, Geography and English– and for a particular day, the students have to choose four subjects to be taught and also suggest the order in which they would like the classes to be held. Some possible proposed choices are listed in Table 1.

Table 1
Example of schedule choices.

Student     Choice of schedule
Student 1   Computer Science → Mathematics → Physics → Chemistry
Student 2   Mathematics → Geography → Computer Science → English
Student 3   Physics → Chemistry → Computer Science → Mathematics
...         ...

The problem of selecting four subjects and arranging them in the order in which they would be held based on some consensus function is quite a challenging task, especially if the number of students is large. Now, consider similar constraints on a problem which may include many variables, such as recommending an itinerary to a group of travelers. Clearly, the problem complexity increases manifold. Satisfying each and every user is as important as meeting the satisfaction of the whole group. Another situation could be family members planning a Harry Potter movie series marathon. While some members might like to watch the critically acclaimed "Prisoner of Azkaban" first, another member might like to start with "Goblet of Fire", but then a Harry Potter purist may stress on watching the series chronologically, starting with "Philosopher's Stone". Certainly, this is a conundrum, and picking the right order that satisfies everybody's mood is a tough choice. Herein lies the possibility of a recommender system's inclusion of order to accommodate such variance in choices.

1.2. Our contributions

Our contributions are two-fold:

• We posit the idea of introducing order in group recommendation and study its effect on group satisfaction.
• We propose intelligent approximation algorithms for consensus functions to suit our requirements and analyze them thoroughly using three variables– group size, group budget and number of items.

The rest of the paper is organized as follows. Section 2 gives an overview of related works in the existing literature. Section 3 offers an insight into our proposed model. The algorithms and their modified versions are presented in Section 4. Section 5 provides the details of the experimental setup. Results and analysis pertaining to the experimentation are presented in Section 6. Finally, Section 7 concludes the article.

2. Related Work

2.1. Group Recommendation

Movie recommendation has been an active research area for the past few decades, but in recent years there has been an inclination towards building group recommender systems for movies (Christensen & Schiaffino, 2011) to cater to the needs of families, friends etc. Pera and Ng (2013) build a group recommender for movies by the name of GroupReM. The novelty of this approach is that instead of using the conventional approach of Matrix Factorization for movie ratings, this system employs (personal) tags for capturing the contents of the movies considered for the recommendation and the group members' interests. They formulate a three-pronged approach for the same:

• Employ a merging strategy to explore individual group members' interests in movies and create a profile that reflects the preferences of the group on movies.
• Use word-correlation factors to find movies similar in content.
• Consider the popularity of movies on a movie website.

Kagita, Pujari, and Padmanabhan (2015) utilize the virtual user approach for recommendation generation. They make use of transitive precedence relations among items to generate a virtual user profile that represents the combined profile of the group. This strategy has been applied to two different recommendation techniques– Precedence Mining and Collaborative Filtering. Other existing virtual user strategies take into consideration the set of common items consumed by the users, whereas the authors' strategy computes a fuzzy score for the items not consumed by all the users, which gives this method better precision and recall than the other virtual user strategies. Kagita et al. (2015) also propose 'monotonicity', i.e. the degree to which a recommendation prevails when new information is added to the training set, as a measure for estimating the quality of a Group Recommender System.

Villavicencio, Schiaffino, Diaz-Pace, and Monteserin (2016) present a multi-agent approach, called PUMAS-GR, for group recommendation. The novelty of their approach is that it leverages negotiation techniques in order to integrate recommendations (previously) obtained for each group member into a list of recommendations for the group.

Each user is represented by a personal agent that works on her behalf. The agents carry out a cooperative negotiation process based on the multilateral Monotonic Concession Protocol (MCP). The authors claim that this negotiation process can generate recommendations that satisfy the different group members more evenly than traditional group recommendation approaches, since it mirrors the way in which human negotiation seems to work.

To achieve high-quality initial personalization, recommender systems must provide an efficient and effective process for new users to express their preferences. Chang, Harper, and Terveen (2015) propose that this goal is best served not by the classical method, where users begin by expressing preferences for individual items, but instead by letting new users begin by expressing their preferences for groups of items. The authors test this idea by designing and evaluating an interactive process where users express preferences across groups of items that are automatically generated by clustering algorithms. The paper contributes a strategy for recommending items based on these preferences that is generalizable to any collaborative filtering-based system.

Ortega, Hernando, Bobadilla, and Kang (2016) introduce Matrix Factorization (MF) based Collaborative Filtering (CF) for group recommendations. The authors propose three original approaches to map the group of users to the latent factor space and compare the proposed methods in three different scenarios: when the group size is small, medium and large. They also compare the precision of the proposed methods with state-of-the-art group recommendation systems using KNN-based Collaborative Filtering. The authors further analyze group movie ratings on the MovieLens and Netflix datasets.

Kaššák, Kompan, and Bieliková (2016) focus on improving the quality of the very top-N recommendations and the recommendation list order in group recommendations. The authors achieve this by generating content-based and collaborative candidates for group members. Next, these candidates (content-based and collaborative separately) are aggregated to resolve conflicting preferences of group members. Finally, a hybrid recommender is applied to merge content-based and collaborative group candidates and to generate the final recommendations.

2.2. Consensus functions

Amer-Yahia et al. (2009) introduce the semantics of group recommendation algorithms by means of a Consensus Function. The authors also introduce disagreement among group members as a factor that greatly influences the efficiency and effectiveness of the generated recommendation. The disagreement among the users on a particular item is taken into account by a consensus function that computes the recommendation score for an item as a weighted summation of the item relevance and the group disagreement for that item. Amer-Yahia et al. (2009) utilize the concept of user similarity to break ties by giving priority to the user having maximum similarity with the other members of the group. Two disagreement models are proposed and top-k threshold algorithms are given for two popular recommendation strategies– Aggregated Voting and Least Misery. The proposed algorithms were analyzed for quality and efficiency by performing experiments on the MovieLens dataset (MovieLens, 2009), and the utility of the disagreement models for obtaining effective and efficient results is established.

Jameson and Smyth (2007) put forward two methods of generating group recommendations, viz. Virtual User and Recommendation Aggregation. The first method combines the highest rated items of the individual users into a list that is recommended to the group, while the second aggregates the predicted ratings of the items for all the users and returns the set of items having the highest aggregate ratings. The choice of method depends on the kind of application and the extent to which the users are able to modify their choices or view the other person's choices during the recommendation process. The authors also examine various existing recommender systems and depict the different ways in which the users' preferences are analyzed and used for the group preference model. Our work adopts the recommendation aggregation consensus because of its flexibility.

There are four basic stages in collaborative filtering algorithms at which the data of the group's users can be aggregated: the similarity metric, establishing the neighborhood, the prediction phase, and the determination of recommended items. Ortega, Bobadilla, Hernando, and Gutiérrez (2013) perform aggregation experiments in each of the four stages and present two fundamental conclusions:

1. The system accuracy does not vary significantly according to the stage where the aggregation is performed.
2. The system performance improves notably when the aggregation is performed in an earlier stage of the collaborative filtering process.

The authors also provide a group recommendation similarity metric and demonstrate the convenience of tackling the aggregation of the group's users in the actual similarity metric of the collaborative filtering process. Our approach varies from this work in two respects, viz. (a) we do not employ collaborative filtering and (b) the preferences of users are captured implicitly in our approach, as we simply consider those movies in which the user is interested.

Conventional solutions to Group Decision Making (GDM) problems do not regard the disagreement of a few of the participants. Castro, Quesada, Palomares, and Martínez (2015) provide a method of seeking group recommendations that ensures a certain level of agreement among the group members. For this, Consensus Reaching Processes (CRPs) have been proposed, wherein the preferences are brought iteratively closer to one another until a sufficient level of agreement is achieved. The task of recommendation here has been divided into two major phases– a Recommendation phase and a Consensus phase. In the Recommendation phase, the k-nearest neighbor Collaborative Filtering algorithm is applied to obtain individual recommendations, and the top-n commonly predicted items are presented as preferences. Finally, in the Consensus phase, a CRP is applied using an automatic consensus model, wherein automatic changes in preferences are made for the decision making using fuzzy preference relations. The proposed method was tested on the MovieLens-100K dataset (MovieLens, 1998) and the improvement over the baseline group recommendation techniques has been established.

In most of the works mentioned above, a consensus has been drawn with or without collaborative filtering that reflects the overall commonality of users based on their past preferences, but none of them account for the dissimilarity between the order of items that are recommended. Thus, we hypothesize that order is important, and in the sections that follow we establish the same.

Roy, Thirumuruganathan, Amer-Yahia, Das, and Yu (2014) introduce the concept of a feedback box in group recommendation. The feedback box is used by a user when he is updating his preferences. Given the updated user's preference, the feedback box generates a recommendation for the user that maximizes the satisfaction of the user in the henceforth generated recommendation. The use of the feedback box is examined for the two recommendation strategies– Aggregated Voting and Least Misery– in the presence of two different consensus functions– Overlap Similarity and Hamming Distance. The various proposed recommendation algorithms and feedback boxes are analyzed for efficiency using the MovieLens dataset. The feedback box generates optimal results for the Least Misery case using Overlap Similarity as the consensus function.

Performance is recorded mainly by varying three parameters– the number of users, the total number of items, and the group budget.

Our work draws inspiration from Roy et al. (2014), as we employ similar consensus functions embedded with the additional concept of order. We have adopted Overlap Similarity as the basis of our satisfaction measure2 and adapted it appropriately for our problem.

3. Model

Our user interaction model requires four inputs, viz. Number of users (n), Total number of items (m), Group budget (k) and User preference vector (v). Once the user preference vectors are acquired, they are fed into our system in the form of various matrices and vectors; the final output is a vector of recommendation (r). As stated above, our work draws its framework from Roy et al. (2014) and hence, we have used their models with modifications to adapt them to our requirements wherever necessary; a glimpse of which is provided in Section 3.3.

3.1. User preference model

A user provides a vector v of length k (equivalent to the group budget) where each index i holds an item which she would like to appear at the ith position in the recommendation list.3 The content of vi is invariably the item_id of an item. We assume that each item provided in the preference vector is unique. The result of the Group Recommendation System is a vector r of length k, similar to the user preference vector.

Example 2. Let's say there is a total of 20 items (m = 20) of which each user needs to select 6 items to be listed in the preference vector. In that case a sample vector may look like this:

12 6 1 9 15 17

which implies that the user wants to see the item with item_id = 12 at the first position, the item with item_id = 6 at the second position and so on. The recommendation list generated may be

15 12 1 9 6 17

which means that the items with item_ids 1, 6, 9, 12, 15 and 17 are selected based on the user satisfaction measure and consensus function, and the items would appear in the order as stated.

3.2. User satisfaction measure

In the existing literature there are many techniques to measure the satisfaction of user preferences against the generated recommendation list, but none of them account for order. So, we propose a new measure called USO (User Satisfaction with Order). It can be represented as:

uso_j = uso_j + f(x, a)    (1)

f(x, a) = k / (x + 1)^(1/a) + c    (2)

where uso is a vector of length n whose elements are the user satisfaction scores computed by the above formula.4 For each user j, uso_j is the summation of the previous USO score and the new order satisfaction component f(x, a)5. k is the group budget, and x is the absolute difference between the position at which the user wanted the item to appear and the position at which the item actually appears in the generated recommendation (x can have values between 0 and k − 1). a is a constant such that the value of a is inversely proportional to the importance that we give to the order while generating the recommendation. c is a regularization factor, usually a constant value. The values of f(x, a) are stored in a score vector s. Satisfaction of any user j at any state depends on the currently recommended list of items and their order. So, initially, all users have a satisfaction score of zero. When the first item appears in the recommendation list, each user j will express her satisfaction using f(x, a). In case the item was not in her initial preference vector v, the satisfaction score for that user j will remain the same. This process is carried out for all n users involved. Now, when the second item appears, each user will again compute her satisfaction using f(x, a). This new satisfaction will be inclusive of the satisfaction she received due to the item in the first position. The process as mentioned above is thus iterative and continues until the group budget is full. The procedure to calculate s is presented as Algorithm 1.

Algorithm 1: Generating score vector.
Data: k, a
Result: Score vector s
1 for 0 ≤ i ≤ k − 1 do
2     s_i ← k / (i + 1)^(1/a) + c;
3 end

The greater the value of a, the lesser the role of order in determining the recommendation, and vice-versa. To demonstrate this, we plot the function f(x, a) for different values of x and a while assigning the group budget k = 15 and setting c = 1. The resultant plot is depicted in Fig. 1.

[Fig. 1. Effect of order on recommendation for k = 15.]

From this figure, it is clear that for small values of a (say 0.5), there appears a large gap for a deviation of a single position in the recommendation. On the other hand, if the value of a becomes very large (say 100 or 1000), it hardly affects the satisfaction function.6 Thus, it substantiates our claim that order is of prime importance to group recommendation. Additional graphs are presented in Figs. 2 and 3. The satisfaction function f(x, a) basically represents the group budget normalized over the deviation in order in the recommendation list from the user's expectation. f(x, a) gives us control over the extent to which order affects the recommendation.

2 Overlap Similarity was chosen as it considers the similarities between the generated recommendation and the user preferences.
3 Existing approaches require users to specify their requirements arbitrarily.
4 All the user satisfaction scores are initialized to zero.
5 s_j basically denotes user j's satisfaction with the order of items.
6 In Fig. 1 the plots for a = 100 and a = 1000 almost overlap.

3.3. Consensus function

3.3.1. Aggregated Voting
Aggregated Voting is one of the popular recommendation consensus methods (Adomavicius & Tuzhilin, 2005; Kompan & Bieliková, 2014; Resnick & Varian, 1997). This method aims to compute the best group recommendation by considering items in such a fashion that the sum total of the individual satisfaction scores of all the users is maximized.
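Before applying the consensus functions, the scoring machinery of Section 3.2 (Eqs. (1)–(2) and Algorithm 1) can be sketched in a few lines. The following is an illustrative Python sketch; the function names `score_vector` and `user_satisfaction` are ours, not the paper's. With k = 5, a = 2 and c = 1 it reproduces the score vector used later in Table 3, and a large a flattens the scores, matching the discussion around Fig. 1.

```python
# Sketch of the USO scoring machinery (Eqs. (1)-(2), Algorithm 1).
# Helper names are ours, not the paper's.

def score_vector(k, a, c=1.0):
    """Algorithm 1: s_i = k / (i + 1)**(1/a) + c for i = 0 .. k-1."""
    return [k / (i + 1) ** (1.0 / a) + c for i in range(k)]

def user_satisfaction(pref, rec, s):
    """Total USO of one user: for every preferred item that made it into
    the recommendation, add s[x], where x is the positional deviation."""
    total = 0.0
    for p, item in enumerate(pref):       # p = desired position (0-based)
        if item in rec:
            x = abs(p - rec.index(item))  # deviation in positions
            total += s[x]
    return total

# Small `a` penalizes order deviations sharply; large `a` flattens the curve.
s_strict = score_vector(5, a=2)    # ~[6.0, 4.5355, 3.8868, 3.5, 3.2361]
s_loose  = score_vector(5, a=100)  # nearly constant, close to 6 throughout
```

A user whose preferences appear exactly in their desired positions collects s[0] per item, the maximum; every position of deviation drops the contribution along the curve above.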

Example 3. Suppose there are n = 4 users and m = 5 items out of which k = 2 have to be selected, and the preference vectors are:

2 1
4 1
2 1
3 5

In this case, the total number of possible recommendations is mPk = 20. By the aggregated method of voting the recommendation list would be

2 1

as it will provide maximum satisfaction to the group. Although the method is relatively simple, it incurs a computational cost of O(k^2 m^k). To avoid such a high computational complexity, we propose an intelligent approximation algorithm for aggregated voting in Section 4.2 that computes the recommendation efficiently in polynomial time.

3.3.2. Least Misery
This is another popular method (Masthoff, 2011; O'connor et al., 2001) that generates a recommendation such that the minimum satisfaction score among all the users is maximized. For Example 3, the Least Misery method would generate

3 1

as the final output. While this method ensures that every user is minimally satisfied, it performs poorly when the overall satisfaction of the group is considered. Moreover, in cases where the user base is large and the group budget is small, both the individual and the overall satisfaction of the group may be affected. This is because in Least Misery we try to maximize the minimum satisfaction of the group, and if the group budget is small then the overall satisfaction score would be much less than that generated by GRAM. Also, since we are aiming for at least minimum satisfaction for every user, chances are no one in the group is completely satisfied. E.g., let there be 1000 users while the group budget is 10. Suppose a user u selects an item t which does not appear in the rest of the user preferences. Now, if u is minimally satisfied at some stage, it would further reduce the satisfaction of the other users, since they never kept t in their preferences. To overcome such adversities, we have proposed a modified version of the Least Misery method in Section 4.4.

[Fig. 2. Effect of order on recommendation for k = 5.]
[Fig. 3. Effect of order on recommendation for k = 100.]

4. Algorithms

To understand the algorithms in this section, let us consider the following example:

Example 4. Number of users (n): 8
Number of items (m): 9
Group budget (k): 5
Value of a: 2
The preferences as given by the users are:

User1: 2 5 6 7 8
User2: 5 8 7 4 3
User3: 8 2 3 7 9
User4: 8 4 3 7 2
User5: 2 3 5 1 6
User6: 5 6 7 8 2
User7: 7 8 2 5 4
User8: 9 7 8 3 4

Corresponding to these vectors, two matrices are generated–

1. the User Preference matrix U and
2. the User Satisfaction Score matrix S.

In the former matrix, of size k × m, each element U_{i,j} represents the number of users who would like to see item j at position i in the recommendation list. We set the value of a = 2. The U so generated is presented in Table 2.

Table 2
User preference matrix U.

              Item_ID
Position  1  2  3  4  5  6  7  8  9
1         0  2  0  0  2  0  1  2  1
2         0  1  1  1  1  1  1  2  0
3         0  1  2  0  1  1  2  1  0
4         1  0  1  1  1  0  3  1  0
5         0  2  1  2  0  1  0  1  1

Table 3 gives the values of the score vector s.

Table 3
Score vector s for k = 5.

Index  0       1       2       3       4
Score  6.0000  4.5355  3.8867  3.5000  3.2361

Once the User Preference Matrix and the score vector have been computed, we generate the User Satisfaction Matrix. Here, S_{i,j} denotes the satisfaction value added to the group satisfaction when the item numbered j is chosen for the ith position (the User Satisfaction Score matrix S corresponding to the above U is shown in Table 4).
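The construction of U and s for Example 4 can be checked mechanically. A minimal sketch (variable names are ours; 0-based indexing; items numbered 1..m as in the example):

```python
# Building the two inputs of Section 4 for Example 4 (n=8, m=9, k=5, a=2).
# The paper formalizes these as Algorithm 2 (matrix U) and Algorithm 1 (s).

prefs = [                     # Example 4, one row per user (item_ids)
    [2, 5, 6, 7, 8], [5, 8, 7, 4, 3], [8, 2, 3, 7, 9], [8, 4, 3, 7, 2],
    [2, 3, 5, 1, 6], [5, 6, 7, 8, 2], [7, 8, 2, 5, 4], [9, 7, 8, 3, 4],
]
k, m, a, c = 5, 9, 2, 1

# Algorithm 2: U[i][j] = number of users wanting item j+1 at position i+1.
U = [[0] * m for _ in range(k)]
for v in prefs:
    for p, item in enumerate(v):
        U[p][item - 1] += 1

# Algorithm 1: score vector (Table 3).
s = [k / (i + 1) ** (1.0 / a) + c for i in range(k)]

print(U[0])  # first row of Table 2: [0, 2, 0, 0, 2, 0, 1, 2, 1]
```

The printed row matches the first row of Table 2, and s reproduces Table 3 to four decimal places.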

Table 4
User satisfaction matrix S.

              Item_ID
Position  1     2      3      4      5      6      7      8      9
1         3.50  26.89  19.04  14.51  23.92  11.66  28.81  31.69  9.24
2         3.88  26.61  22.46  16.88  23.49  14.04  31.27  32.99  8.04
3         4.54  26.08  24.96  16.84  22.84  14.42  34.03  31.27  7.77
4         6.00  24.49  23.49  18.96  21.42  12.96  34.46  29.84  8.03
5         4.53  25.86  21.81  20.03  18.39  13.39  28.12  27.89  9.23

For instance, S_{1,2} denotes the group satisfaction score when the 2nd item is chosen for the 1st position. As we can see from the User Preference Matrix U, there are two users who want to see item no. 2 at position 1 (Users 1 and 5), one user (User 3) who wants to see item no. 2 at position 2, one user (User 7) who wants to see this item at position 3 and, finally, two users (Users 4 and 6) who want to see it at position 5. For each of the users {1, 5, 3, 7, 4, 6}, choosing item no. 2 at position 1 will entail adding their corresponding satisfaction component (s) to the group satisfaction. Users 1 and 5 are optimally satisfied when item no. 2 is selected for position 1, since there is no deviation from the expected position. Hence, corresponding to these two users, we will add s_0 = 6 to the group satisfaction score S_{1,2}. User 3 will add s_1 = 4.5355 to S_{1,2}, since there is a deviation of one position from her expectation, and so on. Hence, S_{i,j} is the sum of all these satisfaction component values added by each user for a particular item and a particular position:

S_{i,j} = Σ_{p=1}^{k} s_{|p−i|} × U_{p,j}

So, for the example under consideration:

S_{1,2} = s_0 × U_{1,2} + s_1 × U_{2,2} + s_2 × U_{3,2} + s_3 × U_{4,2} + s_4 × U_{5,2}
        = 6 × 2 + 4.53 × 1 + 3.89 × 1 + 3.50 × 0 + 3.23 × 2
        = 26.89

Similarly, the other values of S are calculated, and the final matrix is presented as Table 4. The generalized procedures explained above for generating U and S are formalized as Algorithms 2 and 3 respectively.

Algorithm 2: Generating user preference matrix.
Data: k, v, n
Result: User Preference Matrix U
1 U ← 0;
2 for 1 ≤ j ≤ n do
3     for 1 ≤ p ≤ k do
4         U_{p, v_j[p]} ← U_{p, v_j[p]} + 1;
5     end
6 end

Algorithm 3: Generating user satisfaction matrix.
Data: k, m, a, n
Result: User Satisfaction Matrix S
1 S ← 0;
2 for 1 ≤ j ≤ k do
3     for 1 ≤ p ≤ m do
4         for 1 ≤ q ≤ k do
5             S_{j,p} ← S_{j,p} + U_{q,p} × s_{|j−q|};
6         end
7     end
8 end

4.1. GReedy Aggregated Method (GRAM)

This method uses a greedy approach (Black & Pieterse, 2005) to compute the recommendation. It operates in k iterations, where at each step it finds a particular position p and an item j such that selecting this item j for this position p adds the maximum satisfaction to the group in the present state (that is, has the maximum value in matrix S). According to this logic, for the given example, S_{4,7} has the highest value, so, for the 4th position, the 7th item will be selected. By the greedy approach, we will not visit item no. 7 again. In the following iterations the 8th item is selected for the 2nd position, the 2nd item is selected for the 1st position and so on. Finally, this method generates the output:

2 8 3 7 4

with the total group satisfaction = 139.339. The total group satisfaction is obtained by adding up the individual user satisfactions generated by the procedure Compute_Satisfaction, presented as Algorithm 4. GRAM is presented in Algorithm 5. The time complexity of GRAM (including the cost of computing S) is O(k^2 m + kn). Clearly, the introduction of the greedy approach makes it an intelligent one, as it avoids selecting an item visited once. Once an item has been visited for a particular position, GRAM avoids visiting it in subsequent iterations.

Algorithm 4: Computing satisfaction of individual user.
Data: u, v_u, k, r
Result: Satisfaction score as sum
1 sum ← 0;
2 for 1 ≤ p ≤ k do
3     item ← v_u[p];
4     for 1 ≤ q ≤ k do
5         if item = r_q then
6             sum ← sum + s_{|p−q|};
7         end
8     end
9 end

Algorithm 5: Greedy Aggregated Method.
Data: k, m, S
Result: Recommendation vector r
1 r ← 0    (bold 0 represents the zero vector);
2 j ← 1;
3 while j ≤ k do
4     (p, q) ← argmax_{1≤p≤k, 1≤q≤m} S_{p,q};
5     r_p ← q;
6     for 1 ≤ i ≤ m do S_{p,i} ← −∞; end
7     for 1 ≤ i ≤ k do S_{i,q} ← −∞; end
8     j ← j + 1;
9 end
A. Agarwal et al. / Expert Systems With Applications 82 (2017) 115–127 121

Algorithm 6: Hungarian Aggregated Method. Algorithm 7: Least Misery method algorithm.


Data: k, m, S Data: k, m, v
Result: Recommendation vector r Result: Recommendation vector r
1 p ← 1; 1 r ← 0;
2 q ← 1; 2 for 1 ≤ p ≤ k do
3 r←0  Bold 0 represents zerovector; 3 for 1 ≤ q ≤ n do
4 n ← max(k, m ); 4 sat p ← Compute_Satisfaction(q, vq , k, r );
5 max_val ← max 1≤i≤n Si, j ; 5 end
1≤ j≤n 6 least _sat ← min1≤p≤n sat p ;
6 ∀i, j S̄i, j = max_val − Si, j ; 7 u ← p;
7 H ← Hungarian(k, m, S̄ ); 8 m, row, col ← 0;
8 while p ≤ k do 9 for 1 ≤ p ≤ k do
9 while q ≤ m do 10 item ← vu [ p];
10 if H p,q = 1 then 11 q ← 1;
11 r p ← q; 12 for 1 ≤ q ≤ k do
12 end 13 if Sq,item > m then
13 q ← q + 1; 14 m ← Sq,item ;
14 end 15 row ← q;
15 p ← p + 1; 16 col ← item;
16 end 17 end
18 end
19 end
20 rrow ← col;
ing the item again. This effectively reduces the computation time
21 for 1 ≤ i ≤ m do
at the same time aiming to increase the overall group satisfaction.
22 Srow,i ← φ ;
23 end
24 for 1 ≤ j ≤ k do
4.2. Hungarian Aggregated Method (HAM) 25 S j,col ← φ ;
26 end
This method employs Hungarian Algorithm (Kuhn (1955)) for 27 end
maximizing the group satisfaction. As usual, the satisfaction value
matrix S is supplied as input. Hungarian algorithm works on
the principle that the each component of the sum in a two-
dimensional matrix belongs to a unique row and column. For our Time complexity of HAM (including the cost of computing S) =
case, it produces a matrix H of size m × m such that if Hi, j = 1, O (m3 + k2 m + kn ). This is again an application of intelligence as
it implies that jth item should be placed at ith position to ensure we pick elements into the final list in a perceptive fashion.7
maximum group satisfaction. Strictly speaking, the Hungarian al-
gorithm offers a minimum cost assignment combinatorial solution.
4.3. Least Misery Method (LMM)
However, it can be modified to compute the maximum cost by
subtracting each element from the maximum cost in the matrix.
As stated earlier, this algorithm tries to maximize the minimum
The Hungarian Aggregated Method (HAM) as presented in
satisfaction of the group. It too operates in k iterations where at
Algorithm 6 takes a p × q matrix as input and generates an as-
each step it finds the user with the current least satisfaction. Next,
signment matrix of size r × r (where r = max( p, q )) with Boolean
the method considers the set of items provided in her user pref-
entries such that assignment (i, j ) = 1 if element j is selected for
erence vector and select an item i for position j, so as to provide
ith row. This stands true for cases generating the minimum sum
maximum satisfaction to the group. For the example under con-
of the matrix such that there is only one entry assigned the value
sideration initially, user 1 is identified as the user with the least
of 1 from every row and every column. In our case, we have to
misery. Her set of items: {2, 5, 6, 7, 8}. In this set, LMM searches
modify this problem for obtaining the maximum cost. To do so, we
for an item that can produce maximum satisfaction for the group
take the maximum cost in the satisfaction value matrix S and each
pertaining to that particular item (i.e. it tries to find the maximum
element of this matrix is subtracted from this maximum value,
satisfaction score in her satisfaction score vector). The correspond-
i.e.x = max − x for each element x in S. This resultant matrix S̄ is
ing position for item is also retained. So, item 7 is selected for the
given as input to the minimum cost Hungarian procedure. For our
4th position. Again, the method looks for the next user with min-
situation, we have k ≤ m. The Hungarian procedure will run in
imum satisfaction. It turns out to be user 5 and the above proce-
O (m3 ) time to generate a m × m matrix. The items of our interest
dure is repeated until all the users are covered. The recommenda-
will lie in the first k rows of this assignment matrix. For each en-
tion generated using this method is:
try Hi, j = 1, we will choose the jth item for the ith position. Based
on this matrix H, the requisite items are selected in the appro- 8 5 2 7 3
priate order. The Hungarian procedure is presented in detail as
Algorithm 11 in Appendix A. For the example under consideration, with the total group satisfaction =137.537 and the minimum satis-
Algorithm 6 produces the recommendation: faction of the group =11.9223.
The LMM is presented in Algorithm 7. Time complexity of LMM
5 8 3 7 2 is O (k3 n ).

with the total group satisfaction = 142.19 as computed using Com-


pute_Satisfaction procedure. 7
For details refer Appendix A.
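The running example above can be checked end-to-end. Below is a minimal Python sketch (not the authors' C++ prototype) that runs GRAM's greedy selection on the satisfaction matrix S of Table 4 and then, since the example is tiny, exhaustively searches all ordered choices of k items out of m to confirm the optimum that HAM's Hungarian step computes. The function and variable names are ours, chosen to mirror the pseudocode:

```python
from itertools import permutations

# Satisfaction matrix S from Table 4: rows = positions 1..5, columns = items 1..9.
S = [
    [3.50, 26.89, 19.04, 14.51, 23.92, 11.66, 28.81, 31.69, 9.24],
    [3.88, 26.61, 22.46, 16.88, 23.49, 14.04, 31.27, 32.99, 8.04],
    [4.54, 26.08, 24.96, 16.84, 22.84, 14.42, 34.03, 31.27, 7.77],
    [6.00, 24.49, 23.49, 18.96, 21.42, 12.96, 34.46, 29.84, 8.03],
    [4.53, 25.86, 21.81, 20.03, 18.39, 13.39, 28.12, 27.89, 9.23],
]
k, m = len(S), len(S[0])

def gram(S):
    """Greedy Aggregated Method: repeatedly pick the largest remaining S[p][q],
    then retire that position and that item."""
    r = [0] * k                                   # r[p] = item chosen for position p+1
    free_pos, free_item = set(range(k)), set(range(m))
    total = 0.0
    for _ in range(k):
        p, q = max(((p, q) for p in free_pos for q in free_item),
                   key=lambda pq: S[pq[0]][pq[1]])
        r[p] = q + 1                              # report items 1-based, as in the paper
        total += S[p][q]
        free_pos.discard(p)
        free_item.discard(q)
    return r, total

def optimum(S):
    """Exhaustive search over all ordered choices of k items out of m
    (feasible here: 9*8*7*6*5 = 15120 candidates). HAM's Hungarian step
    finds this same optimum in O(m^3) instead."""
    best = max(permutations(range(m), k),
               key=lambda items: sum(S[p][q] for p, q in enumerate(items)))
    return [q + 1 for q in best], sum(S[p][q] for p, q in enumerate(best))

r_greedy, sat_greedy = gram(S)     # -> [2, 8, 3, 7, 4], total ~ 139.33
r_opt, sat_opt = optimum(S)        # -> [5, 8, 3, 7, 2], total ~ 142.19
```

Running this reproduces the recommendations reported above: GRAM yields 2 8 3 7 4 (total ≈ 139.33 when computed from the rounded table entries) and the optimal assignment is 5 8 3 7 2 with total 142.19, illustrating the gap between the greedy choice and the Hungarian optimum.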
Algorithm 8: Computing similarity between two users.
    Data: u, v_u, k, v_w
    Result: val
    1  val ← Compute_Satisfaction(u, v_u, k, v_w);
    2  return val;

Algorithm 9: Procedure for computing priority.
    Data: n, v_u, k, s, u
    Result: Priority of user u as sum
    1  r ← v_u;
    2  sum ← 0;
    3  for 1 ≤ i ≤ n do
    4      if i ≠ u then
    5          sum ← sum + Compute_Satisfaction(i, v_i, k, r);
    6      end
    7  end

4.4. Least Misery Method with Priority (LMMP)

This algorithm is an extension of the Least Misery method and follows the same principle as described in Section 4.3. However, to resolve the ties that may occur (for instance, when initially all the users have the same satisfaction score of zero), we take the similarity between the users into consideration. The principal idea is that if there is a set of users having the same current satisfaction, we should satisfy the user who can add the maximum possible satisfaction to the group. To implement this, we assign a priority value to each user based on her similarity with all the other users in the group. The priority value is computed once per recommendation, where priority[i] denotes the similarity of user i with all the other users in the group. Overlap similarity (i.e., the similarity between two users) is calculated from the similarity between the two users' preference vectors. The underlying philosophy of Compute_Satisfaction and Compute_Overlap is the same; the difference lies in the type of lists passed as arguments. For Compute_Satisfaction, one list is the preference vector and the other is the generated recommendation, while for Compute_Overlap both lists are user preferences. The higher the priority value, the greater the similarity of the user with the other members of the group. Hence, satisfying such a user will most definitely add greater satisfaction to the group.

For instance, let us compute the priority score for User 1 from Example 4. For this, we calculate the similarity, or overlap, of preference vector v_1 with the vectors v_2, v_3, ..., v_8. The overlap values calculated using the Compute_Overlap procedure (Algorithm 8) are presented in Table 5.

Table 5
Similarity values for User 1 with the other users.

    User    Overlap with User 1
    2       12.5711
    3       13.7716
    4       12.4721
    5       14.4223
    6       21.3782
    7       14.7735
    8        7.7735

Summing up the above values, we get the priority for user 1 as priority[1] = 97.1623. Similarly, priority values are calculated for the other users. In this way, we ensure that the Least Misery user is chosen optimally; that is, we satisfy this user while also maximizing the overall group satisfaction. Breaking ties with the help of priority instills the property of aggregation into the Least Misery method. For the case under consideration, the priority values generated using the Compute_Priority procedure (Algorithm 9) are shown in Table 6.

Table 6
Priority values for the users in Example 4.

    User    Priority
    1        97.1623
    2       101.711
    3        96.9522
    4       103.855
    5        61.5836
    6        97.6902
    7        99.3427
    8        85.2987

At first, all the users have a satisfaction score of 0, so user 4, having the highest priority value among the tied users, is selected as the user with the current least satisfaction. Now, from his set of items, i.e., {8, 4, 3, 7, 2}, we look for the item that can generate maximum satisfaction for the group: the 7th item is selected for the 4th position. The satisfaction values of the users are recomputed, and now user 5 has the least satisfaction. The same procedure is repeated for this user, the 2nd item is chosen for the 3rd position, and so on. For Example 4, LMMP generates the following output:

    5 8 2 7 3

with the total group satisfaction = 139.265 and the minimum satisfaction of the group = 11.2735.

The above scores show that including priority in the Least Misery approach increases the overall satisfaction of the group (here from 137.537 to 139.265, about 1.3%) at the cost of only a small reduction in the least satisfaction (from 11.9223 to 11.2735, about 5.4%). This approach is particularly useful when the numbers of users and items are large compared to the group budget. LMMP is presented as Algorithm 10. The time complexity of LMMP is O(k³n + k²n²).

5. Experimental setup

We conducted a series of performance and quality experiments using a real-world dataset extracted from MovieLens (GroupLens, 2015). Our prototype system was implemented in C++. All experiments were conducted on an Intel machine equipped with a quad-core 2.20 GHz CPU, 8 GB RAM and a 1 TB HDD, running Ubuntu 14.04. All the results presented in the next sections were obtained as the average of five runs.

5.1. Data preparation

The MovieLens dataset (GroupLens, 2015) contained 20,000,263 movie ratings provided by 7200 users over 27,278 movies. We adopted a simple Boolean procedure to convert these ratings to our required model. For each user, if a particular movie was rated greater than 2 (on a scale of 1–5), it was selected and appended to the preference vector of that user in the order in which it appeared in the input rating file. If the movie acquired a rating less than or equal to 2, it was simply discarded. So, in effect, we had 7200 user preference vectors. We took the order of movies to be the order in which the users' ratings appear in the input. Since the number of movies is quite large compared to the number of users, the chance of two users having exactly the same ordered set is minimal. Hence, we assumed that the order in which the ratings of movies appear in the dataset is the order of recommendation the user prefers.
Algorithm 10: Least Misery Method with Priority.
    Data: k, m, a, s, S
    Result: Recommendation vector r
    1  r ← 0;
    2  for 1 ≤ j ≤ n do
    3      priority_j ← Compute_Priority(n, k, s, v_j, j);
    4  end
    5  for 1 ≤ j ≤ k do
    6      for 1 ≤ p ≤ n do
    7          sat_p ← Compute_Satisfaction(p, v_p, k, r);
    8      end
    9      least_sat ← min_{1≤p≤n} sat_p;
    10     if sat_i = least_sat for more than one user i then
    11         u ← arg max_{q : sat_q = least_sat} priority_q;
    12     else
    13         u ← arg min_{1≤p≤n} sat_p;
    14     end
    15     best, row, col ← 0;
    16     for 1 ≤ p ≤ k do
    17         item ← v_u[p];
    18         for 1 ≤ q ≤ k do
    19             if S_{q,item} > best then
    20                 best ← S_{q,item};
    21                 row ← q;
    22                 col ← item;
    23             end
    24         end
    25     end
    26     r_row ← col;
    27     for 1 ≤ i ≤ m do
    28         S_{row,i} ← φ;
    29     end
    30     for 1 ≤ i ≤ k do
    31         S_{i,col} ← φ;
    32     end
    33 end

Since no explicit groups were provided in the dataset, we assumed groups of users based on their similarity, as explained in Section 4.4, using Algorithm 8.

6. Results and analysis

We report the efficiency results for the five proposed algorithms. Performance is measured mainly by varying three parameters: group size (n), total number of items (m) and group budget (k).

6.1. Results

6.1.1. Varying group budget k
We study the running time of all the proposed algorithms by varying the group budget (k). The brute force algorithm quickly becomes very expensive (for k < 5). We set n = 1200 and m = 2000. As shown in Fig. 4, our proposed algorithms prove to be very efficient; the figure depicts how their running time varies with the group budget k. The running time of the Least Misery with Priority method increases most rapidly with increasing k. This can be attributed to its time complexity of O(nk³ + n²k²): calculating the priorities a priori and then breaking ties with priority adds to the computation time.

Fig. 4. Comparison of time complexity of algorithms for varying group budget.

6.1.2. Varying the number of users n
In this experiment, we study the effect of the number of users on the running time of the proposed algorithms. For this, we set the group budget k = 100 and m = 2000, and vary the group size n. The resulting plot is depicted in Fig. 5. From this, we observe that GRAM gives the best performance, as its running time shows no noticeable change with respect to the number of users.

Fig. 5. Comparison of time complexity of algorithms for varying number of users.

6.1.3. Varying the total number of items m
In this experiment, we set k = 100 and n = 1000 and vary the total number of items. The results are shown in Fig. 6.

Fig. 6. Comparison of time complexity of algorithms for varying number of items.

6.2. Analysis

The performance of the two aggregation methods was compared on the basis of the overall group satisfaction, and the two Least Misery algorithms were compared on the overall group satisfaction as well as the least satisfaction obtained. The following results were obtained:
Fig. 7. Comparison of HAM and GRAM over satisfaction.
Fig. 8. Comparison of Least Misery vs. Least Misery with Priority over user satisfaction.
Fig. 9. Comparison of Least Misery vs. Least Misery with Priority over least satisfaction.
Fig. 10. Comparison of Least Misery vs. Least Misery with Priority over average user satisfaction.

1. The Hungarian Aggregated Method (HAM) gave satisfaction at par with the Greedy Aggregated Method (GRAM). On average, it gave about 2.5% more satisfaction than GRAM. However, this greater satisfaction is obtained at the cost of a much higher running time. Fig. 7 depicts the average user satisfaction for both algorithms for fixed values of m and k (m = 2000, k = 100) and n varying between 800 and 1200.

2. Both Least Misery methods gave almost equal overall satisfaction for the group when the group budget was comparable with the total number of items. However, as the group budget is reduced, the Least Misery with Priority method gives better satisfaction than the plain approach. The variation of average user satisfaction with the number of users for both methods is shown in Fig. 8. Although the overall satisfaction produced by the Least Misery Method with Priority is greater in some cases, the corresponding least satisfaction produced by LMMP is lower compared to Least Misery. This can be attributed to the fact that including the preferences of the users adds the property of aggregation to the plain least misery approach, and hence there is a slight compromise in the least misery in such cases. The variation of the least satisfaction with the group budget for both algorithms is shown in Fig. 9.

3. However, in cases where the group budget is quite small in comparison to the number of users and the total number of items, the Least Misery with Priority method produces much better overall group satisfaction (as high as 30% greater) than Least Misery. In such cases, it becomes difficult to satisfy all the users, so the optimum recommendation is the one that not only tries to maximize the least misery but also produces a good satisfaction for the group. This is the advantage of the Least Misery with Priority method. The difference between the two algorithms for lower values of k is depicted in Fig. 10.

From the above set of experiments we observe that:

• Least Misery with Priority and the Hungarian Aggregated Method (HAM) are quite costly compared to their counterparts. However, these algorithms sacrifice time to produce better satisfaction results.
• HAM achieves satisfaction equal to the brute force algorithm in most cases. However, GRAM also produces approximate results (97% of HAM) in much less time.
• The Least Misery Method with Priority is useful when the group budget is small compared to the total number of items and users, and it is not possible to satisfy all the users. In other cases, the Least Misery method is preferred, as it produces a greater value of least satisfaction and an approximate overall group satisfaction in less time.

7. Conclusions and future work

Group Recommender Systems have been studied and applied widely. Conventional approaches to Group Recommender Systems employ either Aggregated Voting or Least Misery methods. While much work has been done to ensure maximum satisfaction of the group by processing user preferences, to the best of our knowledge the order of the recommended items has never been considered. Through our proposed user satisfaction measure (USO), we show that order plays an integral part in achieving the best satisfaction for the group. We proposed two modified versions of the Aggregated Voting strategy, namely GRAM and HAM, which overcome the shortcomings of the original aggregated method. We also introduced a variation of the Least Misery method, Least Misery with Priority, to counter the pitfalls of the former. Most of the variations proposed by us use some form of intelligence. The resultant system takes into account both the preferences and the order suggested by individual users to ensure maximum group satisfaction. An exhaustive set of experiments was carried out on a real-life dataset to corroborate our claims.
As part of our future work, we would like to explore the efficiency and scalability of our algorithms on various other datasets, such as those for travel itineraries. Besides, we intend to apply other intelligent, machine-learning based algorithms to capture the effect of order on both individual and group recommendation.

Acknowledgment

This work was partially funded under the DST-SERB, India grant YSS/2015/000906.

Appendix A. Steps involved in computing assignment matrix H

The following Hungarian algorithm is a modified form of the original Munkres' Assignment Algorithm (Pilgrim, 2017). It manipulates a two-dimensional matrix by starring and priming zeros (explained below) and by covering and uncovering rows and columns.

1. For each row of the matrix, we find the smallest element and subtract it from every element in its row. We define a local variable called minval that holds the smallest value in a row; minval is subtracted from each element of that row. Next, we find a zero (Z) in the resulting matrix. If there is no starred zero in its row or column, we star Z. This is repeated for each element in the matrix.
   In this step, we introduce the mask matrix H, which has the same dimensions as the cost matrix S and is used to star and prime zeros of the cost matrix. If H(i,j) = 1 then S(i,j) is a starred zero; if H(i,j) = 2 then S(i,j) is a primed zero. We also define two vectors, R_cov and C_cov, that are used to "cover" the rows and columns of the cost matrix S. We check whether S(i,j) has a zero value and whether its column and row are not already covered. If so, we star this zero (i.e., set H(i,j) = 1) and cover its row and column (i.e., set R_cov(i) = 1 and C_cov(j) = 1).
   Next, we uncover all rows and columns so that we can use the cover vectors to count the number of starred zeros, and we cover each column containing a starred zero. If y columns are covered, the starred zeros describe a complete set of unique assignments; in that case the flag variable done is set to true, otherwise we go to the next step. Once we have searched the entire cost matrix, we count the number of independent zeros found. If we have found (and starred) y independent zeros, we are done; if not, we proceed to the next step. This step is presented as Algorithm 11.

2. The goal of this step is to find a non-covered zero and prime it. If there is no starred zero in the row containing this primed zero, we go to Step 3. Otherwise, we cover this row and uncover the column containing the starred zero. We repeat this until there are no uncovered zeros left; we then save the smallest uncovered value and go to Step 4.
   In this step, statements such as "find a non-covered zero" are clearly distinct operations that deserve their own functional blocks. We have therefore decomposed this step into a main procedure (Algorithm 12) and two subprograms (Algorithms 15 and 16).

3. Now we construct a series of alternating primed and starred zeros as follows. Let path_row represent the uncovered primed zero found in Step 2, and let path_col denote the starred zero in the column of path_row (if any). We continue until the series terminates at a primed zero that has no starred zero in its column. Next, we unstar each starred zero of the series, star each primed zero of the series, erase all primes and uncover every line in the matrix. Once done, we return to Step 1.
   The operations of this step are segregated into a main procedure (Algorithm 13) and two relatively simple subprograms (Algorithms 17 and 18).

4. In this step, we add the smallest uncovered value (saved in Step 2) to every element of each covered row, and subtract it from every element of each uncovered column. We go back to Step 1 without altering any stars, primes or covered lines. Algorithm 14 concludes the main Hungarian procedure.

Algorithm 11: Hungarian procedure.
    Data: S, k, m
    Result: Assignment matrix H
    1  C_cov ← 0;
    2  R_cov ← 0;
    3  path_row, path_col ← 0;
    4  for 1 ≤ i ≤ n do
    5      min_val ← min_{1≤j≤n} S_{i,j};
    6      ∀j: S_{i,j} ← S_{i,j} − min_val;
    7  end
    8  for 1 ≤ i ≤ n do
    9      for 1 ≤ j ≤ n do
    10         if S_{i,j} = 0 && C_cov_j = 0 && R_cov_i = 0 then
    11             H_{i,j}, C_cov_j, R_cov_i ← 1;
    12         end
    13     end
    14 end
    15 C_cov_i, R_cov_i ← 0  ∀i ∈ (1, n);
    16 count ← 0;
    17 ∀j: if ∃i : H_{i,j} = 1 then
    18     C_cov_j ← 1;
    19     count ← count + 1;
    20 end
    21 if count = n then
    22     return H;
    23 else
    24     SubProcedureOne();
    25 end

Algorithm 12: SubProcedureOne.
    1  row, col, done ← 0;
    2  while done ≠ 1 do
    3      FindZero(row, col);
    4      if row = 0 then
    5          done ← 1;
    6          SubProcedureThree();
    7      else
    8          H_{row,col} ← 2;
    9          index ← StarRow(row);
    10         if index ≠ −1 then
    11             col ← index;
    12             R_cov_row ← 1;
    13             C_cov_col ← 0;
    14         else
    15             done ← 1;
    16             path_row ← row;
    17             path_col ← col;
    18             SubProcedureTwo();
    19         end
    20     end
    21 end
Algorithm 13: SubProcedureTwo.
    1  pc ← 1;
    2  path_{pc,1} ← path_row;
    3  path_{pc,2} ← path_col;
    4  done ← 0;
    5  while done ≠ 1 do
    6      r ← FindStarCol(path_{pc,2});
    7      if r > 0 then
    8          pc ← pc + 1;
    9          path_{pc,1} ← r;
    10         path_{pc,2} ← path_{pc−1,2};
    11     else
    12         done ← 1;
    13     end
    14     if done ≠ 1 then
    15         c ← FindPrimeRow(path_{pc,1});
    16         pc ← pc + 1;
    17         path_{pc,1} ← path_{pc−1,1};
    18         path_{pc,2} ← c;
    19     end
    20 end
    21 for 1 ≤ i ≤ pc do
    22     if H_{path_{i,1}, path_{i,2}} = 1 then
    23         H_{path_{i,1}, path_{i,2}} ← 0;
    24     else
    25         H_{path_{i,1}, path_{i,2}} ← 1;
    26     end
    27 end
    28 ∀i ∈ (1, n): R_cov_i, C_cov_i ← 0;
    29 ∀i, j ∈ (1, n): if H_{i,j} = 2 then H_{i,j} ← 0;
    30 go to line 16 of Hungarian();

Algorithm 14: SubProcedureThree.
    1  mv ← ∞;
    2  for 1 ≤ i ≤ n do
    3      for 1 ≤ j ≤ n do
    4          if R_cov_i = 0 && C_cov_j = 0 && mv > S_{i,j} then
    5              mv ← S_{i,j};
    6          end
    7      end
    8  end
    9  for 1 ≤ i ≤ n do
    10     for 1 ≤ j ≤ n do
    11         if R_cov_i = 1 then
    12             S_{i,j} ← S_{i,j} + mv;
    13         end
    14         if C_cov_j = 0 then
    15             S_{i,j} ← S_{i,j} − mv;
    16         end
    17     end
    18 end
    19 SubProcedureOne();

Algorithm 15: FindZero.
    Data: row, col
    Result: row, col
    1  i ← 1;
    2  d ← 0;
    3  while d ≠ 1 && i ≤ n do
    4      j ← 1;
    5      while j ≤ n do
    6          if S_{i,j} = 0 && C_cov_j = 0 && R_cov_i = 0 then
    7              row ← i;
    8              col ← j;
    9              d ← 1;
    10         end
    11         j ← j + 1;
    12     end
    13     i ← i + 1;
    14 end

Algorithm 16: StarRow.
    Data: row
    Result: j
    1  for 1 ≤ j ≤ n do
    2      if H_{row,j} = 1 then
    3          return j;
    4      end
    5  end
    6  return −1;

Algorithm 17: FindStarCol.
    Data: col
    Result: i
    1  for 1 ≤ i ≤ n do
    2      if H_{i,col} = 1 then
    3          return i;
    4      end
    5  end
    6  return −1;

Algorithm 18: FindPrimeRow.
    Data: row
    Result: j
    1  for 1 ≤ j ≤ n do
    2      if H_{row,j} = 2 then
    3          return j;
    4      end
    5  end
    6  return −1;
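The four steps and sub-procedures above implement the classical Munkres formulation. An equivalent but more compact route is the potentials-based O(n³) formulation of the Hungarian method, sketched below. This is not the starring/priming procedure of Algorithms 11–18; it solves the same minimum-cost assignment, handles a rectangular k × m matrix (k ≤ m) directly instead of padding to square, and maximization is obtained by negating the matrix, analogous to the max − x transformation used in Algorithm 6:

```python
def hungarian(cost):
    """Minimum-cost assignment for an n x m cost matrix with n <= m.
    Returns (assignment, total), where assignment[i] is the 0-based column
    chosen for row i. Classical potentials-based formulation: for each row,
    a Dijkstra-like search finds an augmenting path while maintaining
    feasible row/column potentials u and v."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    u = [0.0] * (n + 1)          # row potentials (1-based)
    v = [0.0] * (m + 1)          # column potentials (1-based)
    p = [0] * (m + 1)            # p[j]: row matched to column j (0 = free)
    way = [0] * (m + 1)          # back-pointers along the augmenting path
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (m + 1)
        used = [False] * (m + 1)
        while True:              # grow the alternating tree until a free column
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, m + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(m + 1):   # update potentials to keep reduced costs >= 0
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:
                break
        while j0:                # augment: flip matches along the found path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    assignment = [0] * n
    for j in range(1, m + 1):
        if p[j]:
            assignment[p[j] - 1] = j - 1
    total = sum(cost[i][assignment[i]] for i in range(n))
    return assignment, total

# Example: minimum-cost assignment on a small 3 x 3 matrix.
assignment, total = hungarian([[4, 1, 3], [2, 0, 5], [3, 2, 2]])  # -> [1, 0, 2], 5
```

Feeding it the negated satisfaction matrix of Table 4 returns the HAM recommendation 5 8 3 7 2 with value 142.19.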
References

Adomavicius, G., & Tuzhilin, A. (2005). Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6), 734–749.
Adomavicius, G., & Tuzhilin, A. (2015). Context-aware recommender systems. In Recommender systems handbook (pp. 191–226). Springer.
Amer-Yahia, S., Roy, S. B., Chawlat, A., Das, G., & Yu, C. (2009). Group recommendation: Semantics and efficiency. Proceedings of the VLDB Endowment, 2(1), 754–765. doi:10.14778/1687627.1687713.
Baltrunas, L., Makcinskas, T., & Ricci, F. (2010). Group recommendations with rank aggregation and collaborative filtering. In Proceedings of the fourth ACM conference on recommender systems (pp. 119–126). ACM.
Black, P. E., & Pieterse, V. (2005). Greedy algorithm. http://www.nist.gov/dads/HTML/greedyalgo.html. Accessed: 2016-05-29.
Bobadilla, J., Ortega, F., Hernando, A., & Bernal, J. (2012). Generalization of recommender systems: Collaborative filtering extended to groups of users and restricted to groups of items. Expert Systems with Applications, 39(1), 172–186.
Cantador, I., & Castells, P. (2012). Group recommender systems: New perspectives in the social web. In Recommender systems for the social web (pp. 139–157). Springer.
Castro, J., Quesada, F. J., Palomares, I., & Martínez, L. (2015). A consensus-driven group recommender system. International Journal of Intelligent Systems, 30(8), 887–906. doi:10.1002/int.21730.
Chakraborty, M., Agrawal, H., Shekhar, H., & Chowdary, C. R. (2015). Contextual suggestion using tag-description similarity. In Proceedings of the twenty-fourth Text REtrieval Conference, TREC 2015, Gaithersburg, Maryland, USA, November 17–20, 2015.
Chang, S., Harper, F. M., & Terveen, L. (2015). Using groups of items for preference elicitation in recommender systems. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing, CSCW '15 (pp. 1258–1269). New York, NY, USA: ACM. doi:10.1145/2675133.2675210.
Christensen, I. A., & Schiaffino, S. (2011). Entertainment recommender systems for group of users. Expert Systems with Applications, 38(11), 14127–14135. doi:10.1016/j.eswa.2011.04.221.
Gartrell, M., Xing, X., Lv, Q., Beach, A., Han, R., Mishra, S., & Seada, K. (2010). Enhancing group recommendation by incorporating social relationship interactions. In Proceedings of the 16th ACM international conference on supporting group work (pp. 97–106). ACM.
Ghazarian, S., & Nematbakhsh, M. A. (2015). Enhancing memory-based collaborative filtering for group recommender systems. Expert Systems with Applications, 42(7), 3801–3812. doi:10.1016/j.eswa.2014.11.042.
GroupLens (2015). MovieLens 20M dataset. http://grouplens.org/datasets/movielens/20m/.
Jameson, A., & Smyth, B. (2007). The adaptive web (pp. 596–627). Berlin, Heidelberg: Springer-Verlag.
Kagita, V. R., Pujari, A. K., & Padmanabhan, V. (2015). Virtual user approach for group recommender systems using precedence relations. Information Sciences, 294, 15–30.
Kaššák, O., Kompan, M., & Bieliková, M. (2016). Personalized hybrid recommendation for group of users: Top-N multimedia recommender. Information Processing & Management, 52(3), 459–477. doi:10.1016/j.ipm.2015.10.001.
Kompan, M., & Bieliková, M. (2014). Voting based group recommendation: How users vote. In Proceedings of the 1st international workshop on social personalization, in conjunction with the 25th ACM conference on hypertext and social media.
Kuhn, H. W. (1955). The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2, 83–97.
Lops, P., De Gemmis, M., & Semeraro, G. (2011). Content-based recommender systems: State of the art and trends. In Recommender systems handbook (pp. 73–105). Springer.
Masthoff, J. (2011). Group recommender systems: Combining individual models. In Recommender systems handbook (pp. 677–702). Springer.
McCarthy, K., Salamó, M., Coyle, L., McGinty, L., Smyth, B., & Nixon, P. (2006). Group recommender systems: A critiquing based approach. In Proceedings of the 11th international conference on intelligent user interfaces, IUI '06 (pp. 267–269). New York, NY, USA: ACM. doi:10.1145/1111449.1111506.
MovieLens (1998). GroupLens at University of Minnesota. http://grouplens.org/datasets/movielens/100k/.
MovieLens (2009). GroupLens at University of Minnesota. http://grouplens.org/datasets/movielens/10m/.
O'Connor, M., Cosley, D., Konstan, J. A., & Riedl, J. (2001). PolyLens: A recommender system for groups of users. In ECSCW 2001 (pp. 199–218). Springer.
Ortega, F., Bobadilla, J., Hernando, A., & Gutiérrez, A. (2013). Incorporating group recommendations to recommender systems: Alternatives and performance. Information Processing & Management, 49(4), 895–901. doi:10.1016/j.ipm.2013.02.003.
Ortega, F., Hernando, A., Bobadilla, J., & Kang, J. H. (2016). Recommending items to group of users using matrix factorization based collaborative filtering. Information Sciences, 345, 313–324. doi:10.1016/j.ins.2016.01.083.
Pazzani, M. (1999). A framework for collaborative, content-based and demographic filtering. Artificial Intelligence Review, 13(5–6), 393–408. doi:10.1023/A:1006544522159.
Pera, M. S., & Ng, Y.-K. (2013). A group recommender for movies based on content similarity and popularity. Information Processing & Management, 49(3), 673–687. doi:10.1016/j.ipm.2012.07.007.
Pilgrim, R. A. (2017). Munkres' assignment algorithm, course notes. http://csclab.murraystate.edu/~bob.pilgrim/445/munkres.html. [Online; accessed 08-Mar-2017].
Resnick, P., & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3), 56–58.
Ricci, F., Rokach, L., & Shapira, B. (2011). Introduction to recommender systems handbook. Springer.
Roy, S. B., Thirumuruganathan, S., Amer-Yahia, S., Das, G., & Yu, C. (2014). Exploiting group recommendation functions for flexible preferences. In IEEE 30th international conference on data engineering, ICDE 2014, Chicago, IL, USA, March 31 – April 4, 2014 (pp. 412–423). doi:10.1109/ICDE.2014.6816669.
Sarwar, B., Karypis, G., Konstan, J., & Riedl, J. (2000). Analysis of recommendation algorithms for e-commerce. In Proceedings of the 2nd ACM conference on electronic commerce (pp. 158–167). ACM.
Schafer, J. B., Konstan, J. A., & Riedl, J. (2001). E-commerce recommendation applications. In Applications of data mining to electronic commerce (pp. 115–153). Springer.
Trewin, S. (2000). Knowledge-based recommender systems. Encyclopedia of Library and Information Science, 69(32), 180.
Villavicencio, C., Schiaffino, S., Diaz-Pace, J. A., & Monteserin, A. (2016). Advances in practical applications of scalable multi-agent systems. The PAAMS collection: 14th international conference, PAAMS 2016, Sevilla, Spain, June 1–3, 2016, proceedings (pp. 294–298). Cham: Springer International Publishing. doi:10.1007/978-3-319-39324-7_34.
