
Contextual Operation for Recommender Systems

A PROJECT REPORT

in partial fulfillment of the requirements for the award of the degree

of

BACHELOR OF TECHNOLOGY
in

INFORMATION TECHNOLOGY

MAY 2017

BONAFIDE CERTIFICATE

ACKNOWLEDGEMENT

I am personally indebted to so many people that a complete
acknowledgement would be encyclopedic. First of all, I wish to record my deepest
gratitude to the Almighty Lord and my family.

My sincere thanks and profound sense of gratitude go to the respected
Chairman for all his efforts in educating me in a premier institution.

I take this opportunity to thank the Director of this prestigious institution
for his kind cooperation in completing this project.

I would like to express my gratitude to our Principal and the Head of the
Department of Computer Science and Engineering for their guidance and
advice all through the project.

I convey my sincere and in-depth gratitude to my internal guide for her
valuable guidance throughout the duration of this project.

I would also like to thank our friends for the support they extended during
the course of this project.

ABSTRACT

With the rapid growth of applications on the Internet, recommender systems have
become fundamental in helping users alleviate the problem of information overload.
Since contextual information is a significant factor in modeling user behavior, various
context-aware recommendation methods have been proposed recently. State-of-the-art
context modeling methods usually treat contexts as dimensions similar to those of users
and items, and capture relevances between contexts and users/items. However, such
relevance is difficult to interpret. Some works on multi-domain relation prediction can
also be used for context-aware recommendation, but they have limitations in generating
recommendations under a large amount of contextual information.

Motivated by recent works in natural language processing, we represent each context
value with a latent vector, and model the contextual information as a semantic operation
on the user and item. Besides, we use the contextual operating tensor to capture the
common semantic effects of contexts. Experimental results show that the proposed
Contextual Operating Tensor (COT) model yields significant improvements over
competitive baseline methods on three typical datasets. From the experimental results of
COT, we also obtain some interesting observations that follow our intuition.

TABLE OF CONTENTS

CHAPTER TITLE PAGE NO.

LIST OF FIGURES ii

LIST OF ABBREVIATIONS iii

1 INTRODUCTION

1.1 About the Project 15

2 SYSTEM ANALYSIS

2.1 Existing system 16

2.2 Proposed system 16

3 REQUIREMENTS SPECIFICATION

3.1 Introduction 17

3.2 Hardware and Software specification 17

3.3 Technologies Used 18


3.4.1 Introduction to Dotnet 19

3.4.2 Working of Dotnet 20

3.5 SQL Server 20

3.5.1 Introduction to SQL server 21

4 SYSTEM DESIGN

4.1 Architecture Diagram 22

4.2 Sequence Diagram 23

4.3 Use Case Diagram 24

4.4 Activity Diagram 25

4.5 Database Design

5 SYSTEM DESIGN – DETAILED

5.1 Modules 26

5.2 Module explanation 26

6 CODING AND TESTING

6.1 Coding 28

6.2 Coding standards 31

6.3 Test procedure 31

6.4 Test data and output 32

REFERENCES 78

SNAP SHOTS

LIST OF FIGURES

Architecture

Sequence Diagram

Use Case Diagram

Activity Diagram

LIST OF ABBREVIATIONS

IEEE The Institute of Electrical and Electronics Engineers, Inc.


HTML Hyper Text Markup Language
HTTP Hyper Text Transfer Protocol
SRS Software Requirements Specification
ASP Active Server Page

CHAPTER 1
INTRODUCTION

Aim:

The aim of this project is to develop a recommendation system based on
contextual information. Here, we focus on modeling the general contextual information
associated not only with users/items but also with user-item interactions.
Synopsis:
Recommender systems have become an important tool that helps users select
information of interest in many web applications. The contexts of a recommender
system specify the contextual information associated with a recommendation
application; two typical kinds are attributes associated with users or items and
attributes associated with user-item interactions. Due to the fundamental effect of
contextual information in recommender systems, many context modeling methods
have been developed.
Existing context-aware recommendation methods do not explain the
relevance of contextual information, and they have limitations in generating
recommendations under a large amount of contextual information. To overcome these
shortcomings, we propose a novel context modeling method, the Contextual Operating
Tensor (COT) model. In this method, contextual information includes interaction
contexts, which describe the interaction situations, and entity contexts, which identify
user/item characteristics. Here, we focus on modeling the general contextual
information associated not only with users/items but also with user-item interactions.
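The distinction between entity contexts and interaction contexts can be illustrated with a toy interaction record. This is an illustrative Python sketch only; all field names and values are hypothetical and are not taken from the project's datasets.

```python
# Entity contexts: attributes that identify user/item characteristics
user_context = {"age_group": "18-24", "gender": "F"}
item_context = {"genre": "comedy", "release_year": 2010}

# Interaction contexts: attributes of the user-item interaction itself
interaction_context = {"time_of_day": "evening", "day_of_week": "Sat", "device": "mobile"}

# One context-aware feedback record combines both kinds of contexts
record = {
    "user_id": 42,
    "item_id": 7,
    "rating": 4.0,
    "entity_contexts": {"user": user_context, "item": item_context},
    "interaction_contexts": interaction_context,
}
print(sorted(record))
```

A context-aware model consumes all three groups of attributes, whereas a conventional recommender would see only the (user_id, item_id, rating) triple.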

CHAPTER 2

SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

In context-aware recommender systems, contextual information has been
shown to be useful for recommendation. One line of work represents the contextual
information of each interaction with a distinct vector, but this is not suitable for
numerical contexts or for the abundant contexts found in real-world applications.

Other methods calculate the relevance between contexts and entities, but such
relevance is not always reasonable. They provide each user/item with not only a latent
vector but also an effective context-aware representation. However, using a distinct
vector to represent the contexts of each interaction is problematic when confronted with
abundant contextual information in real applications. These methods therefore have
difficulty in dealing with a large amount of contextual information.

2.2 PROPOSED SYSTEM

The proposed Contextual Operating Tensor (COT) method learns representation
vectors of context values and uses contextual operations to capture the semantic
effects of the contextual information. We provide a strategy for embedding each
context value into a latent representation, no matter which domain the value belongs to.
For each user-item interaction, we use contextual operating matrices to represent the
semantic operations of these contexts, and employ contextual operating tensors to capture
common effects of contexts. The operating matrix can then be generated by multiplying
the latent representations of contexts with the operating tensor.

To describe the operating ability of contexts, we embed each context value with a
latent representation, and model the contextual information as semantic operations on
users and items. Context representation and contextual operation present a novel
perspective of context modeling.

We use the contextual operating tensor to capture the common semantic effects of
contexts. For each interaction, the contextual operation can be generated from the
multiplication of the operating tensor and the latent vectors of contexts.
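The generation of a contextual operating matrix described above can be sketched numerically. The following NumPy sketch is illustrative only: the dimensions, the random initialization, and the rule for combining per-context operating matrices (a simple average) are assumptions for demonstration, not the exact formulation of the COT model.

```python
import numpy as np

rng = np.random.default_rng(0)

d_c, d_u = 8, 16   # latent sizes of context values and of users/items (assumed)
n_ctx = 3          # number of context values attached to one interaction

# Latent representations of the context values (learned in the real model)
c = rng.normal(size=(n_ctx, d_c))

# Contextual operating tensor: maps a d_c context vector
# to a (d_u x d_u) operating matrix
T = rng.normal(size=(d_c, d_u, d_u)) * 0.1

# Operating matrix for each context value: contract the context vector
# with the first mode of the tensor
M = [np.tensordot(c[k], T, axes=([0], [0])) for k in range(n_ctx)]

# Combined contextual operation (averaging is an assumed combination rule)
M_ctx = np.mean(M, axis=0)

u = rng.normal(size=d_u)   # latent user vector
v = rng.normal(size=d_u)   # latent item vector

u_ctx = M_ctx @ u          # user vector after the contextual operation
score = u_ctx @ v          # predicted preference under these contexts
print(M_ctx.shape)
```

The key point is that the contexts do not act as extra dimensions alongside users and items; they generate an operator (here, the matrix M_ctx) that transforms the user representation before it is matched against the item.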

CHAPTER 3

REQUIREMENT SPECIFICATIONS

3.1 INTRODUCTION

Among the numerous methods for user access control, password-based authentication is
the most widely used and accepted mechanism because of its ease of operation,
scalability, compatibility and low cost [1]. In such authentication schemes
(some notable ones include SRP [2], KOY [3] and J-PAKE [4]), each user is assumed to
hold only a memorable, low-entropy password, while the server needs to store a
password-related verifier table necessary to verify the authenticity of users. An inherent
limitation of these password-only mechanisms is that the server has to store a sensitive
verifier table that contains the passwords of all the registered users. Even if passwords are
properly stored in salted hash, once the authentication server is compromised, an
overwhelming fraction of users' passwords will be exposed (see [5]) for two reasons:
(1) human memory is inherently limited, and the distribution of user-chosen passwords
is highly skewed [6]; (2) password-cracking hardware (e.g., GPUs) and algorithms
(e.g., Markov-chain-based [7]) are constantly being improved. At Passwords'12,
Gosney [8] showed that a rig of 25 GPUs can test up to 350 billion guesses per second in
an offline dictionary attack against traditional hash functions (e.g., NTLM and MD5).
More sophisticated password hash functions (e.g., bcrypt and PBKDF2) provide only
some relief [9], since the cost for an honest server increases by the same factor as for the
attacker, while the attacker is likely to be better equipped with dedicated
password-cracking hardware.

3.2 HARDWARE AND SOFTWARE SPECIFICATION

3.2.1 HARDWARE REQUIREMENTS

RAM : 1 GB and above

Processor : Dual core and above

Hard Disk : 80 GB and above

3.2.2 SOFTWARE REQUIREMENTS

Operating System : Windows XP / 7 - 32 Bit Version

Language : C#.NET

Web Technology : ASP.NET

Dot Net Framework : V 4.0

Documentation Tool : MS Word 2007

Development Tool : MS Visual Studio 2012

SQL Server : MS SQL Server 2008

3.3 TECHNOLOGIES USED

 Visual Tools
INTRODUCING WEB APPLICATION:

Organizations are increasingly becoming dependent on the Internet for sharing


and accessing information. This Internet boom has changed the focus of application
development from stand-alone applications to distributed Web applications. Web
applications are programs that can be executed either on a web server or in a web

browser. They enable you to share and access information over the Internet and on
corporate intranets. In addition, Web applications can support online commercial
transactions, popularly known as e-commerce. An online store accessed through a
browser is an example of a web application.

INTRODUCTION TO ASP.NET

ASP.NET is a part of the .NET Framework, a new computing platform from


Microsoft optimized for creating applications that are highly distributed across the
Internet. Highly distributed means that the components of the application, as well as the
data, may reside anywhere on the Internet rather than all being contained inside one
software program somewhere. Each part of an application can be referenced and accessed
using a standard procedure. ASP.NET is the part that provides the features necessary to
easily tie all this capability together for coherent web-based applications. It is a
programming framework, and one of the primary differences between it and traditional
ASP is that it uses a common language runtime (CLR) capable of running compiled code
on a web server to deploy powerful web-based applications.

ASP.NET still use HTTP to communicate to the browser and back, but it brings
added functionality that makes the communication process much richer. If any files have
the appropriate extension or contain code, the server routes those files to ASP.NET for
processing prior to sending them out to the client. The script or code is then processed
and the appropriate content is generated for transmission back to the browser/client.
Because processing takes place before the results are delivered to the user, all manner of
functionality can be built-in such as database access, component usage and the ordinary
programmatic functionality available with scripting languages.

ASP.NET applications can be coded using a plain text editor such as Notepad,
although this is not the most efficient method to use. Developing all the other resources
that might be required for a particular ASP.NET application, especially for the user
interface, may involve a range of specialized tools including image-editing programs,
database programs and HTML editors.

To create dynamic web pages by using server-side scripts, Microsoft
introduced ASP, and ASP.NET is the .NET version of ASP. An ASP.NET page is a
standard HTML file that contains embedded server-side scripts. ASP.NET provides the
following advantages of server-side scripting.

ASP.NET enables you to access information from data sources, such as back-
end databases and text files, that are stored on a web server or on a computer that is
accessible to a web server.

ASP.NET enables you to use a set of programming code called templates to
create HTML documents. The advantage of using templates is that you can dynamically
insert content retrieved from data sources, such as back-end databases and text files, into
an HTML document before the HTML document is displayed to users. Therefore, the
information need not be changed manually as and when the contents retrieved from the
data sources change.

ASP.NET also enables you to separate HTML design from the data retrieval
mechanism. Therefore, changing the HTML design does not affect the programs that
retrieve data from the databases. Similarly, server-side scripting ensures that changing
data sources does not require a change in the HTML documents.

ASP.NET has a number of advanced features that help you develop robust web
applications. The advanced features of ASP.NET are based on the .NET Framework.

ASP.NET in .NET Framework

ASP.NET, which is the .NET version of ASP, is built on the Microsoft .NET
Framework. Microsoft introduced the .NET Framework to help developers create
globally distributed software with Internet functionality and interoperability. ASP.NET
applications include Web Forms, configuration files and XML Web service files. Web
Forms enable you to include user interfaces, such as text box and list box controls, and
the application logic of Web applications, while configuration files enable you to store
the configuration settings of an ASP.NET application. The elements of an ASP.NET
application also include Web services, which provide a mechanism for programs to
communicate over the Internet.

FEATURES OF ASP.NET

Compiled Code - Code written in ASP.NET is compiled, not interpreted.
This makes ASP.NET applications faster to execute than other server-side scripts
that are interpreted, such as scripts written in a previous version of ASP.

Enriched Tool Support - The ASP.NET Framework is provided with a rich


toolbox and designer in VS.NET IDE (Visual Studio .NET integrated
development environment). Some of the features of this powerful tool are the
WYSIWYG (What You See Is What You Get) editor, drag-and-drop server
controls and automatic deployment.

Power and Flexibility - ASP.NET applications are based on the Common
Language Runtime (CLR). Therefore, the power and flexibility of the .NET platform are
available to them, enabling you to use the .NET Framework class library, messaging and
data access solutions seamlessly over the web. ASP.NET is also language-independent.
Therefore, you can choose any .NET language to develop your application.

Simplicity - ASP.NET enables you to build user interfaces that separate


application logic from presentation content. In addition, CLR simplifies application
development by using managed code services, such as automatic reference counting and
garbage collection. Therefore, ASP.NET makes it easy to perform common tasks, ranging
from form submission and client authentication to site configuration and deployment.

Manageability - ASP.NET enables you to manage Web applications by storing
the configuration information in an XML file. You can open the XML file in the Visual
Studio .NET IDE.

Scalability - ASP.NET has been designed with scalability in mind. It has


features that help improve performance in a multiprocessor environment.

Security - ASP.NET provides a number of options for implementing security
and restricting user access to a web application. All these options are configured within
the configuration file.

IIS - Internet Information Services

The most important server you can install is Internet Information Server (IIS),
because you will need it to run your ASP.NET applications. There are a number of other
servers specifically designed to work with the .NET Framework.

3.5 SQL SERVER

SQL Server is an enterprise-scale, industrial strength, relational database


management solution. It contains all the features expected of high-end DBMS systems, as
well as XML support.

Introduction to C-Sharp

C# (pronounced "see sharp") is a multi-paradigm programming language


encompassing imperative, declarative, functional, generic, object-oriented (class-based),
and component-oriented programming disciplines. It was developed by Microsoft within
the .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO
(ISO/IEC 23270). C# is one of the programming languages designed for the Common
Language Infrastructure.

C# is intended to be a simple, modern, general-purpose, object-oriented programming


language.[7] Its development team is led by Anders Hejlsberg. The most recent version is
C# 4.0, which was released on April 12, 2010.

Design goals

The ECMA standard lists these design goals for C#:

o C# language is intended to be a simple, modern, general-purpose, object-oriented


programming language.

o The language, and implementations thereof, should provide support for software
engineering principles such as strong type checking, array bounds checking,
detection of attempts to use uninitialized variables, and automatic garbage
collection. Software robustness, durability, and programmer productivity are
important.

o The language is intended for use in developing software components suitable for
deployment in distributed environments.

o Source code portability is very important, as is programmer portability, especially


for those programmers already familiar with C and C++.

o Support for internationalization is very important.

o C# is intended to be suitable for writing applications for both hosted and


embedded systems, ranging from the very large that use sophisticated operating
systems, down to the very small having dedicated functions.

o Although C# applications are intended to be economical with regard to memory


and processing power requirements, the language was not intended to compete
directly on performance and size with C or assembly language.

Name

[Figure: C-sharp musical note]

The name "C sharp" was inspired by musical notation, where a sharp indicates that
the written note should be made a semitone higher in pitch. This is similar to the
language name of C++, where "++" indicates that a variable should be
incremented by 1.

Due to technical limitations of display (standard fonts, browsers, etc.) and the fact
that the sharp symbol (♯, U+266F, MUSIC SHARP SIGN) is not present on the
standard keyboard, the number sign (#, U+0023, NUMBER SIGN) was chosen to
represent the sharp symbol in the written name of the programming language.
This convention is reflected in the ECMA-334 C# Language Specification. [7]
However, when it is practical to do so (for example, in advertising or in box
art[10]), Microsoft uses the intended musical symbol.

The "sharp" suffix has been used by a number of other .NET languages that are
variants of existing languages, including J# (a .NET language also designed by
Microsoft which is derived from Java 1.1), A# (from Ada), and the functional F#.
The original implementation of Eiffel for .NET was called Eiffel#,[12] a name since
retired since the full Eiffel language is now supported. The suffix has also been
used for libraries, such as Gtk# (a .NET wrapper for GTK+ and other GNOME
libraries), Cocoa# (a wrapper for Cocoa) and Qt# (a .NET language binding for
the Qt toolkit).

History

During the development of the .NET Framework, the class libraries were
originally written using a managed code compiler system called Simple Managed C
(SMC). In January 1999, Anders Hejlsberg formed a team to build a new language at the
time called Cool, which stood for "C-like Object Oriented Language". [16] Microsoft had
considered keeping the name "Cool" as the final name of the language, but chose not to
do so for trademark reasons. By the time the .NET project was publicly announced at the
July 2000 Professional Developers Conference, the language had been renamed C#, and
the class libraries and ASP.NET runtime had been ported to C#.

C#'s principal designer and lead architect at Microsoft is Anders Hejlsberg, who was
previously involved with the design of Turbo Pascal, Embarcadero Delphi (formerly
Code Gear Delphi and Borland Delphi), and Visual J++. In interviews and technical
papers he has stated that flaws in most major programming languages (e.g. C++, Java,
Delphi, and Smalltalk) drove the fundamentals of the Common Language Runtime
(CLR), which, in turn, drove the design of the C# language itself.

James Gosling, who created the Java programming language in 1994, and Bill Joy, a
co-founder of Sun Microsystems, the originator of Java, called C# an "imitation" of Java;
Gosling further claimed that "[C# is] sort of Java with reliability, productivity and
security deleted." Klaus Kreft and Angelika Langer (authors of a C++ streams book)
stated in a blog post that "Java and C# are almost identical programming languages.
Boring repetition that lacks innovation," "Hardly anybody will claim that Java or C# are
revolutionary programming languages that changed the way we write programs," and "C#
borrowed a lot from Java - and vice versa. Now that C# supports boxing and unboxing,
we'll have a very similar feature in Java." Anders Hejlsberg has argued that C# is "not a
Java clone" and is "much closer to C++" in its design.
C# used to have a mascot called Andy (named after Anders Hejlsberg). It was retired on
29 Jan 2004.

Versions

In the course of its development, the C# language has gone through several versions:

Version   Date            ECMA            ISO/IEC          Microsoft                 .NET Framework   Visual Studio

C# 1.0    January 2002    December 2002   April 2003       January 2002              1.0              Visual Studio .NET 2002

C# 1.2    April 2003      -               -                October 2003              1.1              Visual Studio .NET 2003

C# 2.0    November 2005   June 2006       September 2006   September 2005 [note 1]   2.0              Visual Studio 2005

C# 3.0    November 2007   None [note 2]   None [note 2]    August 2007               3.5              Visual Studio 2008

C# 4.0    April 2010      None            None             April 2010                4                Visual Studio 2010

^ The Microsoft C# 2.0 specification document only contains the new 2.0 features. For
older features, use the 1.2 specification above.

^ There are currently, as of May 2010, no ECMA and ISO/IEC specifications for C# 3.0
and 4.0.

Summary of versions

Features added, by version:

C# 2.0: generics; partial types; anonymous methods; iterators; nullable types

C# 3.0: implicitly typed variables; implicitly typed arrays; anonymous types;
extension methods; query expressions; lambda expressions; expression trees

C# 4.0: dynamic binding; named and optional arguments; generic co- and
contravariance

C# 5.0 (planned): asynchronous methods; compiler as a service

Features

By design, C# is the programming language that most directly reflects the


underlying Common Language Infrastructure (CLI). Most of its intrinsic types
correspond to value-types implemented by the CLI framework. However, the language
specification does not state the code generation requirements of the compiler: that is, it
does not state that a C# compiler must target a Common Language Runtime, or generate
Common Intermediate Language (CIL), or generate any other specific format.
Theoretically, a C# compiler could generate machine code like traditional compilers of
C++ or Fortran.

Some notable distinguishing features of C# are:

There are no global variables or functions. All methods and members must be
declared within classes. Static members of public classes can substitute for global
variables and functions.

Local variables cannot shadow variables of the enclosing block, unlike C and
C++. Variable shadowing is often considered confusing by C++ texts.

C# supports a strict Boolean datatype, bool. Statements that take conditions, such
as while and if, require an expression of a type that implements the true operator, such as
the boolean type. While C++ also has a boolean type, it can be freely converted to and
from integers, and expressions such as if(a) require only that a is convertible to bool,
allowing a to be an int, or a pointer. C# disallows this "integer meaning true or false"
approach on the grounds that forcing programmers to use expressions that return exactly
bool can prevent certain types of common programming mistakes in C or C++ such as if
(a = b) (use of assignment = instead of equality ==).

In C#, memory address pointers can only be used within blocks specifically
marked as unsafe, and programs with unsafe code need appropriate permissions to run.
Most object access is done through safe object references, which always either point to a
"live" object or have the well-defined null value; it is impossible to obtain a reference to a

"dead" object (one which has been garbage collected), or to a random block of memory.
An unsafe pointer can point to an instance of a value-type, array, string, or a block of
memory allocated on a stack. Code that is not marked as unsafe can still store and
manipulate pointers through the System.IntPtr type, but it cannot dereference them.

Managed memory cannot be explicitly freed; instead, it is automatically garbage


collected. Garbage collection addresses the problem of memory leaks by freeing the
programmer of responsibility for releasing memory which is no longer needed.

In addition to the try...catch construct to handle exceptions, C# has a try...finally


construct to guarantee execution of the code in the finally block.

Multiple inheritance is not supported, although a class can implement any number
of interfaces. This was a design decision by the language's lead architect to avoid
complication and simplify architectural requirements throughout CLI.

C# is more type safe than C++. The only implicit conversions by default are those
which are considered safe, such as widening of integers. This is enforced at compile-time,
during JIT, and, in some cases, at runtime. There are no implicit conversions between
booleans and integers, nor between enumeration members and integers (except for literal
0, which can be implicitly converted to any enumerated type). Any user-defined
conversion must be explicitly marked as explicit or implicit, unlike C++ copy
constructors and conversion operators, which are both implicit by default.

Enumeration members are placed in their own scope.

C# provides properties as syntactic sugar for a common pattern in which a pair of


methods, accessor (getter) and mutator (setter) encapsulate operations on a single
attribute of a class.

Full type reflection and discovery is available.

C# currently (as of version 4.0) has 77 reserved words.

Checked exceptions are not present in C# (in contrast to Java). This has been a
conscious decision based on the issues of scalability and versionability.[21]

Common Type System (CTS)

C# has a unified type system. This unified type system is called Common Type
System (CTS).[22]

A unified type system implies that all types, including primitives such as integers,
are subclasses of the System.Object class. For example, every type inherits a ToString()
method. For performance reasons, primitive types (and value types in general) are
internally allocated on the stack.

Libraries

The C# specification details a minimum set of types and class libraries that the compiler
expects to have available. In practice, C# is most often used with some implementation of
the Common Language Infrastructure (CLI), which is standardized as ECMA-335
Common Language Infrastructure (CLI).

"Hello, world" example

The following is a very simple C# program, a version of the classic "Hello, world"
example:

using System;

class ExampleClass
{
    static void Main()
    {
        Console.WriteLine("Hello, world!");
    }
}

The effect is to write the following text to the output console:

Hello, world!

Each line has a purpose:

using System;

The above line of code tells the compiler to use 'System' as a candidate prefix for
types used in the source code. In this case, when the compiler sees use of the 'Console'
type later in the source code, it tries to find a type named 'Console', first in the current
assembly, followed by all referenced assemblies. In this case the compiler fails to find
such a type, since the name of the type is actually 'System.Console'. The compiler then
attempts to find a type named 'System.Console' by using the 'System' prefix from the
using statement, and this time it succeeds. The using statement allows the programmer to
state all candidate prefixes to use during compilation instead of always using full type
names.

class ExampleClass

Above is a class definition. Everything between the following pair of braces describes
ExampleClass.

static void Main()

This declares the class member method where the program begins execution.
The .NET runtime calls the Main method. (Note: Main may also be called from
elsewhere, like any other method, e.g. from another method of ExampleClass.) The static
keyword makes the method accessible without an instance of ExampleClass. Each
console application's Main entry point must be declared static. Otherwise, the program
would require an instance, but any instance would require a program. To avoid that

irresolvable circular dependency, C# compilers processing console applications (like that
above) report an error if there is no static Main method. The void keyword declares that
Main has no return value.

Console.WriteLine("Hello, world!");

This line writes the output. Console is a static class in the System namespace. It
provides an interface to the standard input, output, and error streams for console
applications. The program calls the Console method WriteLine, which displays on the
console a line with the argument, the string "Hello, world!".

Implementations

The reference C# compiler is Microsoft Visual C#.

Other C# compilers exist, often including an implementation of the Common


Language Infrastructure and the .NET class libraries up to .NET 2.0:

Microsoft's Rotor project (currently called Shared Source Common Language


Infrastructure) (licensed for educational and research use only) provides a shared source
implementation of the CLR runtime and a C# compiler, and a subset of the required
Common Language Infrastructure framework libraries in the ECMA specification (up to
C# 2.0, and supported on Windows XP only).

The Mono project provides an open source C# compiler, a complete open source
implementation of the Common Language Infrastructure including the required
framework libraries as they appear in the ECMA specification, and a nearly complete
implementation of the Microsoft proprietary .NET class libraries up to .NET 3.5. As of
Mono 2.6, there are no plans to implement WPF; WF is planned for a later release; and
there are only partial implementations of LINQ to SQL and WCF.

The DotGNU project also provides an open source C# compiler, a nearly


complete implementation of the Common Language Infrastructure including the required
framework libraries as they appear in the ECMA specification, and a subset of some of the
remaining Microsoft proprietary .NET class libraries up to .NET 2.0 (those not
documented or included in the ECMA specification but included in Microsoft's
standard .NET Framework distribution).

The DotNetAnywhere Micro Framework-like Common Language Runtime is


targeted at embedded systems, and supports almost all C# 2.0 specifications. It is licensed
under the MIT license conditions, is implemented in C and directed towards embedded
devices.

Unity 3D uses C# as a scripting language, as an alternative to JavaScript.

ADO.NET

ADO.NET is all about data access. Data is generally stored in a


relational database in the form of related tables. Retrieving and manipulating data directly
from a database requires the knowledge of database commands to access the data.

Features of ADO.NET

 Disconnected data architecture- ADO.NET uses a disconnected data architecture. Applications connect to the database only while retrieving and updating data; after the data is retrieved, the connection is closed, and it is re-established only when the database needs to be updated. Applications that do not follow a disconnected architecture waste valuable system resources, because they connect to the database and keep the connection open until they stop running, even while not actually interacting with it. A disconnected database, in contrast, can cater to the needs of several applications simultaneously, since each interaction lasts only a short duration.
 Data cached in datasets- A dataset is the most common method of accessing data, since it implements the disconnected architecture. Because ADO.NET is based on a disconnected data structure, the application cannot interact with the database to process each record; instead, the data is retrieved and stored in datasets. A dataset is a cached set of database records. We can work with the records stored in a dataset as we would with live data; the only difference is that the dataset is independent of the data source and we remain disconnected from it.
 ADO.NET supports scalability by working with datasets. Operations are performed on datasets instead of on the database; as a result, resources are saved, and the database can meet the increasing demands of users more efficiently.
 Data transfer in XML format- XML is the fundamental format for data transfer in ADO.NET. Data is transferred from a database into a dataset, and from the dataset to another component, by using XML. We can even use an XML file as a data source and store its data in a dataset. Using XML as the data transfer language is beneficial because XML is an industry-standard format for exchanging information between different types of applications. Knowledge of XML is not required for working with ADO.NET, since the conversion of data to and from XML is hidden from the user, and any component that can read the XML dataset structure can process the data.
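The dataset-to-XML round trip can be sketched outside .NET as well; a minimal illustration with Python's xml.etree (the element and column names here are invented for illustration and are not the actual ADO.NET XML format):

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table_name, rows):
    """Serialize cached rows (a 'dataset') to an XML string."""
    root = ET.Element(table_name)
    for row in rows:
        rec = ET.SubElement(root, "row")
        for column, value in row.items():
            ET.SubElement(rec, column).text = str(value)
    return ET.tostring(root, encoding="unicode")

def xml_to_rows(xml_text):
    """Rebuild the rows from XML, as any XML-aware component could."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in rec} for rec in root]

dataset = [{"title_id": "1", "title": "Sample Movie"}]
xml_text = rows_to_xml("titles", dataset)
```

Any component that understands the XML structure can now call `xml_to_rows` and recover the same records.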

 Interaction with the database is done through data commands- All operations on the database are performed by using data commands. A data command can be a SQL statement or a stored procedure. We can retrieve, insert, delete, or modify data in a database by executing data commands.
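ADO.NET itself is a C#/.NET API; as a language-neutral sketch of the disconnected pattern just described (connect, run a data command, cache the rows, disconnect), the following uses Python's built-in sqlite3 with an invented table:

```python
import sqlite3

def load_dataset():
    """Connect, run data commands, cache the rows, then disconnect."""
    conn = sqlite3.connect(":memory:")  # stands in for the real database
    try:
        conn.execute(
            "CREATE TABLE titles (title_id INTEGER PRIMARY KEY, title TEXT)"
        )
        conn.execute("INSERT INTO titles (title) VALUES ('Sample Movie')")
        conn.commit()
        rows = conn.execute("SELECT title_id, title FROM titles").fetchall()
    finally:
        conn.close()  # the connection is held only while retrieving
    return rows       # the cached 'dataset' outlives the connection

dataset = load_dataset()
# Work with the cached rows while fully disconnected from the database.
titles = [title for _, title in dataset]
```

The point of the sketch is that `dataset` remains usable after `conn.close()`, just as an ADO.NET DataSet does.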

3.5.1 INTRODUCTION TO SQL SERVER:

Creating a database determines the name of the database, its owner (the user who creates the database), its size, and the files and file groups used to store it.

Before creating a database, consider that:

 Permission to create a database defaults to members of the sysadmin and


dbcreator fixed server roles, although permissions can be granted to other users.

 The user who creates the database becomes the owner of the database.

 A maximum of 32,767 databases can be created on a server.

 The name of the database must follow the rules for identifiers.

Three types of files are used to store a database:


 Primary files

These files contain the startup information for the database. The primary files are
also used to store data. Every database has one primary file.

 Secondary files

These files hold all the data that does not fit in the primary data file. Databases do
not need secondary data files if the primary file is large enough to hold all the data
in the database. Some databases may be large enough to need multiple secondary
data files, or they may use secondary files on separate disk drives to spread the
data across multiple disks.

 Transaction log

These files hold the log information used to recover the database. There must be
at least one transaction log file for each database, although there may be more
than one. The minimum size for a log file is 512 kilobytes (KB).

When a database is created, all the files that comprise the database are filled with
zeros to overwrite any existing data left on the disk by previously deleted files. Although
this means that the files take longer to create, this action prevents the operating system
from having to fill the files with zeros when data is written to the files for the first time
during usual database operations. This improves the performance of day-to-day
operations.

It is recommended that you specify a maximum size to which the file is permitted
to grow. This prevents the file from growing, as data is added, until disk space is
exhausted. To specify a maximum size for the file, use the MAXSIZE parameter of the
CREATE DATABASE statement or the Restrict filegrowth (MB) option when using
the Properties dialog box in SQL Server Enterprise Manager to create the database.
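The statement described above might look as follows; a sketch that assembles the T-SQL in Python (the database name, file path, and sizes are illustrative):

```python
def create_database_sql(name, data_file, max_size_mb):
    """Build a T-SQL CREATE DATABASE statement that caps file growth
    with the MAXSIZE parameter, as described above."""
    return (
        f"CREATE DATABASE {name}\n"
        f"ON PRIMARY (\n"
        f"    NAME = {name}_data,\n"
        f"    FILENAME = '{data_file}',\n"
        f"    SIZE = 10MB,\n"
        f"    MAXSIZE = {max_size_mb}MB,\n"   # growth stops at this cap
        f"    FILEGROWTH = 5MB\n"
        f")"
    )

sql = create_database_sql("MovieRec", r"C:\data\movierec.mdf", 100)
```

Without the MAXSIZE clause, the file would be allowed to grow until disk space is exhausted.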

CREATING A DATABASE PLAN:

The first step in creating a database is creating a plan that serves both as a
guide to be used when implementing the database and as a functional specification for the
database after it has been implemented. The complexity and detail of a database design is
dictated by the complexity and size of the database application as well as the user
population.

The nature and complexity of a database application, as well as the


process of planning it, can vary greatly. A database can be relatively simple and designed
for use by a single person, or it can be large and complex and designed, for example, to
handle all the banking transactions for hundreds of thousands of clients. In the first case,
the database design may be little more than a few notes on some scratch paper. In the
latter case, the design may be a formal document with hundreds of pages that contain
every possible detail about the database.

In planning the database, regardless of its size and complexity, use these basic steps:

 Gather information.

 Identify the objects.

 Model the objects.

 Identify the types of information for each object.

 Identify the relationships between objects.

GATHERING INFORMATION:

Before creating a database, you must have a good understanding of the job
the database is expected to perform. If the database is to replace a paper-based or
manually performed information system, the existing system will give you most of the
information you need. It is important to interview everyone involved in the system to find
out what they do and what they need from the database. It is also important to identify
what they want the new system to do, as well as to identify the problems, limitations, and
bottlenecks of any existing system. Collect copies of customer statements, inventory lists,
management reports, and any other documents that are part of the existing system,
because these will be useful to you in designing the database and the interfaces.

IDENTIFYING OBJECTS

During the process of gathering information, you must identify the key
objects or entities that will be managed by the database. The object can be a tangible
thing, such as a person or a product, or it can be a more intangible item, such as a
business transaction, a department in a company, or a payroll period. There are usually a
few primary objects, and after these are identified, the related items become apparent.
Each distinct item in your database should have a corresponding table.

The primary object in the pubs sample database included with Microsoft® SQL
Server™ 2000 is a book. The objects related to books within this company's business are
the authors who write the books, the publishers who manufacture the books, the stores
which sell them, and the sales transactions performed with the stores. Each of these
objects is a table in the database.

Modeling the Objects

As the objects in the system are identified, it is important to record them in a way that represents the system visually. You can use your database model as a reference during implementation of the database.

For this purpose, database developers use tools that range in technical complexity
from pencils and scratch paper to word processing or spreadsheet programs, and even to
software programs specifically dedicated to the job of data modeling for database
designs. Whatever tool you decide to use, it is important that you keep it up-to-date.

SQL Server Enterprise Manager includes visual design tools such as the Database
Designer that can be used to design and create objects in the database.

Identifying the Types of Information for Each Object

After the primary objects in the database have been identified as candidates for tables, the next step is to identify the types of information that must be stored for each object. These are the columns in the object's table. The columns in a database table contain a few common types of information:

 Raw data columns

These columns store tangible pieces of information, such as names, determined by a source external to the database.

 Categorical columns

These columns classify or group the data and store a limited selection of data such as true/false, married/single, VP/Director/Group Manager, and so on.

 Identifier columns

These columns provide a mechanism to identify each item stored in the table. These columns often have id or number in their names (for example,
employee_id, invoice_number, and publisher_id). The identifier column is the
primary component used by both users and internal database processing for
gaining access to a row of data in the table. Sometimes the object has a tangible
form of ID used in the table (for example, a social security number), but in most
situations you can define the table so that a reliable, artificial ID can be created
for the row.

 Relational or referential columns

These columns establish a link between information in one table and
related information in another table. For example, a table that tracks sales transactions
will commonly have a link to the customer’s table so that the complete customer
information can be associated with the sales transaction.

Identifying the Relationships between Objects

One of the strengths of a relational database is the ability to relate or associate information about various items in the database. Isolated types of information can be stored separately, but the database engine can combine data when necessary.
Identifying the relationships between objects in the design process requires looking at the
tables, determining how they are logically related, and adding relational columns that
establish a link from one table to another.

For example, the designer of the pubs database has created tables for titles
and publishers in the database. The titles table contains information for each book: an
identifier column named title_id; raw data columns for the title, the price of the book, and
the publishing date; and some columns with sales information for the book. The table
contains a categorical column named type, which allows the books to be grouped by the
type of content in the book. Each book also has a publisher, but the publisher information
is in another table; therefore, the titles table has a pub_id column to store just the ID of
the publisher. When a row of data is added for a book, the publisher ID is stored with the
rest of the book information.
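The titles/publishers layout described above can be sketched in any SQL engine; an illustrative version using Python's built-in sqlite3 (the sample data is invented, and SQLite stands in for SQL Server here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE publishers (
        pub_id   INTEGER PRIMARY KEY,  -- identifier column
        pub_name TEXT                  -- raw data column
    );
    CREATE TABLE titles (
        title_id INTEGER PRIMARY KEY,  -- identifier column
        title    TEXT,                 -- raw data column
        type     TEXT,                 -- categorical column
        pub_id   INTEGER REFERENCES publishers(pub_id)  -- relational column
    );
""")
conn.execute("INSERT INTO publishers VALUES (1, 'Example Publisher')")
conn.execute("INSERT INTO titles VALUES (1, 'Example Book', 'business', 1)")

# The relational pub_id column lets the engine combine the two tables.
row = conn.execute("""
    SELECT t.title, p.pub_name
    FROM titles t JOIN publishers p ON t.pub_id = p.pub_id
""").fetchone()
```

The JOIN over `pub_id` is exactly the "relational or referential column" mechanism from the column list above.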

Data Security

One of the functions of a database is to protect the data by preventing certain users from seeing or changing highly sensitive data and preventing all users from making costly mistakes. The security system in Microsoft® SQL Server™ 2000 controls user access to the data and user permissions to perform activities in the database.

Designing Tables

When you design a database, you decide what tables you need, what type of data goes in each table, who can access each table, and so on. As you create and work with tables, you continue to make more detailed decisions about them.

The most efficient way to create a table is to define everything you need in the
table at one time, including its data restrictions and additional components. However, you
can also create a basic table, add some data to it, and then work with it for a while. This
approach gives you a chance to see what types of transactions are most common and
what types of data are frequently entered before you commit to a firm design by adding
constraints, indexes, defaults, rules, and other objects.

It is a good idea to outline your plans on paper before creating a table and
its objects. Decisions that must be made include:

 Types of data the table will contain.

 Columns in the table and the data type (and length, if required) for each column.

 Which columns accept null values.

 Whether and where to use constraints or defaults and rules.

 Types of indexes needed, where required, and which columns are primary keys
and which are foreign keys.
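These decisions map directly onto DDL. A hedged sketch using Python's built-in sqlite3 (table and column names are invented; SQLite stands in for SQL Server):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ratings (
        rating_id INTEGER PRIMARY KEY,            -- primary key decision
        user_id   INTEGER NOT NULL,               -- column that rejects NULLs
        movie_id  INTEGER NOT NULL,
        stars     INTEGER NOT NULL
                  CHECK (stars BETWEEN 1 AND 5),  -- constraint on the data
        place     TEXT DEFAULT 'online'           -- default value
    );
    CREATE INDEX idx_ratings_movie ON ratings(movie_id);  -- index decision
""")
conn.execute("INSERT INTO ratings (user_id, movie_id, stars) VALUES (1, 7, 4)")
row = conn.execute("SELECT stars, place FROM ratings").fetchone()

# The CHECK constraint makes the engine reject out-of-range data:
try:
    conn.execute("INSERT INTO ratings (user_id, movie_id, stars) VALUES (1, 7, 9)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

Adding the constraints up front, rather than after working with the table, is the first of the two approaches described above.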

Microsoft SQL Server uses features similar to those found in other databases
and some features that are unique. Most of these additional features are made possible by
SQL Server's tight integration with the Windows NT operating system. SQL Server
contains the data storage options and the capability to store and process the same volume
of data as a mainframe or minicomputer.

Like most mainframe or minicomputer databases, SQL Server is a database that
has seen an evolution from its introduction in the mid-1960s until today. Microsoft's SQL
Server is founded in the mature and powerful relational model, currently the preferred
model for data storage and retrieval.

Unlike mainframe and minicomputer databases, a server database is accessed by users--called clients--from other computer systems rather than from input/output devices,
such as terminals. Mechanisms must be in place for SQL Server to solve problems that
arise from the access of data from perhaps hundreds of computer systems, each of which
can process portions of the database independently from the data on the server. Within the
framework of a client/server database, a server database also requires integration with
communication components of the server in order to enable connections with client
systems. Microsoft SQL Server's client/server connectivity uses the built-in network
components of Windows NT.

Unlike a stand-alone PC database or a traditional mainframe or minicomputer database, a server database, such as Microsoft SQL Server, adds service-specific
middleware components--such as Open Database Connectivity (ODBC)--on top of the
network components. ODBC enables the interconnection of different client applications
without requiring changes to the server database or other existing client applications.

SQL Server also contains many of the front-end tools of PC databases that
traditionally haven't been available as part of either mainframe or minicomputer
databases. In addition to using a dialect of Structured Query Language (SQL), GUI
applications can be used for the storage, retrieval, and administration of the database.

SQL Server permits client applications to control the information retrieved from
the server by using several specialized tools and techniques, including options such as
stored procedures, server-enforced rules, and triggers that permit processing to be done
on the server automatically. You don't have to move all processing to the server, of
course; you still can do appropriate information processing on the client workstation.

Although organizations routinely use SQL Server to manipulate millions of
records, SQL Server provides several tools that help you manage the system and its
databases and tables. The Windows- and command-line-based tools that come with SQL
Server allow you to work with the many aspects of SQL Server. You can use these tools
to

1. Perform the administration of the databases

2. Control access to data in the databases

3. Control the manipulation of data in the databases

You also can use a command-line interface to perform all operations with SQL Server.

A key characteristic of SQL Server is that it is a relational database. You must
understand the features of a relational database to effectively understand and access data
with SQL Server. You can't construct successful queries to return data from a relational
database unless you understand the basic features of a relational database.

Introduction

a. Purpose

The goal of this project is to develop a recommendation system based on contextual information. Here, we focus on modeling the general contextual information associated with not only users/items but also user-item interactions.

Project Scope

Recommender systems have become an important tool that helps users select information of interest in many web applications. The contexts of a recommender system comprise the contextual information associated with a recommendation application and fall into two kinds: attributes associated with users or items, and attributes associated with user-item interactions. Because of the fundamental effect of contextual information in recommender systems, many context modeling methods have been developed.
Existing context-aware recommendation methods do not explain the relevance of contextual information, and they have limitations in generating recommendations under a large amount of contextual information. To overcome these shortages, we propose a novel context modeling method, the Contextual Operating Tensor (COT) model. In this method, contextual information includes interaction contexts, which describe the interaction situations, and entity contexts, which identify user/item characteristics. Here, we focus on modeling the general contextual information associated with not only users/items but also user-item interactions.
Overall Description
b. Product Perspective

Recommender systems have become an important tool that helps users select information of interest in many web applications. Nowadays, with the enhanced ability of systems in collecting information, a great amount of contextual information has been collected. Contextual information includes interaction contexts, which describe the interaction situations, and entity contexts, which identify user/item characteristics. Here, we focus on modeling the general contextual information associated with not only users/items but also user-item interactions.

Product Features

In context-aware recommender systems, contextual information has been proved to be useful. One line of work represents the contextual information of each interaction with a distinct vector, but this is not suitable for numerical contexts or for the abundant contexts found in real-world applications. Other methods calculate the relevance between contexts and entities, providing each user/item with not only a latent vector but also an effective context-aware representation; however, such relevance is not always reasonable. Moreover, using a distinct vector to represent the contexts of each interaction becomes a problem when confronting abundant contextual information in real applications, so these methods have difficulty dealing with a large amount of contextual information.
The proposed Contextual Operating Tensor (COT) method learns representation vectors of context values and uses contextual operations to capture the semantic operations of the contextual information. We provide a strategy for embedding each context value into a latent representation, no matter which domain the value belongs to. For each user-item interaction, we use contextual operating matrices to represent the semantic operations of these contexts, and employ contextual operating tensors to capture the common effects of contexts. The operating matrix can then be generated by multiplying the latent representations of the contexts with the operating tensor.
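The multiplication just described can be illustrated in a few lines of pure Python; a minimal sketch with invented dimensions and values (the full COT model also includes entity contexts and bias terms not shown here):

```python
# Operating tensor T of shape (k, d, d): k slices, each a d x d matrix.
# A context latent vector c (length k) mixes the slices into one
# operating matrix M = sum_j c[j] * T[j], which then acts on a user vector.

def contextual_operating_matrix(tensor, context_vec):
    """Generate the operating matrix from the tensor and a context vector."""
    k, d = len(tensor), len(tensor[0])
    return [[sum(context_vec[j] * tensor[j][r][col] for j in range(k))
             for col in range(d)] for r in range(d)]

def operate(matrix, user_vec):
    """Apply the contextual operation (matrix-vector product) to a user."""
    return [sum(m * u for m, u in zip(row, user_vec)) for row in matrix]

# Toy numbers: k = 2 context dimensions, d = 2 latent dimensions.
T = [[[1.0, 0.0], [0.0, 1.0]],   # slice for context dimension 0
     [[0.0, 1.0], [1.0, 0.0]]]   # slice for context dimension 1
c = [0.5, 0.5]                   # latent representation of the context
M = contextual_operating_matrix(T, c)
contextual_user = operate(M, [2.0, 4.0])
```

The tensor captures effects common to all contexts, while `c` selects how strongly each slice contributes for this particular interaction.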

To describe the operation ability of contexts, we embed each context value with a
latent representation, and model the contextual information as semantic operations on
users and items. Context representation and contextual operation present a novel
perspective of context modeling.

We use the contextual operating tensor to capture the common semantic effects of
contexts. For each interaction, the contextual operation can be generated from the
multiplication of operating tensor and latent vectors of contexts.

User Classes and Characteristics


1. User Registration and Admin process
First, a user must complete the registration process, filling in all the required fields. If registration succeeds, the user can log in with their mail id and password; otherwise the system reports an invalid login.
The admin logs in with the corresponding mail id and password; after a successful login the admin reaches the admin home page, from which the movies in the movie recommendation database can be updated.

2. Movie Rating and Comments

A user logs in with the corresponding mail id and password; after a successful login the user reaches the user home page, where movies in the movie recommendation system can be rated and commented on.
When rating a movie, the user also supplies information such as the movie name, where the movie was watched (theatre or online), and the companions (friends, family, children).

3. Old user Recommendation


A user who has already rated a movie in the movie recommendation system is considered an old user. We give recommendations to such users based on contextual information, which is of three types: user contexts, item contexts, and interaction contexts.
4. New user Recommendation
A user who has not yet rated any movie in the movie recommendation system is considered a new user. We give recommendations to such users based on common records such as top-rated movies, top picks, and new releases.
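The old-user/new-user split described in these modules amounts to a simple dispatch; a hypothetical sketch (function and field names are invented for illustration):

```python
def recommend(user_id, ratings, movies):
    """Context-based picks for old users; popular fallbacks for new users."""
    is_old_user = any(r["user_id"] == user_id for r in ratings)
    if is_old_user:
        # Placeholder for the COT-style contextual scoring described above.
        return sorted(movies, key=lambda m: m["score"], reverse=True)[:3]
    # New user: fall back to common records such as top-rated movies.
    return sorted(movies, key=lambda m: m["avg_rating"], reverse=True)[:3]

movies = [
    {"title": "A", "score": 0.9, "avg_rating": 3.1},
    {"title": "B", "score": 0.2, "avg_rating": 4.8},
]
ratings = [{"user_id": 1, "movie": "A", "stars": 5}]
old_picks = recommend(1, ratings, movies)   # context-based branch
new_picks = recommend(2, ratings, movies)   # popularity fallback branch
```

The fallback branch is what lets the system serve users who have no rating history at all.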

2.5 Design and Implementation Constraints

2.5.1 Constraints in Analysis


 Constraints as Informal Text
 Constraints as Operational Restrictions
 Constraints Integrated in Existing Model Concepts
 Constraints as a Separate Concept
 Constraints Implied by the Model Structure

2.5.2 Constraints in Design


 Determination of the Involved Classes

 Determination of the Involved Objects
 Determination of the Involved Actions
 Determination of the Require Clauses
 Global actions and Constraint Realization

2.5.3 Assumptions and Dependencies


A hierarchical structuring of relations may result in more classes and a more
complicated structure to implement. Therefore it is advisable to transform the hierarchical
relation structure to a simpler structure such as a classical flat one. It is rather
straightforward to transform the developed hierarchical model into a bipartite, flat model,
consisting of classes on the one hand and flat relations on the other. Flat relations are
preferred at the design level for reasons of simplicity and implementation ease. There is
no identity or functionality associated with a flat relation. A flat relation corresponds with
the relation concept of entity-relationship modeling and many object oriented methods.

System Features
The proposed Contextual Operating Tensor (COT) method learns representation vectors of context values and uses contextual operations to capture the semantic operations of the contextual information. We provide a strategy for embedding each context value into a latent representation, no matter which domain the value belongs to. For each user-item interaction, we use contextual operating matrices to represent the semantic operations of these contexts, and employ contextual operating tensors to capture the common effects of contexts. The operating matrix can then be generated by multiplying the latent representations of the contexts with the operating tensor.
To describe the operation ability of contexts, we embed each context value with a
latent representation, and model the contextual information as semantic operations on
users and items. Context representation and contextual operation present a novel
perspective of context modeling.

We use the contextual operating tensor to capture the common semantic effects of
contexts. For each interaction, the contextual operation can be generated from the
multiplication of operating tensor and latent vectors of contexts.

External Interface Requirements

3.1 User Interfaces


 The user interfaces in this product are graphical user interfaces.

 Users interact through buttons to clear content or to send data to the destination.

 Users can enter data through text boxes.

 Users can use a text area to enter multiple lines of text.

c. Hardware Interfaces

Ethernet
Ethernet on the AS/400 supports TCP/IP, Advanced Peer-to-Peer
Networking (APPN) and advanced program-to-program communications (APPC).

ISDN

You can connect your AS/400 to an Integrated Services Digital Network (ISDN) for faster, more accurate data transmission. An ISDN is a public or private
digital communications network that can support data, fax, image, and other
services over the same physical interface. Also, you can use other protocols on
ISDN, such as IDLC and X.25.

d. Software Interfaces

1. This software interacts with the TCP/IP protocol.

2. This product interacts with sockets, listening on unused ports.

3. This product interacts with server sockets, listening on unused ports.

4. This product interacts with JDK 1.5.

e. Communication Interfaces

The TCP/IP protocol will be used to facilitate communications between the client
and server.
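The client/server exchange over TCP/IP can be illustrated without a real network; a minimal Python sketch using a connected socket pair (the message format is invented):

```python
import socket

# socketpair() gives two already-connected endpoints, standing in for the
# client and server sides of the TCP/IP connection described above.
client, server = socket.socketpair()

client.sendall(b"rate movie_id=7 stars=4")   # client sends a request
request = server.recv(1024)                  # server receives it
server.sendall(b"OK")                        # server acknowledges
reply = client.recv(1024)

client.close()
server.close()
```

In the deployed system, the same send/receive pattern would run over real sockets bound to the unused ports mentioned above.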

Other Nonfunctional Requirements


5.1 Performance Requirements
The maximum satisfactory response time should be specified for each distinct type of user-computer interaction, along with a definition of "most of the time". Response time is measured from the moment the user performs the action that says "Go" until the user receives enough feedback from the computer to continue the task. It is the user's subjective wait time, not the time from entry to a subroutine until the first write statement.

If the user denies interest in response time and indicates that only the result is of interest, you can ask whether "ten times your current estimate of stand-alone execution time" would be acceptable. If the answer is "yes", you can proceed to discuss throughput; otherwise, you can continue the discussion of response time with the user's full attention.

The response time that is minimally acceptable the rest of the time should also be specified, since a longer response time can cause users to think the system is down. "Rest of the time" must be quantified as well; for example, the peak minute of a day, or 1 percent of interactions. Response time degradations can be more costly or painful at a particular time of day.

5.2 Safety Requirements


The software may be safety-critical. If so, there are issues associated with its
integrity level. The software may not be safety-critical although it forms part of a safety-
critical system. For example, software may simply log transactions. If a system must be
of a high integrity level and if the software is shown to be of that integrity level, then the
hardware must be at least of the same integrity level. There is little point in producing
'perfect' code in some language if hardware and system software (in widest sense) are not
reliable. If a computer system is to run software of a high integrity level then that system
should not at the same time accommodate software of a lower integrity level. Systems
with different requirements for safety levels must be separated. Otherwise, the highest
level of integrity required must be applied to all systems in the same environment.

5.3 Security Requirements


Do not block the required ports through the Windows firewall.
The two machines should be connected through a LAN.

5.4 Software Quality Attributes


Functionality: are the required functions available, including interoperability and security?

Reliability: maturity, fault tolerance, and recoverability.

Usability: how easy it is to understand, learn, and operate the software system.

Efficiency: performance and resource behavior.

Maintainability: how easily the software can be maintained.

Portability: can the software easily be transferred to another environment, including installability?

CHAPTER 4

4.1 Architecture:

[Architecture diagram: the Admin updates the movie list in the database. Old users and new users log in to rate and comment on movies. For old users, recommendations are generated from user-context, item-context, and interaction-based information drawn from the movie dataset; new users receive top-rated movies, top picks, and recent movies.]
Sequence diagram

Use Case Diagram

Activity Diagram:

Collaboration diagram

Class Diagram:

DFD Diagram

Level 0

[Level 0 DFD: the Admin logs in and interacts with the database.]

Level 1

[Level 1 DFD: an old/new user logs in and submits movie ratings and comments to the database.]

Level 2

[Level 2 DFD: the database produces recommendations for the old/new user.]

CHAPTER 5
SYSTEM DESIGN

1. User Registration and Admin process


2. Movie Rating and Comments
3. Old user Recommendation
4. New user Recommendation
1. User Registration and Admin process

First, a user must complete the registration process, filling in all the required fields. If registration succeeds, the user can log in with their mail id and password; otherwise the system reports an invalid login.
The admin logs in with the corresponding mail id and password; after a successful login the admin reaches the admin home page, from which the movies in the movie recommendation database can be updated.

2. Movie Rating and Comments


A user logs in with the corresponding mail id and password; after a successful login the user reaches the user home page, where movies in the movie recommendation system can be rated and commented on.
When rating a movie, the user also supplies information such as the movie name, where the movie was watched (theatre or online), and the companions (friends, family, children).

3. Old user Recommendation
A user who has already rated a movie in the movie recommendation system is considered an old user. We give recommendations to such users based on contextual information, which is of three types: user contexts, item contexts, and interaction contexts.

4. New user Recommendation


A user who has not yet rated any movie in the movie recommendation system is considered a new user. We give recommendations to such users based on common records such as top-rated movies, top picks, and new releases.

CHAPTER 6

VERIFICATION AND VALIDATION

Once the program exists, we must test it to see if it is free of bugs. High-quality products must meet users' needs and expectations. Furthermore, the product should attain this with minimal or no defects, the focus being on improving products prior to delivery rather than correcting them after delivery. The ultimate goal of building high-quality software is user satisfaction.

There are two basic approaches to system testing.

Validation is the task of predicting correspondence, which cannot be determined until the system is in place.

Verification is the exercise of determining correctness.

Testing strategies

The extent of testing a system is controlled by many factors, such as the risk involved, the
limitations of the resources and deadlines. We deploy a testing strategy that does the best
job of finding the defects in the product within the given constraints. The different testing
strategies are:

 Black Box Testing:


The concept of black box testing is used to represent a system whose inner workings are not available for inspection. In black box testing, we try various inputs and examine the resulting outputs. Black box testing works very nicely for testing objects in an object-oriented environment. For inspection, the inputs and outputs are defined through use cases or other analysis information.

 White Box Testing:

White box testing assumes that the specific logic is important and must be tested to guarantee the system's proper functioning. The main use of white box testing is error-based testing. In white box testing, we look for bugs that have a low probability of execution and that have been overlooked previously. It is also known as path testing.

There are two types of path testing:

Statement coverage: every statement in the object's methods is executed at least once.

Branch coverage: enough tests are performed to ensure that every branch alternative is executed at least once.
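The difference between the two coverage criteria can be seen on a small example. The function below is a hypothetical illustration (not from the report's code): one test input reaches every statement, yet branch coverage still requires a second input for the untaken branch.

```python
# Illustrative sketch of statement vs. branch coverage.

def discount(price, is_member):
    rate = 0.0
    if is_member:        # branch coverage requires both True and False here
        rate = 0.1
    return price * (1 - rate)

# One test already achieves full statement coverage: every line above runs.
statement_suite = [(100, True)]
# Branch coverage additionally needs the case where the if-body is skipped.
branch_suite = [(100, True), (100, False)]
```

With only `statement_suite`, the `is_member=False` path is never exercised, so a bug on that branch (e.g. a wrong default `rate`) would go undetected.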

Top down testing

A top-down strategy suits user-interface and event-driven systems. It serves two purposes: first, the top-down approach can test navigation through screens and verify that it matches the requirements; second, users can see at an early stage how the final application will look and feel.

Bottom up testing

Bottom-up testing starts with the details of the system and proceeds to higher levels by progressive aggregation of details until they collectively fit the requirements of the system. In this testing, the independent methods and classes are tested first.
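The bottom-up order can be sketched as follows: the dependency-free low-level class is verified first with a small test driver, and only then is the higher-level class that aggregates it exercised. Both class names are illustrative assumptions, not taken from the report's source code.

```python
# Bottom-up testing sketch: test the independent unit before its consumer.

class RatingStore:                      # lowest level: no dependencies
    def __init__(self):
        self._data = {}
    def add(self, movie, score):
        self._data.setdefault(movie, []).append(score)
    def average(self, movie):
        scores = self._data.get(movie, [])
        return sum(scores) / len(scores) if scores else None

class Recommender:                      # higher level: depends on RatingStore
    def __init__(self, store):
        self.store = store
    def is_recommended(self, movie, threshold=3.5):
        avg = self.store.average(movie)
        return avg is not None and avg >= threshold

def test_rating_store():
    # Driver for the low-level unit, run before any integration test.
    s = RatingStore()
    s.add("A", 4); s.add("A", 5)
    return s.average("A") == 4.5
```

In a top-down strategy the roles reverse: `Recommender` would be tested first against a stubbed `RatingStore`, whereas here the real store is validated before integration.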

Source Code

Screenshot

REFERENCES

[8] M. Jamali and L. Lakshmanan, "HeteroMF: Recommendation in heterogeneous information networks using context dependent factor models," in Proc. 22nd Int. Conf. World Wide Web, 2013, pp. 643–654.

[9] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Proc. Neural Inform. Process. Syst., 2013, pp. 3111–3119.

[10] S. Rendle, "Factorization machines with libFM," ACM Trans. Intell. Syst. Technol., vol. 3, no. 3, p. 57, 2012.

[11] M. Baroni and R. Zamparelli, "Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space," in Proc. Conf. Empirical Methods Natural Language Process., 2010, pp. 1183–1193.

[12] R. Socher, B. Huval, C. D. Manning, and A. Y. Ng, "Semantic compositionality through recursive matrix-vector spaces," in Proc. Joint Conf. Empirical Methods Natural Language Process. Comput. Natural Language Learning, 2012, pp. 1201–1211.

[13] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, "Recursive deep models for semantic compositionality over a sentiment treebank," in Proc. Conf. Empirical Methods Natural Language Process., 2013, pp. 1631–1642.

[14] A. Mnih and R. Salakhutdinov, "Probabilistic matrix factorization," in Proc. Neural Inform. Process. Syst., 2007, pp. 1257–1264.

[15] Y. Koren, R. Bell, and C. Volinsky, "Matrix factorization techniques for recommender systems," IEEE Computer, vol. 42, no. 8, pp. 30–37, 2009.

[16] Y. Koren and R. Bell, "Advances in collaborative filtering," in Recommender Systems Handbook, Springer, 2011, pp. 145–186.

[17] Y. Koren, "Factorization meets the neighborhood: A multifaceted collaborative filtering model," in Proc. 14th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2008, pp. 426–434.

[18] Y. Koren, "Collaborative filtering with temporal dynamics," Commun. ACM, vol. 53, no. 4, pp. 89–97, 2010.

[19] L. Xiong, X. Chen, T.-K. Huang, J. G. Schneider, and J. G. Carbonell, "Temporal collaborative filtering with Bayesian probabilistic tensor factorization," in Proc. SIAM Int. Conf. Data Mining, 2010, pp. 211–222.

[20] T. Chen, W. Zheng, Q. Lu, K. Chen, Z. Zheng, and Y. Yu, "SVDFeature: A toolkit for feature-based collaborative filtering," J. Mach. Learn. Res., vol. 13, no. 1, pp. 3619–3622, Dec. 2012.

[21] C. Palmisano, A. Tuzhilin, and M. Gorgoglione, "Using context to improve predictive modeling of customers in personalization applications," IEEE Trans. Knowl. Data Eng., vol. 20, no. 11, pp. 1535–1549, Nov. 2008.

[22] G. Adomavicius, R. Sankaranarayanan, S. Sen, and A. Tuzhilin, "Incorporating contextual information in recommender systems using a multidimensional approach," ACM Trans. Inform. Syst., vol. 23, no. 1, pp. 103–145, 2005.

[23] L. Baltrunas and F. Ricci, "Context-based splitting of item ratings in collaborative filtering," in Proc. 3rd ACM Conf. Recommender Syst., 2009, pp. 245–248.

[24] U. Panniello, A. Tuzhilin, M. Gorgoglione, C. Palmisano, and A. Pedone, "Experimental comparison of pre- vs. post-filtering approaches in context-aware recommender systems," in Proc. 3rd ACM Conf. Recommender Syst., 2009, pp. 265–268.

[25] Y. Li, J. Nie, Y. Zhang, B. Wang, B. Yan, and F. Weng, "Contextual recommendation based on text mining," in Proc. 23rd Int. Conf. Comput. Linguistics, 2010, pp. 692–700.

[26] E. Zhong, W. Fan, and Q. Yang, "Contextual collaborative filtering via hierarchical matrix factorization," in Proc. SIAM Int. Conf. Data Mining, 2012, pp. 744–755.

[27] X. Liu and K. Aberer, "SoCo: A social network aided context-aware recommender system," in Proc. 22nd Int. Conf. World Wide Web, 2013, pp. 781–802.

[28] L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.

[29] A. P. Singh and G. J. Gordon, "Relational learning via collective matrix factorization," in Proc. 14th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2008, pp. 650–658.

[30] S.-H. Yang, B. Long, A. Smola, N. Sadagopan, Z. Zheng, and H. Zha, "Like like alike: Joint friendship and interest propagation in social networks," in Proc. 20th Int. Conf. World Wide Web, 2011, pp. 537–546.

[31] C. Lippert, S. H. Weber, Y. Huang, V. Tresp, M. Schubert, and H.-P. Kriegel, "Relation prediction in multi-relational domains using matrix factorization," in Proc. Workshops Neural Inform. Process. Syst. Structured Input-Structured Output, 2008, pp. 6–9.
