Text and Web Mining

Machine Learning and Data Mining (Unit 19)

Prof. Pier Luca Lanzi

References

- Jiawei Han and Micheline Kamber, "Data Mining: Concepts and Techniques", Second Edition, The Morgan Kaufmann Series in Data Management Systems, Chapter 10, part 2.
- Web Mining Course by Gregory Piatetsky-Shapiro, available at www.kdnuggets.com

Mining Text Data: An Introduction

Data mining / knowledge discovery operates over several kinds of data:

- Structured data: HomeLoan(Loanee: Frank Rizzo, Lender: MWF, Agency: Lake View, Amount: $200,000, Term: 15 years)
- Free text: "Frank Rizzo bought his home from Lake View Real Estate in 1992. He paid $200,000 under a 15-year loan from MW Financial."
- Hypertext: <a href>Frank Rizzo</a> bought <a href>this home</a> from <a href>Lake View Real Estate</a> in <b>1992</b>. <p>...
- Multimedia: Loans($200K, [map], ...)

Bag-of-Tokens Approaches

Feature extraction maps documents to token sets. For example, the opening of the Gettysburg Address:

"Four score and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal. Now we are engaged in a great civil war, testing whether that nation, or ..."

becomes the token counts: nation 5, civil 1, war 2, men 2, died 4, people 5, Liberty 1, God 1.

This loses all order-specific information and severely limits context.


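A minimal bag-of-tokens sketch in Python; the tokenizer and the stop-word list are illustrative assumptions, not the exact preprocessing of any particular system:

```python
# Bag-of-tokens: tokenize, drop stop words, count term frequencies.
from collections import Counter

STOP_WORDS = {"a", "and", "the", "of", "to", "in", "on", "that", "this", "or", "our"}

def bag_of_tokens(text: str) -> Counter:
    # crude tokenizer: split on whitespace, strip punctuation, lowercase
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    return Counter(t for t in tokens if t and t not in STOP_WORDS)

doc = ("Four score and seven years ago our fathers brought forth "
       "on this continent, a new nation, conceived in Liberty...")
print(bag_of_tokens(doc).most_common(5))
```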

Natural Language Processing

Consider the sentence "A dog is chasing a boy on the playground". NLP analyzes it in stages:

- Lexical analysis (part-of-speech tagging): A/Det dog/Noun is/Aux chasing/Verb a/Det boy/Noun on/Prep the/Det playground/Noun
- Syntactic analysis (parsing): the tags are grouped into noun phrases, a complex verb, and a prepositional phrase, which combine into verb phrases and finally a sentence (a parse tree)
- Semantic analysis: Dog(d1). Boy(b1). Playground(p1). Chasing(d1, b1, p1). Adding the rule Scared(x) if Chasing(_, x, _), inference yields Scared(b1).
- Pragmatic analysis (speech act): a person saying this may be reminding another person to get the dog back

(Taken from ChengXiang Zhai, CS 397cxz Fall 2003)

General NLP: Too Difficult!

- Word-level ambiguity: "design" can be a noun or a verb (ambiguous POS); "root" has multiple meanings (ambiguous sense)
- Syntactic ambiguity: "natural language processing" (modification); "A man saw a boy with a telescope." (PP attachment)
- Anaphora resolution: "John persuaded Bill to buy a TV for himself." (himself = John or Bill?)
- Presupposition: "He has quit smoking." implies that he smoked before

Humans rely on context to interpret (when possible). This context may extend beyond a given document!

(Taken from ChengXiang Zhai, CS 397cxz Fall 2003)

Shallow Linguistics
- English lexicon
- Part-of-speech tagging
- Word sense disambiguation
- Phrase detection / parsing

WordNet

- An extensive lexical network for the English language
- Contains over 138,838 words
- Several graphs, one for each part-of-speech
- Synsets (synonym sets), each defining a semantic sense
- Relationship information (antonym, hyponym, meronym, ...)
- Downloadable for free (UNIX, Windows)
- Expanding to other languages (Global WordNet Association)
- Funded with over $3 million, mainly by government (translation interest)
- Founded by George Miller, National Medal of Science, 1991

(Figure: a fragment of the WordNet graph in which moist, watery, and damp are synonyms of wet; parched, arid, and anhydrous are synonyms of dry; and wet/dry are antonyms.)

Part-of-Speech Tagging

Training data (annotated text):

This/Det sentence/N serves/V1 as/P an/Det example/N of/P annotated/V2 text/N

A POS tagger trained on such data tags a new sentence:

"This is a new sentence." → This/Det is/Aux a/Det new/Adj sentence/N

Pick the most likely tag sequence, i.e., maximize p(w_1, ..., w_k, t_1, ..., t_k):

- Independent assignment (most common tag):
\[ p(w_1, \dots, w_k, t_1, \dots, t_k) = p(t_1 \mid w_1) \cdots p(t_k \mid w_k)\; p(w_1) \cdots p(w_k) \]

- Partial dependency (HMM):
\[ p(w_1, \dots, w_k, t_1, \dots, t_k) = \prod_{i=1}^{k} p(w_i \mid t_i)\, p(t_i \mid t_{i-1}) \]
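To make the HMM formulation concrete, here is a toy Viterbi decoder; the tag set and the transition/emission tables are hypothetical hand-set values, not probabilities estimated from a corpus:

```python
# Toy Viterbi decoding for the HMM model: argmax over tag sequences of
# prod_i p(w_i | t_i) * p(t_i | t_{i-1}).
TAGS = ["Det", "Aux", "Adj", "N"]

trans = {  # p(t_i | t_{i-1}); "<s>" is the start state
    "<s>": {"Det": 0.8, "N": 0.2},
    "Det": {"Adj": 0.3, "N": 0.5, "Aux": 0.2},
    "Adj": {"N": 1.0},
    "N":   {"Aux": 0.5, "N": 0.5},
    "Aux": {"Det": 1.0},
}
emit = {  # p(w_i | t_i)
    "Det": {"this": 0.5, "a": 0.5},
    "Aux": {"is": 1.0},
    "Adj": {"new": 1.0},
    "N":   {"sentence": 1.0},
}

def viterbi(words):
    # best[t] = (probability of the best path ending in tag t, that path)
    best = {t: (trans["<s>"].get(t, 0) * emit[t].get(words[0], 0), [t]) for t in TAGS}
    for w in words[1:]:
        best = {
            t: max((p * trans[prev].get(t, 0) * emit[t].get(w, 0), path + [t])
                   for prev, (p, path) in best.items())
            for t in TAGS
        }
    return max(best.values())[1]

print(viterbi("this is a new sentence".split()))  # ['Det', 'Aux', 'Det', 'Adj', 'N']
```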

Word Sense Disambiguation

Example: "The difficulties of computational linguistics are rooted in ambiguity." (tagged N Aux V P N)

Supervised learning features:
- Neighboring POS tags (N Aux V P N)
- Neighboring words (linguistics, are, rooted, in, ambiguity)
- Stemmed form (root)
- Dictionary/thesaurus entries of neighboring words
- High co-occurrence words (plant, tree, origin, ...)
- Other senses of the word within the discourse

Algorithms:
- Rule-based learning (e.g., information-gain guided)
- Statistical learning (e.g., Naïve Bayes)
- Unsupervised learning (e.g., nearest neighbor)

(Adapted from ChengXiang Zhai, CS 397cxz Fall 2003)

Parsing
(Adapted from ChengXiang Zhai, CS 397cxz Fall 2003)

Choose the most likely parse tree under a probabilistic context-free grammar (PCFG), which attaches a probability to each grammar and lexicon rule.

Grammar (only some rule probabilities survive in the source):
S → NP VP (1.0); NP → Det BNP (0.3), NP → BNP (0.4), NP → NP PP (0.3); BNP → N (1.0); VP → V, VP → Aux V NP, VP → VP PP; PP → P NP (1.0)

Lexicon: V → chasing (0.01), Aux → is, N → dog (0.003), N → boy, N → playground, Det → the, Det → a, P → on

For "A dog is chasing a boy on the playground", the parse that attaches the prepositional phrase to the verb phrase (VP → VP PP) has probability 0.000015, while the parse that attaches it to "a boy" (NP → NP PP) has probability 0.000011; the parser chooses the more likely tree.

Obstacles
- Ambiguity: "A man saw a boy with a telescope."
- Computational intensity: imposes a context horizon

The text mining NLP approach: locate promising fragments using fast IR methods (bag-of-tokens), and only apply slow NLP techniques to those promising fragments.

Summary: Shallow NLP

Shallow NLP techniques are feasible and useful:
- Lexicon: machine-understandable linguistic knowledge (possible senses, definitions, synonyms, antonyms, type-of relations, etc.)
- POS tagging: limits ambiguity (word/POS), supports entity extraction
- WSD: stem/synonym/hyponym matches between document and query, e.g., the query "foreign cars" matching a document saying "I'm selling a 1976 Jaguar"
- Parsing: a logical view of information (inference? translation?), e.g., "A man saw a boy with a telescope."

Even without complete NLP, any additional knowledge extracted from text data can only be beneficial. Ingenuity will determine the applications.

References for NLP

- C. D. Manning and H. Schütze, Foundations of Statistical Natural Language Processing, MIT Press, 1999.
- S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
- S. Chakrabarti, Mining the Web: Statistical Analysis of Hypertext and Semi-Structured Data, Morgan Kaufmann, 2002.
- G. Miller, R. Beckwith, C. Fellbaum, D. Gross, K. Miller, and R. Tengi, Five Papers on WordNet, Princeton University, August 1993.
- C. Zhai, Introduction to NLP, Lecture Notes for CS 397cxz, UIUC, Fall 2003.
- M. Hearst, Untangling Text Data Mining, ACL'99, invited paper. http://www.sims.berkeley.edu/~hearst/papers/acl99/acl99tdm.html
- R. Sproat, Introduction to Computational Linguistics, LING 306, UIUC, Fall 2003.
- A Road Map to Text Mining and Web Mining, University of Texas resource page. http://www.cs.utexas.edu/users/pebronia/textmining/
- Computational Linguistics and Text Mining Group, IBM Research. http://www.research.ibm.com/dssgrp/

Text Databases and Information Retrieval

Text databases (document databases):
- Large collections of documents from various sources: news articles, research papers, books, digital libraries, e-mail messages, Web pages, library databases, etc.
- The data stored is usually semi-structured
- Traditional information retrieval techniques become inadequate for the increasingly vast amounts of text data

Information retrieval:
- A field developed in parallel with database systems
- Information is organized into (a large number of) documents
- The information retrieval problem: locating relevant documents based on user input, such as keywords or example documents

Information Retrieval (IR)


Typical information retrieval systems:
- Online library catalogs
- Online document management systems

Information retrieval vs. database systems:
- Some DB problems are not present in IR, e.g., update, transaction management, complex objects
- Some IR problems are not addressed well in DBMS, e.g., unstructured documents, approximate search using keywords and relevance

Basic Measures for Text Retrieval

(Figure: Venn diagram over all documents, with the retrieved set and the relevant set overlapping in "relevant & retrieved".)

Precision: the percentage of retrieved documents that are in fact relevant to the query (i.e., correct responses):

\[ \text{precision} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Retrieved}\}|} \]

Recall: the percentage of documents that are relevant to the query and were, in fact, retrieved:

\[ \text{recall} = \frac{|\{\text{Relevant}\} \cap \{\text{Retrieved}\}|}{|\{\text{Relevant}\}|} \]

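A small sketch computing both measures from sets of document IDs; the ID sets below are fabricated for illustration:

```python
# Precision and recall over document-ID sets.
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    hits = len(retrieved & relevant)                    # |Relevant ∩ Retrieved|
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: 4 of the 6 retrieved docs are relevant, out of 8 relevant docs.
p, r = precision_recall({1, 2, 3, 4, 5, 6}, {3, 4, 5, 6, 7, 8, 9, 10})
print(f"precision={p:.2f} recall={r:.2f}")              # precision=0.67 recall=0.50
```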


Information Retrieval Techniques

Basic concepts:
- A document can be described by a set of representative keywords called index terms
- Different index terms have varying relevance when used to describe document contents
- This effect is captured by assigning numerical weights to each index term of a document (e.g., frequency, TF-IDF)

DBMS analogy:
- Index terms ↔ attributes
- Weights ↔ attribute values

Information Retrieval Techniques


Index term (attribute) selection:
- Stop list
- Word stemming
- Index term weighting methods
- Terms × documents frequency matrices

Information retrieval models:
- Boolean model
- Vector model
- Probabilistic model

Boolean Model

- Index terms are considered either present or absent in a document; as a result, the index term weights are assumed to be all binary
- A query is composed of index terms linked by three connectives: not, and, or. E.g.: "car and repair", "plane or airplane"
- The Boolean model predicts that each document is either relevant or non-relevant based on the match of the document to the query

Keyword-Based Retrieval

- A document is represented by a string, which can be identified by a set of keywords
- Queries may use expressions of keywords, e.g., "car and repair shop", "tea or coffee", "DBMS but not Oracle"
- Queries and retrieval should consider synonyms, e.g., "repair" and "maintenance"

Major difficulties of the model:
- Synonymy: a keyword T does not appear anywhere in the document, even though the document is closely related to T, e.g., "data mining"
- Polysemy: the same keyword may mean different things in different contexts, e.g., "mining"

Similarity-Based Retrieval in Text Data

- Finds similar documents based on a set of common keywords
- The answer should reflect the degree of relevance, based on the nearness of the keywords, the relative frequency of the keywords, etc.

Basic techniques:
- Stop list: a set of words that are deemed irrelevant even though they may appear frequently, e.g., a, the, of, for, to, with. Stop lists may vary as the document set varies.

Similarity-Based Retrieval in Text Data

- Word stem: several words are small syntactic variants of each other since they share a common word stem, e.g., drug, drugs, drugged
- Term frequency table: each entry frequency_table(i, j) = the number of occurrences of word t_i in document d_j; usually the ratio, rather than the absolute number of occurrences, is used
- Similarity metrics measure the closeness of a document to a query (a set of keywords), e.g., relative term occurrences or the cosine distance:

\[ \text{sim}(v_1, v_2) = \frac{v_1 \cdot v_2}{|v_1|\,|v_2|} \]
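A minimal cosine-similarity sketch over sparse term-weight vectors represented as Python dicts; the example weights are made up:

```python
# Cosine similarity: dot product normalized by the vector lengths.
import math

def cosine(v1: dict, v2: dict) -> float:
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    norm1 = math.sqrt(sum(w * w for w in v1.values()))
    norm2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

doc = {"drug": 3, "trial": 2, "patient": 1}
query = {"drug": 1, "patient": 1}
print(round(cosine(doc, query), 3))   # ≈ 0.756
```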


Indexing Techniques

Inverted index (a minimal sketch follows below):
- Maintains two hash- or B+-tree-indexed tables: document_table, a set of document records <doc_id, postings_list>, and term_table, a set of term records <term, postings_list>
- Answering a query: find all docs associated with one term or a set of terms
- (+) easy to implement
- (-) does not handle synonymy and polysemy well, and the posting lists can be very long (storage can be very large)

Signature file:
- Associates a signature with each document
- A signature is a representation of an ordered list of terms that describe the document
- The order is obtained by frequency analysis, stemming, and stop lists
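A bare-bones inverted index sketch over a hypothetical three-document collection; an AND query is simply an intersection of postings lists:

```python
# Inverted index: term -> sorted list of doc ids.
from collections import defaultdict

def build_index(docs: dict[int, str]) -> dict[str, list[int]]:
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {t: sorted(ids) for t, ids in index.items()}

docs = {1: "car repair shop", 2: "tea or coffee", 3: "car dealer"}
index = build_index(docs)
print(index["car"])                                      # [1, 3]
# AND query = intersection of the postings lists
print(sorted(set(index["car"]) & set(index["repair"])))  # [1]
```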


Vector Space Model

- Documents and user queries are represented as m-dimensional vectors, where m is the total number of index terms in the document collection
- The degree of similarity of document d with regard to query q is calculated as the correlation between the vectors that represent them, using measures such as the Euclidean distance or the cosine of the angle between the two vectors

Latent Semantic Indexing (1)

Basic idea:
- Similar documents have similar word frequencies
- Difficulty: the term-frequency matrix is very large
- Use singular value decomposition (SVD) to reduce the size of the frequency table, retaining the K most significant components

Method:
1. Create a term × document weighted frequency matrix A
2. SVD construction: \( A = U S V^T \)
3. Choose K and obtain the truncated factors \( U_K \), \( S_K \), and \( V_K \)
4. Create a query vector q and project it into the term-document space: \( D_q = q^T U_K S_K^{-1} \)
5. Calculate similarities: \( \cos\theta = \frac{D_q \cdot D}{\|D_q\|\,\|D\|} \)

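A compact sketch of these LSI steps with numpy; the 5×4 matrix, the choice K = 2, and the query vector are illustrative assumptions:

```python
# LSI via truncated SVD, following the steps above.
import numpy as np

# Hypothetical 5-term x 4-document weighted frequency matrix A
A = np.array([
    [2., 0., 1., 0.],
    [1., 1., 0., 0.],
    [0., 2., 0., 1.],
    [0., 0., 3., 1.],
    [1., 0., 0., 2.],
])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
K = 2
Uk, Sk = U[:, :K], np.diag(s[:K])
Dk = Vt[:K, :].T                      # documents in the K-dim latent space

q = np.array([1., 0., 0., 1., 0.])    # query over the 5 terms
Dq = q @ Uk @ np.linalg.inv(Sk)       # project the query: Dq = q^T U_K S_K^-1

sims = Dk @ Dq / (np.linalg.norm(Dk, axis=1) * np.linalg.norm(Dq))
print(np.round(sims, 3))              # cosine similarity to each document
```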

Latent Semantic Indexing (2)

(Figure: an example weighted frequency matrix with the query terms "insulation" and "joint".)

Probabilistic Model

- Basic assumption: given a user query, there is a set of documents which contains exactly the relevant documents and no others (the ideal answer set)
- Querying is then a process of specifying the properties of the ideal answer set; since these properties are not known at query time, an initial guess is made
- This initial guess allows the generation of a preliminary probabilistic description of the ideal answer set, which is used to retrieve a first set of documents
- An interaction with the user is then initiated with the purpose of improving the probabilistic description of the answer set

Types of Text Data Mining

- Keyword-based association analysis
- Automatic document classification
- Similarity detection: cluster documents by a common author, or cluster documents containing information from a common source
- Link analysis: unusual correlation between entities
- Sequence analysis: predicting a recurring event
- Anomaly detection: find information that violates usual patterns
- Hypertext analysis: patterns in anchors/links; anchor text correlations with linked objects

Keyword-Based Association Analysis

Motivation:
- Collect sets of keywords or terms that occur frequently together, then find the association or correlation relationships among them

Association analysis process:
- Preprocess the text data by parsing, stemming, removing stop words, etc.
- Invoke association mining algorithms: consider each document as a transaction, and view the set of keywords in the document as the set of items in the transaction

Term-level association mining:
- No need for human effort in tagging documents
- The number of meaningless results and the execution time are greatly reduced

Text Classification (1)

Motivation: automatic classification of the large number of online text documents (Web pages, e-mails, corporate intranets, etc.)

Classification process:
1. Data preprocessing
2. Definition of the training and test sets
3. Creation of the classification model using the selected classification algorithm
4. Classification model validation
5. Classification of new/unknown text documents

Text document classification differs from the classification of relational data: document databases are not structured according to attribute-value pairs.

Text Classification (2)


Classification algorithms:
- Support vector machines
- k-nearest neighbors
- Naïve Bayes
- Neural networks
- Decision trees
- Association rule-based
- Boosting

Document Clustering

Motivation:
- Automatically group related documents based on their contents
- No predetermined training sets or taxonomies: generate a taxonomy at runtime

Clustering process:
- Data preprocessing: remove stop words, stem, feature extraction, lexical analysis, etc.
- Hierarchical clustering: compute similarities and apply clustering algorithms
- Model-based clustering (neural network approach): clusters are represented by exemplars (e.g., SOM)

Text Categorization

- Pre-given categories and labeled document examples (categories may form a hierarchy)
- Classify new documents
- A standard classification (supervised learning) problem

(Figure: a categorization system assigning incoming documents to categories such as Sports, Business, Education, and Science.)

Applications
- News article classification
- Automatic email filtering
- Webpage classification
- Word sense disambiguation

Categorization Methods

Manual:
- Typically rule-based
- Does not scale up (labor-intensive, rule inconsistency)
- May be appropriate for special data in a particular domain

Automatic (typically exploiting machine learning techniques):
- Vector space model based: prototype-based (Rocchio), k-nearest neighbor (KNN), decision trees (learn rules), neural networks (learn a non-linear classifier), support vector machines (SVM)
- Probabilistic or generative model based: Naïve Bayes classifier

Vector Space Model

- Represent a document by a term vector; a term is a basic concept, e.g., a word or phrase
- Each term defines one dimension; N terms define an N-dimensional space
- Each element of the vector corresponds to a term weight, e.g., d = (x1, ..., xN), where xi is the importance of term i
- A new document is assigned to the most likely category based on vector similarity

Illustration of the Vector Space Model

(Figure: documents in a term space spanned by the terms "Java", "Starbucks", and "Microsoft"; a new document is assigned to the nearest of categories C1, C2, and C3.)

What the VS Model Does Not Specify

How to select terms to capture basic concepts:
- Word stopping, e.g., a, the, always, along
- Word stemming, e.g., computer, computing, computerize → compute
- Latent semantic indexing

How to assign weights:
- Not all words are equally important; some are more indicative than others, e.g., "algebra" vs. "science"

How to measure the similarity

How to Assign Weights


Two-fold heuristics based on frequency:
- TF (term frequency): more frequent within a document → more relevant to its semantics, e.g., "query" vs. "commercial"
- IDF (inverse document frequency): less frequent among documents → more discriminative, e.g., "algebra" vs. "science"

Term Frequency Weighting

Weighting:
- The more frequent, the more relevant to the topic, e.g., "query" vs. "commercial"
- Raw TF = f(t, d): how many times term t appears in document d

Normalization:
- When document length varies, relative frequency is preferred, e.g., maximum frequency normalization

Inverse Document Frequency Weighting

Idea: the less frequent a term is among documents, the more discriminative it is.

Formula (one common variant):

\[ IDF(t) = 1 + \log\frac{n}{k} \]

given n, the total number of docs, and k, the number of docs in which term t appears (the DF, document frequency).

TF-IDF Weighting

- TF-IDF weighting: weight(t, d) = TF(t, d) × IDF(t)
- Frequent within a doc → high TF → high weight
- Selective among docs → high IDF → high weight
- Recall the VS model: each selected term represents one dimension, and each doc is represented by a feature vector whose coordinate for term t is the TF-IDF weight of t in d
- This scheme is reasonable but shown just for illustration; many more complex and more effective weighting variants exist in practice
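A short sketch computing TF-IDF vectors; the IDF variant used, 1 + log(n/k), is one common choice among the many variants noted above:

```python
# TF-IDF: weight(t, d) = TF(t, d) * IDF(t), with IDF(t) = 1 + log(n/k).
import math
from collections import Counter

def tf_idf(docs: list[str]) -> list[dict]:
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    df = Counter(t for tokens in tokenized for t in set(tokens))  # document frequency
    idf = {t: 1 + math.log(n / k) for t, k in df.items()}
    return [{t: tf * idf[t] for t, tf in Counter(tokens).items()}
            for tokens in tokenized]

docs = ["text mining search engine text", "travel text map travel",
        "government president congress"]
for vec in tf_idf(docs):
    print({t: round(w, 2) for t, w in vec.items()})
```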


How to Measure Similarity?


Given two documents d_1 = (x_1, ..., x_N) and d_2 = (y_1, ..., y_N), two common similarity definitions are:

Dot product:
\[ \text{sim}(d_1, d_2) = \sum_{i=1}^{N} x_i y_i \]

Normalized dot product (cosine):
\[ \text{sim}(d_1, d_2) = \frac{\sum_{i=1}^{N} x_i y_i}{\sqrt{\sum_{i=1}^{N} x_i^2}\,\sqrt{\sum_{i=1}^{N} y_i^2}} \]

Illustrative Example

Consider three documents and a new document, with faked IDF values for each term. Each cell shows the raw term frequency, with the TF-IDF weight in parentheses:

term        IDF (faked)   doc1      doc2      doc3      newdoc
text        2.4           2 (4.8)   1 (2.4)   -         1 (2.4)
mining      4.5           1 (4.5)   -         -         1 (4.5)
travel      2.8           -         2 (5.6)   -         -
map         3.3           -         1 (3.3)   -         -
search      2.1           1 (2.1)   -         -         -
engine      5.4           1 (5.4)   -         -         -
govern      2.2           -         -         1 (2.2)   -
president   3.2           -         -         1 (3.2)   -
congress    4.3           -         -         1 (4.3)   -

Here doc1 is "text mining search engine text" and doc3 is "government president congress"; doc2 contains "travel" twice plus "map" and "text".

To whom is newdoc more similar? Using dot-product similarity over the TF-IDF weights:

Sim(newdoc, doc1) = 4.8 × 2.4 + 4.5 × 4.5
Sim(newdoc, doc2) = 2.4 × 2.4
Sim(newdoc, doc3) = 0

So newdoc is most similar to doc1.

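A few lines verifying the example: dot products of newdoc against each document's TF-IDF weights from the table above:

```python
# Dot-product similarity over the TF-IDF weights of the illustrative example.
newdoc = {"text": 2.4, "mining": 4.5}
doc1 = {"text": 4.8, "mining": 4.5, "search": 2.1, "engine": 5.4}
doc2 = {"text": 2.4, "travel": 5.6, "map": 3.3}
doc3 = {"govern": 2.2, "president": 3.2, "congress": 4.3}

def dot(u: dict, v: dict) -> float:
    return sum(w * v.get(t, 0.0) for t, w in u.items())

for name, d in [("doc1", doc1), ("doc2", doc2), ("doc3", doc3)]:
    print(name, dot(newdoc, d))   # doc1: 31.77, doc2: 5.76, doc3: 0.0
```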

Vector Space Model-Based Classifiers

What do we have so far? A feature space with a similarity measure. This is a classic supervised learning problem: search for an approximation to the classification hyperplane.

VS model based classifiers:
- k-NN
- Decision tree based
- Neural networks
- Support vector machines

Probabilistic Model

- Category C is modeled as a probability distribution over predefined random events
- The random events model the process of generating documents
- Therefore, how likely a document d belongs to category C is measured through the probability for category C to generate d

Quick Revisit of Bayes Rule

Category hypothesis space H = {C_1, ..., C_n}, one document D. Bayes' rule gives the posterior probability of C_i:

\[ P(C_i \mid D) = \frac{P(D \mid C_i)\, P(C_i)}{P(D)} \]

where P(D | C_i) is given by the document model for category C_i. Since we want to pick the most likely category C*, we can drop P(D):

\[ C^* = \arg\max_C P(C \mid D) = \arg\max_C P(D \mid C)\, P(C) \]

Probabilistic Model: Multi-Bernoulli


- Event: word presence or absence
- D = (x_1, ..., x_{|V|}), where x_i = 1 if word w_i is present and x_i = 0 if it is absent
- Parameters: {p(w_i = 1 | C), p(w_i = 0 | C)}, with p(w_i = 1 | C) + p(w_i = 0 | C) = 1

\[ p(D = (x_1, \dots, x_{|V|}) \mid C) = \prod_{i=1}^{|V|} p(w_i = x_i \mid C) = \prod_{i:\, x_i = 1} p(w_i = 1 \mid C) \prod_{i:\, x_i = 0} p(w_i = 0 \mid C) \]

Probabilistic Model: Multinomial


- Event: word selection/sampling
- D = (n_1, ..., n_{|V|}), where n_i is the frequency of word w_i and n = n_1 + ... + n_{|V|}
- Parameters: {p(w_i | C)}, with p(w_1 | C) + ... + p(w_{|V|} | C) = 1

\[ p(D = (n_1, \dots, n_{|V|}) \mid C) = p(n \mid C) \binom{n}{n_1, \dots, n_{|V|}} \prod_{i=1}^{|V|} p(w_i \mid C)^{n_i} \]

Parameter Estimation

Training examples: E(C_i) denotes the set of training documents labeled with category C_i, for categories C_1, ..., C_k; the vocabulary is V = {w_1, ..., w_{|V|}}.

Category prior:

\[ p(C_i) = \frac{|E(C_i)|}{\sum_{j=1}^{k} |E(C_j)|} \]

Multi-Bernoulli document model (smoothed):

\[ p(w_j = 1 \mid C_i) = \frac{\sum_{d \in E(C_i)} \delta(w_j, d) + 0.5}{|E(C_i)| + 1}, \qquad \delta(w_j, d) = \begin{cases} 1 & \text{if } w_j \text{ occurs in } d \\ 0 & \text{otherwise} \end{cases} \]

Multinomial document model (smoothed):

\[ p(w_j \mid C_i) = \frac{\sum_{d \in E(C_i)} c(w_j, d) + 1}{\sum_{m=1}^{|V|} \sum_{d \in E(C_i)} c(w_m, d) + |V|} \]

where c(w_j, d) is the number of occurrences of w_j in d.

Classification of New Document

Multi-Bernoulli, for d = (x_1, ..., x_{|V|}) with x_i ∈ {0, 1}:

\[ C^* = \arg\max_C P(D \mid C) P(C) = \arg\max_C \prod_{i=1}^{|V|} p(w_i = x_i \mid C)\, P(C) = \arg\max_C \left[ \log p(C) + \sum_{i=1}^{|V|} \log p(w_i = x_i \mid C) \right] \]

Multinomial, for d = (n_1, ..., n_{|V|}) with |d| = n = n_1 + ... + n_{|V|}:

\[ C^* = \arg\max_C P(D \mid C) P(C) = \arg\max_C p(n \mid C) \prod_{i=1}^{|V|} p(w_i \mid C)^{n_i}\, P(C) \]
\[ = \arg\max_C \left[ \log p(n \mid C) + \log p(C) + \sum_{i=1}^{|V|} n_i \log p(w_i \mid C) \right] \approx \arg\max_C \left[ \log p(C) + \sum_{i=1}^{|V|} n_i \log p(w_i \mid C) \right] \]

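A compact multinomial Naïve Bayes sketch implementing the smoothed estimation and log-space classification formulas above; the labeled training documents are fabricated toy data:

```python
# Multinomial Naive Bayes with add-one smoothing, trained on toy labeled docs.
import math
from collections import Counter, defaultdict

train = [("sports", "great game match win"), ("sports", "team win score"),
         ("business", "stock market profit"), ("business", "market sales profit")]

docs_per_cat = Counter(c for c, _ in train)
counts = defaultdict(Counter)            # counts[c][w] = occurrences of w in category c
for c, text in train:
    counts[c].update(text.split())
vocab = {w for cnt in counts.values() for w in cnt}

def classify(text: str) -> str:
    words = text.split()
    def score(c):
        prior = math.log(docs_per_cat[c] / len(train))               # log p(C)
        total = sum(counts[c].values())
        return prior + sum(math.log((counts[c][w] + 1) / (total + len(vocab)))
                           for w in words)                           # + sum n_i log p(w_i|C)
    return max(docs_per_cat, key=score)

print(classify("win the match"))   # -> sports
```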

Categorization Methods
- Vector space model: k-NN, decision tree, neural network, support vector machine
- Probabilistic model: Naïve Bayes classifier

Many, many others and variants exist, e.g., BIM, NB, IND, Swap-1, LLSF, Widrow-Hoff, Rocchio, GIS-W, ...

Evaluation (1)
Effectiveness measures: the classics are precision and recall, as defined earlier for text retrieval.

Evaluation (2)
Benchmarks:
- Classic: the Reuters collection, a set of newswire stories classified under categories related to economics

Effectiveness:
- Strict comparison is difficult: different parameter settings, different splits (or selections) between training and testing, various optimizations
- The results are nevertheless widely recognized: the best performers are boosting-based committee classifiers and SVM, and the worst is the Naïve Bayes classifier
- Other factors need to be considered as well, especially efficiency

Summary: Text Categorization

- Wide application domain
- Effectiveness comparable to professionals: manual text classification is not 100% accurate and is unlikely to improve substantially, while automatic text classification is growing at a steady pace
- Prospects and extensions: very noisy text, such as OCR output and speech transcripts

Research Problems in Text Mining

Google: what is the next step? How to find the pages that approximately match sophisticated documents, with incorporation of user profiles or preferences?

Looking back at Google's core, inverted indices, the problems become:
- Construction of indices for sophisticated documents, incorporating user profiles or preferences
- Similarity search over such pages using those indices

References for Text Mining

- Fabrizio Sebastiani, "Machine Learning in Automated Text Categorization", ACM Computing Surveys, Vol. 34, No. 1, March 2002.
- Soumen Chakrabarti, "Data mining for hypertext: A tutorial survey", ACM SIGKDD Explorations, 2000.
- C. Cleverdon, "Optimizing convenient online access to bibliographic databases", Information Services and Use, 4(1), 37-47, 1984.
- Yiming Yang, "An evaluation of statistical approaches to text categorization", Journal of Information Retrieval, 1:67-88, 1999.
- Yiming Yang and Xin Liu, "A re-examination of text categorization methods", Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99), pp. 42-49, 1999.

World Wide Web: a brief history


Who invented the wheel is unknown; but who invented the World-Wide Web?
- (Sir) Tim Berners-Lee, in 1989, while working at CERN, invented the World Wide Web, including the URL scheme and HTML, and in 1990 wrote the first server and the first browser
- The Mosaic browser, developed by Marc Andreessen and Eric Bina at NCSA (the National Center for Supercomputing Applications) in 1993, helped the Web spread rapidly; Mosaic was the basis for Netscape

What is Web Mining?

Discovering interesting and useful information from Web content and usage. Examples:
- Web search, e.g., Google, Yahoo, MSN, Ask
- Specialized search, e.g., Froogle (comparison shopping), job ads (Flipdog)
- eCommerce: recommendations (Netflix, Amazon, etc.) and improving conversion rate (next best product to offer)
- Advertising, e.g., Google AdSense
- Fraud detection, e.g., click fraud detection
- Improving Web site design and performance

How does it differ from classical Data Mining?

- The Web is not a relation: textual information and linkage structure
- Usage data is huge and growing rapidly: Google's usage logs are bigger than their Web crawl, and the data generated per day is comparable to the largest conventional data warehouses
- Ability to react in real time to usage patterns: no human in the loop

(Reproduced from Ullman & Rajaraman with permission)

How big is the Web?

- The number of pages is, technically, infinite, because of dynamically generated content, and there is a lot of duplication (30-40%)
- The best estimate of unique static HTML pages comes from search engine claims: Google = 8 billion, Yahoo = 20 billion, plus lots of marketing hype

(Reproduced from Ullman & Rajaraman with permission)

Netcraft Survey: 76,184,000 web sites (Feb 2006)
http://news.netcraft.com/archives/web_server_survey.html

Mining the World-Wide Web

The WWW is a huge, widely distributed, global information service center for:
- Information services: news, advertisements, consumer information, financial management, education, government, e-commerce, etc.
- Hyper-link information
- Access and usage information

The WWW thus provides rich sources for data mining, but there are challenges:
- Too huge for effective data warehousing and data mining
- Too complex and heterogeneous: no standards and structure

Web Mining: A more challenging task

Web mining searches for:
- Web access patterns
- Web structures
- Regularity and dynamics of Web contents

Problems:
- The "abundance" problem
- Limited coverage of the Web: hidden Web sources, with the majority of data in DBMS
- Limited query interfaces, based on keyword-oriented search
- Limited customization to individual users

Web Mining Taxonomy

Web mining divides into:
- Web content mining: Web page content mining and search result mining
- Web structure mining
- Web usage mining: general access pattern tracking and customized usage tracking

Mining the World-Wide Web

Web page content mining (within Web content mining):
- Web page summarization; WebLog and WebOQL are Web structuring query languages that can identify information within given Web pages
- Ahoy!: uses heuristics to distinguish personal home pages from other Web pages
- ShopBot: looks for product prices within Web pages

Mining the World-Wide Web

Search result mining (within Web content mining):
- Search engine result summarization
- Clustering search results: categorizes documents using phrases in titles and snippets

Mining the World-Wide Web

Web structure mining:
- Using links: PageRank and CLEVER use the interconnections between Web pages to give weight to pages (a sketch follows this list)
- Using generalization: MLDB and VWV use a multi-level database representation of the Web; counters (popularity) and link lists are used for capturing structure
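A tiny power-iteration sketch of the PageRank idea; the four-page link graph and the damping factor 0.85 are illustrative, not the production algorithm:

```python
# PageRank by power iteration: a page's rank is shared among its out-links,
# with a (1 - d) "teleport" term spread uniformly over all pages.
DAMPING = 0.85

def pagerank(links: dict[str, list[str]], iters: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - DAMPING) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += DAMPING * rank[p] / len(outs)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print({p: round(r, 3) for p, r in pagerank(links).items()})
```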


Mining the World-Wide Web

General access pattern tracking (within Web usage mining):
- Web log mining uses KDD techniques to understand general access patterns and trends
- It can shed light on better structure and grouping of resource providers

Mining the World-Wide Web

Customized usage tracking (within Web usage mining):
- Adaptive sites analyze the access patterns of each user at a time
- The Web site restructures itself automatically by learning from user access patterns

Web Usage Mining

Mining Web log records to discover user access patterns of Web pages.

Applications:
- Target potential customers for electronic commerce
- Enhance the quality and delivery of Internet information services to the end user
- Improve Web server system performance
- Identify potential prime advertisement locations

Web logs provide rich information about Web dynamics: a typical Web log entry includes the URL requested, the IP address from which the request originated, and a timestamp.

Web Usage Mining

Understanding is a prerequisite to improvement: there is 1 Google, but 70,000,000+ web sites. What applications?

Simple and basic:
- Monitor performance, bandwidth usage
- Catch errors (404 errors, pages not found)
- Improve web site design (shortcuts for frequent paths, remove unused links, etc.)

Advanced and business-critical:
- eCommerce: improve conversion, sales, profit
- Fraud detection: click stream fraud, ...

Techniques for Web usage mining

- Construct a multidimensional view on the Web log database and perform multidimensional OLAP analysis to find the top N users, the top N accessed Web pages, the most frequently accessed time periods, etc.
- Perform data mining on Web log records to find association patterns, sequential patterns, and trends of Web accessing; this may need additional information, e.g., user browsing sequences of the Web pages in the Web server buffer
- Conduct studies to analyze system performance and improve system design by Web caching, Web page prefetching, and Web page swapping
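A basic sketch of the starting point for such analyses: parse Common Log Format lines and count page popularity and not-found errors. The sample log lines are fabricated for illustration:

```python
# Minimal Web-log mining: extract URL and status from Common Log Format lines.
import re
from collections import Counter

LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?:GET|POST) (?P<url>\S+)[^"]*" (?P<status>\d{3})')

logs = [
    '10.0.0.1 - - [10/Feb/2006:13:55:36 +0100] "GET /index.html HTTP/1.0" 200',
    '10.0.0.2 - - [10/Feb/2006:13:56:01 +0100] "GET /courses/dm.html HTTP/1.0" 200',
    '10.0.0.1 - - [10/Feb/2006:13:57:12 +0100] "GET /index.html HTTP/1.0" 200',
    '10.0.0.3 - - [10/Feb/2006:13:58:40 +0100] "GET /missing.html HTTP/1.0" 404',
]

hits = Counter(m["url"] for line in logs if (m := LOG_RE.match(line)))
errors = sum(1 for line in logs if (m := LOG_RE.match(line)) and m["status"] == "404")
print(hits.most_common(2), f"{errors} not-found errors")
```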


References

- Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "Extracting Content Structure for Web Pages Based on Visual Representation", The Fifth Asia Pacific Web Conference, 2003.
- Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "VIPS: A Vision-based Page Segmentation Algorithm", Microsoft Technical Report MSR-TR-2003-79, 2003.
- Shipeng Yu, Deng Cai, Ji-Rong Wen and Wei-Ying Ma, "Improving Pseudo-Relevance Feedback in Web Information Retrieval Using Web Page Segmentation", 12th International World Wide Web Conference (WWW2003), May 2003.
- Ruihua Song, Haifeng Liu, Ji-Rong Wen and Wei-Ying Ma, "Learning Block Importance Models for Web Pages", 13th International World Wide Web Conference (WWW2004), May 2004.
- Deng Cai, Shipeng Yu, Ji-Rong Wen and Wei-Ying Ma, "Block-based Web Search", SIGIR 2004, July 2004.
- Deng Cai, Xiaofei He, Ji-Rong Wen and Wei-Ying Ma, "Block-Level Link Analysis", SIGIR 2004, July 2004.
- Deng Cai, Xiaofei He, Wei-Ying Ma, Ji-Rong Wen and Hong-Jiang Zhang, "Organizing WWW Images Based on the Analysis of Page Layout and Web Link Structure", IEEE International Conference on Multimedia and Expo (ICME 2004), June 2004.
- Deng Cai, Xiaofei He, Zhiwei Li, Wei-Ying Ma and Ji-Rong Wen, "Hierarchical Clustering of WWW Image Search Results Using Visual, Textual and Link Analysis", 12th ACM International Conference on Multimedia, October 2004.
