
Data Mining Cluster Analysis: Basic Concepts and Algorithms

by

Tan, Steinbach, Kumar

What is Cluster Analysis?

● Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups

– Intra-cluster distances are minimized
– Inter-cluster distances are maximized

Applications of Cluster Analysis

● Understanding
– Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations

Example: discovered clusters of stocks with similar price movements, and the corresponding industry group:
1. Applied-Matl-DOWN, Bay-Network-Down, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-Down, Tellabs-Inc-Down, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN → Technology1-DOWN
2. Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN → Technology2-DOWN
3. Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN → Financial-DOWN
4. Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP → Oil-UP

● Summarization
– Reduce the size of large data sets (e.g., clustering precipitation in Australia)

What is not Cluster Analysis?

● Supervised classification

– Have class label information

● Simple segmentation

– Dividing students into different registration groups

alphabetically, by last name

● Results of a query

– Groupings are a result of an external specification

● Graph partitioning

– Some mutual relevance and synergy, but areas are not

identical

Notion of a Cluster can be Ambiguous

Types of Clusterings

● A clustering is a set of clusters

● Important distinction between hierarchical and partitional sets of clusters

● Partitional Clustering

– A division of data objects into non-overlapping subsets (clusters)

such that each data object is in exactly one subset

● Hierarchical clustering

– A set of nested clusters organized as a hierarchical tree

Partitional Clustering

[Figure: a partitional clustering of a set of points.]

Hierarchical Clustering

[Figure: a traditional hierarchical clustering of points p1–p4 with its traditional dendrogram, and a non-traditional hierarchical clustering with its non-traditional dendrogram.]

Other Distinctions Between Sets of Clusters

● Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or ‘border’ points

● Fuzzy versus non-fuzzy

– In fuzzy clustering, a point belongs to every cluster with some

weight between 0 and 1

– Weights must sum to 1

– Probabilistic clustering has similar characteristics

● Partial versus complete

– In some cases, we only want to cluster some of the data

● Heterogeneous versus homogeneous

– Clusters of widely different sizes, shapes, and densities

Types of Clusters

● Well-separated clusters

● Center-based clusters

● Contiguous clusters

● Density-based clusters

● Property or Conceptual


Types of Clusters: Well-Separated

● Well-Separated Clusters:

– A cluster is a set of points such that any point in a cluster is

closer (or more similar) to every other point in the cluster than

to any point not in the cluster.

3 well-separated clusters

Types of Clusters: Center-Based

● Center-based

– A cluster is a set of objects such that an object in a cluster is

closer (more similar) to the “center” of a cluster, than to the

center of any other cluster

– The center of a cluster is often a centroid, the average of all

the points in the cluster, or a medoid, the most “representative”

point of a cluster

4 center-based clusters

Types of Clusters: Contiguity-Based

● Contiguous Cluster (Nearest neighbor or Transitive)

– A cluster is a set of points such that a point in a cluster is

closer (or more similar) to one or more other points in the

cluster than to any point not in the cluster.

8 contiguous clusters

Types of Clusters: Density-Based

● Density-based

– A cluster is a dense region of points, which is separated by

low-density regions, from other regions of high density.

– Used when the clusters are irregular or intertwined, and when

noise and outliers are present.

6 density-based clusters

Types of Clusters: Conceptual Clusters

● Shared Property or Conceptual Clusters
– Finds clusters that share some common property or represent a particular concept.

[Figure: 2 overlapping circles]

Types of Clusters: Objective Function

● Clusters defined by an objective function
– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters and evaluate the 'goodness' of each potential set of clusters by using the given objective function. (NP-hard)

– Can have global or local objectives.

Hierarchical clustering algorithms typically have local objectives

Partitional algorithms typically have global objectives

– A variation of the global objective function approach is to fit the

data to a parameterized model.

Parameters for the model are determined from the data.

Mixture models assume that the data is a ‘mixture' of a number of

statistical distributions.

Types of Clusters: Objective Function …

● Map the clustering problem to a different domain and solve a related problem in that domain
– Proximity matrix defines a weighted graph, where the nodes are the points being clustered, and the weighted edges represent the proximities between points
– Clustering is then equivalent to breaking the graph into connected components, one for each cluster.
– Want to minimize the edge weight between clusters and maximize the edge weight within clusters

Characteristics of the Input Data Are Important

● Type of proximity or density measure
– This is a derived measure, but central to clustering

● Sparseness

– Dictates type of similarity

– Adds to efficiency

● Attribute type

– Dictates type of similarity

● Type of Data

– Dictates type of similarity

– Other characteristics, e.g., autocorrelation

● Dimensionality

● Noise and Outliers

● Type of Distribution

Clustering Algorithms

● K-means and its variants

● Hierarchical clustering

● Density-based clustering

K-means Clustering

● Each cluster is associated with a centroid (center point)

● Each point is assigned to the cluster with the closest

centroid

● Number of clusters, K, must be specified

● The basic algorithm is very simple: select K points as the initial centroids; repeatedly form K clusters by assigning each point to its closest centroid and then recompute the centroid of each cluster, until the centroids do not change.

K-means Clustering – Details

● Initial centroids are often chosen randomly.

– Clusters produced vary from one run to another.

● The centroid is (typically) the mean of the points in the

cluster.

● ‘Closeness’ is measured by Euclidean distance, cosine

similarity, correlation, etc.

● K-means will converge for common similarity measures

mentioned above.

● Most of the convergence happens in the first few

iterations.

– Often the stopping condition is changed to ‘Until relatively few

points change clusters’

● Complexity is O( n * K * I * d )

– n = number of points, K = number of clusters,

I = number of iterations, d = number of attributes
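To make this concrete, here is a minimal NumPy sketch of the basic algorithm as just described (an illustrative implementation, not the authors' code): random initial centroids, Euclidean closeness, and iteration until the centroids stop changing.

```python
import numpy as np

def kmeans(points, k, max_iters=100, seed=0):
    """Minimal K-means: random initial centroids, Euclidean distance."""
    rng = np.random.default_rng(seed)
    # Choose K distinct data points as the initial centroids (random initialization).
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # Assign each point to the closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of the points assigned to it.
        new_centroids = np.array([
            points[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # stop when centroids no longer change
            break
        centroids = new_centroids
    return labels, centroids
```

Each iteration touches every point, centroid, and attribute once, which is where the O( n * K * I * d ) complexity above comes from.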

Two different K-means Clusterings

[Figure: the same set of original points clustered in two different ways by K-means; different runs can produce different clusterings.]

Importance of Choosing Initial Centroids

[Figure: snapshots of iterations 1–6 of K-means on the sample data, starting from one choice of initial centroids.]

Importance of Choosing Initial Centroids

[Figure: the same six iterations shown as a grid of panels, iterations 1–3 in the top row and 4–6 in the bottom row.]

Evaluating K-means Clusters

● Most common measure is Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster
– To get SSE, we square these errors and sum them:

SSE = ∑i=1..K ∑x∈Ci dist²(mi, x)

– x is a data point in cluster Ci and mi is the representative point (centroid) of cluster Ci
– Can show that mi corresponds to the center (mean) of the cluster
– Given two clusterings, we can choose the one with the smallest error
– One easy way to reduce SSE is to increase K, the number of clusters
– A good clustering with smaller K can have a lower SSE than a poor clustering with higher K
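SSE is straightforward to compute for any clustering; the short sketch below reuses the hypothetical `kmeans` helper from the earlier sketch and standard NumPy.

```python
import numpy as np

def sse(points, labels, centroids):
    """Sum of squared Euclidean distances from each point to its cluster centroid."""
    return sum(
        np.sum((points[labels == i] - c) ** 2)
        for i, c in enumerate(centroids)
    )

# Example: pick the better of two K-means runs by SSE.
# labels1, cents1 = kmeans(points, 3, seed=1)
# labels2, cents2 = kmeans(points, 3, seed=2)
# better_run = 1 if sse(points, labels1, cents1) < sse(points, labels2, cents2) else 2
```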

Importance of Choosing Initial Centroids …

[Figure: iterations 1–5 of K-means starting from a different choice of initial centroids.]

Importance of Choosing Initial Centroids …

[Figure: the same run shown as a grid of panels, iterations 1–5.]

Problems with Selecting Initial Points

● If there are K 'real' clusters, then the chance of selecting one centroid from each cluster is small.
– Chance is relatively small when K is large
– If clusters are the same size, n, then the probability of selecting one centroid from each cluster is

P = (number of ways to select one centroid from each cluster) / (number of ways to select K centroids) = K! n^K / (Kn)^K = K!/K^K

– For example, if K = 10, then probability = 10!/10^10 ≈ 0.00036
– Sometimes the initial centroids will readjust themselves in the 'right' way, and sometimes they don't
– Consider an example of five pairs of clusters

10 Clusters Example

[Figure: iterations 1–4 of K-means on five pairs of clusters.]

Starting with two initial centroids in one cluster of each pair of clusters

10 Clusters Example

[Figure: the same run shown as a grid of panels, iterations 1–4.]

Starting with two initial centroids in one cluster of each pair of clusters

10 Clusters Example

[Figure: iterations 1–4 of K-means on the five pairs of clusters.]

Starting with some pairs of clusters having three initial centroids, while others have only one.

10 Clusters Example

[Figure: the same run shown as a grid of panels, iterations 1–4.]

Starting with some pairs of clusters having three initial centroids, while others have only one.

Solutions to Initial Centroids Problem

● Multiple runs
– Helps, but probability is not on your side (a short sketch of this strategy follows this list)

● Sample and use hierarchical clustering to

determine initial centroids

● Select more than k initial centroids and then

select among these initial centroids

– Select most widely separated

● Postprocessing

● Bisecting K-means

– Not as susceptible to initialization issues
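The sketch referenced above: a hedged illustration of the 'multiple runs' strategy, reusing the hypothetical `kmeans` and `sse` helpers from the earlier sketches. Run K-means with several random seeds and keep the clustering with the lowest SSE.

```python
def kmeans_multiple_runs(points, k, n_runs=10):
    """Run K-means n_runs times and keep the result with the smallest SSE."""
    best_err, best_labels, best_centroids = None, None, None
    for seed in range(n_runs):
        labels, centroids = kmeans(points, k, seed=seed)
        err = sse(points, labels, centroids)
        if best_err is None or err < best_err:
            best_err, best_labels, best_centroids = err, labels, centroids
    return best_labels, best_centroids
```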

Handling Empty Clusters

● The basic K-means algorithm can yield empty clusters

● Several strategies

– Choose the point that contributes most to SSE

– Choose a point from the cluster with the highest SSE

– If there are several empty clusters, the above can be

repeated several times.

Updating Centers Incrementally

● In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid

● An alternative is to update the centroids after each assignment (incremental approach)

– Each assignment updates zero or two centroids

– More expensive

– Introduces an order dependency

– Never get an empty cluster

– Can use “weights” to change the impact

Pre-processing and Post-processing

● Pre-processing

– Normalize the data

– Eliminate outliers

● Post-processing

– Eliminate small clusters that may represent outliers

– Split ‘loose’ clusters, i.e., clusters with relatively high

SSE

– Merge clusters that are ‘close’ and that have relatively

low SSE

– Can use these steps during the clustering process (e.g., ISODATA)

Bisecting K-means

– Variant of K-means that can produce a partitional or a

hierarchical clustering

Bisecting K-means Example
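The example figure did not survive extraction; as a substitute, here is a minimal sketch of the bisecting idea. The selection rule used here (split the cluster with the largest SSE next) is one common choice, and the hypothetical `kmeans` helper from the earlier sketch performs each bisection.

```python
import numpy as np

def bisecting_kmeans(points, k):
    """Repeatedly split one cluster with 2-means until k clusters remain."""
    clusters = [np.arange(len(points))]          # start with one all-inclusive cluster

    def cluster_sse(idx):
        center = points[idx].mean(axis=0)
        return np.sum((points[idx] - center) ** 2)

    while len(clusters) < k:
        # Pick the cluster with the largest SSE to split next (one common choice).
        worst_pos = max(range(len(clusters)), key=lambda i: cluster_sse(clusters[i]))
        worst = clusters.pop(worst_pos)
        labels, _ = kmeans(points[worst], 2)     # bisect with basic 2-means
        clusters.append(worst[labels == 0])
        clusters.append(worst[labels == 1])
    return clusters                              # list of index arrays, one per cluster
```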

Limitations of K-means

● K-means has problems when clusters are of differing
– Sizes
– Densities
– Non-globular shapes

● K-means has problems when the data contains outliers.

Limitations of K-means: Differing Sizes

Limitations of K-means: Differing Density

Limitations of K-means: Non-globular Shapes

Overcoming K-means Limitations

● One solution is to use many clusters: find parts of clusters, but they then need to be put together.

Overcoming K-means Limitations

[Figures: the same many-clusters approach applied to the differing-density and non-globular examples.]

Hierarchical Clustering

● Produces a set of nested clusters organized as a hierarchical tree

● Can be visualized as a dendrogram
– A tree-like diagram that records the sequences of merges or splits

[Figure: a nested clustering of six points and the corresponding dendrogram.]

Strengths of Hierarchical Clustering

● Do not have to assume any particular number of clusters
– Any desired number of clusters can be obtained by 'cutting' the dendrogram at the proper level

● They may correspond to meaningful taxonomies
– Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)

Hierarchical Clustering

● Two main types of hierarchical clustering
– Agglomerative:
Start with the points as individual clusters
At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left

– Divisive:
Start with one, all-inclusive cluster
At each step, split a cluster until each cluster contains a point (or there are k clusters)

● Traditional hierarchical algorithms use a similarity or distance matrix
– Merge or split one cluster at a time

Agglomerative Clustering Algorithm

● Basic algorithm is straightforward
– Compute the proximity matrix
– Let each data point be a cluster
– Repeat
– Merge the two closest clusters
– Update the proximity matrix
– Until only a single cluster remains

● Key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms
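For readers who want to try agglomerative clustering directly, SciPy's hierarchical clustering routines implement exactly this loop; the data below is made up, and the `method` argument selects the inter-cluster proximity discussed on the following slides.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

# Toy 2-D data (made-up example).
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0],
                   [1.0, 0.9], [5.0, 5.0], [5.1, 4.9]])

# Agglomerative clustering: 'single' = MIN, 'complete' = MAX,
# 'average' = group average, 'ward' = Ward's method.
Z = linkage(points, method='single')

# Cut the resulting dendrogram to obtain a desired number of clusters.
labels = fcluster(Z, t=3, criterion='maxclust')
print(labels)
# dendrogram(Z) plots the sequence of merges (requires matplotlib).
```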

Starting Situation

● Start with clusters of individual points and a proximity matrix

[Figure: individual points p1, p2, …, p12 and the corresponding proximity matrix with one row and one column per point.]

Intermediate Situation

● After some merging steps, we have some clusters

[Figure: five clusters C1–C5, their proximity matrix, and the underlying points.]

Intermediate Situation

● We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.

[Figure: clusters C1–C5, their proximity matrix, and the two clusters about to be merged.]

After Merging

● The question is "How do we update the proximity matrix?"

[Figure: clusters C1, C2 ∪ C5, C3, and C4; the proximity-matrix entries involving the merged cluster C2 ∪ C5 are marked "?" because they must be recomputed.]

How to Define Inter-Cluster Similarity

[Figure: two clusters of points and the proximity matrix; which entries determine the similarity between the two clusters?]

● MIN
● MAX
● Group Average
● Distance Between Centroids
● Other methods driven by an objective function
– Ward's Method uses squared error


Cluster Similarity: MIN or Single Link

● Similarity of two clusters is based on the two most similar (closest) points in the different clusters
– Determined by one pair of points, i.e., by one link in the proximity graph.

Example proximity matrix:
     I1    I2    I3    I4    I5
I1  1.00  0.90  0.10  0.65  0.20
I2  0.90  1.00  0.70  0.60  0.50
I3  0.10  0.70  1.00  0.40  0.30
I4  0.65  0.60  0.40  1.00  0.80
I5  0.20  0.50  0.30  0.80  1.00

Hierarchical Clustering: MIN

[Figure: nested clusters of the six example points and the dendrogram produced by single link.]

Strength of MIN
– Can handle non-elliptical shapes

Limitations of MIN
– Sensitive to noise and outliers

Cluster Similarity: MAX or Complete Linkage

● Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
– Determined by all pairs of points in the two clusters

(Same example proximity matrix I1–I5 as in the MIN slide.)

Hierarchical Clustering: MAX

[Figure: nested clusters of the six example points and the dendrogram produced by complete link.]

Strength of MAX
– Less susceptible to noise and outliers

Limitations of MAX
– Tends to break large clusters
– Biased towards globular clusters

Cluster Similarity: Group Average

● Proximity of two clusters is the average of pairwise proximity between points in the two clusters.

proximity(Clusteri, Clusterj) = ( ∑pi∈Clusteri ∑pj∈Clusterj proximity(pi, pj) ) / ( |Clusteri| ∗ |Clusterj| )

● Need to use average connectivity for scalability, since total proximity favors large clusters

(Same example proximity matrix I1–I5 as in the MIN slide.)

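A small illustrative sketch of the group-average computation using the proximity matrix above (0-based indices; this code is not from the slides):

```python
import numpy as np

# Proximity (similarity) matrix for items I1..I5 from the slide.
P = np.array([
    [1.00, 0.90, 0.10, 0.65, 0.20],
    [0.90, 1.00, 0.70, 0.60, 0.50],
    [0.10, 0.70, 1.00, 0.40, 0.30],
    [0.65, 0.60, 0.40, 1.00, 0.80],
    [0.20, 0.50, 0.30, 0.80, 1.00],
])

def group_average(cluster_i, cluster_j):
    """Average pairwise proximity between two clusters (lists of item indices)."""
    total = sum(P[a, b] for a in cluster_i for b in cluster_j)
    return total / (len(cluster_i) * len(cluster_j))

# Example: proximity between cluster {I1, I2} and cluster {I4, I5}.
print(group_average([0, 1], [3, 4]))   # (0.65 + 0.20 + 0.60 + 0.50) / 4 = 0.4875
```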

Hierarchical Clustering: Group Average

[Figure: nested clusters of the six example points and the dendrogram produced by group average.]

Hierarchical Clustering: Group Average

● Compromise between Single and Complete Link

● Strengths
– Less susceptible to noise and outliers

● Limitations
– Biased towards globular clusters

Cluster Similarity: Ward's Method

● Similarity of two clusters is based on the increase in squared error when two clusters are merged
– Similar to group average if distance between points is distance squared

● Hierarchical analogue of K-means
– Can be used to initialize K-means

Hierarchical Clustering: Comparison

[Figure: the nested clusterings produced by MIN, MAX, Group Average, and Ward's Method on the same six points, shown side by side.]

Hierarchical Clustering: Time and Space Requirements

● O(N²) space, since it uses the proximity matrix.
– N is the number of points.

● O(N³) time in many cases
– There are N steps, and at each step the size-N² proximity matrix must be updated and searched
– Complexity can be reduced to O(N² log(N)) time for some approaches

Hierarchical Clustering: Problems and Limitations

● Once a decision is made to combine two clusters, it cannot be undone

● Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling different-sized clusters and convex shapes
– Breaking large clusters

MST: Divisive Hierarchical Clustering

● Build MST (Minimum Spanning Tree)
– Start with a tree that consists of any point
– In successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not
– Add q to the tree and put an edge between p and q
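The step described above is essentially Prim's algorithm for building the MST; a minimal sketch over a pairwise distance matrix (illustrative code, not from the slides):

```python
import numpy as np

def mst_edges(dist):
    """Prim-style MST construction; dist is an n x n symmetric distance matrix."""
    n = len(dist)
    in_tree = {0}                      # start the tree with an arbitrary point
    edges = []
    while len(in_tree) < n:
        # Closest pair (p, q) with p in the current tree and q outside it.
        p, q = min(((p, q) for p in in_tree for q in range(n) if q not in in_tree),
                   key=lambda e: dist[e[0], e[1]])
        edges.append((p, q, dist[p, q]))
        in_tree.add(q)                 # add q to the tree with an edge between p and q
    return edges
```

A divisive hierarchy can then be read off by repeatedly removing the largest remaining edge of the MST, splitting one cluster at a time.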

MST: Divisive Hierarchical Clustering

DBSCAN

● DBSCAN is a density-based algorithm.
– Density = number of points within a specified radius (Eps)

– A point is a core point if it has more than a specified number of points (MinPts) within Eps
These are points that are at the interior of a cluster

– A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point

– A noise point is any point that is not a core point or a border point.

DBSCAN: Core, Border, and Noise Points

DBSCAN Algorithm

● Eliminate noise points
● Perform clustering on the remaining points
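scikit-learn's DBSCAN exposes exactly the two parameters discussed here (eps and min_samples, i.e., Eps and MinPts); a brief illustrative usage on made-up data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two dense blobs plus a few scattered points (made-up data).
points = np.vstack([
    rng.normal([0.0, 0.0], 0.1, size=(50, 2)),
    rng.normal([3.0, 3.0], 0.1, size=(50, 2)),
    rng.uniform(-1.0, 4.0, size=(5, 2)),
])

db = DBSCAN(eps=0.3, min_samples=4).fit(points)
labels = db.labels_        # cluster index per point; -1 marks noise points
print(sorted(set(labels)))
```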

DBSCAN: Core, Border and Noise Points

[Figure: the sample points labeled as core, border, and noise.]

When DBSCAN Works Well

[Figure: original points and the clusters found by DBSCAN.]

• Resistant to Noise
• Can handle clusters of different shapes and sizes

When DBSCAN Does NOT Work Well

[Figure: original points and the DBSCAN results for (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92).]

• Varying densities
• High-dimensional data

DBSCAN: Determining Eps and MinPts

● Idea is that for points in a cluster, their kth nearest neighbors are at roughly the same distance

● Noise points have the kth nearest neighbor at a farther distance

● So, plot the sorted distance of every point to its kth nearest neighbor
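A quick sketch of this k-distance plot using scikit-learn's NearestNeighbors (k = 4 here mirrors MinPts = 4 from the earlier slides; matplotlib is assumed for plotting):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

def k_distance_plot(points, k=4):
    """Plot every point's distance to its kth nearest neighbor, sorted."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)  # +1: the nearest neighbor is the point itself
    dists, _ = nn.kneighbors(points)
    kth = np.sort(dists[:, k])         # distance to the kth nearest neighbor
    plt.plot(kth)
    plt.xlabel("points sorted by k-distance")
    plt.ylabel(f"distance to {k}th nearest neighbor")
    plt.show()
    # A reasonable Eps is typically chosen near the 'knee' of this curve.
```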

Cluster Validity

● For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall

● For cluster analysis, the analogous question is: how do we evaluate the "goodness" of the resulting clusters?

● Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters

Clusters found in Random Data

[Figure: random points and the 'clusters' found in them by DBSCAN, K-means, and complete link.]

Different Aspects of Cluster Validation

1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
– Use only the data
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the 'correct' number of clusters.

For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

Measures of Cluster Validity

● Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types.

– External Index: Used to measure the extent to which cluster labels

match externally supplied class labels.

Entropy

– Internal Index: Used to measure the goodness of a clustering

structure without respect to external information.

Sum of Squared Error (SSE)

– Relative Index: Used to compare two different clusterings or

clusters.

Often an external or internal index is used for this function, e.g., SSE or entropy

● Sometimes these are referred to as criteria instead of indices

– However, sometimes criterion is the general strategy and index is the

numerical measure that implements the criterion.

Measuring Cluster Validity Via Correlation

● Two matrices
– Proximity Matrix
– "Incidence" Matrix
One row and one column for each data point
An entry is 1 if the associated pair of points belongs to the same cluster
An entry is 0 if the associated pair of points belongs to different clusters

● Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between n(n−1)/2 entries needs to be calculated.

● High correlation indicates that points that belong to the same cluster are close to each other.

● Not a good measure for some density- or contiguity-based clusters.
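A hedged sketch of this computation (distance is used as the proximity here, so a good clustering shows up as a strongly negative correlation; data and names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

def incidence_proximity_correlation(points, labels):
    """Correlation between the incidence matrix and the (distance) proximity matrix.

    labels is a 1-D NumPy array of cluster indices, one per point.
    """
    n = len(points)
    prox = squareform(pdist(points))                          # pairwise distances
    incidence = (labels[:, None] == labels[None, :]).astype(float)
    iu = np.triu_indices(n, k=1)                              # the n(n-1)/2 distinct pairs
    r, _ = pearsonr(incidence[iu], prox[iu])
    return r
```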

Measuring Cluster Validity Via Correlation

● Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.

[Figure: a data set with well-separated clusters and a data set of random points, with the corresponding correlations.]

Using Similarity Matrix for Cluster Validation

● Order the similarity matrix with respect to cluster labels and inspect visually.

[Figure: well-separated points and their similarity matrix sorted by cluster label; clusters appear as blocks along the diagonal.]
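A minimal sketch of this visual check (illustrative; assumes matplotlib and any `points`/`labels` pair from the earlier sketches):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist, squareform

def plot_sorted_similarity(points, labels):
    """Show the pairwise similarity matrix with rows/columns ordered by cluster label."""
    order = np.argsort(labels)
    dist = squareform(pdist(points[order]))
    sim = 1.0 - dist / dist.max()      # simple distance-to-similarity conversion
    plt.imshow(sim, cmap="jet")
    plt.colorbar(label="similarity")
    plt.show()
    # Well-separated clusters appear as bright blocks along the diagonal.
```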

Using Similarity Matrix for Cluster Validation

[Figures: clusters found in random data by DBSCAN, K-means, and complete link, each shown with its similarity matrix sorted by cluster label.]

Using Similarity Matrix for Cluster Validation

[Figure: DBSCAN clusters (labeled 1–7) in a more complex data set and the corresponding sorted similarity matrix.]

Internal Measures: SSE

● Clusters in more complicated figures aren't well separated

● Internal Index: Used to measure the goodness of a clustering structure without respect to external information
– SSE

● SSE is good for comparing two clusterings or two clusters (average SSE).

● Can also be used to estimate the number of clusters

[Figure: an example data set and the curve of SSE versus the number of clusters K.]

Internal Measures: SSE

[Figure: the same SSE analysis for a more complicated data set.]

Framework for Cluster Validity

● Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?

● Statistics provide a framework for cluster validity

– The more “atypical” a clustering result is, the more likely it represents

valid structure in the data

– Can compare the values of an index that result from random data or

clusterings to those of a clustering result.

If the value of the index is unlikely, then the cluster results are valid

– These approaches are more complicated and harder to understand.

● For comparing the results of two different sets of cluster

analyses, a framework is less necessary.

– However, there is the question of whether the difference between two

index values is significant

Statistical Framework for SSE

● Example

– Compare SSE of 0.005 against three clusters in random data

– Histogram shows SSE of three clusters in 500 sets of random data

points of size 100 distributed over the range 0.2 – 0.8 for x and y

values
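A sketch of how such a reference distribution could be generated (parameters follow the slide: 500 random data sets of 100 points over 0.2 – 0.8; the `kmeans` and `sse` helpers are the hypothetical ones sketched earlier):

```python
import numpy as np

rng = np.random.default_rng(0)
random_sses = []
for _ in range(500):
    # 100 random points uniformly distributed over 0.2 - 0.8 for x and y.
    data = rng.uniform(0.2, 0.8, size=(100, 2))
    labels, centroids = kmeans(data, 3)
    random_sses.append(sse(data, labels, centroids))

# An observed SSE far below this distribution is unlikely to arise from
# structureless data, so the corresponding clustering is probably valid.
print(min(random_sses), np.mean(random_sses))
```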

[Figure: one random data set with the three clusters found by K-means, and the histogram of SSE values over the 500 random data sets.]

Statistical Framework for Correlation

● Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets.

[Figure: the well-separated data set and the random data set used earlier, with their respective correlations.]

Internal Measures: Cohesion and Separation

● Cluster Cohesion: Measures how closely related are the objects in a cluster
– Example: SSE

● Cluster Separation: Measures how distinct or well-separated a cluster is from other clusters

● Example: Squared Error
– Cohesion is measured by the within-cluster sum of squares (SSE)

WSS = ∑i ∑x∈Ci (x − mi)²

– Separation is measured by the between-cluster sum of squares

BSS = ∑i |Ci| (m − mi)²

– where |Ci| is the size of cluster i, mi is its centroid, and m is the overall mean
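Both quantities are easy to compute directly; a short illustrative NumPy sketch:

```python
import numpy as np

def wss_bss(points, labels):
    """Within-cluster (WSS) and between-cluster (BSS) sums of squares."""
    m = points.mean(axis=0)                      # overall mean of the data
    wss = bss = 0.0
    for i in np.unique(labels):
        cluster = points[labels == i]
        mi = cluster.mean(axis=0)                # centroid of cluster i
        wss += np.sum((cluster - mi) ** 2)
        bss += len(cluster) * np.sum((m - mi) ** 2)
    return wss, bss                              # wss + bss = total sum of squares (constant)
```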


Internal Measures: Cohesion and Separation

● Example: SSE
– BSS + WSS = constant

Data: the points 1, 2, 4, 5 on a line, with overall mean m = 3.

K = 1 cluster (centroid m = 3):
WSS = (1 − 3)² + (2 − 3)² + (4 − 3)² + (5 − 3)² = 10
BSS = 4 × (3 − 3)² = 0
Total = 10 + 0 = 10

K = 2 clusters (centroids m1 = 1.5 and m2 = 4.5):
WSS = (1 − 1.5)² + (2 − 1.5)² + (4 − 4.5)² + (5 − 4.5)² = 1
BSS = 2 × (3 − 1.5)² + 2 × (4.5 − 3)² = 9
Total = 1 + 9 = 10

Internal Measures: Cohesion and Separation

● A proximity graph-based approach can also be used for cohesion and separation.
– Cluster cohesion is the sum of the weight of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster.

[Figure: a small graph illustrating cohesion (edges within a cluster) and separation (edges between clusters).]

Internal Measures: Silhouette Coefficient

● Silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings

● For an individual point, i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for a point is then given by

s = 1 − a/b   (when a < b; equivalently s = (b − a) / max(a, b))

– Typically between 0 and 1.
– The closer to 1 the better.

● Can calculate the average silhouette width for a cluster or a clustering
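scikit-learn computes this directly; a brief illustrative usage (any `points`/`labels` pair from the earlier sketches will do):

```python
from sklearn.metrics import silhouette_score, silhouette_samples

overall = silhouette_score(points, labels)       # average silhouette width of the clustering
per_point = silhouette_samples(points, labels)   # silhouette coefficient of each point
print(overall, per_point.min(), per_point.max())
```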


External Measures of Cluster Validity: Entropy and Purity

Final Comment on Cluster Validity

"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage."
(Algorithms for Clustering Data, Jain and Dubes)
