
Marielle Caccam

Jewel Refran
 Cluster analysis is a group of multivariate techniques whose primary
purpose is to group objects (e.g., respondents, products, or other
entities) based on the characteristics they possess.
 It is a means of grouping records based upon attributes that make
them similar. If plotted geometrically, the objects within a cluster
will be close together, while objects in different clusters will be
farther apart.

* Cluster Variate
- the mathematical representation of the selected set of variables on
which the objects' similarities are compared.
Cluster Analysis vs. Factor Analysis
- Cluster Analysis: grouping is based on distance (proximity).
- Factor Analysis: grouping is based on patterns of variation (correlation).

In factor analysis, we form groups of variables based on several
people's responses to those variables.
In contrast, in cluster analysis we group people based on their
responses to several variables.
Cluster Analysis vs. Discriminant Analysis
- Cluster Analysis: you don't know who or what belongs to which group, or even the number of groups.
- Discriminant Analysis: requires you to know group membership for the cases used to derive the classification rule.
 Field of psychiatry - where the characterization of patients on the basis of
clusters of symptoms can be useful in the identification of an appropriate
form of therapy.

 Biology - used to find groups of genes that have similar functions.

 Information Retrieval - The World Wide Web consists of billions of Web
pages, and the results of a query to a search engine can return thousands of
pages. Clustering can be used to group these search results into a small
number of clusters, each of which captures a particular aspect of the query.
For instance, a query of "movie" might return Web pages grouped into
categories such as reviews, trailers, stars, and theaters. Each category
(cluster) can be broken into subcategories (sub-clusters), producing a
hierarchical structure that further assists a user's exploration of the query
results.

 Climate - Understanding the Earth's climate requires finding patterns in the
atmosphere and ocean. To that end, cluster analysis has been applied to
find patterns in the atmospheric pressure of polar regions and areas of the
ocean that have a significant impact on land climate.
 Data Reduction
- A researcher may be faced with a large number of
observations that can be meaningless unless classified
into manageable groups. CA can perform this data
reduction procedure objectively by reducing the information
from an entire population or sample to information about
specific groups.

 Hypothesis Generation
- Cluster analysis is also useful when a researcher
wishes to develop hypotheses concerning the nature of
the data or to examine previously stated hypotheses.
 Cluster analysis is descriptive, atheoretical, and noninferential. Cluster analysis has no
statistical basis upon which to draw inferences from a sample to a population, and many
contend that it is only an exploratory technique. Nothing guarantees unique solutions,
because the cluster membership for any number of solutions is dependent upon many
elements of the procedure, and many different solutions can be obtained by varying one or
more elements.
 Cluster analysis will always create clusters, regardless of the actual existence of any
structure in the data. When using cluster analysis, the researcher is making an assumption
of some structure among the objects. The researcher should always remember that just
because clusters can be found does not validate their existence. Only with strong
conceptual support and then validation are the clusters potentially meaningful and relevant.
 The cluster solution is not generalizable because it is totally dependent upon the variables
used as the basis for the similarity measure. This criticism can be made against any
statistical technique, but cluster analysis is generally considered more dependent on the
measures used to characterize the objects than other multivariate techniques, with the
cluster variate completely specified by the researcher. As a result, the researcher must be
especially cognizant of the variables used in the analysis, ensuring that they have strong
conceptual support.
 Cluster analysis is used for:
◦ Taxonomy Description. Identifying groups within the data.
◦ Data Simplification. The ability to analyze groups of similar
observations instead of all individual observations.
◦ Relationship Identification. The simplified structure from CA
portrays relationships not revealed otherwise.
 Theoretical, conceptual, and practical considerations must be
observed when selecting clustering variables for CA:
◦ Only variables that relate specifically to the objectives of the CA are
included.
◦ The variables selected characterize the individuals (objects) being
clustered.
The primary objective of cluster analysis is to define the
structure of the data by placing the most similar
observations into groups. To accomplish this task, we must
address three basic questions:

◦ How do we measure similarity?


◦ How do we form clusters?
◦ How many groups do we form?
 Similarity represents the degree of correspondence among objects
across all of the characteristics used in the analysis. It is a set of
rules that serve as criteria for grouping or separating items.

◦ Correlational measures.
- Less frequently used; here, large values of r indicate
similarity.

◦ Distance measures.
- Most often used as a measure of similarity, with higher values
representing greater dissimilarity (distance between cases), not
similarity.
[Figure: Graphs 1 and 2 plot scores (0-7) across four categories; Graph 1 represents a higher level of similarity.]
 Both graphs have the same r = 1, which implies that they follow the
same pattern, but the distances (d's) are not equal.
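A minimal sketch (NumPy and SciPy assumed) of this point, using hypothetical profiles: two profiles can correlate perfectly with a reference (r = 1, same pattern) while lying at very different Euclidean distances from it.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import euclidean

reference = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical scores on four categories
profile_1 = np.array([2.0, 3.0, 4.0, 5.0])   # same pattern, shifted up by 1
profile_2 = np.array([4.0, 5.0, 6.0, 7.0])   # same pattern, shifted up by 3

print(pearsonr(reference, profile_1)[0])     # 1.0 -> identical pattern
print(pearsonr(reference, profile_2)[0])     # 1.0 -> identical pattern
print(euclidean(reference, profile_1))       # 2.0 -> close to the reference
print(euclidean(reference, profile_2))       # 6.0 -> much farther away
```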
- several distance measures are available, each with specific
characteristics.

 Euclidean distance. The most commonly recognized measure, often
referred to as straight-line distance.

 Squared Euclidean distance. The sum of the squared differences


without taking the square root.

 City-block (Manhattan) distance. Uses the sum of the variables'
absolute differences.

 Chebychev distance. The maximum of the absolute differences in
the clustering variables' values. Frequently used when working with
metric (or ordinal) data.

 Mahalanobis distance (D²). A generalized distance measure that
accounts for the correlations among variables in a way that weights
each variable equally.
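As a rough illustration, here is a minimal sketch (SciPy assumed) of the distance measures listed above, computed between two hypothetical observations measured on three variables; the small sample used to estimate the covariance matrix for the Mahalanobis distance is likewise hypothetical.

```python
import numpy as np
from scipy.spatial import distance

x = np.array([2.0, 4.0, 3.0])
y = np.array([5.0, 1.0, 4.0])

print(distance.euclidean(x, y))    # straight-line distance
print(distance.sqeuclidean(x, y))  # squared Euclidean (no square root)
print(distance.cityblock(x, y))    # Manhattan: sum of absolute differences
print(distance.chebyshev(x, y))    # maximum absolute difference

# Mahalanobis distance needs the inverse covariance matrix of the clustering
# variables, here estimated from a small hypothetical sample.
sample = np.array([[2.0, 4.0, 3.0],
                   [5.0, 1.0, 4.0],
                   [3.0, 3.0, 2.0],
                   [4.0, 5.0, 5.0],
                   [1.0, 2.0, 1.0]])
VI = np.linalg.inv(np.cov(sample, rowvar=False))
print(distance.mahalanobis(x, y, VI))
```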
 Suppose a marketing researcher wishes to determine market segments in a
community based on patterns of loyalty to brands and stores. A small sample
of seven respondents is selected as a pilot test of how cluster analysis is
applied. Two measures of loyalty - V1 (store loyalty) and V2 (brand loyalty) -
were measured for each respondent on a 0-10 scale.
Proximity Matrix of Euclidean Distances Between Observations

Observation     A       B       C       D       E       F       G
A              ---
B             3.162    ---
C             5.099   2.000    ---
D             5.099   2.828   2.000    ---
E             5.000   2.236   2.236   4.123    ---
F             6.403   3.606   3.000   5.000   1.414    ---
G             3.606   2.236   3.606   5.000   2.000   3.162    ---
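A minimal sketch (SciPy assumed) of how such a proximity matrix can be computed. The V1/V2 values below are hypothetical stand-ins chosen to roughly reproduce the matrix above, not the original pilot data.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

labels = list("ABCDEFG")
scores = np.array([
    [3, 2],   # A: (V1 store loyalty, V2 brand loyalty) - hypothetical
    [4, 5],   # B
    [4, 7],   # C
    [2, 7],   # D
    [6, 6],   # E
    [7, 7],   # F
    [6, 4],   # G
])

proximity = squareform(pdist(scores, metric="euclidean"))
for label, row in zip(labels, proximity):
    print(label, np.round(row, 3))
```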
 SIMPLE RULE:
◦ Identify the two most similar(closest) observations not
already in the same cluster and combine them.

◦ We apply this rule repeatedly to generate a number of


cluster solutions, starting with each observation as its own
“cluster” and then combining two clusters at a time until all
observations are in a single cluster. This process is termed
a hierarchical procedure because it moves in a stepwise
fashion to form an entire range of cluster solutions. It is
also an agglomerative method because clusters are formed
by combining existing clusters
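A minimal sketch (SciPy assumed) of this agglomerative procedure, run on the hypothetical loyalty scores from the sketch above. Single linkage is used here as one way of operationalizing the "combine the two closest clusters" rule; each row returned by linkage records one merge, much like the agglomeration schedule shown next (the overall similarity column in that schedule is an average within-cluster distance, which linkage does not report).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

scores = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                   [6, 6], [7, 7], [6, 4]])          # hypothetical A..G

merges = linkage(pdist(scores), method="single")     # nearest-neighbor rule
for step, (i, j, dist, size) in enumerate(merges, start=1):
    print(f"Step {step}: join clusters {int(i)} and {int(j)} "
          f"at distance {dist:.3f} (new cluster size {int(size)})")
```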
AGGLOMERATIVE PROCESS AND CLUSTER SOLUTION

Step | Min. Distance Between Unclustered Observations | Observation Pair | Cluster Membership | No. of Clusters | Overall Similarity Measure (Average Within-Cluster Distance)
Initial solution | --- | --- | (A)(B)(C)(D)(E)(F)(G) | 7 | 0
1 | 1.414 | E-F | (A)(B)(C)(D)(E-F)(G) | 6 | 1.414
2 | 2.000 | E-G | (A)(B)(C)(D)(E-F-G) | 5 | 2.192
3 | 2.000 | C-D | (A)(B)(C-D)(E-F-G) | 4 | 2.144
4 | 2.000 | B-C | (A)(B-C-D)(E-F-G) | 3 | 2.234
5 | 2.236 | B-E | (A)(B-C-D-E-F-G) | 2 | 2.896
6 | 3.162 | A-B | (A-B-C-D-E-F-G) | 1 | 3.420

In steps 1, 2, 3, and 4, the overall similarity measure does not change
substantially, which indicates that we are forming new clusters with
essentially the same heterogeneity as the existing clusters.
When we get to step 5, we see a large increase. This indicates that
joining clusters (B-C-D) and (E-F-G) resulted in a single cluster that
was markedly less homogeneous.
 Therefore, the three – cluster solution of Step 4 seems
the most appropriate for a final cluster solution, with
two equally sized clusters, (B-C-D) and (E-F-G), and a
single outlying observation (A).

This approach is particularly useful in identifying


outliers, such as Observation A. It also depicts the
relative size of varying clusters, although it becomes
unwieldy when the number of observations increases.
 Dendrogram
- Graphical representation (tree graph) of the results of a
hierarchical procedure. Starting with each object as a
separate cluster, the dendrogram shows graphically how the
clusters are combined at each step of the procedure until all
are contained in a single cluster.
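A minimal sketch (SciPy and matplotlib assumed) of drawing a dendrogram for the hypothetical seven-observation example used earlier.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

scores = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                   [6, 6], [7, 7], [6, 4]])          # hypothetical A..G

dendrogram(linkage(scores, method="single"), labels=list("ABCDEFG"))
plt.ylabel("Distance at which clusters are joined")
plt.show()
```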
 Outliers can severely distort the representativeness of the
results if they appear as structure (clusters) inconsistent with the
objectives.

◦ They should be removed if the outliers represent:
 Aberrant observations not representative of the population
 Observations of small or insignificant segments within the
population that are of no interest to the research objectives

◦ They should be retained if they represent an undersampling or poor
representation of relevant groups in the population; in that case, the
sample should be augmented to ensure representation of these
groups.
 Outliers can be identified based on the
similarity measure by:

◦ Finding observations with large distances from all
other observations (a simple sketch of this approach follows below).

◦ Graphic profile diagrams highlighting outlying cases.

◦ Their appearance in cluster solutions as single-member
or small clusters.
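A minimal sketch (NumPy and SciPy assumed) of the first approach: flag observations whose average distance to all other observations is unusually large. The cutoff of two standard deviations above the mean is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

data = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                 [6, 6], [7, 7], [6, 4]])            # hypothetical observations

distances = squareform(pdist(data))
avg_dist = distances.sum(axis=1) / (len(data) - 1)   # mean distance to the others
cutoff = avg_dist.mean() + 2 * avg_dist.std()
print("Potential outliers (row indices):", np.where(avg_dist > cutoff)[0])
```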
 The researcher should ensure that the sample size is
large enough to provide sufficient representation of all
relevant groups of the population

 The researcher must therefore be confident that the


obtained sample is representative of the population.
 Clustering variables that have scales using widely
differing numbers of scale points or that exhibit large
differences in standard deviations should be standardized.

◦ The most common standardization conversion is the Z score
(with mean equal to 0 and standard deviation of 1).
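A minimal sketch (SciPy assumed) of Z-score standardization before clustering, so that variables measured on very different scales contribute comparably to the distance measure; the data are hypothetical.

```python
import numpy as np
from scipy.stats import zscore

raw = np.array([[25,  90000],    # hypothetical: age in years, income in dollars
                [32,  48000],
                [47,  61000],
                [51, 120000]])

standardized = zscore(raw, axis=0)   # each column now has mean 0, std. dev. 1
print(np.round(standardized, 2))
```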
 There are number of different methods that can be
used to carry out a cluster analysis; these methods
can be classified as follows:

 Hierarchical Cluster Analysis

 Nonhierarchical Cluster Analysis


 Combination of Both Methods
 The stepwise procedure attempts to identify relatively
homogeneous groups of cases based on selected
characteristics using either an agglomerative or a divisive
algorithm, resulting in the construction of a hierarchy or
treelike structure (dendrogram) depicting the formation
of clusters. This is one of the most straightforward
methods.
 HCA is preferred when:
◦ The sample size is moderate (under 300-400, not exceeding
1,000).
 Agglomerative Algorithm
 Divisive Algorithm

*Algorithm - defines how similarity is computed
between multiple-member clusters in the clustering
process.
 Hierarchical procedure that begins with each object or
observation in a separate cluster. In each subsequent step,
the two clusters that are most similar are combined to build a
new aggregate cluster. The process is repeated until all
objects are finally combined into a single cluster; from n
clusters to 1.

 Similarity decreases during successive steps. Clusters can't
be split.
 Begins with all objects in a single cluster, which is then divided
at each step into two additional clusters that contain the most
dissimilar objects. The single cluster is divided into two
clusters, then one of these clusters is split, for a total of three
clusters. This continues until each observation is in a single-member
cluster; from 1 cluster to n clusters.

[Figure: diagram contrasting the Divisive Method with the Agglomerative Method.]
 Among numerous approaches, the five most popular
agglomerative algorithms are:
◦ Single-Linkage
◦ Complete-Linkage
◦ Average-Linkage
◦ Centroid Method
◦ Ward's Method
 Single – Linkage
◦ Also called the nearest – neighbor method, defines
similarity between clusters as the shortest distance from
any object in one cluster to any object in the other.
 Complete Linkage
◦ Also known as the farthest-neighbor method.
◦ The opposite approach to single linkage: it assumes
that the distance between two clusters is based on
the maximum distance between any two members in
the two clusters.
 Average Linkage
 The distance between two clusters is defined as the
average distance between all pairs of the two clusters'
members.
 Centroid Method
◦ Cluster Centroids
- the mean values of the observations on the
variables in the cluster.

◦ The distance between two clusters equals the
distance between the two centroids.
 Ward's Method
◦ The similarity between two clusters is the
sum of squares within the clusters summed
over all variables.

◦ Ward's method tends to join clusters with a


small number of observations, and it is
strongly biased toward producing clusters
with the same shape and with roughly the
same number of observations.
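A minimal sketch (SciPy assumed) comparing the agglomerative algorithms discussed above on the same hypothetical data; only the linkage rule changes, and the resulting three-cluster memberships can be compared across methods.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

data = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                 [6, 6], [7, 7], [6, 4]])            # hypothetical observations

for method in ["single", "complete", "average", "centroid", "ward"]:
    merges = linkage(data, method=method)            # raw observations, Euclidean
    membership = fcluster(merges, t=3, criterion="maxclust")   # cut at 3 clusters
    print(f"{method:>8}: {membership}")
```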
 The Hierarchical Cluster Analysis provides an excellent
framework with which to compare any set of cluster
solutions.

 This method helps in judging how many clusters should be


retained or considered.
 In contrast to hierarchical methods, nonhierarchical cluster analysis
does not involve the treelike construction process. Instead, it assigns
objects into clusters once the number of clusters is specified.

◦ Two steps in nonhierarchical CA:

1.) Specify cluster seeds - identify starting points.

2.) Assignment - assign each observation to one of the
cluster seeds.
• Sequential Threshold Method
• Parallel Threshold Method
• Optimizing Procedures

 All of these belong to a group of clustering algorithms known
as K-means.

◦ K-means Method

 This method aims to partition n observations into k clusters in which each
observation belongs to the cluster with the nearest mean.

 K-means is so commonly used that the term is used by some to refer to
nonhierarchical cluster analysis in general.
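A minimal sketch (scikit-learn assumed) of a K-means (nonhierarchical) solution with the number of clusters specified in advance; the data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

data = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                 [6, 6], [7, 7], [6, 4]])            # hypothetical observations

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print("Cluster membership:", kmeans.labels_)
print("Final cluster centers:\n", kmeans.cluster_centers_)
```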
 Simplicity. With the development of the dendrogram, HCA
affords the researcher a simple yet comprehensive
portrayal of clustering solutions.

 Measures of Similarity. HCA can be applied to almost any type
of research question.

 Speed. HCA has the advantage of generating an entire set of
clustering solutions in an expedient manner.
 To reduce the impact of outliers, the researcher may wish to
cluster analyze the data several times, each time deleting
problem observations or outliers.

 Hierarchical Cluster Analysis is not well suited to analyzing large
samples.
 The results are less susceptible to outliers in the data, the
distance measure used, and the inclusion of irrelevant or
inappropriate variables.

 Non Hierarchical Cluster Analysis can analyze extremely large


data sets.
 Even a nonrandom starting solution does not guarantee an optimal
clustering of observations. In fact, in many instances, the researcher
will get a different final solution for each set of specified seed
points. How is the researcher to select the optimum answer? Only by
analysis and validation can the researcher select what is considered
the best representation of structure, realizing that many alternatives
may be acceptable.

 Nonhierarchical methods are also not as efficient when examining a large
number of potential cluster solutions. Each cluster solution is a
separate analysis, in contrast to the hierarchical techniques that
generate all possible cluster solutions in a single analysis.
Thus, nonhierarchical techniques are not as well suited to exploring
a wide range of solutions based on varying elements such as
similarity measures, observations included, and potential seed
points.
A combination approach using a hierarchical approach followed by a
nonhierarchical approach is often advisable.

• First, a hierarchical technique is used to select the number of
clusters and profile the cluster centers that serve as initial cluster
seeds in the nonhierarchical procedure.
• A nonhierarchical method then clusters all observations using the
seed points to provide more accurate cluster memberships.

In this way, the advantages of hierarchical methods are complemented by
the ability of the nonhierarchical methods to refine the results by
allowing the switching of cluster membership.
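A minimal sketch (SciPy and scikit-learn assumed) of this combination approach: a hierarchical stage suggests the number of clusters and supplies centroids, which then seed a nonhierarchical (K-means) stage that may reassign cases. The choice of three clusters is assumed here for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

data = np.array([[3, 2], [4, 5], [4, 7], [2, 7],
                 [6, 6], [7, 7], [6, 4]])            # hypothetical observations

# Step 1: hierarchical stage - suppose the dendrogram suggests 3 clusters.
hier_labels = fcluster(linkage(data, method="ward"), t=3, criterion="maxclust")
seeds = np.array([data[hier_labels == k].mean(axis=0)
                  for k in np.unique(hier_labels)])  # cluster centroids as seeds

# Step 2: nonhierarchical stage - K-means started from the hierarchical centroids.
kmeans = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit(data)
print("Refined cluster membership:", kmeans.labels_)
```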
 The cluster centroid, a mean profile of the cluster on each
clustering variable, is particularly useful in the interpretation
stage:

◦ Interpretation involves examining the distinguishing
characteristics of each cluster's profile and identifying
substantial differences between clusters.

◦ Cluster solutions failing to show substantial variation
indicate that other cluster solutions should be examined.

◦ The cluster centroid should also be assessed for
correspondence with the researcher's prior expectations
based on theory or practical experience.
 Validation is essential in cluster analysis because the clusters
are descriptive of structure and require additional support for
their relevance:

• Cross-validation empirically validates a cluster solution by
creating two subsamples (randomly splitting the sample)
and then comparing the two cluster solutions for
consistency with respect to the number of clusters and the
cluster profiles.

• Validation is also achieved by examining differences on
variables not included in the cluster analysis but for which a
theoretical and relevant reason enables the expectation of
variation across the clusters.
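A minimal sketch (scikit-learn assumed) of the cross-validation idea: split the sample in half, cluster each half separately, then compare the second half's own cluster labels with the labels it would receive from the first half's centroids. High agreement suggests a stable solution. The data are simulated for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

data, _ = make_blobs(n_samples=200, centers=3, random_state=0)  # simulated, pre-shuffled
half_a, half_b = data[:100], data[100:]                         # random split

model_a = KMeans(n_clusters=3, n_init=10, random_state=0).fit(half_a)
model_b = KMeans(n_clusters=3, n_init=10, random_state=0).fit(half_b)

labels_from_a = model_a.predict(half_b)   # half B assigned to half A's centroids
print("Agreement (adjusted Rand index):",
      adjusted_rand_score(model_b.labels_, labels_from_a))
```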
Cluster Analysis on SPSS
 Hierarchical Cluster Analysis

 Non Hierarchical Cluster Analysis

 Two – Step Cluster Analysis


Example:
This file only includes 20 cases, each responding to items on
demographics (gender, qualifications, days absence from
work, whether they smoke or not), on their attitudes to
smoking in public places (subtest totals for pro and
anti), plus total scale score for self-concept. We are
attempting to determine how many natural groups exist and
who belongs to each group.
The initial step is
determining how many
groups exist. The SPSS
hierarchical analysis
actually calculates every
possibility between
everyone forming their own
group (as many
clusters as there are cases)
and everyone belonging to
the same group, giving a
range in our data of from 1
to 20 clusters.
Click 'Plots'
Click 'Method'

Select Continue then OK.

DENDROGRAM
Parsing the classification
tree to determine the
number of clusters is a
subjective process.
Generally, you begin by
looking for "gaps" between
joining along the horizontal
axis. Starting from the right,
there is a gap between 5
and 10, which splits the
data into two clear clusters
and a minor one.
AGGLOMERATION SCHEDULE

The agglomeration schedule is a numerical summary of the cluster
solution.

At the first stage, cases 7


and 9 are combined
because they have the
smallest distance

The cluster created by


their joining next appears
in stage 8

In stage 8, the
observations 5 and 7 were
joined. The resulting
cluster next appears in
stage 13.

When there are many


cases, this table becomes
rather long, but it may be
easier to scan the
coefficients column for
large gaps rather than
scan the dendrogram.
A good cluster solution sees a sudden jump (gap) in the distance coefficient. The
solution before the gap indicates the good solution.

Table 23.2 is a reformatted table showing the changes in the coefficients as the
number of clusters increases. The final column, headed 'Change', enables us to
determine the optimum number of clusters. In this case it is 3 clusters, as each
succeeding clustering adds much less to distinguishing between cases.
 Repeat step 1 to 3 to place cases into one of three clusters.

The number you place in the box is the number of clusters that seem best to
represent the clustering solution in a parsimonious way.
Finally click OK.
A new variable has
been generated at
the end of your
SPSS data file
called clu3_1
(labelled Ward
method in variable
view). This
provides the
cluster
membership for
each case in your
sample

Nine respondents have been classified in cluster 2, while


there are seven in cluster 1 and four in cluster 3.
We now proceed by conducting a one-way
ANOVA to determine which classifying
variables differ significantly between
the groups.
Qualifications did not produce any significant associations.
 Crosstab analysis of the nominal variables gender, qualifications, and whether
respondents smoke or not produced some significant associations with clusters.
Cluster 1 : consists of male non smokers
Cluster 2 : consists of smoking and non smoking males
Cluster 3 : consists of smoking females
Cluster 1 is characterized by low self-concept, average absence rate, average
attitude score to anti-smoking, non-smoking males.
Cluster 2 is characterized by moderate self-concept, low absence rate, average
attitude score to anti-smoking, smoking and non-smoking males.
Cluster 3 is characterized by high self-concept, high absence rate, low score to
antismoking, smoking females.
Example:
A telecommunications provider wants to segment its customer base
by service usage patterns. If customers can be classified by usage, the
company can offer more attractive packages to its customers. The
following variables are used:
 Multiple lines
 Voice mail
 Paging service
 Internet
 Caller ID
 Call waiting
 Call forwarding
 3-way calling
 Electronic billing
Number of iterations (repetitions) of
combining different clusters.
Specify the number of clusters.
Determines when iterations cease; represents a proportion of the minimum
distance between initial cluster centers.
STATISTICS
It will show the information for each group.
The initial cluster centers are the variable values of the k well-
spaced observations.
Iteration History
The iteration history shows the progress of the clustering process at
each step.

In early iterations, the cluster centers shift quite a lot. By the 14th
iteration, they have settled down to the general area of their final
location, and the last four iterations are minor adjustments.
ANOVA

The ANOVA table indicates which variables contribute the most to your
cluster solution.

Variables with large F values provide the greatest separation between


clusters.
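A minimal sketch (SciPy and scikit-learn assumed) of the idea behind this table: for each clustering variable, an F statistic compares between-cluster with within-cluster variation, and large F values mark the variables that separate the clusters most strongly. Because the clusters were chosen to maximize these differences, the F values are descriptive rather than formal significance tests. The data are simulated for illustration.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

data, _ = make_blobs(n_samples=150, centers=3, n_features=4, random_state=0)  # simulated
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data)

for j in range(data.shape[1]):
    groups = [data[labels == k, j] for k in np.unique(labels)]
    f_stat, _ = f_oneway(*groups)
    print(f"Variable {j}: F = {f_stat:.1f}")
```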
FINAL CLUSTER CENTERS

The final cluster centers are


computed as the mean for each
variable within each final cluster. The
final cluster centers reflect the
characteristics of the typical case for
each cluster.

Customers in cluster 1 tend to be big


spenders who purchase a lot of
services.
FINAL CLUSTER CENTERS

Customers in cluster 2 tend to be moderate spenders who purchase
the "calling" services. Customers in cluster 3 tend to spend very
little and do not purchase many services.
DISTANCES BETWEEN FINAL CLUSTER CENTERS
This table shows the Euclidean distances
between the final cluster centers.
Greater distances between clusters
correspond to greater dissimilarities.

Clusters 1 and 3 are most different.

Cluster 2 is approximately
equally similar to clusters
1 and 3.
Number of Cases in Each Cluster

A large number of cases were assigned to the third


cluster, which unfortunately is the least profitable group.

Perhaps a fourth, more profitable, cluster could be


extracted from this "basic service" group.
MAIN DIALOG BOX
SAVE DIALOG BOX

 Creates a new variable
indicating the final cluster
membership of each case.

 Creates a new variable
indicating the Euclidean
distance between each case
and its classification
center.
FINAL CLUSTER CENTERS
This table shows that an important grouping is missed in the three-cluster solution.

Members of clusters 1 and 2 are largely drawn from cluster 3 in the
three-cluster solution, and they are unlikely to be big spenders.
Clusters 3 and 4 seem to correspond to clusters 1 and 2 from the
three-cluster solution.
DISTANCE BETWEEN FINAL CLUSTER
CENTERS

Cluster 4 is still equally


similar to the other
clusters.

Clusters 1 and 2 are the


most similar, which makes
sense because they were
combined into one cluster
in the three-cluster
solution.
NUMBER OF CASES IN EACH
CLUSTER

Nearly 25% of cases belong to the newly created


group of "E-service" customers, which is very
significant to your profits.
 The Two Step Cluster Analysis procedure is an exploratory
tool designed to reveal natural groupings (or clusters) within
a data set that would otherwise not be apparent. The
algorithm employed by this procedure has several desirable
features that differentiate it from traditional clustering
techniques:
• The ability to create clusters based on both categorical and
continuous variables.
• Automatic selection of the number of clusters.
• The ability to analyze large data files efficiently.
In order to handle categorical and continuous variables, the TwoStep Cluster Analysis procedure uses a
likelihood distance measure which assumes that variables in the cluster model are independent.
Further, each continuous variable is assumed to have a normal (Gaussian) distribution and each
categorical variable is assumed to have a multinomial distribution. Empirical internal testing indicates that
the procedure is fairly robust to violations of both the assumption of independence and the distributional
assumptions, but you should try to be aware of how well these assumptions are met.

The two steps of the TwoStep Cluster Analysis procedure's algorithm can be summarized as follows:

 Step 1. The procedure begins with the construction of a Cluster Features (CF) Tree. The tree begins by
placing the first case at the root of the tree in a leaf node that contains variable information about that
case. Each successive case is then added to an existing node or forms a new node, based upon its
similarity to existing nodes and using the distance measure as the similarity criterion. A node that
contains multiple cases contains a summary of variable information about those cases. Thus, the CF tree
provides a capsule summary of the data file.

 Step 2. The leaf nodes of the CF tree are then grouped using an agglomerative clustering algorithm. The
agglomerative clustering can be used to produce a range of solutions. To determine which number of
clusters is "best", each of these cluster solutions is compared using Schwarz's Bayesian Criterion (BIC) or
the Akaike Information Criterion (AIC) as the clustering criterion.
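The SPSS TwoStep algorithm itself (CF tree plus agglomerative clustering of the leaf nodes) is not reproduced here, but the following minimal sketch (scikit-learn assumed) illustrates the related idea of letting an information criterion such as BIC choose the number of clusters automatically for continuous variables; the data are simulated.

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

data, _ = make_blobs(n_samples=300, centers=4, random_state=0)   # simulated data

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
        for k in range(1, 8)}
best_k = min(bics, key=bics.get)
print("BIC by number of clusters:", {k: round(v, 1) for k, v in bics.items()})
print("Number of clusters chosen by BIC:", best_k)
```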
Car manufacturers need to be able to appraise the current market to
determine the likely competition for their vehicles. If cars can be
grouped according to available data, this task can be largely
automated using cluster analysis.
