
1 Frequency
Critical Element: Valid % & Cumulative %
What to look for: Pattern of distribution

2 Cross Tabulation
Critical Element: Chi Square
What to look for: Significance value
Interpretation: If the significance value is less than .05, reject the null hypothesis that there is no relation between the 2 variables being cross tabulated.
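The cross-tabulation rule above can be sketched in code. This is a minimal illustration, not part of the original notes: the 2x2 table of counts is hypothetical, and `scipy.stats.chi2_contingency` stands in for the chi-square output the notes describe.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross tabulation (made-up counts):
# rows = gender, columns = purchased yes/no
observed = np.array([[30, 10],
                     [15, 25]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, p-value = {p:.4f}, dof = {dof}")

# Decision rule from the notes: reject the null hypothesis of no
# relation between the two variables when the p-value is below .05
if p < 0.05:
    print("Reject the null hypothesis: the variables appear related")
else:
    print("Fail to reject the null hypothesis")
```

With these counts the chi-square statistic is well above the .05 critical value for 1 degree of freedom, so the null hypothesis of no relation is rejected.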

3 Hierarchical Clustering
Critical Element: Proximity table; Dendrogram
What to look for: How the proximities are calculated; Number of clusters
Interpretation: If the variables are binary (Jaccard method), clustering is based on the number of "yes" matches; if the variables are interval type (Euclidean distance), clustering is based on the difference in values. The number of clusters is decided statistically at the point where the next object joins the cluster at a relatively longer distance; intuition should also guide the choice of the number of clusters.

4 K-Means Clustering
Critical Element: Final Cluster Centres; Final Cluster Membership
What to look for: The cluster centres should be different from each other; there should be enough (substantial) members in each cluster.
Interpretation: If the cluster centres are different, the clusters are well separated and qualify for further analysis. Build a profile of the clusters in terms of demographics or other variables; the variables used for clustering should not be used for profiling.

5 Overall Similarity - Perceptual Mapping
Critical Element: Perceptual Map; Objective function value
What to look for: Clustering of objects; Minimum value
Interpretation: Clusters are interpreted as per the distance criteria used. The lower the objective function value, the lower the error in the model.

6 Attribute Based Perceptual Mapping
Critical Element: Perceptual Map; Objective function value
What to look for: Clustering of objects & relation between attributes; Minimum value
Interpretation: Clusters are interpreted as per the distance criteria used. The lower the objective function value, the lower the error in the model.
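The hierarchical clustering steps described in the notes (proximity table, agglomeration, cutting the dendrogram) can be sketched as follows. This is an illustrative example with made-up interval-type data; `scipy` functions stand in for the SPSS outputs the notes refer to.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical interval-type data: two visibly separated groups of points,
# so Euclidean distance (as the notes describe) should recover them
data = np.array([[1.0, 1.0], [1.2, 0.9], [0.8, 1.1],
                 [5.0, 5.0], [5.1, 4.8], [4.9, 5.2]])

# Proximity table: pairwise Euclidean distances between all objects
proximities = pdist(data, metric="euclidean")

# Agglomerative clustering; merge heights are in merged[:, 2].
# A large jump in merge distance suggests where to stop merging,
# i.e. the number of clusters.
merged = linkage(proximities, method="average")

# Cut the tree into 2 clusters based on that jump
labels = fcluster(merged, t=2, criterion="maxclust")
print(labels)
```

The first three objects end up in one cluster and the last three in the other, matching the intuition-plus-distance rule the notes give for choosing the number of clusters.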

7 Factor Analysis
Critical Element: Communalities Table; Eigen Values; Rotated Component Matrix
What to look for: Extractions; Components having eigenvalue greater than 1; Dominant variables
Interpretation: The extraction value should be above 0.5. The components having an eigenvalue greater than 1 can be used in place of the original variables if the cumulative variance explained is substantial (60% or more). In each component, the dominant variables are those with a high absolute value in that component and a low value in all other components; a low value is usually taken as 0.3.

8 Discriminant Analysis
Critical Element: Wilks Lambda & Eigen Value; Unstandardised coefficients; Functions at Group centroids; Percentage of values correctly predicted
What to look for: Values of Wilks Lambda & Eigen; Values & sign of coefficients; Average of the 2 values (in case of 2-group discriminant); Highest percentage
Interpretation: A low Wilks Lambda value (nearer to 0) and a high eigenvalue indicate that the groups are well formed. Unstandardised coefficients are used to derive the regression equation that provides the discriminant score. The highest coefficient has the largest effect on determining the group to which a case will belong. A negative coefficient sign pulls down the discriminant score as the value of the variable increases, and vice versa. The group a case is predicted to belong to depends on which side of the average its discriminant score falls. The percentage of values correctly predicted determines the accuracy of the discriminant model.
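The discriminant-analysis workflow above can be sketched with scikit-learn. This is a minimal illustration under assumed data: the two groups and their values are made up, and `LinearDiscriminantAnalysis` stands in for the SPSS discriminant output the notes describe.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical 2-group data: group 0 has low values, group 1 high values
X = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2], [1.2, 2.1],
              [6.0, 7.0], [6.5, 6.8], [7.0, 7.2], [6.2, 7.1]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Coefficients of the discriminant function: sign and magnitude show
# each variable's pull on the discriminant score, as the notes describe
print("coefficients:", lda.coef_)

# Percentage of cases correctly classified = accuracy of the model
accuracy = lda.score(X, y)
print(f"correctly predicted: {accuracy:.0%}")
```

Because the hypothetical groups are cleanly separated, every case falls on the correct side of the cutoff and the model classifies 100% of them correctly, which is the accuracy check the notes call for.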

9 Conjoint Analysis
Critical Element: Utility values; Utility difference; Summary importance table
What to look for: Importance percentages
Interpretation: The highest importance percentage identifies the factor that is relatively more important. The utility difference is obtained by adding up the highest positive and the highest negative values in absolute terms (without the + or - signs). The best combination of factors is predicted by looking at the highest values of the factor levels. The summary importance chart shows which factor is most important for the sample as a whole.
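The conjoint arithmetic described above (utility difference per factor, importance percentages, best combination) can be worked through directly. The part-worth utilities below are made-up numbers for illustration only.

```python
# Hypothetical part-worth utilities from a conjoint study:
# for each factor, the utility of each level
utilities = {
    "price":   {"low": 0.9, "medium": 0.1, "high": -1.0},
    "brand":   {"A": 0.4, "B": -0.4},
    "package": {"box": 0.2, "bag": -0.2},
}

# Utility difference per factor: the highest positive plus the highest
# negative value in absolute terms, i.e. the range of the level utilities
ranges = {f: max(lv.values()) - min(lv.values()) for f, lv in utilities.items()}

# Importance percentage: each factor's range as a share of the total
total = sum(ranges.values())
importance = {f: 100 * r / total for f, r in ranges.items()}
print(importance)

# The most important factor has the highest importance percentage
most_important = max(importance, key=importance.get)
print("most important factor:", most_important)

# Best combination: the highest-utility level of each factor
best = {f: max(lv, key=lv.get) for f, lv in utilities.items()}
print("best combination:", best)
```

Here price has the widest utility range (0.9 to -1.0, a difference of 1.9), so it comes out as the most important factor, and the best combination takes the top level of each factor.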
