Prepared For
Course Instructor
Prepared By
Course Instructor
Jahangirnagar University
Savar, Dhaka-1342
Sir:
The report identifies the factors that matter most in selecting a rental
house in Dhaka city, and which of those factors households emphasize
when choosing one.
I believe the report will help visualize the problems households face in
selecting a rental house in Dhaka city. Based on the factor analysis, a
homeowner should emphasize the factors that matter most to tenants.
If you have any queries, please call me directly; I will be pleased to
answer them.
Sincerely yours,
ACKNOWLEDGEMENT
Executive Summary
Chapter I
1.0 Introduction, Objectives and Methodology
1.4 Methodology
3.3 Communalities
Chapter IV
Conclusion
Appendix
EXECUTIVE SUMMARY
Factor analysis is widely used in marketing research, including product
pricing and many other kinds of studies. With it, a marketer can easily
identify which factors to emphasize to increase the company's sales or
boost its profit. By preparing a questionnaire and conducting field
research, marketing managers get an overview of consumer demand and
expectations.
Chapter I
Introduction, Objectives and Methodology
1.6 Limitation
We have used only one statistical technique to interpret the collected
data, and we have not used all of SPSS's options to determine the output.
For example, to decide the number of factors we relied on a single
criterion; other techniques, such as the factor loading plot and factor
selection based on significance tests, were not used.
Chapter II
Literature review
The first SPSS output is the correlation matrix. This table contains the
Pearson correlation coefficient for every pair of questions. First, scan
the correlation coefficients themselves and look for any value greater
than 0.9. If any are found, a problem may arise because of singularity in
the data (variables that are perfectly correlated).
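This scan is easy to reproduce outside SPSS: compute the correlation matrix and flag any pair above the cutoff, and also check the matrix determinant, which collapses toward zero when variables are (nearly) perfectly correlated. A minimal sketch in Python with NumPy (the function name and cutoff default are illustrative, not from the report):

```python
import numpy as np

def singularity_check(X, cutoff=0.9):
    """Flag variable pairs whose absolute correlation exceeds `cutoff`.

    A near-zero determinant of the correlation matrix is another
    symptom of singularity (perfectly correlated variables).
    """
    R = np.corrcoef(X, rowvar=False)
    i, j = np.triu_indices_from(R, k=1)          # off-diagonal upper triangle
    flagged = [(a, b, R[a, b]) for a, b in zip(i, j) if abs(R[a, b]) > cutoff]
    return flagged, np.linalg.det(R)
```

A duplicated questionnaire item, for example, would be flagged with r = 1.0 and would drive the determinant to zero.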
SPSS output 2 shows two very important statistics: the Kaiser-Meyer-Olkin
(KMO) measure of sampling adequacy and Bartlett's test of sphericity.
Together they indicate whether the model will fit. The KMO statistic
varies between 0 and 1. A value near 0 indicates that the sum of partial
correlations is large relative to the sum of correlations, and hence that
factor analysis is likely to be inappropriate. A value close to 1
indicates that the patterns of correlations are relatively compact, so
factor analysis should yield distinct and reliable factors. Kaiser
recommends accepting values greater than 0.5. Furthermore, values between
0.5 and 0.7 are mediocre, values between 0.7 and 0.8 are good, values
between 0.8 and 0.9 are great, and values above 0.9 are superb. Here the
value is .772, which falls in the good range, so we can be confident that
factor analysis is appropriate for these data.
Bartlett's test examines the null hypothesis that the original correlation
matrix is an identity matrix; we need it to be significant at the 0.05
level. For these data, Bartlett's test is highly significant (p < 0.001),
and therefore factor analysis is appropriate.
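Both statistics can be computed directly from the correlation matrix. The sketch below uses standard textbook formulas in Python with NumPy and SciPy, not SPSS's internal implementation; function names are ours:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test of the null hypothesis that R is an identity matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)            # chi-square statistic, p-value

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (between 0 and 1)."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                        # partial correlations
    off = ~np.eye(R.shape[0], dtype=bool)     # off-diagonal entries only
    r2, pr2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + pr2)
```

The KMO formula makes the interpretation above concrete: it is the squared correlations divided by the squared correlations plus the squared partial correlations, so large partial correlations pull it toward 0.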
3.3 Communalities
Variable    Initial   Extraction
VAR00001 1.000 .476
VAR00002 1.000 .569
VAR00003 1.000 .695
VAR00004 1.000 .511
VAR00005 1.000 .783
VAR00006 1.000 .774
VAR00007 1.000 .546
VAR00008 1.000 .459
VAR00009 1.000 .698
VAR00010 1.000 .733
VAR00011 1.000 .690
VAR00012 1.000 .681
VAR00013 1.000 .616
Extraction Method: Principal Component Analysis.
SPSS output 3 shows the table of communalities before and after
extraction. Principal component analysis works on the initial assumption
that all variance is common, so before extraction the communalities are
all 1. The values in the column labeled Extraction reflect the common
variance in the data after extraction; for example, 47.6% of the variance
associated with question 1 is shared with the extracted components.
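A communality is simply the sum of a variable's squared loadings across the retained components. A minimal illustration (the loading values here are hypothetical, not taken from the report's output):

```python
import numpy as np

# hypothetical loadings of one questionnaire item on four retained components
loadings = np.array([0.504, 0.310, -0.220, 0.280])

# communality = proportion of the item's variance reproduced by the components
communality = (loadings ** 2).sum()
```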
SPSS output 4 lists the eigenvalues associated with each factor before
extraction, after extraction, and after rotation. Before extraction, SPSS
identifies 13 linear components within the data set, since there are as
many components (factors) as variables. SPSS also displays the percentage
of variance each explains (for example, factor 1 accounts for 32.702% of
the total variance). The first 4 factors explain relatively large amounts
of variance, whereas the subsequent factors explain only small amounts.
Four eigenvalues are greater than 1, so on the eigenvalue criterion we
should retain 4 factors for this factor analysis.
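The same retention rule can be reproduced outside SPSS by taking the eigenvalues of the correlation matrix and keeping those above 1. A sketch on simulated stand-in data (the actual survey responses are not included here):

```python
import numpy as np

rng = np.random.default_rng(42)
latent = rng.normal(size=(150, 3))                      # a few underlying dimensions
X = latent @ rng.normal(size=(3, 8)) + 0.7 * rng.normal(size=(150, 8))

R = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]      # largest first
pct_variance = 100 * eigenvalues / eigenvalues.sum()    # eigenvalues of R sum to the number of variables
n_factors = int((eigenvalues > 1).sum())                # Kaiser criterion: retain eigenvalues > 1
```

Because each standardized variable contributes variance 1, an eigenvalue above 1 means the component explains more than a single variable would, which is the rationale behind the criterion.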
[Scree plot: eigenvalues plotted against component numbers 1-13.]
The scree plot in SPSS output 5 also helps determine the number of
factors to select. In the plot, the curve flattens noticeably after
component 4. From that point we can conclude that we should retain 4
components (factors) for our factor analysis.
Component Matrix(a)
Variable        1        2        3        4
VAR00013 .691
VAR00008 .641
VAR00011 .641 .435
VAR00004 .635
VAR00007 .634
VAR00005 .593 .585
VAR00012 .590 .400 -.406
VAR00006 .571 -.528
VAR00002 .567 -.468
VAR00001 .504
VAR00010 .431 .726
VAR00003 .482 -.582
VAR00009 .663
Extraction Method: Principal Component Analysis.
a 4 components extracted.
This matrix contains the loadings of each variable on each factor. SPSS
displays only loadings over .4 because we requested that smaller loadings
be suppressed; without that option, SPSS would show all the loadings.
Before rotation, this matrix is not an important tool for interpreting
the factors.
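The suppression option requested here is easy to mimic: blank out any loading whose absolute value is below the threshold before displaying the matrix. A sketch (the function name is ours; `suppress=0.4` mirrors the option we requested in SPSS):

```python
import numpy as np

def suppress_small(loadings, suppress=0.4):
    """Replace loadings below the display threshold with NaN (shown blank)."""
    return np.where(np.abs(loadings) >= suppress, loadings, np.nan)
```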
Reproduced Correlations

          V01     V02     V03     V04     V05     V06     V07     V08     V09     V10     V11     V12     V13
V01    .476(b)   .096    .082    .353    .379    .162    .304    .318    .476    .425    .335    .255    .220
V02     .096   .569(b)   .515    .277    .083    .252    .365    .425    .045    .157    .411    .425    .464
V03     .082    .515   .695(b)   .336    .007    .171    .483    .340    .195   -.069    .143    .131    .480
V04     .353    .277    .336   .511(b)   .512    .474    .503    .345    .273    .115    .245    .194    .501
V05     .379    .083    .007    .512   .783(b)   .682    .393    .252    .112    .128    .252    .208    .449
V06     .162    .252    .171    .474    .682   .774(b)   .407    .244   -.153   -.087    .213    .218    .559
V07     .304    .365    .483    .503    .393    .407   .546(b)   .366    .293    .059    .215    .169    .532
V08     .318    .425    .340    .345    .252    .244    .366   .459(b)   .263    .380    .495    .466    .400
V09     .476    .045    .195    .273    .112   -.153    .293    .263   .698(b)   .403    .183    .077    .087
V10     .425    .157   -.069    .115    .128   -.087    .059    .380    .403   .733(b)   .597    .546    .028
V11     .335    .411    .143    .245    .252    .213    .215    .495    .183    .597   .690(b)   .676    .305
V12     .255    .425    .131    .194    .208    .218    .169    .466    .077    .546    .676   .681(b)   .292
V13     .220    .464    .480    .501    .449    .559    .532    .400    .087    .028    .305    .292   .616(b)

(V01-V13 = VAR00001-VAR00013.)

[Residual matrix (observed minus reproduced correlations) omitted; see note a for its summary.]

Extraction Method: Principal Component Analysis.
a Residuals are computed between observed and reproduced correlations. There are 37 (47.0%)
nonredundant residuals with absolute values greater than 0.05.
b Reproduced communalities
Table 3.7 shows the reproduced correlation matrix, which is important for
determining whether the model fits. If more than 50% of the nonredundant
residuals have absolute values greater than 0.05, the factor model is
inappropriate and does not fit, so we need that percentage to be below
50%. In this reproduced correlation matrix, 37 nonredundant residuals
(47.0%) have absolute values greater than 0.05, which is below the 50%
threshold. So the model is appropriate and fits well enough for further
analysis.
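The reproduced correlations and the residual count follow directly from the retained loadings: the reproduced matrix is the loadings times their transpose, and the residuals are the observed correlations minus it. A sketch on illustrative data (not the survey's):

```python
import numpy as np

def residual_summary(R, loadings, threshold=0.05):
    """Count nonredundant residuals (upper triangle, no diagonal) above `threshold`."""
    reproduced = loadings @ loadings.T            # model-implied correlations
    residuals = R - reproduced
    iu = np.triu_indices_from(R, k=1)             # nonredundant entries
    big = np.abs(residuals[iu]) > threshold
    return int(big.sum()), 100.0 * big.mean()     # count, percentage
```

If the returned percentage exceeds 50%, the model is judged a poor fit, mirroring note a under the table.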
Rotated Component Matrix(a)
Variable        1        2        3        4
VAR00005 .855
VAR00006 .832
VAR00004 .541
VAR00012 .799
VAR00011 .792
VAR00010 .723 .435
VAR00008 .480
VAR00003 .827
VAR00002 .631
VAR00013 .524 .559
VAR00007 .420 .534
VAR00009 .818
VAR00001 .566
Extraction Method: Principal Component Analysis. Rotation Method: Varimax with Kaiser Normalization.
a Rotation converged in 7 iterations.
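Varimax rotation itself is a short algorithm: it finds an orthogonal rotation of the loading matrix that maximizes the variance of the squared loadings within each column, pushing loadings toward 0 or ±1 so the factors are easier to name. A textbook-style sketch (not SPSS's exact implementation, which also applies Kaiser normalization):

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a loading matrix L (variables x factors)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # gradient of the varimax criterion, rotated back into L's frame
        B = L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(B)
        R = u @ vt                       # nearest orthogonal rotation
        d_new = s.sum()
        if d_new < d * (1 + tol):        # criterion stopped improving
            break
        d = d_new
    return L @ R
```

Because the rotation is orthogonal, it redistributes variance among the factors but leaves each variable's communality unchanged, which is why the communalities table is the same before and after rotation.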
Factor analysis is a classic tool for analyzing variables and reducing
their number, and it gives the researcher a good picture of the
underlying factors. Studies based mainly on Likert-scale items are
especially well suited to it. Factor analysis has many applications: we
can use it in marketing campaigns, product research, advertising studies,
and pricing studies. Because the data are mainly primary data, there is a
good chance of getting a realistic picture of any factor. Marketers can
apply this standard tool to many types of marketing research.
APPENDIX
(Note: We are from IBA-JU, conducting this survey solely for academic
purposes. We assure you that the information you provide will be kept
confidential.)
Name: Profession:
Location of home:
Home district:
Factors 1 2 3 4
Safe area
Below 5000
5000-10000
10000-15000
15000-20000
Above 20000