
Submitted by

Shariq Ejaz – 142199

Report on
Data Analysis on SPSS & AMOS

Business Research Methods

Submitted to
Sir. Moin Ahmad Moon
PROJECT REPORT

Data analysis procedures


For data analysis, SPSS and AMOS version 23.0 were used. To test the hypothesized relationships, structural equation modeling (SEM) was used.
Results and Discussion
Before data analysis, the data must be screened to detect possible errors. Data screening is the process of ensuring that the data are clean and ready before further statistical analyses are conducted; it ensures the data are usable, reliable, and valid for testing causal theory. For this purpose, we carried out several initial data-screening tests. Out of the 643 final cases, there were no missing values, and there were two aberrant values (one in age and one in income), which were replaced by the mode of the respective variable. Missing values occur when no data value is stored for a variable in an observation. The threshold for missing data is flexible, but generally, if more than 10% of the responses are missing on a particular variable, or from a particular respondent, that variable or respondent may be problematic. There are several ways to deal with problematic variables:

- Just don't use that variable.
- If it makes sense, impute the missing values (see the sketch after this list). This should only be done for continuous or interval data (such as age or Likert-scale responses), not for categorical data (such as gender).
- If the dataset is large enough, simply drop the responses that have missing values for that variable. This may, however, introduce bias if more than 10% of the responses are missing.
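A minimal Python/pandas sketch of this screening step, assuming the survey responses were exported from SPSS to a CSV file; the file name, item prefixes, and plausibility bounds below are illustrative assumptions, not part of the original SPSS analysis.

```python
import pandas as pd

# Load the exported survey responses (file name is illustrative).
df = pd.read_csv("survey_responses.csv")

# Share of missing responses per variable; flag anything above the 10% rule of thumb.
missing_pct = df.isna().mean() * 100
print(missing_pct[missing_pct > 10])

# Impute continuous/interval items (here, the 7-point Likert items) with the median;
# categorical variables such as gender should not be imputed this way.
likert_items = [c for c in df.columns if c.startswith(("ES", "PI", "PU", "UA"))]
df[likert_items] = df[likert_items].fillna(df[likert_items].median())

# Replace aberrant values with the mode, as was done here for age and income.
# The plausibility bounds are purely illustrative.
for col, (lo, hi) in {"age": (15, 80), "income": (0, 1_000_000)}.items():
    if col in df.columns:
        mode_val = df[col].mode().iloc[0]
        df.loc[(df[col] < lo) | (df[col] > hi), col] = mode_val
```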
Outliers were replaced with the mode of the corresponding variable (Cousineau & Chartier, 2015). The skewness and kurtosis values of several items were not within the recommended thresholds of ±1 and ±3 suggested by Tabachnick, Fidell, and Osterlind (2001) and Cameron (2004), indicating some departure from a strictly normal distribution.

Item   Skewness   Kurtosis
ES1     -.484     -1.238
ES2     -.596     -1.119
ES3    -1.788      3.069
PI1    -2.375      6.301
PI2    -2.024      6.273
PI4    -1.726      3.877
PI3    -1.817      4.154
PU1    -1.577       .942
PU2    -1.581       .920
PU3    -1.810      1.921
UA1    -1.809      1.751
UA2    -2.022      2.562
UA3    -2.106      3.075
UA4    -1.603      1.021
UA5    -1.702      1.270
Note: N (valid) = 643 and missing = 0 for every item; Std. Error of Skewness = .096 and Std. Error of Kurtosis = .192 throughout.
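The same item-level statistics can be reproduced outside SPSS. A minimal sketch, continuing from the pandas snippet above (df and likert_items are assumed from there):

```python
# Skewness and excess kurtosis per item, as in the SPSS output above
# (pandas, like SPSS, reports excess kurtosis, so a normal distribution gives 0).
screen = df[likert_items].agg(["skew", "kurt"]).T
screen.columns = ["Skewness", "Kurtosis"]

# Flag items outside the ±1 / ±3 rules of thumb cited in the text.
screen["Outside threshold"] = (screen["Skewness"].abs() > 1) | (screen["Kurtosis"].abs() > 3)
print(screen)
```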
Furthermore, tolerance and variance inflation factors (VIF) were used to examine multicollinearity among the independent variables (Diamantopoulos & Winklhofer, 2001). The thresholds are that VIF should be less than 10 and tolerance should be greater than 0.1. The VIF values of the first-order variables ranged from 1.040 to 1.316 and the tolerance values ranged from 0.760 to 0.962. Therefore, multicollinearity is not a concern for any of the variables. A subsequent analysis, however, indicated that the data are heteroscedastic.
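A sketch of the same collinearity check in Python with statsmodels (an assumption; the original diagnostics came from SPSS). The construct mean scores MeanES, MeanPU, and MeanUA used as predictors here are illustrative:

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Predictor matrix with an intercept term.
X = sm.add_constant(df[["MeanES", "MeanPU", "MeanUA"]])

# VIF per predictor (skipping the constant); tolerance is simply 1 / VIF.
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.3f}, tolerance = {1 / vif:.3f}")
```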

The linearity analysis (ANOVA) showed that the relationships among the constructs are statistically significant, as the significance values of the combined between-groups effects reported below are all less than 0.05.

                                             Sum of Squares    df    Mean Square       F     Sig.
MeanES * MeanUA   Between Groups (Combined)         75.713     28          2.704   1.714     .013
                  Within Groups                    968.718    614          1.578
                  Total                           1044.430    642
MeanPI * MeanUA   Between Groups (Combined)         41.508     28          1.482   2.557     .000
                  Within Groups                    355.971    614           .580
                  Total                            397.479    642
MeanPU * MeanUA   Between Groups (Combined)        359.903     28         12.854  10.413     .000
                  Within Groups                    757.893    614          1.234
                  Total                           1117.795    642
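A rough Python equivalent of the SPSS Means/ANOVA output above, again assuming the MeanES and MeanUA columns; treating each distinct MeanUA score as a group mirrors the "Between Groups (Combined)" test:

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One-way ANOVA of MeanES across the groups formed by the distinct MeanUA scores.
model = smf.ols("MeanES ~ C(MeanUA)", data=df).fit()
print(anova_lm(model))  # F and Sig. for the combined between-groups effect
```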
Descriptive Statistics

Descriptive statistics are brief descriptive coefficients that summarize a given data set, which
can be either a representation of the entire population or a sample of it. Descriptive statistics are
broken down into measures of central tendency and measures of variability, or spread.

Item    N    Min  Max   Mean    S.D.   Skewness  Kurtosis  Construct Mean  Cronbach's Alpha  Missing
ES1    643    1    7    4.30   1.838     -.484    -1.238                                        0
ES2    643    1    7    4.53   1.784     -.596    -1.119                                        0
ES3    643    1    7    5.51   1.398    -1.694     2.497        4.780            .650           0
PI1    643    1    7    5.71   1.344    -2.172     4.862                                        0
PI2    643    1    7    5.94   1.087    -2.093     5.792                                        0
PI3    643    1    7    5.77   1.137    -1.747     3.622                                        0
PI4    643    1    7    5.78   1.195    -1.813     3.686        5.800            .726           0
PU1    643    1    7    5.39   1.770    -1.504      .689                                        0
PU2    643    1    7    5.37   1.729    -1.498      .638                                        0
PU3    643    1    7    5.58   1.619    -1.719     1.540        5.448            .729           0
UA1    643    1    7    5.43   1.660    -1.708     1.361                                        0
UA2    643    1    7    5.47   1.553    -1.904     2.066                                        0
UA3    643    1    7    5.55   1.538    -1.998     2.582                                        0
UA4    643    1    7    5.41   1.716    -1.520      .731                                        0
UA5    643    1    7    5.37   1.657    -1.609      .947        5.443            .793           0
Note: Construct Mean and Cronbach's Alpha are reported on the last item of each construct; missing values = 0 for all items.
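The construct means and Cronbach's alpha values reported in the table can be computed with a small helper; a sketch, assuming the item-level DataFrame df from the earlier snippets:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the columns (items) of one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

constructs = {
    "ES": ["ES1", "ES2", "ES3"],
    "PI": ["PI1", "PI2", "PI3", "PI4"],
    "PU": ["PU1", "PU2", "PU3"],
    "UA": ["UA1", "UA2", "UA3", "UA4", "UA5"],
}
for name, cols in constructs.items():
    sub = df[cols]
    print(name, round(sub.values.mean(), 3), round(cronbach_alpha(sub), 3))
```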

Exploratory Factor Analysis


Exploratory Factor Analysis (EFA) is a statistical approach for determining the correlation among
the variables in a dataset. This type of analysis provides a factor structure (a grouping of variables
based on strong correlations). In general, an EFA prepares the variables to be used for cleaner
structural equation modeling. An EFA should always be conducted for new datasets.

KMO and Bartlett's Test


Kaiser-Meyer-Olkin Measure of Sampling Adequacy. .794
Bartlett's Test of Sphericity Approx. Chi-Square 2256.847
df 91
Sig. .000
Total Variance Explained
                   Initial Eigenvalues                Extraction Sums of Squared Loadings    Rotation Sums of Squared Loadings(a)
Component   Total   % of Variance   Cumulative %   Total   % of Variance   Cumulative %      Total
1           3.786       27.046         27.046      3.786       27.046         27.046         3.164
2           1.995       14.254         41.299      1.995       14.254         41.299         2.318
3           1.456       10.404         51.703      1.456       10.404         51.703         2.725
4           1.132        8.086         59.789      1.132        8.086         59.789         1.819
5            .869        6.210         65.999
6            .739        5.279         71.278
7            .659        4.710         75.988
8            .593        4.232         80.221
9            .540        3.856         84.077
10           .525        3.753         87.830
11           .475        3.394         91.224
12           .436        3.115         94.339
13           .407        2.911         97.250
14           .385        2.750        100.000
Extraction Method: Principal Component Analysis.
a. When components are correlated, sums of squared loadings cannot be added to obtain a total variance.
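The KMO measure, Bartlett's test, and the extraction above could be approximated in Python with the factor_analyzer package (an assumption; the original EFA was run in SPSS). A sketch using the same assumed item columns:

```python
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = df[[c for c in df.columns if c.startswith(("ES", "PI", "PU", "UA"))]]

# Sampling adequacy and sphericity, as in the KMO and Bartlett's test table.
chi_square, p_value = calculate_bartlett_sphericity(items)
_, kmo_model = calculate_kmo(items)
print(f"KMO = {kmo_model:.3f}, Bartlett chi2 = {chi_square:.3f}, p = {p_value:.3f}")

# Principal-component extraction of four factors with an oblique (promax) rotation.
efa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
efa.fit(items)
print(efa.loadings_)              # rotated factor loadings
print(efa.get_factor_variance())  # variance explained per factor
```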

Structural equation modeling


Next, we used the two-step approach recommended by Anderson and Gerbing (1988): the measurement model was tested first for the reliability and validity of the first-order constructs, and then the structural model was tested for the proposed hypotheses. “Structural equation modeling (SEM) grows out of and serves purposes similar to multiple regression, but in a more powerful way which takes into account the modeling of interactions, nonlinearities, correlated independents, measurement error, correlated error terms, multiple latent independents each measured by multiple indicators, and one or more latent dependents also each with multiple indicators. SEM may be used as a more powerful alternative to multiple regression, path analysis, factor analysis, time series analysis, and analysis of covariance.” When conducting a second-order CFA, establishing the reliability and validity of the first-order latent variables is necessary. The following section details the reliability and validity of the first-order constructs.
First order measurement model
Confirmatory Factor Analysis (CFA) is the next step after exploratory factor analysis to determine the factor structure of the dataset. In the EFA we explore the factor structure (how the variables relate and group based on inter-variable correlations); in the CFA we confirm the factor structure extracted in the EFA. In the specification search, the CFA was conducted with four first-order latent variables and fifteen observed variables. Maximum likelihood estimation (MLE) was used for model assessment. The latent variables are e-word of mouth (ES), purchase intention (PI), perceived usefulness (PU), and utilitarian attitude (UA). In the initial run of the CFA, a few items had factor loadings below the minimum recommended threshold (FL ≥ 0.7) (Kline, 2015); therefore, in the re-specification, these items were eliminated. The re-specified model fit (CMIN/DF = 2.375, GFI = 0.970, AGFI = 0.953, CFI = 0.961, RMSEA = 0.046, NFI = 0.935, TLI = 0.949, IFI = 0.962, PCLOSE = 0.702) indicated a good fit. Furthermore, as part of the measurement model analysis, we used reliability, convergent validity, and discriminant validity to examine the strength of the measures of the constructs used in the proposed model (Fornell, 1987). Reliability was assessed with Cronbach’s alpha and composite reliability (CR). Cronbach’s alpha ≥ 0.70 (Hair et al., 2010) and CR ≥ 0.70 (Fornell & Larcker, 1981) are considered the minimum thresholds for construct reliability, although values as low as 0.6 are considered acceptable in the social sciences (Bagozzi & Yi, 1988; Nunnally & Bernstein, 1994).
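A sketch of this first-order measurement model in Python using the semopy package (an assumption; the original CFA was estimated in AMOS with maximum likelihood). The text does not name the items dropped during re-specification, so all fifteen items are included here:

```python
import semopy

# First-order CFA: each latent variable is measured by its observed items.
measurement_model = """
ES =~ ES1 + ES2 + ES3
PI =~ PI1 + PI2 + PI3 + PI4
PU =~ PU1 + PU2 + PU3
UA =~ UA1 + UA2 + UA3 + UA4 + UA5
"""

cfa = semopy.Model(measurement_model)
cfa.fit(df)                    # df holds the item-level responses
print(cfa.inspect())           # factor loadings and other parameter estimates
print(semopy.calc_stats(cfa))  # chi2/df, GFI, AGFI, CFI, NFI, TLI, RMSEA, ...
```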

Measures Thresholds
Relative Chi-Square (cmin/df) 0-5
Goodness of fit (GFI) ≥0.9
Adjusted goodness of fit (AGFI) ≥0.8
Comparative fit index (CFI) ≥0.9
Incremental fit index (IFI) ≥0.9
Normed fit index (NFI) ≥0.9
Tucker-Lewis fit index (TLI) ≥0.9
Root mean square error of approximation (RMSEA) ≤0.05
Probability of closeness (PCLOSE) ≥0.05
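For completeness, a tiny Python check of the reported re-specified model fit against these thresholds (all index values are taken directly from the text):

```python
# Reported fit of the re-specified measurement model vs. the thresholds above.
fit = {"cmin/df": 2.375, "GFI": 0.970, "AGFI": 0.953, "CFI": 0.961, "IFI": 0.962,
       "NFI": 0.935, "TLI": 0.949, "RMSEA": 0.046, "PCLOSE": 0.702}

thresholds = {"cmin/df": lambda v: 0 <= v <= 5, "GFI": lambda v: v >= 0.9,
              "AGFI": lambda v: v >= 0.8, "CFI": lambda v: v >= 0.9,
              "IFI": lambda v: v >= 0.9, "NFI": lambda v: v >= 0.9,
              "TLI": lambda v: v >= 0.9, "RMSEA": lambda v: v <= 0.05,
              "PCLOSE": lambda v: v >= 0.05}

for index, value in fit.items():
    verdict = "pass" if thresholds[index](value) else "fail"
    print(f"{index}: {value} -> {verdict}")
```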

Structural Model & Hypothesis Testing


H1: ES has a relationship with UA.
H2: PU has a relationship with UA.
H3: UA has a relationship with PI.

Hypothesis Structural Path 𝛾 p-Value Decision


H1 ES→UA 0.03 0.01 Supported
H2 PU→UA 0.59 0.01 Supported
H3 UA→PI 0.19 0.01 Supported
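A corresponding structural model sketch in semopy (again an assumption; the original structural model was estimated in AMOS), regressing UA on ES and PU (H1, H2) and PI on UA (H3):

```python
structural_model = """
# Measurement part (same as the CFA above)
ES =~ ES1 + ES2 + ES3
PI =~ PI1 + PI2 + PI3 + PI4
PU =~ PU1 + PU2 + PU3
UA =~ UA1 + UA2 + UA3 + UA4 + UA5

# Structural part: H1 and H2 (ES, PU -> UA) and H3 (UA -> PI)
UA ~ ES + PU
PI ~ UA
"""

sem = semopy.Model(structural_model)
sem.fit(df)
print(sem.inspect())  # path coefficients and p-values for H1-H3
```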
Measurement Model

Structural Model
