
Statgraphics

Laboratory work 4 (variant 3)
Group 230671

2011
Statgraphics

The Statgraphics package is used here to study statistical relationships between variables: regression, cluster analysis and factor analysis.

Three variables were created in STATGRAPHICS: X, Y and Z, where Z = RANDOM(Y); the X and Y values are taken from task 4.1 (Y is given as a function of X).
Table 1 - X and Y values (fragment)

X     Y
10    1.985
20    1.913
30    2.007
40    2.227

1. Simple linear regression.

Fitting a simple linear regression of Y on X produces the following output:
Simple Regression - Y vs. X
Dependent variable: Y
Independent variable: X
Linear model: Y = a + b*X
Coefficients
               Least Squares  Standard    T
Parameter      Estimate       Error       Statistic  P-Value
Intercept      2.26875        0.0773823   29.3188    0.0000
Slope          -0.0129051     0.00328914  -3.92353   0.0004

Analysis of Variance
Source          Sum of Squares  Df  Mean Square  F-Ratio  P-Value
Model           0.887662        1   0.887662     15.39    0.0004
Residual        2.19117         38  0.0576624
Total (Corr.)   3.07883         39

Correlation Coefficient = -0.536946

R-squared = 28.8311 percent
R-squared (adjusted for d.f.) = 26.9582 percent
Standard Error of Est. = 0.24013
Mean absolute error = 0.177813
Durbin-Watson statistic = 0.400435 (P=0.0000)
Lag 1 residual autocorrelation = 0.72532
The output shows the results of fitting a linear model to describe the relationship between Y and X. The equation of the fitted model
is
Y = 2.26875 - 0.0129051*X
Since the P-value in the ANOVA table is less than 0.05, there is a statistically significant relationship between Y and X at the 95.0%
confidence level.
The R-Squared statistic indicates that the model as fitted explains 28.8311% of the variability in Y. The correlation coefficient
equals -0.536946, indicating a moderately strong relationship between the variables. The standard error of the estimate shows the
standard deviation of the residuals to be 0.24013. This value can be used to construct prediction limits for new observations by
selecting the Forecasts option from the text menu.
The mean absolute error (MAE) of 0.177813 is the average value of the residuals. The Durbin-Watson (DW) statistic tests the residuals to determine if there is any significant correlation based on the order in which they occur in your data file. Since the P-value is less than 0.05, there is an indication of possible serial correlation at the 95.0% confidence level. Plot the residuals versus row order to see if there is any pattern that can be seen.
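The same linear fit can be reproduced outside Statgraphics. A minimal sketch with NumPy follows; the data are illustrative stand-ins (X = 1..40, noise level chosen to mimic the reported standard error of the estimate), not the original 40 observations, so the printed numbers will only roughly resemble the report's.

```python
import numpy as np

# Illustrative data: X = 1..40, Y built from the reported fit plus noise
rng = np.random.default_rng(0)
x = np.arange(1, 41, dtype=float)
y = 2.27 - 0.013 * x + rng.normal(0.0, 0.24, x.size)

# Ordinary least squares line via polyfit (degree 1)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
r_squared = 1 - resid.var() / y.var()

print(f"Y = {intercept:.4f} {slope:+.4f}*X")
print(f"R-squared = {100 * r_squared:.2f} percent")
```

With an intercept in the model, R-squared computed this way is always between 0 and 1, matching the "percent of variability explained" reading used in the report.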

Figure 1 - Plot of the fitted linear model (image not preserved)

The plot shows that the linear model describes the data poorly: the behaviour of Y differs noticeably between the ranges 5 < X < 25 and X > 30.
2. Polynomial regression.

Fitting a second-order polynomial regression of Y on X produces the following output:
Polynomial Regression - Y versus X
Dependent variable: Y
Independent variable: X
Order of polynomial = 2

                        Standard     T
Parameter  Estimate     Error        Statistic  P-Value
CONSTANT   2.45053      0.114977     21.3132    0.0000
X          -0.038873    0.0129333    -3.00564   0.0047
X^2        0.000633363  0.000305914  2.0704     0.0454

Analysis of Variance
Source          Sum of Squares  Df  Mean Square  F-Ratio  P-Value
Model           1.11516         2   0.557579     10.51    0.0002
Residual        1.96368         37  0.0530723
Total (Corr.)   3.07883         39

R-squared = 36.2202 percent

R-squared (adjusted for d.f.) = 32.7726 percent
Standard Error of Est. = 0.230374
Mean absolute error = 0.151967
Durbin-Watson statistic = 0.430411 (P=0.0000)
Lag 1 residual autocorrelation = 0.752386
The output shows the results of fitting a second order polynomial model to describe the relationship between Y and X. The equation
of the fitted model is
Y = 2.45053 - 0.038873*X + 0.000633363*X^2
Since the P-value in the ANOVA table is less than 0.05, there is a statistically significant relationship between Y and X at the 95%
confidence level.
The R-Squared statistic indicates that the model as fitted explains 36.2202% of the variability in Y. The adjusted R-squared statistic,
which is more suitable for comparing models with different numbers of independent variables, is 32.7726%. The standard error of
the estimate shows the standard deviation of the residuals to be 0.230374. This value can be used to construct prediction limits for
new observations by selecting the Forecasts option from the text menu. The mean absolute error (MAE) of 0.151967 is the average
value of the residuals. The Durbin-Watson (DW) statistic tests the residuals to determine if there is any significant correlation based
on the order in which they occur in your data file. Since the P-value is less than 0.05, there is an indication of possible serial
correlation at the 95% confidence level. Plot the residuals versus row order to see if there is any pattern that can be seen.
In determining whether the order of the polynomial is appropriate, note first that the P-value on the highest order term of the
polynomial equals 0.0454477. Since the P-value is less than 0.05, the highest order term is statistically significant at the 95%
confidence level. Consequently, you probably don't want to consider any model of lower order.
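The second-order fit and the adjusted R-squared comparison can be sketched the same way. The data below are again illustrative stand-ins built from the reported coefficients, not the original observations.

```python
import numpy as np

# Illustrative data following the reported second-order model plus noise
rng = np.random.default_rng(1)
x = np.arange(1, 41, dtype=float)
y = 2.45 - 0.039 * x + 0.00063 * x**2 + rng.normal(0.0, 0.23, x.size)

# polyfit returns coefficients highest power first
c2, c1, c0 = np.polyfit(x, y, 2)
resid = y - (c0 + c1 * x + c2 * x**2)

n, p = x.size, 2                                  # observations, predictors
r2 = 1 - resid.var() / y.var()
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)     # penalize extra terms

print(f"Y = {c0:.4f} {c1:+.5f}*X {c2:+.7f}*X^2")
print(f"R-squared = {100 * r2:.2f}%, adjusted = {100 * r2_adj:.2f}%")
```

The adjusted value is always below the raw R-squared, which is why the report uses it when comparing models with different numbers of terms.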

Figure 2 - Plot of the fitted second-order polynomial (image not preserved)

The plot shows that the polynomial model describes the dependence of Y on X in the range 0 < X < 25 (where the linear model did not), but the fit again degrades beyond X = 30.
3. Cluster analysis.

Cluster analysis of the variables X, Y and Z produces the following output:
Cluster Analysis
Data variables:
X
Y
Z

Number of complete cases: 40

Clustering Method: Group Average
Distance Metric: City-Block
Clustering: observations
Standardized: yes

Cluster Summary
Cluster  Members  Percent
1        40       100.00

Centroids
Cluster  X     Y       Z
1        20.5  2.0042  0.075

This procedure has created 1 cluster from the 40 observations supplied. The clusters are groups of observations with similar
characteristics. To form the clusters, the procedure began with each observation in a separate group. It then combined the two
observations which were closest together to form a new group. After recomputing the distance between the groups, the two groups
then closest together were combined. This process was repeated until only 1 group remained. To specify the number of final
clusters, press the alternate mouse button and select Analysis Options. To determine a reasonable value for the number of clusters,
look at the Agglomeration Distance Plot available from the list of Graphical Options.
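The agglomerative procedure described above (group-average linkage, city-block metric, standardized observations) can be sketched with SciPy. The three columns below are illustrative stand-ins for X, Y and Z, centered near the reported centroids.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative stand-ins for the three variables (40 observations)
rng = np.random.default_rng(2)
data = np.column_stack([
    np.arange(1, 41, dtype=float),       # X = 1..40 (centroid 20.5)
    2.0 + rng.normal(0.0, 0.3, 40),      # Y around its centroid
    rng.uniform(-1.0, 1.0, 40),          # stand-in for Z = RANDOM(Y)
])

# Standardize, as in the report ("Standardized: yes")
z = (data - data.mean(axis=0)) / data.std(axis=0)

# Group-average linkage with city-block (Manhattan) distance
merge_tree = linkage(z, method="average", metric="cityblock")

# Cut the tree into a chosen number of final clusters (1, as reported)
labels = fcluster(merge_tree, t=1, criterion="maxclust")
print("clusters:", np.unique(labels).size, "members:", labels.size)
```

Raising `t` in `fcluster` corresponds to choosing more final clusters in Analysis Options, and the merge heights in `merge_tree` are what the Agglomeration Distance Plot displays.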

Figure 3 - Dendrogram (image not preserved)

Judging from the dendrogram, observations with X = 8-24 form one compact group, observations with X = 25-31, 37, 38 another, while X = 1-6 and X = 32, 33, 35, 36 form further groups. Observations 1-7, 14, 32-36, 39, 40 lie farthest from the rest.
4. Factor analysis.

Factor analysis of the variables X, Y and Z produces the following output:
Factor Analysis
Data variables:
X
Y
Z

Data input: observations

Number of complete cases: 40
Missing value treatment: listwise
Standardized: yes
Type of factoring: principal components
Number of factors extracted: 2
Factor Analysis
Factor                Percent of  Cumulative
Number   Eigenvalue   Variance    Percentage
1        1.54835      51.612      51.612
2        1.02953      34.318      85.929
3        0.422117     14.071      100.000

Variable  Initial Communality
X         1.0
Y         1.0
Z         1.0

This procedure performs a factor analysis. The purpose of the analysis is to obtain a small number of factors which account for most
of the variability in the 3 variables. In this case, 2 factors have been extracted, since 2 factors had eigenvalues greater than or equal
to 1.0. Together they account for 85.9294% of the variability in the original data. Since you have selected the principal components
method, the initial communality estimates have been set to assume that all of the variability in the data is due to common factors.
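This principal-components extraction amounts to taking eigenvalues of the correlation matrix of the standardized variables and keeping those at or above 1.0 (the Kaiser criterion). A sketch with illustrative data (Y correlated with X, Z independent noise, so the eigenvalues will differ from the report's):

```python
import numpy as np

# Illustrative stand-ins: Y depends on X, Z is independent noise
rng = np.random.default_rng(3)
x = np.arange(1, 41, dtype=float)
y = 2.27 - 0.013 * x + rng.normal(0.0, 0.24, 40)
zvar = rng.uniform(-1.0, 1.0, 40)

data = np.column_stack([x, y, zvar])
std = (data - data.mean(axis=0)) / data.std(axis=0)
corr = (std.T @ std) / len(std)                   # correlation matrix

eig = np.sort(np.linalg.eigvalsh(corr))[::-1]     # eigenvalues, descending
n_factors = int((eig >= 1.0).sum())               # Kaiser criterion
explained = 100 * eig[:n_factors].sum() / eig.sum()

print("eigenvalues:", np.round(eig, 3))
print(f"factors kept: {n_factors}, explaining {explained:.1f}% of variance")
```

The eigenvalues of a correlation matrix always sum to the number of variables (here 3), so "percent of variance" is each eigenvalue divided by that total, as in the table above.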

Figure 4 - Factor analysis of X, Y and Z (image not preserved)

Two factors have eigenvalues greater than 1; together they account for about 86% of the variability in the original data.

In this work the Statgraphics package was applied to regression, cluster and factor analysis of the variables X, Y and Z; it proved a convenient tool for the statistical processing of experimental data.
