Instructor's Manual Business Research Methods Pre-MSc: Statistics (2008/09)

Week 46 & 47: Chapter 1

Chapter 2

Chapter 4

Chapter 6

Chapter 7

Chapter 8

Week 48: Chapter 9

Chapter 10

Chapter 11

Week 49: Chapter 12

Chapter 13

Week 50: Chapter 15

SPSS-exercises
15.47 Analyze > Descriptive Statistics > Crosstabs (Row Variable: Education; Column Variable: Section). Click the Statistics button, select Chi-square, then click OK.

χ² = 86.615, p = 0.000. Conclusion: there is sufficient evidence to infer that educational level affects the way adults read the newspaper.

15.48 For this exercise it is better to use Excel. Make a table with the observed frequencies (which are given) and the expected frequencies (p times the total number of draws = 1/49 × 312 ≈ 6.37 for each number). Compute the chi-square statistic and compare the obtained value with the critical value (df = 48; α = 0.05). In Excel, the corresponding p-value can be computed with the function CHIDIST. χ² = 38.22, p = 0.8427. Conclusion: there is not enough evidence to infer that the numbers were not generated randomly.
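
The same goodness-of-fit test can also be reproduced outside Excel. The sketch below is a supplement (not part of the manual) using Python with scipy; the observed lottery counts come from the textbook data file and are not reproduced here, so the observed array is only a hypothetical placeholder (with the real data the test gives χ² = 38.22, p = 0.8427, as reported above). For exercise 15.47, scipy.stats.chi2_contingency plays the analogous role for the crosstab.

# Sketch of the chi-square goodness-of-fit test in exercise 15.48.
import numpy as np
from scipy import stats

n_numbers = 49                                       # lottery numbers 1 to 49
n_draws = 312                                        # total number of draws
expected = np.full(n_numbers, n_draws / n_numbers)   # 312/49 ≈ 6.37 per number

# Hypothetical observed counts; replace with the counts from the data file.
rng = np.random.default_rng(0)
observed = rng.multinomial(n_draws, np.ones(n_numbers) / n_numbers)

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)   # df = 49 - 1 = 48
print(f"chi-square = {chi2:.2f}, p-value = {p:.4f}")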

Chapter 19

SPSS-exercises
19.79 Conduct a Spearman rank correlation (Analyze > Correlate > Bivariate, and select Spearman). Rho = 0.574 and p < 0.05, so there is sufficient evidence to conclude that more education and higher incomes are linked.
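
For reference (not part of the manual), the same rank correlation can be computed in Python with scipy; the education and income values below are hypothetical placeholders for the textbook data file, so the printed numbers will not match the result reported above.

# Sketch of the Spearman rank correlation in exercise 19.79.
import pandas as pd
from scipy import stats

# Hypothetical data; in practice, load the exercise data file instead.
df = pd.DataFrame({
    "Education": [12, 16, 10, 14, 18, 11, 13, 17],
    "Income":    [35, 60, 28, 42, 75, 30, 40, 68],
})

rho, p = stats.spearmanr(df["Education"], df["Income"])
print(f"rho = {rho:.3f}, p-value = {p:.4f}")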

19.80 Create a new variable (say Section) in which the observations of both samples are stacked, and another variable (e.g., Group) that indicates which sample each value belongs to. Analyze > Nonparametric Tests > 2 Independent Samples (Test Variable: Section; Grouping Variable: Group). Z = 2.65 and p < 0.05, so there is enough evidence to conclude that the two teaching methods differ (a stacked-data sketch is given after exercise 19.85).

19.85 a. The one-way analysis of variance and the Kruskal-Wallis test should be considered: if the data are normal, apply the analysis of variance; otherwise, use the Kruskal-Wallis test.
b. Create a new variable (say Binding) in which the observations of the samples are stacked, and another variable (e.g., Group) that indicates which sample each value belongs to. Analyze > Nonparametric Tests > K Independent Samples (Test Variable: Binding; Grouping Variable: Group).
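
As a supplement (not part of the manual), the stacked layout and both nonparametric tests can be reproduced in Python with scipy. The sample values below, and the number of binding groups in 19.85, are hypothetical placeholders for the textbook data files, so the printed statistics will differ from the SPSS output.

# Sketch of the stacked-data layout, the rank-sum test for 19.80,
# and the Kruskal-Wallis test for 19.85. All values are hypothetical.
import pandas as pd
from scipy import stats

# 19.80: scores of two teaching methods stacked into one column with a group label.
method_a = [72, 85, 78, 90, 66, 81]
method_b = [60, 70, 64, 75, 58, 69]
stacked = pd.DataFrame({
    "Section": method_a + method_b,
    "Group":   ["A"] * len(method_a) + ["B"] * len(method_b),
})
u_stat, p_ranksum = stats.mannwhitneyu(
    stacked.loc[stacked["Group"] == "A", "Section"],
    stacked.loc[stacked["Group"] == "B", "Section"],
    alternative="two-sided",
)
print(f"19.80: U = {u_stat:.1f}, p-value = {p_ranksum:.4f}")

# 19.85: if the data are not normal, compare the (hypothetical) binding groups
# with the Kruskal-Wallis test instead of one-way ANOVA.
binding_1 = [23, 30, 27, 25]
binding_2 = [31, 35, 29, 33]
binding_3 = [22, 26, 24, 21]
h_stat, p_kw = stats.kruskal(binding_1, binding_2, binding_3)
print(f"19.85: H = {h_stat:.2f}, p-value = {p_kw:.4f}")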

Week 51: Section 4.4

Chapter 16

SPSS-exercises
16.81 Analyze > Regression > Linear (Dependent: Customer; Independent: Ads).

a. ŷ = 296.92 + 21.36x.
b. On average, each additional ad generates 21.36 (about 22) customers.
c. Coefficient b1 is not significant (t = 1.495, p > 0.05). There is not enough evidence to conclude that the larger the number of ads, the larger the number of customers.
d. R square = 0.085. There is only a weak linear relationship between the number of ads and the number of customers.
e. The linear relationship is too weak for the model to produce useful predictions.
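
For reference (not part of the manual), the same simple regression can be run in Python with scipy; the ads/customers values below are hypothetical placeholders for the textbook data file, so the printed estimates will not match the SPSS output above.

# Sketch of the simple linear regression in exercise 16.81.
from scipy import stats

# Hypothetical data; in practice, load the exercise data file instead.
ads       = [5, 12, 8, 6, 10, 15, 7, 11, 9, 14]                  # x: number of ads
customers = [350, 500, 420, 380, 460, 540, 400, 480, 430, 520]   # y: number of customers

result = stats.linregress(ads, customers)
print(f"y-hat = {result.intercept:.2f} + {result.slope:.2f} * x")
print(f"slope t-test p-value = {result.pvalue:.4f}")
print(f"R square = {result.rvalue ** 2:.3f}")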

Chapter 17

SPSS-exercises
17.40 Analyze > Regression > Linear (Dependent: Yield; Independents: Fertilizer, Water).

a. ŷ = 164.01 + 0.14 x1 + 0.31 x2. For each additional unit of fertilizer, crop yield increases on average by 0.14 (holding the amount of water constant); for each additional unit of water, crop yield increases on average by 0.31 (holding the amount of fertilizer constant).
b. b1 is not significant (t = 1.717, p > 0.05). There is not enough evidence to infer that there is a linear relationship between crop yield and the amount of fertilizer.
c. b2 is significant (t = 4.637, p < 0.05). There is enough evidence to infer that there is a linear relationship between crop yield and the amount of water.
d. The standard error of the estimate is 63.087 and R square is 0.475; the model fits moderately well.
e. Run the regression analysis again (Analyze > Regression > Linear), but indicate that you want to save the standardized and unstandardized residuals and the unstandardized predicted values. Make a histogram of the (standardized) residuals (Analyze > Regression > Linear > Plots > Standardized Residual Plots > select Histogram). Conclusion: the errors appear to be normal. Also plot the (unstandardized) residuals against the predicted values (Graphs > Legacy Dialogs > Scatter/Dot); in this plot there does appear to be a problem.
f. Choose Analyze > Regression > Linear > Save and, besides the unstandardized predicted values, also select the individual prediction intervals. The first row of the dataset then gives the upper bound (349.3) and lower bound (69.2) of the prediction interval around the predicted value (209.3).
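
As a supplement (not part of the manual), the same multiple regression, residual check, and individual prediction interval can be obtained in Python with statsmodels; the fertilizer/water/yield values below are hypothetical placeholders for the textbook data file, so the printed numbers will differ from the SPSS output above.

# Sketch of the multiple regression in exercise 17.40.
import pandas as pd
import statsmodels.api as sm

# Hypothetical data; in practice, load the exercise data file instead.
df = pd.DataFrame({
    "Fertilizer": [100, 200, 300, 400, 500, 600, 150, 250, 350, 450],
    "Water":      [35, 20, 80, 55, 110, 70, 95, 40, 120, 60],
    "Yield":      [220, 240, 390, 360, 560, 430, 470, 300, 600, 380],
})

X = sm.add_constant(df[["Fertilizer", "Water"]])   # add the intercept column
model = sm.OLS(df["Yield"], X).fit()
print(model.summary())                             # coefficients, t-values, R square

# Residual check (part e): the residuals can be plotted against the fitted
# values, and shown in a histogram, to judge the model assumptions.
residuals = model.resid
fitted = model.fittedvalues
print(pd.DataFrame({"fitted": fitted, "residual": residuals}).head())

# Individual 95% prediction interval for the first observation (part f).
pred = model.get_prediction(X.iloc[[0]])
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])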
