AGAPITO, QUEENTRISHA
BARELA, CYRIL D.
BUMANGLAD, MARICAR R.
CUBINAR, EZRA
RAMOS, GLORY FAITH T.
Define important terms about research.
Familiarize with statistical treatments.
Apply the research terms and statistical treatments.
Exploratory research
It allows the researcher to familiarize him/herself with the problem or concept to
be studied, and perhaps generate hypotheses to be tested.
It helps determine the best research design, data collection method and selection
of subjects, and sometimes it even concludes that the problem does not exist!
It tests concepts before they are put in the marketplace, which is always a very costly endeavor. In concept testing, consumers are provided with either a written concept or a prototype for a new, revised, or repositioned product, service, or strategy.
It is meant to provide information that is useful in reaching conclusions or
decision-making.
It is divided into two:
DESCRIPTIVE and INFERENTIAL
1. Percentage
Formula: % = (f / N) × 100
Where: % = Percent
f = Frequency
N = Number of cases
2. Mean
Used to get the average or central value (e.g., level, extent, status, etc.)
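The two descriptive measures above can be sketched in a few lines of code; this is a minimal illustration, and the function names are my own, not from the original material.

```python
# Descriptive statistics sketch: percentage of cases and the mean.

def percentage(f, N):
    """Percentage formula: % = (f / N) * 100, where f is the
    frequency and N is the number of cases."""
    return f / N * 100

def mean(values):
    """Arithmetic mean: the average or central value of a data set."""
    return sum(values) / len(values)

# e.g., 12 of 15 respondents answered "yes"
print(percentage(12, 15))   # 80.0
print(mean([12, 16, 20]))   # 16.0
```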
3. T-test
It is a strong statistical technique used to show differences between two or more means or components through significance tests.
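As a sketch of the idea, the two-sample (pooled-variance) t statistic can be computed from scratch as below; the sample data are made up for illustration, and in practice a library routine would also return the p-value for the significance test.

```python
import math

# Two-sample t statistic with pooled variance (illustrative sketch).
def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # sample variances (n - 1 in the denominator)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # pooled variance and standard error of the mean difference
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (ma - mb) / se

print(t_statistic([1, 2, 3], [2, 3, 4]))
```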
F = MST / MSE
Where:
F = ANOVA coefficient
MST = Mean sum of squares due to treatment
MSE = Mean sum of squares due to error

MST = SST / (p − 1), with SST = Σ n(x − x̄)²
Where:
SST = Sum of squares due to treatment
p = Total number of populations
n = The total number of samples in a population

MSE = SSE / (N − p), with SSE = Σ (n − 1)S²
Where:
SSE = Sum of squares due to error
S = Standard deviation of the samples
N = Total number of observations
Example: Calculate the ANOVA coefficient for the following data.

Types of Domestic Animals | Number of animals | Average | Standard Deviation
Dogs                      | 5                 | 12      | 2
Cats                      | 5                 | 16      | 1
Hamsters                  | 5                 | 20      | 4

Solution:
p = 3
n = 5
N = 15
x̄ = 16 (grand mean)

SST = Σ n(x − x̄)²
SST = 5(12 − 16)² + 5(16 − 16)² + 5(20 − 16)²
SST = 160

MST = SST / (p − 1)
MST = 160 / (3 − 1)
MST = 80

Construct the following table:

Animal name | n | x  | S | S²
Dogs        | 5 | 12 | 2 | 4
Cats        | 5 | 16 | 1 | 1
Hamsters    | 5 | 20 | 4 | 16

SSE = Σ (n − 1)S²
SSE = 4×4 + 4×1 + 4×16
SSE = 84

MSE = SSE / (N − p)
MSE = 84 / (15 − 3)
MSE = 7

F = MST / MSE
F = 80 / 7
F = 11.429
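The worked example above can be reproduced directly from the formulas; the sketch below encodes each group as (number of animals, average, standard deviation) and computes F the same way.

```python
# One-way ANOVA F from group summaries: (n, mean, SD) per group.
groups = {
    "Dogs":     (5, 12, 2),
    "Cats":     (5, 16, 1),
    "Hamsters": (5, 20, 4),
}

p = len(groups)                               # number of populations
N = sum(n for n, _, _ in groups.values())     # total observations
grand_mean = sum(n * m for n, m, _ in groups.values()) / N

SST = sum(n * (m - grand_mean) ** 2 for n, m, _ in groups.values())
SSE = sum((n - 1) * s ** 2 for n, _, s in groups.values())
MST = SST / (p - 1)
MSE = SSE / (N - p)
F = MST / MSE

print(round(F, 3))  # 11.429
```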
4. Correlation
Used to find the degree of association of two sets of variables, X and Y, or to test the significant relationship between the two variables.
Multiple Correlation
Used to test if the independent variables have influence on the dependent variable.
For example, suppose we correlate a person's height with their basketball performance and find r = .67. That is, as height increases, so does basketball performance. This makes sense. However, if we plotted the variables the other way around and wanted to determine whether a person's height was determined by their basketball performance (which makes no sense), we would still get r = .67. This is because the Pearson correlation coefficient takes no account of any theory behind why you chose the two variables to compare.
It is important to realize that the Pearson correlation coefficient, r, does not
represent the slope of the line of best fit. Therefore, if you get a Pearson
correlation coefficient of +1 this does not mean that for every unit increase in
one variable there is a unit increase in another. It simply means that there is
no variation between the data points and the line of best fit.
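Pearson's r can be computed straight from its definition; the sketch below does so, with hypothetical height and performance figures (not taken from the text) to show a strong positive correlation.

```python
import math

# Pearson product-moment correlation coefficient from its definition:
# r = covariance(x, y) / (spread of x * spread of y), so r always lies
# between -1 and +1 regardless of the variables' units.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

heights = [170, 175, 180, 185, 190]   # hypothetical heights (cm)
points  = [10, 14, 13, 18, 20]        # hypothetical points per game

print(round(pearson_r(heights, points), 2))  # 0.95
```

Note that a perfectly linear data set gives r = +1 (or -1) whatever the slope of the line, which is the point made above: r measures how tightly the points hug the line of best fit, not the line's steepness.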
The two variables must be either interval or ratio measurements (see our Types of Variable guide for further details).
The variables must be approximately normally distributed (see our Testing for
Normality guide for further details).
There is a linear relationship between the two variables (but see note at
bottom of page). We discuss this in the next section.
Outliers are either kept to a minimum or are removed entirely. We also
discuss this on page 2.
There is homoscedasticity of the data. This is also discussed on page 2.
To test to see whether your two variables form a linear relationship you
simply need to plot them on a graph (a scatterplot, for example) and visually
inspect the graph's shape. In the diagram below, you will find a few different
examples of a linear relationship and some non-linear relationships. It is not
appropriate to analyse a non-linear relationship using a Pearson product-
moment correlation.
Assumption #6: Your data must not show multicollinearity, which occurs when you have two or more independent variables that are highly correlated with each other.
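One simple way to screen for multicollinearity is to compute the pairwise Pearson correlations among the independent variables and flag any pair above a cutoff; the sketch below uses |r| > 0.8, a common rule of thumb rather than a value from the text, and the toy data are invented.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def collinear_pairs(predictors, cutoff=0.8):
    """Flag pairs of independent variables with |r| above the cutoff."""
    names = list(predictors)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson_r(predictors[names[i]], predictors[names[j]])
            if abs(r) > cutoff:
                flagged.append((names[i], names[j], round(r, 2)))
    return flagged

# Toy data: x2 is almost an exact multiple of x1, so the pair is flagged.
X = {"x1": [1, 2, 3, 4], "x2": [2, 4, 6, 8.1], "x3": [5, 1, 4, 2]}
print(collinear_pairs(X))
```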