
PSYCHOANALYSIS

Psychoanalysis is the most widely publicized psychological system, especially among non-psychologists.
Although it has long been rejected by some academic psychologists, it has been more popular in other
scientific and technical areas, in literary circles and with the lay public. More recently, it has become a
growing concern within some groups of academic psychologists.
The body of psychoanalytic writing is enormous. Freud's collected works alone, in their English
translation, run to nearly 24 volumes. No chapter can attempt to give a comprehensive picture of even
one psychoanalytic theory. Freud and his followers made an enormous contribution to psychology; the
importance of that contribution is well accepted.
IMPORTANT MEN IN PSYCHOANALYSIS
Historical Antecedent Influences
G.W. Leibniz
Johann Wolfgang Goethe
Gustav Theodor Fechner
Charles Darwin

PSYCHOANALYSIS
Pioneers
Johann F. Herbart
Arthur Schopenhauer
Jean Martin Charcot
Joseph Breuer

Founders

Sigmund Freud

Developers
Alfred Adler
Carl Gustav Jung
Otto Rank
Karen Horney
Harry S. Sullivan
Erich Fromm
Sandor Ferenczi

HISTORICAL ANTECEDENTS OF PSYCHOANALYSIS


In the development of psychoanalysis, there were two kinds of influences: an intellectual
tradition in which Freud can be placed, and a set of more direct, personal influences on
Freud. Let us consider the former first.

Early in the 18th century, Leibniz developed a theory about the elements of reality, which he
called monads. They are described as centers of energy and motivation in individuals. He
also pointed briefly to the unconscious and to degrees of consciousness.
A century later, Herbart worked on these ideas and suggested the occurrence of a conflict of
ideas as they strive to become conscious. So Freud was not the first to discover the
unconscious; his unique contribution was his detailed characterization of the unconscious and
its mode of operation.
Schopenhauer put forward the ideas of repression into the unconscious and of resistance to
recognizing repressed material; however, Freud said that he developed the same ideas without
having read Schopenhauer's work.
Freud apparently chose a scientific career after reading some of Goethe's essays on nature.

Freud tended to take a biological view of man in accordance with Darwin's biological and
evolutionary views; an example is the death instinct, which he said depends upon speculation
about the origin of life.
Freud's concern with the intensity of stimulation, with mental energy, and with the topographical
model of mind was related to Fechner's prior work on the mind-body issue.

THE LIFE OF SIGMUND FREUD


Sigmund Freud (1856-1939) is almost universally considered a giant among psychologists, even by
those who think that he was a misguided giant. The kind of system which Freud evolved is
intimately related to his life; therefore some understanding of his life is important in evaluating his
system.
Freud was born on May 6, 1856 in Freiberg, Moravia (then part of the Austrian Empire, now in the
Czech Republic), into a poor Jewish family. The family moved to Vienna when he was 4 years of age.
He early showed great academic aptitude and decided to become a physician, though he did not relish
medical practice itself. Among the medical specialties, Freud later felt an interest in neurology and
psychiatry, and he studied both.
In 1885, Freud went to Paris and studied under Charcot, a famous hypnotist, teacher and authority on
hysteria. Freud's interest in hypnosis as a treatment method was strengthened by Charcot.
In 1895, Freud and Breuer (Freud's close friend and physician) together published Studies on
Hysteria, which marked the beginning of the psychoanalytic school.
Perhaps the greatest milestone in Freud's career was the publication of The Interpretation of Dreams
in 1900.
In 1923, a cancer was discovered in Freud's mouth, attributed to his heavy smoking.
During the last 16 years of his life Freud underwent a continuous series of almost 30 operations, and he
died on September 23, 1939 of his illness.
THE FOUNDING OF PSYCHOANALYSIS
The germ of psychoanalysis appeared in Studies on Hysteria, published by Breuer and Freud
in 1895. Both were interested in hypnotism as a treatment method, and many of the ideas that provided
the basis for psychoanalysis came from Breuer's observations of his patient Anna O., and others from
Freud's observations of hysterical patients.
Freud observed that not all his patients could be hypnotized, and at first felt that his technique was
deficient. He began to modify his method, deciding to salvage the "talking cure". During this period, Freud
exerted a great deal of guidance over the patient's processes of association. But one patient told him
that he was interrupting too much and that he should keep quiet. This suggestion was the final
impetus that converted Freud from the hypnotic trance to free association as a method of treatment.

Another important idea was Freud's conviction of the importance of unconscious processes in the etiology
of neuroses. This evidence came partly from the observation that symptoms often seemed to be
expressions of events which the patients could not remember, or of impulses of which they were unaware.
Freud himself observed that most of his hysterical patients reported traumatic sexual experiences in their
childhood, often with members of their own family. He concluded that no neurosis is possible in a
person with a normal sex life. Breuer, however, was not convinced about the role of sexuality and began
to leave psychoanalysis to Freud.
The importance of symbolism was also recognized by Freud; symptoms seemed to be distorted but
symbolic representations of repressed events or conflicts. Only through their recovery and working out
(abreaction) could the patient be cured.
In the quest for the origin of symptoms, Freud was forced further and further back into childhood; his
belief in the importance of childhood experiences in the production of neuroses was growing. However,
he believed that these experiences gained their traumatic force only after the patient reached
puberty.
The last, and possibly the most important, discovery was the transference relation. Freud had already
seen how Breuer became fond of his patient Anna O. (countertransference); it was also true that she
became fond of him. It seemed that the patient transferred to the therapist the feelings that she had
earlier held for other important people in her life. These feelings may be positive or negative and become
stronger at some stages of therapy. The patient has to understand and live through this transference for
a complete cure. Thus, the transference is one of the most important and useful tools of the therapist.
FREUDS SYSTEM
Freud had a surprising attitude toward the reality of his conceptions. He might admit that they were
convenient fictions invented for explanatory purposes, but his usual attitude was that he was dealing
with real things.
It seems that Freud really regarded the unconscious as a country which he was exploring rather than as a
system which he was constructing. Freud believed that he found two states within this country: the
conscious and the unconscious. Different kinds of laws determine what happens in these two states. The
unconscious operates according to a set of laws which Freud called the primary process. Ordinary logic
does not apply here, and the mechanisms that can be observed in dreams characterize the action of the
primary process. For instance, some of the things that can occur are the condensation of several thoughts
into a single symbol, the displacement of an impulse or affect from one symbol to another, the
timelessness characteristic of dreams, the conversion of an impulse into its opposite, and so on. The
illogicality of dreams is characteristic of the primary process as a whole.
The conscious operates according to the secondary process, which is based on ordinary logic.

Part of the energy for the mental apparatus is called libido; its source is in biological tensions, and the
most important of these is the sexual instinct. Most of the sexual energy derives from the erogenous
zones, bodily areas especially sensitive to stimulation.
The id is the primordial reservoir of this energy and, being unconscious, operates according to the
primary process. The various instincts which reside in the id press toward the discharge of their libidinal
energy. Each instinct therefore has a source in biological tensions, an aim of discharging its energy in
some particular activity, and an object which will serve to facilitate the discharge.
The id operates according to the pleasure principle. In general, the elimination of tension is what
defines pleasurableness. Departure from a low level of tension, or any heightening of tension, is
unpleasurable. The id does not distinguish between the hallucinatory fulfilling of a need and the actual
fulfilling of that need. However, the tension does not remain reduced except through contact with a real
object.
Accordingly, another psychic structure develops and complements the id. It is called the ego. It
operates according to the laws of the secondary process and hence, being in contact with reality, operates
according to the reality principle. That is, it is an evaluative agency which intelligently selects the line
of behavior which minimizes pain and maximizes pleasure. So the ego is still in the service of the
pleasure principle through the reality principle, but sometimes turns aside the direct gratification of the
id's needs in order that their overall gratification will be greater; i.e., the ego only postpones gratification.
As a result of contact with the cultural realities, especially as embodied in the parents, a third mental
agency develops. It functions as a suppressor of pleasurable activities in the same way that external
agencies did at one time. It is called the superego and has two subsystems: a conscience which punishes
unsuitable behavior and an ego ideal which rewards suitable behavior. The conscience brings about
feelings of guilt and the ego ideal brings about feelings of pride. The superego is unlike the ego in its
attempt to halt completely certain pleasurable activities, whereas the ego just postpones them and tries
to find appropriate ways of satisfying the pleasurable activities of the id. The operation of the superego is
largely unconscious; a large part of its operation follows the laws of the primary process.
Freud postulated that the instincts active throughout the psychic apparatus could be divided into two
groups: the life instincts and the destructive or death instincts, so named because their aim is the death
of the individual. Freud viewed the instincts as conservative; that is, they aim for a return to a previous
state, and this explains the repetition compulsion which manifests itself in some behavior. Since living
matter arises from dead matter, the ultimate previous state must be a state of complete quiescence, or
death. The death instinct works for the disintegration of the individual, while the life instinct works for
the complete integration of the individual. The death instinct is the part of his theory least frequently
accepted by other analysts.
The energy in the service of the life instincts was called libido. No special name was given to the
energy that activates the death instinct. As the individual develops his ego, more and more of the
available psychic energy comes under the dominion of the ego rather than the id. The ego attaches this
energy to psychic representations of external objects; such an attachment is called cathexis. The kind
of object cathected depends upon the instinct which has energy available. The distribution of energy
over the instincts is flexible, and the distribution was assumed to change gradually, so that more and
more energy became available for the self-preservative instincts of the ego, and less energy for the
aggressive instincts of the id.
In the course of individual development, there is a stage in which much of the libidinal energy is
cathected onto the parent of the opposite sex; in the case of the boy, this leads to the development of
the Oedipal conflict. His sexual feelings are directed toward the mother, but the child is blocked from
their direct expression. Because of these impulses the boy fears castration by the father, and thus he
represses them into the unconscious. So strongly repressed, the sexual urges enter the latency period.
They emerge again at puberty, when the increase in sexual tensions is sufficient to disturb the psychic
economy and overcome the repressive forces.

The BEM Explanation of Behaviour


In 1978, Thomas Gilbert published Human Competence: Engineering Worthy Performance, which
described the Behaviour Engineering Model (BEM) for performance analysis. This model consists of
three Leisurely Theorems that:
1) Distinguished between accomplishment and behaviour to define worthy performance,
2) Identified methods for determining the potential for improving performance (PIP), and
3) Described six essential components of behaviour that can be manipulated to improve performance.
Determine Worthy or Desired Performance
The first step to using the BEM involves identifying desired or worthy performance. This level of
performance is characterised by behaviour (B), or what a person does, and accomplishment (A), the
outcomes of the behaviour.
Determine the Potential for Improving Performance (PIP)
The gap between desired and current performance can be determined by comparing the very best
instance of that performance with what is typical. Exemplary performance is demonstrated when
behaviours result in the best outcomes. Typical performance is the current level of performance. The
potential for improving performance is the ratio between the two and can be expressed as

PIP = worth of exemplary performance / worth of typical performance

The PIP is the performance gap. The greater the gap, the greater the potential for typical performers to
improve their performance to the exemplary level. Rather than viewing this gap as a problem, this
model helps people see the potential for improvement more positively (Chyung, 2002).
In order to understand what changes must be made to a management system to achieve worthy
performance, a performance technologist must first determine the influences on behaviour. Gilbert
(1978) states that behaviour is the product of the personal characteristics of an individual (repertory)
and the environment where behaviours occur. Within each of these aspects of behaviour there are
conditions that can be examined for deficiencies and ultimately manipulated to improve performance.
These six conditions of behaviour are data, instruments, incentives, knowledge, capacity and motives.
Following the sequence of steps in the cause analysis process is most likely to uncover the variables that
can be improved with the least costly intervention strategies first. Improvements to environmental
conditions generally have the greatest leverage for performance improvement. Providing people with
clear expectations of and feedback on performance, the right tools for the job, and appropriate rewards
and recognition for performance are often the most cost-effective changes that can be implemented
within a management system.

3.5 INTELLIGENCE
3.5.1 Definition of Intelligence
Robinson & Robinson defined intelligence thus: "Intelligence refers to the whole class of cognitive
behaviours which reflect an individual's capacity to solve problems with insight, to adapt himself to new
situations, to think abstractly and to profit from his experiences."
3.5.2 Cognitive Development

3.5.4 Measurement of Intelligence
Measurement of intelligence started with Sir Francis Galton, who attempted to study hereditary genius.
However, the first standardised test of intelligence was developed by Binet and Simon in 1905. It was an
individual test.
The first group test was the Army Alpha Test, developed in 1917 during the First World War for the mass
screening of recruits.
Intelligence tests can be classified into several categories:
Individual tests are administered to one person at a time.
Group tests are administered to more than one person at a time.
Verbal intelligence tests require the use of language by the testee.
Non-verbal intelligence tests involve the testee doing some task which does not necessitate the use of
language.
Culture-fair tests of intelligence involve items which are devoid of bias toward any culture.
3.5.5 Uses of Intelligence Tests
Intelligence tests are used for various purposes. However, these purposes can be broadly categorised as
under:
Estimation of general intelligence: The most obvious use of intelligence tests is to determine the general
intelligence level of an individual. Intelligence tests are used to determine the current potential of a
person and to evaluate his present achievements on this basis.
Prediction of academic success: No matter what the type of intelligence test, it can be used to predict the
academic success of an individual. If the intellectual estimate according to the test is high, we can
expect the person to excel in academics.
Appraisal of personality: An individual's scores on intelligence tests tell not only about his intellectual
capabilities but about his personality as well. For example, if a person fails on items of a test, the
reasons he ascribes for his failure tell a lot about his personality. Does he invent some excuse for the
failure? Does he fail on easy items but pass the difficult ones? Recent studies reveal that many
psychological problems can be diagnosed on this basis.
3.6 THEORIES OF INTELLIGENCE
G Factor and S Factor Theory of Intelligence: Charles Spearman is generally credited with defining general
intelligence. Based on the results of a series of studies collected in Hampshire, England, and other
places, Spearman concluded that there was a common function (or group of functions) across
intellectual activities, including what he called intelligence. This common function became known as "g",
or general intelligence.
To objectively determine and measure general intelligence, Spearman invented the first technique of
factor analysis (the method of tetrad differences) as a mathematical proof of the Two-Factor Theory.
The Two-Factor Theory of Intelligence holds that every test score can be divided into a g factor and an s
factor. The g factor measures the general factor, or common function, among ability tests. The s factor
measures the specific factor unique to a particular ability test.
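Spearman's tetrad-difference method itself is rarely used today; as a hedged, modern analogue, the sketch below extracts a single common dimension from an invented positive-manifold correlation matrix via its first principal component:

```python
import numpy as np

# Invented correlations among four ability tests (all positive: the
# "positive manifold" that motivated the idea of g).
R = np.array([[1.00, 0.60, 0.50, 0.40],
              [0.60, 1.00, 0.45, 0.35],
              [0.50, 0.45, 1.00, 0.30],
              [0.40, 0.35, 0.30, 1.00]])

eigvals, eigvecs = np.linalg.eigh(R)            # eigenvalues in ascending order
g_loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])

print(np.round(g_loadings, 2))
# Every test loads substantially on one common dimension -- the pattern
# Spearman interpreted as a general factor, g; the residual variance of
# each test corresponds to its specific factor, s.
```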
Theory of Multiple Intelligence

Howard Gardner defined intelligence in terms of distinct sets of processing operations that permit
individuals to solve problems, create products, and discover new knowledge in a wide range of
culturally valued activities. Gardner rejected the idea of Spearman's g. His theory of multiple
intelligences is based on studies not only of normal children and adults but also of
gifted individuals (including so-called savants), of persons who have suffered brain damage, of experts
and virtuosos, and of individuals from diverse cultures. This led Gardner to break intelligence down into
at least eight different components: logical-mathematical, linguistic, spatial, musical, bodily-kinesthetic,
interpersonal, intrapersonal, and naturalistic. Gardner further stated that each intelligence has a unique
biological basis, a distinct course of development, and different expert performances.
Triarchic Theory of Intelligence
Robert Sternberg proposed the Triarchic Theory of Intelligence to provide a more comprehensive
description of intellectual competence. The Triarchic Theory describes three fundamental aspects of
intelligence. Analytic intelligence comprises the mental processes through which intelligence is
manifested. Creative intelligence is necessary to deal with novel situations or when an individual is
automatizing performance of a task. Practical intelligence finds expression in social situations; it includes
adaptation to, selection of, and shaping of the environment to fit the social situation. The theory treats
general intelligence as part of analytic intelligence, and holds that only by considering all three aspects
can the full range of intellectual functioning be understood.
Sternberg defined intelligence as an individual's assessment of success in life by the individual's own
(idiographic) standards and within the individual's sociocultural context. Success depends upon
combinations of analytical, creative, and practical intelligence.
3.8 LET US SUM UP
In the preceding paragraphs we explored the meaning of cognitive development. Now we know that
cognitive development includes higher mental processes like reasoning, perceptual development, and the
acquisition of language. We also studied the phenomenon of attention, how it develops, its different
types, and the role it plays in the development of the child. We discussed what we understand by the
word language, how a child comes to acquire language, and what role language has in the development
of a child. Besides this, we discussed the concept of executive functions and when they are called into
action by the individual. We studied intelligence and various concepts related to it, and we tried to
acquaint ourselves with the relative importance of heredity and environment in our development.
UNIT 2 INTRODUCTION TO PSYCHOLOGICAL EXPERIMENTS AND TESTS
EXPERIMENT
The most powerful scientific method is the experiment. The experiment provides the strongest test of
hypotheses about cause and effect. The investigator carefully controls conditions, often in a laboratory,
and takes measurements in order to discover causal relationships.
Definition

According to Festinger & Katz (1953), the essence of an experiment may be described as observing the
effect on a dependent variable of the manipulation of an independent variable. According to Edwards
(1971), when certain variables can be controlled or manipulated directly in a research problem by the
investigator, the research procedure is often described as an experiment. According to Chapin (1974), an
experiment is an observation under controlled conditions.
You can understand the basic idea of an experiment through the following example. Imagine that a
researcher wants to study the effect of alcohol consumption on thinking ability. Before administering a
thinking test, they would manipulate alcohol consumption (by giving some people a drink laced with
alcohol and others the same drink without alcohol), randomly assigning people to the two conditions
and holding other factors constant. This would eliminate alternative explanations for why thinking
might vary with drinking. If, with variation in the experimental factor (here, the alcohol), the behaviour
changes, then the factor is having an effect.
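A minimal simulation of the logic just described, with invented group sizes, scores and effect size; only the structure (random assignment, one manipulated factor, comparison of means) matters:

```python
import random
from statistics import mean

random.seed(42)

participants = list(range(40))
random.shuffle(participants)                # random assignment ...
alcohol_group = participants[:20]           # ... to the drink-with-alcohol condition
control_group = participants[20:]           # ... or the same drink without alcohol

def thinking_score(got_alcohol: bool) -> float:
    """Simulated test score; alcohol is assumed here to lower scores slightly."""
    base = random.gauss(100, 10)            # individual differences
    return base - 8 if got_alcohol else base

alcohol_scores = [thinking_score(True) for _ in alcohol_group]
control_scores = [thinking_score(False) for _ in control_group]

print(f"alcohol mean: {mean(alcohol_scores):.1f}")
print(f"control mean: {mean(control_scores):.1f}")
# Because assignment was random and other factors were held constant, a
# reliable difference between the means is attributable to the alcohol.
```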
On the basis of these definitions and the example above, two salient features of the experiment
emerge: (i) manipulation of the variables, and (ii) control in the experiment.
Manipulation of the Variables
Manipulation refers to the deliberate or active change introduced by the experimenter in an event to
see its effect on behaviour. It involves arranging for the appearance of different quantities or different
values of a variable.
Control in Experiment
No experiment is better than its poorest control. This statement captures the major issue involved in any
experimental attempt to arrive at factual knowledge. Boring (1954) identified several uses of the word
"control":
a) Control is used to refer to the comparison of results for the purpose of verification.
b) Control is also used to refer to arranging conditions in such a way that some behaviour will occur.
c) Control is used to refer to the practice of eliminating or holding constant the effects of extraneous
variables.
Since the purpose of psychological research is to discover the relationships between independent and
dependent variables, care must be taken that the changes that are observed in the dependent variable
are due to the independent variable and not to extraneous variables.
In order to achieve control over the relevant variables, which are extraneous to the conduct of an
experiment, experimenters use certain techniques. Some of the important techniques are listed below:
Since the goal of an experimenter is to minimize the extraneous variables, the best way to handle this
problem is to eliminate the variable from the experimental setting.
If elimination is not possible, then an effort needs to be made to hold the variable constant so that its
effect remains the same throughout the experimental setting.
In the case of controlling organismic and background variables, matching is also used. In this procedure,
the relevant variables are equated, or held constant, across all the conditions of the experiment.
To control practice and fatigue effects, the counterbalancing method is used. It is an interesting way to
minimize the effect of order or sequence. Suppose there are two tasks to be given in an experiment. The
experimenter may interchange the order of the tasks: half of the group may receive the tasks in the
order A then B, while the other half receives them in the order B then A.
Random assignment of participants to different groups eliminates potential systematic differences
between groups.
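A small sketch of the A/B counterbalancing scheme just described, with hypothetical participant labels:

```python
import random

random.seed(7)
participants = [f"P{i:02d}" for i in range(1, 9)]
random.shuffle(participants)

half = len(participants) // 2
order = {p: ("A", "B") for p in participants[:half]}        # task A first
order.update({p: ("B", "A") for p in participants[half:]})  # task B first

for p in sorted(order):
    print(p, "->", " then ".join(order[p]))
# Any practice or fatigue effect now falls equally on the two tasks, so
# order cannot masquerade as a difference between task A and task B.
```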
Advantages and Disadvantages of Experiment
Advantages of experimentation:
1) Through the experiment, cause and effect relationship can be established.
2) An experiment is replicable.
3) An experiment provides a great degree of precision in manipulation of the independent variables.
4) An experiment has sufficient degree of internal validity.
5) Findings, drawn as a result of experimentation, are said to be more objective, accurate and reliable.
6) Thus experiment is the best method of hypothesis testing.
Disadvantages of Experimentation
1) During an experiment, behaviour is often studied in an artificial environment, that is, the laboratory,
where the situation and conditions are controlled so heavily that behaviour may be a distortion of what
would occur naturally.
2) Research subjects typically know they are in an experiment and are being tested and measured.
3) They may react to this awareness by trying to please the researcher.
4) They may guess the purpose of the research, and thus will change behaviour accordingly.
5) If they know they are being monitored, they will behave in a different way than if they were unaware
of being monitored.
6) An experiment is a simplified version of reality, and at times it is too simplified to tell us much about
real life.

7) Some people challenge experimental research on ethical grounds.


In short, the experiment is the crown jewel among psychology's methods for answering questions about
behaviour. When it is used with skill and care, the experimental method yields results that help us not
only to answer complex questions about human behaviour but also to understand the causes of such
behaviour. Thus, experimentation is psychology's ultimate answer to such questions.
INDEPENDENT AND DEPENDENT VARIABLES
EXTRANEOUS VARIABLES
EXPERIMENTAL AND CONTROL GROUPS
In its simplest form, an experiment might have two groups of subjects, an experimental group and a
control group. The experimental group receives the experimental manipulation or treatment of interest
while the control or comparison group is treated in the same way as the experimental group but without
the manipulation or treatment of interest.
Random Assignment of Participants to Groups
It is crucial in a true experiment for the participants in the sample to be randomly assigned to the
different groups in the research design. Random means that before assignment every participant has an
equal chance of being exposed to the different conditions created by the investigator. In other words,
participants must be assigned to the experimental and control groups on a random basis. The goal of
random assignment is to exclude or control all possible extraneous variables, so that the only difference
obtained between the two groups should be attributable to the factor manipulated by the investigator.
Random assignment reduces the likelihood that the results of the experiment will be due to any
preexisting differences. Random assignment is an unbiased method of assigning the participants to the
experimental and control conditions of an experiment.
Let us take an example. The problem to be studied is whether yogic exercises affect general health in
patients with coronary heart disease. In this study, yogic exercise is the independent variable. The
experimenter manipulates the amount of yogic exercise by having the heart patients engage in those
exercises five times a week under the supervision of a trained instructor.
We need to have one group of heart patients engaged in yogic exercises and another not engaged in
yogic exercises. We randomly assign heart patients to these two groups and make observations of both
groups in the presence and absence of yogic exercises. The differences in their general health are
then measured. These steps are shown in the scheme below.
Obtained subjects
→ Random assignment
→ Experimental group: manipulation of the independent variable (exercise) → measurement of the dependent variable (general health)
→ Control group: no exercise → measurement of the dependent variable (general health)
→ Comparison of the behaviours of the control group and the experimental group

Designing the Experiment
After defining the variables, the experimenter decides on the levels of the independent variable, specifies
the dependent variables, collects the experimental materials, and prepares the procedure to be used. Thus
a blueprint of how the experiment will be conducted is finalized. The experimental conditions, known as
treatments, are decided. The overall blueprint of the experiment is called the experimental design. It
contains the specification of the plan and structure of the experiment. For the sake of precision, the
variables and their measures are defined, and specific instructions for the experimental conditions are
clearly written. There are different kinds of designs in psychological studies, which will be discussed in
later pages.
INTRODUCTION TO TESTS
We have been subjected to various tests since our school days; hence we have a preliminary idea about
what a test means. A test is a measurement device or technique used to quantify behaviour or to aid in
the understanding and prediction of behaviour. According to the dictionary, a test is a series of questions
on the basis of which some information is sought. In psychology, the meaning of a test is something
more than this.
Today, psychological tests represent a major contribution by psychologists. They are concerned with the
assessment of almost all aspects of psychological functioning, namely personality, intelligence, values,
aptitudes, attitudes, interests, creativity, etc. According to Anastasi (1988), a psychological test is
essentially an objective and standardised measure of a sample of behaviour. Freeman (1955) has
defined a psychological test as a standardised instrument designed to measure objectively one or more
aspects of a total personality by means of samples of performance or behaviour. According to Singh
(2006), a psychological or educational test is a standardised procedure to measure quantitatively or
qualitatively one or more aspects of a trait by means of a sample of verbal or non-verbal behaviour.
According to McIntire and Miller (2007), psychological tests are instruments that require the testee to
perform some behaviour. According to Gregory (2004), a test is a standardised procedure for sampling
behaviour and describing it with categories or scores. In addition, most tests have norms or standards by
which the results can be used to predict other, more important behaviour.
After analysing the above definitions, it appears that a test consists of a series of items, on the basis of
which some information is sought about one or more aspects of an individual's or group's traits, abilities,
motives, attitudes, and so on. There are many types of behaviour. Overt behaviour is an individual's
observable activity. Some psychological tests attempt to measure the extent to which someone might
engage in or emit a particular overt behaviour. Other tests measure how much a person has previously
engaged in some overt behaviour. Behaviour can also be covert, that is, it takes place within an individual
and cannot be directly observed. For example, your feelings and thoughts are types of covert behaviour.
Some tests attempt to measure past or current behaviour. Psychological and educational tests, thus,
measure past or current behaviour.
2.6.1 Characteristics of Good Psychological Test
After analysing the above definitions we can discuss the chief characteristics of a good psychological
test:
Objectivity: A test must have the trait of objectivity, i.e., it must be free from subjective elements so
that there is complete interpersonal agreement among experts regarding the meaning of the items and
the scoring of the test. Objectivity of items means that the items should be phrased in such a manner that
they are interpreted in exactly the same way by all those who take the test. Objectivity of scoring
means that the scoring method of the test should be a standard one so that complete uniformity can be
maintained when the test is scored by different experts at different times.
Standardised Procedure: A standardised procedure is an essential feature of any psychological test. A test
is considered standardised if the procedure for administering it is uniform from one examiner and setting
to another. Standardisation depends to some extent upon the competence of the examiner. This is ensured
by outlining clear-cut instructions for the administration of the test in the test manual.
Sample of Behaviour: A test targets a well-defined and finite behaviour or domain, known as the sample
of behaviour, owing to the constraints involved in comprehensive testing. For example, the Wechsler
Adult Intelligence Scale (WAIS) uses 35 carefully selected words to judge the vocabulary of the testee.
Norms: Norms refer to the average performance of a representative sample on a given test. Norms are
statistical standards based on the scores of a large sample, so that an individual's score may be
compared with those of others in a defined group. A score on a psychological test is not an absolute
score; it is relative to a normative group. A given score needs to be compared with the scores of the
members of the normative group. For instance, the mean score may be used to indicate the norm, and a
person's score may then be interpreted in the context of the group's average. There are four common
types of norms: age norms, grade norms, percentile norms, and standard-score norms. Norms help in the
interpretation of scores. In the absence of norms, no meaning can be attached to the score obtained on
the test.
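As a hedged sketch of interpreting a score against norms, the snippet below converts an invented raw score to a z-score and an approximate, normality-assuming percentile using the normative mean and standard deviation:

```python
from statistics import NormalDist

norm_mean, norm_sd = 50.0, 10.0     # invented normative-sample statistics
raw_score = 63.0

z = (raw_score - norm_mean) / norm_sd
percentile = NormalDist().cdf(z) * 100

print(f"z = {z:.2f}, percentile = {percentile:.0f}")
# A raw score of 63 means little by itself; relative to the normative
# group it lies 1.3 SD above the mean, roughly the 90th percentile.
```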
Practicability: A test must also be practicable from the point of view of the time taken in its completion,
its length, its scoring, etc. In other words, the test should not be too lengthy, and the scoring method
should be neither difficult nor one which can only be carried out by highly specialised persons.
Reliability: A test must also be reliable. Synonyms for reliability are dependability, stability, consistency,
predictability, and accuracy. Reliability means the consistency of a test, which makes it trustworthy. Stated
simply, if a test yields similar results on different occasions, then it is considered reliable. For instance, if
you administer a test of intelligence to a child and find him a genius, repeat the test after two months,
and the child still scores in the same category, then the test will be considered reliable.
Validity: Another key characteristic of a good test is validity: the test must really measure what it has
been designed to measure. Validity is most often assessed by exploring how the test scores correspond
to some criterion, that is, some behaviour, personal accomplishment, or characteristic that reflects the
attribute the test is designed to gauge.
TYPES OF PSYCHOLOGICAL TEST
Typology of tests is a purely arbitrary determination (Gregory 2004). However, using different criteria,
psychological tests can be classified in the following manner:
2.7.1 On the Basis of Mode of Administration
Tests have been classified on the basis of administrative conditions into two types: Individual Test and
Group Test
Individual Test: An individual test is one that is administered to one person at a time. Koh's Block Design
Test is an example of an individual test. Individual tests are often used in clinical evaluations. The
problems with individual tests are that they are time-consuming, costly, and labour-intensive.
Group Tests: These are primarily designed for mass testing; that is, they can be administered to more
than one individual at a time. They are economical and time-saving. Examples are the Army Alpha and
Army Beta tests.
2.7.2 On the Basis of the Nature and Contents of the Items
A test may be classified on the basis of the nature of the items or the contents used therein. Important
types of the test on this criterion are:
i) Verbal Test: A verbal test is one whose items emphasize reading, writing, and oral expression as the
primary modes of communication. Herein, instructions are printed or written; these are read by the
examinees, and the items are answered accordingly. The Jalota group general intelligence test and the
Mehta group test of intelligence are common examples. Verbal tests are also called paper-pencil tests
because the examinee has to write on a piece of paper while answering the test items.
ii) Non-Verbal Test: These are tests which de-emphasise, but do not altogether eliminate, the role of
language by using symbolic materials like pictures, figures, and so on. Such tests use language in the
instructions but not in the items. Raven's Progressive Matrices is a good example of a non-verbal
test.
iii) Performance Test: Performance tests are those that require the examinees to perform a task rather
than answer questions. Such tests prohibit the use of language in the items. Occasionally, oral language is
used to give the instructions, or the instructions may be given through gesture and pantomime.

iv) Objective Test: There is another classification according to the nature of the test items, whereby tests
can be classified into objective-type tests, in which the responses are of the multiple-choice type, and
essay-type tests, in which the responses are of the long-answer type.
2.7.3 On the Basis of Mode of Scoring
Tests can be classified into self-scored versus expert-scored, or hand-scored versus machine-scored tests.
In a self-scored test, the testees themselves can score their responses with the help of a scoring key,
while in the case of an expert-scored test, the test responses are scored by an expert (generally the
test administrator). Hand-scored tests are tests that are scored manually, while machine-scored tests are
tests that are scored with the help of a machine (computer-aided), for example the optical mark
recognition (OMR) sheet responses used for various educational and mass assessments.
2.7.4 On the Basis of Rate of Performance or Time Limit in Producing a Response
Tests can be classified into speed tests and power tests. Speed tests are timed tests; that is, they examine
the subject's speed of responding within a stipulated period of time. Test items in a speed test are of
uniform difficulty, but the time limit is such that no examinee can attempt all the items (Chadha, 1996). A
pure speed test is composed of items so easy that the subject never gives a wrong answer, and his
score is equal to the number of questions attempted by him; an example is the clerical speed and
accuracy test.
A power test, on the other hand, offers enough time for the subject to attempt all the questions.
A power test has a generous time limit so that most examinees are able to attempt every item.
Usually such tests have items arranged in increasing order of difficulty. In many intelligence tests
there are certain items that are too difficult for anyone to solve, for example in
Raven's Progressive Matrices (Raven and Court, 1998).
2.7.5 On the Basis of the Behavioural Attribute Measured
According to the behavioural attributes assessed, tests can be classified into personality tests, ability
(intelligence, aptitude, achievement and creativity) tests, and tests of attitudes, values and interests.
Personality Tests: These tests are designed to measure a person's individuality in terms of unique traits
and behaviour. A personality test measures the traits, qualities, or behaviours that determine a person's
individuality. Such tests include checklists and inventories, for example the 16 PF, the MMPI and the
Maudsley Personality Inventory, and projective techniques like the Rorschach Test, the Thematic
Apperception Test and so on.
Ability Tests: Abilities are the qualities that enable an individual to do a specific task at a specified time;
ability tests can be classified into tests of intelligence, aptitude, achievement, and creativity.
Intelligence Test: This measures an individual's ability in relatively global areas such as verbal
comprehension, perceptual organisation or reasoning, and thereby helps determine potential for
scholastic work or certain occupations. An example is the Wechsler Adult Intelligence Scale (WAIS).

Aptitude Test: This test measures the capability for a relatively specific task or type of skill; aptitude
testing is in effect a narrow form of ability testing. Examples of aptitude tests are the Seashore Measures
of Musical Talents (Seashore, 1938) and the SAT (Scholastic Assessment Test, previously called the
Scholastic Aptitude Test).
Creativity Test: This test assesses novel, original thinking and the capacity to find unusual or unexpected
solutions, especially for vaguely defined problems. Examples are the Torrance Tests of Creative Thinking
by E. Paul Torrance (1966) and the creativity self-report by Feldhusen (1965).
Achievement Test: This measures a person's degree of learning, success, or accomplishment in a subject
or task. Examples are the Tests of Achievement and Proficiency (TAP, 1996) and the Iowa Tests of Basic
Skills (1992).
Apart from these, tests can also be classified on the basis of other behavioural dimensions: attitude tests,
values tests, interest tests, neuropsychological tests and so on.
Attitude Tests: Attitudes are our evaluations of various aspects of the world, and tests of attitude
measure a person's tendency to evaluate favourably or unfavourably a class of events, objects or
persons. Examples of attitude tests are the Criminal Attitude Test (CATS; Taylor, 1968) and Attitudes
toward the Retarded (Efron and Efron, 1967).
Values Test: Values are the normative frameworks related to individual or group behaviour and
expectations. An example of a values test is the Allport, Vernon and Lindzey Study of Values.
Interest Tests: These tests measure an individual's preference for certain activities or topics and thereby
help determine occupational choice. Examples are the Strong Interest Inventory (1927) and the
Vocational Preference Inventory (Holland).
Neuropsychological Tests: These measure cognitive, sensory, perceptual and motor performance to
determine the extent, locus, and behavioural consequences of brain damage. Examples are the
Luria-Nebraska Neuropsychological Battery (1989) and the Bender Visual Motor Gestalt Test (1938).

2.8 USES OF PSYCHOLOGICAL TESTS


Tests are used for many purposes which are indicated below:
1) They are used for screening the normal from the abnormal.
2) They are used for testing the intelligence, aptitudes and interests of individuals.
3) Psychological tests of creativity are used for delineating the creative potential of a person.
4) Tests are also used for occupational and vocational counselling.
5) Psychological tests give an idea about the mental status of an individual.

6) Tests are of great use in understanding the personalities of individuals, including children, adolescents,
adults and even older persons.
7) Diagnostic tests are used to ascertain whether an individual suffers from certain mental disorders and,
if so, exactly which disorders.
8) Tests are used for recruitment, selection, training and placement in industrial settings.
9) Tests are of great value in psychological research.
10) They help identify people with specially required qualities, such as leadership quality, etc.
RESEARCH DESIGN
Winer (1971) compared the research design to an architect's plan for the structure of a building. The
designer of research performs a role similar to that of the architect. The owner of the building gives his
basic requirements to the architect, who then, exercising his expertise, prepares a plan or blueprint
outlining the final shape of the structure. Similarly, the researcher has to plan and prepare a structure
before starting data collection and analysis.
According to Myers (1980), the research design is the general structure of the experiment, not its
specific content. In fact, the research design is the conceptual structure within which research is
conducted; it constitutes the blueprint for the collection, measurement and analysis of data.
According to Kerlinger (1986), research design is the plan, structure, and strategy of investigation
conceived so as to obtain answers to research questions and to control variance. Kerlinger's definition
reveals three important components:
i) Research Design is the Plan: The plan is the overall scheme or program of the research. It includes an
outline of what the investigator will do from writing the hypotheses and their operational implications
to the final analysis of data.
ii) Research Design is the Structure: The structure of the research is more specific. It is the outline, the
scheme, the paradigm, of the operation of the variables. When we draw diagrams that outline the
variables and their relation and juxtaposition, we build structural schemes for accomplishing operational
research purposes.
iii) Research Design is the Strategy: Strategy as used here is also more specific than plan. It includes the
methods to be used to gather and analyze the data. In other words strategy implies how the research
objectives will be reached and how the problems encountered in the research will be tackled.
PURPOSE OF RESEARCH DESIGN
The purpose of research design is to provide the maximum amount of information relevant to the problem
under investigation at minimum cost. Research design serves the following two purposes:
Answers to Research Questions

Research design is formulated to enable the researcher to answer research questions as validly,
objectively, accurately, and economically as possible. Research design sets up the framework for an
adequate test of the relations among variables. The research design in a way tells us what observations
to make, how to make them, and how to analyze the quantitative representations of the observations. It
also tells us what types of statistical analysis to use. Finally, an adequate design outlines the possible
conclusions to be drawn from the statistical analysis. Thus a research design, moving through a sequence
of related steps, enables the researcher to draw valid and objective answers to research questions.
Research Design Acts as Variance Control
The main technical function of research design is to control variance. Research design acts as a control
mechanism, enabling the researcher to control unwanted variance; this is a central theme of research
design. Variance is a measure of the dispersion or spread of a set of scores. It describes the extent to
which the scores differ from each other.
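As a worked illustration of the definition, the snippet below computes the sample variance of a handful of invented scores:

```python
scores = [4, 7, 6, 9, 4]                          # invented scores

mean = sum(scores) / len(scores)                  # 6.0
squared_devs = [(x - mean) ** 2 for x in scores]
variance = sum(squared_devs) / (len(scores) - 1)  # sample form, N - 1

print(f"mean = {mean}, variance = {variance}")    # variance = 4.5
# The larger the variance, the more the scores spread around the mean.
```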
Types of Variance
The researcher is directly concerned with three types of variance, namely systematic (experimental)
variance, extraneous variance and error variance. The main functions of research design are to maximize
the systematic variance, control extraneous variance and minimize error variance. A discussion of these
variances is presented below.
Systematic Variance: Systematic variance is the variability in the dependent measure (DV) due to the
manipulation of the experimental variable (IV) by the experimenter. An important task of the
experimenter is to maximize this variance. This objective is achieved by making the levels of the
experimental variable as different as possible. Suppose an experimenter is interested in studying the
effect of the intensity of light on visual acuity, and decides to study the effect by manipulating three
levels of light intensity, i.e. 10 mL, 15 mL and 20 mL (millilamberts). As the difference between any two
levels of the experimental variable is not substantial, there is little chance of separating its effect from the
total variance. It would be appropriate, then, to modify the levels of light intensity to 10 mL, 20 mL, and
30 mL so that the difference between any two levels is substantial.
Extraneous variance is produced by extraneous, or relevant, variables. An experimenter always tries to
control the relevant variables and thus to eliminate the variance produced by these variables. There are
four ways to control extraneous variance. A discussion of these procedures is given below:
1) Randomization: An important and powerful method of controlling extraneous variables is
randomization. It is considered the most effective way to control the variability due to all possible
extraneous sources. If randomization has been achieved, then the treatment groups in the experiment
can be considered statistically equal in all possible ways. In other words, it is a procedure for equating
groups with respect to secondary variables. Random selection means selecting the experimental units at
random from the larger population, while random assignment means that every experimental unit has an
equal chance of being placed in any of the treatment conditions or groups. In using the randomization
method, some problems may be encountered: it is possible to select a random sample from a population,
and yet the assignment of experimental units to groups may be biased. Random assignment of subjects is
critical to internal validity. If subjects are not assigned randomly, confounding may occur.
Randomized group design and randomized block design are the examples of research design in which
randomization is used to control the extraneous variable.
2) Elimination: This procedure is the easiest way of controlling an unwanted extraneous variable:
eliminate the variable altogether. Suppose the sex of the subject, as an unwanted secondary variable, is
found to influence the dependent variable in an experiment; the variable of sex then has to be controlled.
The researcher may decide to take either all males or all females in the experiment and thus control,
through elimination, the variability due to the sex variable.
By using elimination to control extraneous variables, the researcher loses the power of generalization. If
the researcher selects subjects from a restricted range, then the results can be generalized within that
restricted range and not outside it. The elimination procedure is used in non-experimental designs.
3) Matching: Matching, also a non-experimental control procedure, is used to control extraneous sources
of variance. It is used for controlling organismic and background variables. In this procedure, the relevant
variables are equated, or held constant, across all conditions of the experiment. If the researcher finds
that the variable of intelligence is highly correlated with the dependent variable, it is better to control the
variance through matching on the variable of intelligence, as in the sketch below. However, as a method
of control, matching limits the availability of subjects: if the researcher decides to match subjects on two
or three variables, he may not find enough subjects for the experiment. Besides this, the method of
matching works against the principle of randomization.
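The sketch referred to above: rank invented subjects by IQ, pair adjacent subjects, and randomly split each pair between the conditions, so the groups end up equated on intelligence:

```python
import random

random.seed(3)
iq = {"S1": 96, "S2": 118, "S3": 101, "S4": 122,
      "S5": 99, "S6": 115, "S7": 104, "S8": 111}   # invented subject -> IQ

ranked = sorted(iq, key=iq.get)                    # order subjects by IQ
experimental, control = [], []
for i in range(0, len(ranked), 2):
    pair = [ranked[i], ranked[i + 1]]              # two subjects of similar IQ
    random.shuffle(pair)                           # random assignment within pair
    experimental.append(pair[0])
    control.append(pair[1])

print("experimental:", experimental)
print("control:     ", control)
# The two groups now have nearly identical IQ distributions, so intelligence
# cannot masquerade as an effect of the treatment.
```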
4) Statistical Control: In this approach, no attempt is made to restrain the influence of secondary
variables. Instead, one or more concomitant secondary variables (covariates) are measured, and the
dependent variable is statistically adjusted to remove the effects of the uncontrolled sources of variation.
Analysis of covariance is one such technique: it is used to remove statistically the variation attributable to
the concomitant secondary variable.
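A hedged sketch of statistical control via analysis of covariance, using the statsmodels formula API; the column names and data are invented for illustration:

```python
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group":     ["treatment"] * 4 + ["control"] * 4,
    "covariate": [98, 110, 103, 121, 95, 107, 114, 100],  # e.g. IQ scores
    "dv":        [14, 19, 16, 22, 11, 15, 17, 13],
})

# The DV is modelled from the group factor plus the covariate, so the
# group effect is evaluated after removing variation due to the covariate.
model = smf.ols("dv ~ C(group) + covariate", data=data).fit()
print(model.summary())
```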
Error Variance
The third function of a research design is to minimize error variance. Error variance is defined as that
variance, or variability, in the measures which occurs as a function of factors not controllable by the
experimenter. Such factors may be related to individual differences among the subjects themselves, such
as their attitudes, motivation, needs, abilities, etc. They may also be related to what are commonly called
errors of measurement, such as differences across trials, differences in the conditions of the experiment,
the temporary emotional state of the subject, fatigability, etc.

Statistical controls can be applied to minimize such error variance. For example, a repeated-measures
design can be used to minimize the experimental error: the variability due to individual differences is
taken out of the total variability, and thus the error variance is reduced. Analysis of covariance is also a
technique for reducing error variance. Further, error variance can be controlled by increasing the
reliability of measurement, by giving clear and unambiguous instructions, and by using a reliable
measuring instrument.
CRITERIA OF RESEARCH DESIGN
As you know, there are various types of research design. Some are weak designs and some are good
designs. Behavioural researchers have formulated certain criteria on the basis of which you can
distinguish good designs from weak designs. These criteria have proved very useful in guiding research in
the right direction. They are mentioned below.
Capability to Answer Research Questions Adequately
A good research design is one that answers the research questions adequately. Sometimes the researcher
selects a design which is not appropriate for answering the research question in hand; such designs are
examples of weak research design. It is common practice for students, while trying to answer a research
question by conducting an experiment or doing research, to match the sex, age and intelligence of the
subjects on the assumption that such matching will lead to better-matched experimental and control
groups. The reality is that if there is no relation between sex, age and the dependent variable, then
matching will be irrelevant, and any design based upon such matching will be a weak design.
Control of Variables
Another criterion of a good research design is that it should control the effects of extraneous variables. A
design which fails to control the effects of extraneous variables is considered a weak one, and the
researcher should avoid such designs.
There are various ways to control the effects of extraneous variables. Of these, randomization is
considered by many to be the best technique for controlling extraneous variables. There are three basic
phases in randomization: random selection of subjects, random assignment of subjects to control and
experimental groups, and random assignment of experimental treatments among different groups. This
increases the internal validity of the research.
Generalisability
Generalisability is the external validity of the research. In other words, it refers to the extent to which the
results of the experiment or research can be generalised to subjects, groups or conditions not included in
the sample of the research. If the design is such that the obtained results can be generalised to larger
groups of subjects, the design is considered a good one.
QUALITIES OF RESEARCH DESIGN

A good design is characterized by flexibility, appropriateness, efficiency, economy and so on. The design
which minimizes bias and maximizes the reliability of the data collected and analyzed is considered a
good design. The design which gives the smallest experimental error is supposed to be the best design in
many investigations. Similarly, a design which yields maximal information and provides an opportunity
for considering many different aspects of a problem is considered the most appropriate and efficient
design. Thus, the question of a good design is related to the purpose or objective of the research problem
and also to the nature of the problem to be studied. One single design cannot serve the purposes of all
types of research problems. Throughout the design construction task, it is important to have in mind
some endpoint, some criteria which are to be achieved before accepting a design strategy. The criteria
below are only meant to be suggestive of the characteristics found in good research designs.
Theory base: Good research strategies reflect the theories which are being investigated. Where specific
theoretical expectations can be hypothesized these are incorporated into the design. For example,
where theory predicts a specific treatment effect on one measure but not on another, the inclusion of
both in the design improves discriminant validity and demonstrates the predictive power of the theory.
Situational: Good research designs reflect the settings of the investigation. This was illustrated above
where a particular need of teachers and administrators was explicitly addressed in the design strategy.
Similarly, intergroup rivalry, demoralisation, and competition might be assessed through the use of
additional comparison groups who are not in direct contact with the original group.
Feasible: Good designs can be implemented. The sequence and timing of events are carefully thought
out. Potential problems in measurement, adherence to assignment, database construction and the like,
are anticipated. Where needed, additional groups or measurements are included in the design to
explicitly correct for such problems.
Redundant: Good research designs have some flexibility built into them. Often, this flexibility results from duplication of essential design features. For example, multiple replications of a treatment help to ensure that failure to implement the treatment in one setting will not invalidate the entire study.
Efficient: Good designs strike a balance between redundancy and the tendency to overdesign. Where it
is reasonable, other, less costly, strategies for ruling out potential threats to validity are utilised.
This is by no means an exhaustive list of the criteria by which we can judge good research design.
Nevertheless, goals of this sort help to guide the researcher toward a final design choice and emphasise
important components which should be included.
EXPERIMENTAL DESIGN (CONTROL GROUP DESIGN AND TWO FACTOR DESIGN)
The term experimental design may be used in two different ways (Kirk, 1968):
i) It may be used to refer to the sequence of steps necessary to conduct an experiment (stating the hypothesis, detailing the data collection process, and so on).
ii) It may be used to refer to the plan by which subjects are assigned to experimental conditions.
An experimental design may be relatively simple, as when one group of subjects is exposed to an independent variable and another is not. On the other hand, it may be much more complex, involving two or more independent variables and repeated measurements of the dependent variables.
The overall blueprint of the experiment is called experimental design. It contains the specification of the
plan and structure of the entire experiment. For the sake of precision, the variables and their measures
are defined and specific instructions for the experimental conditions are clearly written. A good
experimental design minimizes the influence of extraneous or uncontrolled variation and increases the
likelihood that an experiment will produce valid and consistent results.
BASIC ELEMENTS OF VALID EXPERIMENTAL DESIGN
a) Factor: The independent variables of an experiment are often called the factors of the experiment. An experiment always has at least one factor, or independent variable, and may have more than one. To have an experiment, it is necessary to vary some independent variable, or factor.
b) Level: A level is a particular value of an independent variable; it refers to the degree or intensity of a factor. Any factor may be presented at one or more of several levels, including a zero level.
c) Condition: The broadest term used to discuss independent variables; it refers to a particular way in which subjects are treated.
d) Main effect: The effect of one independent variable, averaged over all levels of another independent variable.
e) Interaction: Present when the effect of one independent variable depends on the level of another independent variable.
f) Treatment: A particular set of experimental conditions; in experiments, a treatment is something that researchers administer to experimental units.
Two particular elements of a design provide control over so many different threats to validity that they are basic to good experimental designs: (1) the existence of a control group or a control condition, and (2) the random allocation of subjects to groups. Random allocation ensures that the groups will be equal in all respects, except as they may differ by chance, and provides control over internal threats to validity; it allows one to conclude that the dependent variable is associated with the independent variable and not with any other variables.
In discussing experimental design, Campbell & Stanley (1963) have used some symbols with which a student/reader is expected to be acquainted:
- X refers to the treatment or experimental variable which is manipulated. When treatments are compared they are labelled X1, X2, X3 and so on.
- O refers to an observation or measurement of the dependent variable. When several observations are made, the notation O1, O2, O3 and so on is used.
- R indicates the random assignment of subjects to groups.
TYPES OF DESIGNS
Designs can be classified into a simple threefold classification by asking some key questions. First, does the design use random assignment to groups? If random assignment is used, you can call the design a randomized experiment or true experiment. If random assignment is not used, then you have to ask a second question: does the design use either multiple groups or multiple waves of measurement? If the answer is yes, you can label it a quasi-experimental design; if no, you can call it a non-experimental design. This threefold classification is especially useful for describing the design with respect to internal validity.
Now, let us understand the true experiment before we explain the control group design and the two factor design.
True Experimental Design
In a true experiment, the experimenter has complete control over the experiment: the who, what, when, where, and how. Control over the who of the experiment means that the experimenter can assign subjects to conditions randomly; for example, subject A may be put in group 1, B in group 2, C in group 1, D in group 2, and so on. Control over the what, when, where, and how means that the experimenter has complete control over the way the experiment is to be conducted. True experiments include two major types of experimental designs: a) single factor designs (between subjects and within subjects), and b) two factor (factorial) designs.
Experimental designs differ in relation to research purpose. Control group designs are also called true experimental designs. A discussion of the control group design and the two factor design is given below.
Control Group Design
Description: Two parallel experiments are set up, identical in all respects except that only one includes the treatment being explored by the experiment. The people in both groups should be similar. Ideally, they are selected and assigned randomly, though in practice some groups come as one (such as school classes) or are selected on a pseudo-random basis (such as people on the street). The control group may have no treatment, with nothing happening to them, or a neutral treatment, such as when a placebo is used in a medical pharmaceutical experiment.
Types of Control Group Design
The main types of control group design are described hereunder:
Post-Test Only, Equivalent Group Design
This design is a most effective and useful true experimental design, which minimizes the threats to experimental validity. It can be diagrammed as given below:

R1   X   O1
R2        O2
In the above design there are two groups. One group, R1, is given the treatment (X) and is usually called the experimental group; the other group, R2, is not given any treatment and is called the control group. Both groups are formed on the basis of random assignment of subjects and hence are equivalent. Not only that, the subjects of both groups are initially randomly drawn from the population (R). This controls for selection and experimental mortality. Besides this, no pretest is needed for either group, which saves time and money. As both groups are tested after the experimental group has received the treatment, the most appropriate statistical tests are those which compare the means of O1 and O2; thus either the t-test or ANOVA is used.
Let us take an example. Suppose the experimenter, with the help of a table of random numbers, selects 50 students out of a total of 500 students. Subsequently, these 50 students are randomly assigned to two groups. The experimenter is interested in evaluating the effect of punishment on retention of a verbal task. The hypothesis is that punishment enhances the retention score. One group is given punishment (X) while learning the task and the other group receives no such punishment. Subsequently, both groups are given a test of retention. A simple comparison of the mean retention scores of the two groups is carried out through the t-test, which provides the basis for refuting or accepting the hypothesis.
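The comparison of the two post-test means can be sketched as follows. This is a minimal sketch assuming scipy is available; the retention scores are invented for illustration.

from scipy import stats

# Hypothetical retention scores for the two randomly formed groups
punishment_group = [14, 17, 15, 16, 13, 18, 15, 16]
no_punishment_group = [11, 12, 10, 13, 12, 11, 10, 12]

# Independent-samples t-test comparing the means of O1 and O2
t, p = stats.ttest_ind(punishment_group, no_punishment_group)
print(f"t = {t:.2f}, p = {p:.4f}")    # reject the null hypothesis if p < .05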
The only problem with the post-test only design is that there is no direct indication of what actual change occurred in the treatment group. This is corrected by measuring subjects before and after the treatment. The control group is still useful, as additional factors may have had an effect, particularly if the treatment occurs over a long time or in a unique context.
Pretest-Posttest Control Group Design
This is also called the classic controlled experimental design, or the randomized pretest-posttest design, because it:
1) Controls the assignment of subjects to experimental (treatment) and control groups through the use of a table of random numbers. Researchers may substitute matching for random assignment: subjects in the two groups are matched on a list of characteristics that might affect the outcome of the research (e.g., sex, race, income). This may be cheaper, but matching on more than three or four characteristics is very difficult, and if the researcher does not know which characteristics to match on, internal validity is compromised.
2) Controls the timing of the independent variable (treatment) and which group is exposed to it. Both groups experience the same conditions, with the exception of the experimental group, which receives the influence of the independent variable (treatment) in addition to the shared conditions of the two groups.
3) Controls all other conditions under which the experiment takes place.
The steps in the classic controlled experiment are:
1) Randomly assign subjects to treatment or control groups;
2) Administer the pre-test to all subjects in both groups;
3) Ensure that both groups experience the same conditions except that in addition the experimental
group experiences the treatment;
4) Administer the post-test to all subjects in both groups;
5) Assess the amount of change on the value of the dependent variable from the pre-test to the posttest for each group separately.
These steps are diagramed as follows:
R1   O1   X   O2
R2   O1        O2

Experimental group (R1): first observation (measurement) of the dependent variable, O1 = pre-test; exposure to the treatment (X), the independent variable; second observation (measurement) of the dependent variable, O2 = post-test.
Control group (R2): the only difference is that there is no treatment (X).
The difference in the control group's score from the pre-test to the post-test indicates the change in the value of the dependent variable that could be expected to occur without exposure to the treatment (independent) variable X:
Control group post-test scores − control group pre-test scores = control group difference
The difference in the experimental group's score from the pre-test to the post-test indicates the change in the value of the dependent variable that could be expected to occur with exposure to the treatment (independent) variable X:
Experimental group post-test scores − experimental group pre-test scores = experimental group difference
The difference between the change in the experimental group and the change in the control group is the amount of change in the value of the dependent variable that can be attributed solely to the influence of the independent (treatment) variable X:
Experimental group difference − control group difference = difference attributable to X (the manipulation of the independent variable)
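The arithmetic of these change scores can be sketched as follows. This is a minimal sketch; the pre-test and post-test means are invented for illustration.

# Hypothetical group means on the dependent variable
exp_pre, exp_post = 42.0, 58.0        # experimental group (R1)
ctrl_pre, ctrl_post = 41.0, 46.0      # control group (R2)

exp_diff = exp_post - exp_pre         # change with exposure to X: 16.0
ctrl_diff = ctrl_post - ctrl_pre      # change expected without X: 5.0
effect_of_x = exp_diff - ctrl_diff    # change attributable solely to X: 11.0
print(effect_of_x)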
There are many situations where a pre-test is impossible because the participants have already been exposed to the treatment, or because it would be too expensive or too time-consuming. For large groups, the post-test only design can control most of the threats to internal and external validity, just as the classic controlled experimental design does. For many true experimental designs, however, pretest-posttest designs are the preferred method of comparing participant groups and measuring the degree of change occurring as a result of treatments or interventions.
Pretest-posttest designs grew from the simpler posttest only designs, and address some of the issues
arising with assignment bias and the allocation of participants to groups.
One example is education, where researchers want to monitor the effect of a new teaching method
upon groups of children. Other areas include evaluating the effects of counseling, testing medical
treatments, and measuring psychological constructs. The only stipulation is that the subjects must be
randomly assigned to groups, in a true experimental design, to properly isolate and nullify any nuisance
or confounding variables.
Problems with Pretest-Posttest Designs
The main problem with this design is that it improves internal validity but sacrifices external validity.
There is no way of judging whether the process of pre-testing actually influenced the results because
there is no baseline measurement against groups that remained completely untreated. For example,
children given an educational pretest may be inspired to try a little harder in their lessons, and both
groups would outperform children not given a pretest, so it becomes difficult to generalise the results to
encompass all children.
The other major problem, which afflicts many sociological and educational research programs, is that it
is impossible and unethical to isolate all of the participants completely. If two groups of children attend
the same school, it is reasonable to assume that they mix outside of classrooms and share ideas,
potentially contaminating the results. On the other hand, if the children are drawn from different
schools to prevent this, the chance of selection bias arises, because randomization is not possible.
The two-group control group design is an exceptionally useful research method, as long as its limitations are fully understood. For extensive and particularly important research, many researchers use the Solomon four-group design, which is more costly but avoids many weaknesses of the simple pretest-posttest designs.
Solomon Four Group Design
The Solomon four-group design, developed by Solomon (1949), is really a combination of the two equivalent group designs described above, namely the post-test only design and the pretest-posttest design, and represents the first direct attempt to control the threats to external validity. This design may be diagrammed as shown below:

R   O1   X   O2
R   O3        O4
R         X   O5
R              O6
It is clear from this diagram that four groups are randomly set up by the experimenter. In this design two simultaneous experiments are conducted and, hence, the advantages of replication are available. The effect of the X treatment is replicated in four ways: O2 > O1, O2 > O4, O5 > O6 and O5 > O3. This design makes it possible to evaluate the main effect of testing as well as the reactive effect of testing, thus increasing the external validity or generalisability. The factorial analysis of variance can be used as the appropriate statistical test. Because the design is complex from the methodological as well as the statistical point of view, it is less preferred than the above two true experimental designs.
TWO FACTOR DESIGN
Until now, only those designs were considered in which the researcher studied the effect of one independent variable. The forthcoming discussion focuses on true experiments with a factorial design, in which two variables, or factors, are manipulated simultaneously. Experiments in which two or more variables are manipulated are designated factorial experiments. Factorial experiments are an expansion of the very popular post-test design, used to study two independent variables simultaneously; these expanded true experiments are called factorial experiments.
A factorial design is often used to understand the effect of two or more independent variables upon a dependent variable. Factorial experiments permit researchers to study behaviour under conditions in which the independent variables, called in this context factors, are varied simultaneously. Thus, researchers can investigate the joint effect of two or more factors on a dependent variable.
The factorial design also facilitates the study of interactions, illuminating the effects of the different conditions of the experiment on identifiable subgroups of subjects participating in the experiment. A factorial design is one in which two or more variables, or factors, are employed in such a way that all the possible combinations of selected values of each variable are used. In the simplest case we have two variables, each of which has two values or levels. This is known as a two-by-two (2*2) factorial design because of the two levels of each variable. The 2*2 design yields four combinations, which you can see in the layout given further below.
Let us take an example to illustrate the meaning of factorial design. Suppose the experimenter wants to know how arousal level and task difficulty affect task performance. Obviously, there are two independent variables or factors: one is arousal level and the other is task complexity. You can denote the first independent variable as A and the second as B. The dependent variable is task performance. Further, suppose the arousal level is manipulated in two ways, high arousal and low arousal: these are the two levels of A. Let the high arousal be A1 and the low arousal A2. Similarly, the task complexity is manipulated in two ways: simple task and complex task. Let the simple task be called B1 and the complex task B2. Now you see that both independent variables have been manipulated in two ways. The resulting factorial design is 2*2.
Layout of a 2*2 Factorial Design
Factor A (arousal level), Factor B (task complexity)
Values or intensity for factor A: A1 (high) & A2 (low)
Values or intensity for factor B: B1 (simple) & B2 (complex)

              B1 (simple)    B2 (complex)
A1 (high)     A1B1           A1B2
A2 (low)      A2B1           A2B2
In the simplest form, with two levels of each independent variable, there would be four groups of randomly assigned participants. They would receive treatments that represent all possible combinations of the two levels of arousal (high and low) and the two levels of task complexity (simple and complex). This design is known as a 2*2 factorial experiment with 2*2 = 4 groups in all. In the notation system we have been using all along, the design for this factorial experiment can be diagrammed as follows:

G1:   R   X(A1B1)   O1
G2:   R   X(A1B2)   O2
G3:   R   X(A2B1)   O3
G4:   R   X(A2B2)   O4
The hypothesised outcome of this study would be O1 > O3 and O4 > O2. That is, the prediction is that performance on the simple task will be better under conditions of high arousal, while performance on the complex task will be worse under conditions of high arousal: an interaction, or interactive effect, of the two independent variables.
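The standard analysis for such a design is a two-way ANOVA, which tests both main effects and the interaction. A minimal sketch, assuming pandas and statsmodels are available; the performance scores are invented for illustration.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

data = pd.DataFrame({
    "arousal": ["high"] * 8 + ["low"] * 8,
    "task": (["simple"] * 4 + ["complex"] * 4) * 2,
    "performance": [9, 8, 9, 10, 4, 5, 3, 4,      # high arousal
                    6, 7, 6, 7, 7, 8, 7, 8],      # low arousal
})

# Two-way ANOVA: main effect of A (arousal), main effect of B (task),
# and the A x B interaction
model = ols("performance ~ C(arousal) * C(task)", data=data).fit()
print(anova_lm(model, typ=2))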
Two Factor, Multi-Level Factorial Design
The type of factorial design that we have discussed above is one in which there are two independent variables, each having two levels; hence it was referred to as a 2*2 factorial design. Likewise, a factorial design with two independent variables may be a 3*2, 3*3, 4*3, 4*4, etc. design. In this way a generalised factorial design for two independent variables may be written as a K*L factorial design, in which K stands for the first independent variable and L for the second. The values of K and L indicate the number of ways in which the first and the second independent variables have been manipulated; these ways are known as the levels of the independent variables. Thus, a 3*2 factorial design indicates that the first independent variable has three levels and the second independent variable has two levels; in this design there will be six experimental conditions. A 3*3 design similarly indicates that each of the two independent variables has three levels.
USES OF FACTORIAL DESIGN
Factorial designs can be used to:
- study the effect of each of several factors on the response;
- study the effects of several factors simultaneously (if and only if you test many levels of each factor);
- find out whether specified factors influence the response to a given treatment;
- reduce the chance of missing an effect because the chosen material is insensitive.
ADVANTAGES AND DISADVANTAGES OF FACTORIAL DESIGN
The factorial experiment has several advantages over the single independent variable experiment:
1) In a factorial design two or more independent variables are simultaneously manipulated, whereas in single independent variable experiments, as the name implies, a separate experiment is needed to study the effect of each independent variable. Thus a factorial experiment provides economy of time, labour and money.
2) The factorial experiment also permits the evaluation of the interaction effect upon the dependent variable. In a single independent variable experiment one cannot evaluate the effect of the interaction of independent variables, because only one independent variable is manipulated at a time.
3) The experimental results of a factorial experiment are more comprehensive and can be generalised to a wider range due to the manipulation of several independent variables in one experiment. From this point of view the single IV experiment suffers a major setback.
4) In factorial experiments there is an additional gain from the hidden replication arising from the factorial arrangement itself.
Disadvantages
Despite these advantages, factorial design has some disadvantages which are as follows:
1) Sometimes the experimental setup and the resulting statistical analysis become so complex that the experimenter may wish to drop this design and return to a single IV experiment. This is especially true when more than three independent variables, each with three or more levels, are to be manipulated together.
2) In factorial experiments, when the number of treatment combinations becomes large, it becomes difficult for the experimenter to select a homogeneous experimental unit (or subject).
3) Sometimes some treatment combinations arising out of the simultaneous manipulation of several independent variables become meaningless; the resources spent on those combinations are then simply wasted.
TYPES OF STATISTICS
After knowing the concept and definition of statistics, let us look at the various types of statistics. Though various bases have been adopted to classify statistics, the following are the two major ways of classifying it: (i) on the basis of function and (ii) on the basis of distribution.
1.3.1 On the Basis of Functions
As statistics has some particular procedures to deal with its subject matter or data, three types of
statistics have been described.
A) Descriptive statistics: The branch which deals with descriptions of obtained data is known as descriptive statistics. On the basis of these descriptions a particular group or population is characterised. Descriptive statistics include classification, tabulation, and measures of central tendency and variability. These measures enable researchers to know the tendency of the data or scores, which further eases the description of the phenomena.
B) Correlational statistics: In this type of statistics the obtained data are examined for their intercorrelations. It includes various types of techniques to compute the correlations among data. Correlational statistics also provide descriptions of the sample or population for further analyses to explore the significance of their differences.
C) Inferential statistics: Inferential statistics deals with drawing conclusions about a large group of individuals (the population) on the basis of observations of a few participants from it, or about events which are yet to occur on the basis of past events. It provides tools to compute the probabilities of the future behaviour of the subjects.
1.3.2 On the Basis of Distribution of Data
Parametric and nonparametric statistics are the two classifications on the basis of the distribution of data. Both are concerned with populations and samples. By population we mean the total number of items in a sphere. In general a population may contain an infinite number of items, but in statistics a population is often finite, like the number of students in a college. According to Kerlinger (1968), the terms population and universe mean all the members of any well-defined class of people, events or objects. In a broad sense, a statistical population may have three kinds of properties: (a) containing a finite number of items and knowable, (b) having a finite number of items but unknowable, and (c) containing an infinite number of items.
A sample is a part of a population which represents that population's properties. The more unbiased and random the sample selection, the more representative the sample will be of its population. A sample is a part of a population selected (usually according to some procedure and with some purpose in mind) such that it is considered to be representative of the population as a whole.
Parametric statistics are defined by the assumption of a normal distribution for the population under study. Parametric statistics refers to those statistical techniques that have been developed on the assumption that the data are of a certain type: in particular, the measure should be on an interval scale and the scores should be drawn from a normal distribution.
There are certain basic assumptions of parametric statistics. The very first characteristic of parametric statistics is that they proceed only after confirming that the population is normally distributed. The normal distribution of a population shows a symmetrical spread over the continuum of −3 SD to +3 SD and keeps a unimodal shape, as its mean, median, and mode coincide. If the samples are from various populations, then they are assumed to have the same variance ratio among them. The samples are independent in their selection. The chances of occurrence of any event or item out of the total population are equal, and any item can be selected in the sample; this reflects the randomized nature of the sample, which also happens to be a good tool to avoid experimenter bias.
In view of the above assumptions, parametric statistics seem to be more reliable and authentic as compared to nonparametric statistics. They are more powerful for establishing the statistical significance of effects and differences among variables. It is more appropriate and reliable to use parametric statistics in the case of large samples, as the results are then more accurate. The data to be analysed under parametric statistics are usually from an interval scale.
However, along with many advantages, some disadvantages have also been noted for parametric statistics. They are bound to the rigid assumption of normal distribution, which narrows the scope of their usage. In the case of a small sample, normal distribution cannot be assumed and thus parametric statistics cannot be used. Further, computation in parametric statistics is lengthy and complex because of large samples and numerical calculations. The t-test, F-test and r-test are some of the major parametric statistics used for data analysis.
Nonparametric statistics are those statistics which are not based on the assumption of a normal distribution of the population; therefore, they are also known as distribution-free statistics. They are not bound to be used with interval scale data or normally distributed data. Data with non-continuity can be tackled with these statistics. In samples where it is difficult to maintain the assumption of normal distribution, nonparametric statistics are used for analysis. Samples with a small number of items are treated with nonparametric statistics because of the absence of a normal distribution. They can be used even for nominal data, along with ordinal data. Some of the usual nonparametric statistics include chi-square, Spearman's rank difference method of correlation, Kendall's rank difference method, the Mann-Whitney U test, etc.
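The practical difference can be sketched by analysing the same two samples once with a parametric test and once with a nonparametric counterpart. A minimal sketch assuming scipy is available; the scores are invented for illustration.

from scipy import stats

group_a = [23, 27, 25, 30, 28, 26, 24, 29]
group_b = [19, 22, 20, 25, 21, 23, 20, 22]

# Parametric: assumes interval data drawn from normally distributed populations
print(stats.ttest_ind(group_a, group_b))

# Nonparametric (distribution-free): usable with ordinal or non-normal data
print(stats.mannwhitneyu(group_a, group_b))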
UNIT 1 SIGNIFICANCE OF THE DIFFERENCE OF FREQUENCY: CHI-SQUARE
Structure
1.0 Introduction
1.1 Objectives
1.2 Parametric and Non-Parametric Statistics Tests
1.2.1 Chi-square Test: Definitions
1.3 Assumptions for the Application of the χ2 Test
1.3.1 χ2 Distribution
1.4 Properties of the Chi-square Distribution
1.5 Application of Chi-square Test
1.5.1 Test of Goodness of Fit
1.5.2 Test of Independence
1.5.3 Test of Homogeneity
1.6 Precautions about Using the Chi-square Test
1.7 Let Us Sum Up
1.8 Unit End Questions
1.9 Answers to Self Assessment Questions
1.10 Glossary
1.11 Suggested Readings
1.0 INTRODUCTION
In psychology the researcher is sometimes interested in such questions as whether the type of soft drink preferred by a consumer is in any way influenced by the consumer's age. Similarly, among four colours, red, green, indigo and yellow, is there any significant difference among subjects in regard to the preference of colours? To approach questions like these, we find the number of persons who preferred each particular colour. Here we have the data in the form of frequencies: the number of cases falling into each of the four categories of colours. The researcher compares the observed frequencies characterizing the choices of the four categories by a large number of individuals. Some prefer red, some green, some indigo, and some yellow. Hypothetically speaking, one may say that all colours are equally preferred and are an equal choice for the individuals. But in reality more persons may choose one colour than another. Thus what is expected as a choice which is equal for all colours may turn out differently when the actual choice is made by the persons. Now we note in a table the number of persons who chose each of the four colours. Hypothetically, we said that there would be the same number of persons choosing each of the four colours; this also we note in the table. Now we compare and see, for each colour, the difference between the actual number of choices made and the hypothetical equal number. The differences thereof are noted. To find out
if these differences are really of statistical significance amongst the persons in regard to choice of colours, we use a statistical technique called chi-square. In this unit we will discuss the concept of chi-square and its application.
1.1 OBJECTIVES
After reading this unit, you will be able to:
- define chi-square;
- state the assumptions for the application of chi-square;
- describe the properties of the chi-square distribution; and
- explain the applications of the chi-square test.
1.2 PARAMETRIC AND NON-PARAMETRIC STATISTICS TESTS
A variety of statistical tests are available for analyzing data. Broadly, statistical tests can be classified into two categories: one is the parametric statistical test and the other is the non-parametric statistical test. Which statistical method we should select for analysing our data depends on (i) the scale of measurement of the data and (ii) the shape of the population distribution. There are four scales of measurement: nominal, ordinal, interval and ratio.
In a nominal scale, numbering or classification is always made according to similarity or differences observed with respect to some characteristic. For example, if we take eye colour as a variable, then we will be able to classify individuals into categories like blue-eyed, brown-eyed and black-eyed, and assign a number within each category to identify the individuals belonging to a group.
In an ordinal scale, the numbers reflect the rank order of individuals within their group with respect to some quality, property or performance. For example, on the basis of marks obtained we give ranks 1, 2, 3 and so on, and students are ordered on the basis of their marks.
The defect in such scales lies in the fact that the units along the scale are unequal in size. The difference in marks between the first and second rank holders may not be the same as the difference in marks between the second and third position holders.
The interval scale does not merely identify or classify individuals in relation to some attribute by a number (on the basis of sameness or difference) or rank (in order of their merit position), but advances much further by pointing out the relative quantitative as well as qualitative differences. The major strength of interval scales is that they have equal units of measurement; they do not, however, possess a true zero.
The measures of a ratio scale are not only expressed in equal units but are also taken from a true zero.
Independence of observations means that the inclusion or exclusion of any case in the sample should not unduly affect the results of the study. Similarly, studies may differ in sample size: in psychology, if the number of subjects in a group is 30 or more, it is known as a large sample.
The distribution of the population from which the samples have been drawn may be normal or skewed. If the level of measurement of the collected data is an interval or ratio scale, or if the sample on which the data have been collected is very large, then it may be assumed that the population is normally distributed. If the observations are independent and the population has the same variance as the sample drawn from it, then we use parametric statistics. However, in many situations the size of the sample is quite small, assumptions like the normality of the distribution of scores in the population are doubtful, and the measurements are available only in the form of classifications or ranks; then we use non-parametric tests.
There are a number of non-parametric tests, such as the chi-square test, the Wilcoxon-Mann-Whitney U test, rank difference methods (rho and tau), the coefficient of concordance (W), the median test, the Kruskal-Wallis H test, and the Friedman test. In this unit we will discuss the chi-square test.
1.2.1 Chi-Square Test: Definitions
This is one of the most important non-parametric statistics. As we shall see further on, chi-square is used for several purposes.
The term chi (χ) is a Greek letter, pronounced 'ki'. The chi-square test was originally developed by Karl Pearson in 1900 and is sometimes called the Pearson chi-square. According to Garrett (1981), the differences between observed and expected frequencies are squared and divided by the expected number in each case, and the sum of these quotients is chi-square.
According to Guilford (1973), by definition a χ2 is a sum of ratios (any number can be summed), each ratio being that between a squared discrepancy, or difference, and an expected frequency.
On the basis of the above definitions it can be said that the discrepancy between observed and expected frequencies is expressed in terms of a statistic named chi-square (χ2).
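In symbols, these definitions amount to the following formula, where O denotes an observed frequency and E the corresponding expected frequency, summed over all categories:

\chi^2 = \sum \frac{(O - E)^2}{E}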
1.3 ASSUMPTIONS FOR THE APPLICATION OF THE χ2 TEST
Before using chi-square as a test statistic to test a hypothesis, the following assumptions are necessary:
- Although the chi-square test is conducted in terms of frequencies, or data that can be readily transformed into frequencies, it is best viewed conceptually as a test about proportions.
- The chi-square test is applied only to discrete data. However, any continuous data can be reduced to categories in such a way that they can be treated as discrete data, and then the application of chi-square is justified.
- Each observation should be independent. For example, let us say that we want to find out the colour preference of 100 females: we select 100 subjects randomly, and the preference of one individual is not predictable from that of any other. If there is a relationship between the two variables or the subjects are matched, then chi-square cannot be used; correlated data, such as matched pairs, are not subjected to chi-square treatment.
- The sample should be random.
- The expected frequency in each cell should be 5 or more, because for expected frequencies of less than 5 the value of χ2 is overestimated, resulting in too many rejections of the null hypothesis. In larger tables, not more than 20 per cent of the expected values may be less than 5, and no expected value may be less than 1. If more than 20 per cent of the cells have expected frequencies less than 5, then χ2 should not be applied.
- A reasonably large sample size is assumed. If the sample size is small, χ2 will yield an inaccurate inference. In general, the larger the sample size, the less the observed distribution is affected by chance, and thus the more reliable the test of the hypothesis.
- The data should be expressed in original units, rather than in percentage or ratio form. This precaution helps in the comparison of the attributes of interest.

1.3.1 χ2 Distribution
The term non-parametric does not mean that the population distribution under study has no parameters: all populations have certain parameters which define their distribution. The sampling distribution of χ2 is called the χ2 distribution. As in other hypothesis testing procedures, the calculated value of the χ2 test statistic is compared with its critical (or table) value to know whether the null hypothesis is true. The decision to accept a null hypothesis is based on how 'close' the sample results are to the expected results. If the null hypothesis is true, the observed frequencies will vary from their corresponding expected frequencies only according to the influence of random sampling variation. The calculated value of χ2 will be smaller when the agreement between observed and expected frequencies is closer. The shape of a particular chi-square distribution depends on the number of degrees of freedom.
Let us now see what degrees of freedom are. Let us say there are 5 scores, for example 25, 35, 45, 55, 65. The mean here is 45. Let us say that from this mean of 45 we try to see the difference in each of the other scores. While all the other 4 scores are taken for the difference, this 45 is not available for other calculations, as from this
mean of 45 all the other calculations are made. From this mean we note the differences in the remaining scores, and thus amongst the 5 scores one score is not free to vary; in other words, the group of 5 subjects has lost one degree of freedom. Suppose we go for higher level calculations; then accordingly there will be more items that cannot vary, and thus more degrees of freedom will be lost. In a 3 × 3 table, we will be using one cell from the row and one cell from the column for the purpose of chi-square calculations. Thus the degrees of freedom will be (3 − 1)(3 − 1) = 4, that is, (r − 1)(k − 1), where r = rows and k = columns. It may also be noted that chi-square assumes positive values only; as such, a chi-square curve starts from the origin (zero point) and lies entirely to the right of the vertical axis. If we draw a curve with chi-square values on the X-axis and probability on the Y-axis, the shape of the chi-square distribution curve will be strongly skewed for very small degrees of freedom. As the degrees of freedom increase, the shape changes; eventually, for larger degrees of freedom, the curve looks similar to the curve of a normal distribution. A point worth noting is that the total area under a chi-square distribution curve is 1.0, as is the case with all other continuous distribution curves.
Self Assessment Questions
1) Given below are statements. Indicate in each case whether the statement is true or false:
i) The utility of the chi-square test depends largely on the quality of data used in the test. (T/F)
ii) The number of degrees of freedom in a chi-square test depends on both the number of rows and the number of columns in the contingency table. (T/F)
iii) The shape of the chi-square distribution depends on the degrees of freedom. (T/F)
iv) The value of chi-square depends on the size of the difference between observed frequency and expected frequency. (T/F)
v) Chi-square is a parametric test. (T/F)
2) Fill in the blanks:
i) A chi-square value can never be .........................................................
ii) When the frequency in any cell is less than 5 we use ...........................
iii) If the obtained value of chi-square is more than the critical values given in the table we ............................................. the null hypothesis.
1.4 PROPERTIES OF THE CHI-SQUARE DISTRIBUTION
The following properties of the chi-square test statistic are to be kept in mind when we analyse its sampling distribution.
- Chi-square is non-negative in value; it is positively valued, because all discrepancies, both positive and negative, are squared. Chi-square will be zero only in the unusual event that each observed frequency exactly equals the corresponding expected frequency.
- The distribution is not symmetrical; it is skewed to the right, as mentioned earlier. The reason for this is that all the values are positive, and hence, instead of having a symmetrical distribution, we have a distribution entirely on the positive side.
- Since the sampling distribution of χ2 does not contain any parameter of the population, the chi-square test statistic is referred to as a non-parametric test. Thus the chi-square distribution does not depend upon the form of the parent population.
- There is a family of chi-square distributions. As with the t-distribution, there is a different chi-square distribution for each value of the degrees of freedom. If we know the degrees of freedom and the area in the right tail of a chi-square distribution, we can find the value of chi-square from the table of chi-square. We give below two examples to show how the value of chi-square can be obtained from this table.
Example 1: Find the value of chi-square for 10 degrees of freedom and an area of .05 in the right tail of the chi-square distribution curve.
Solution: In order to find the required value of chi-square, we first locate 10 in the column for degrees of freedom (df) and .05 in the top row of the table of chi-square. The required chi-square value is given by the entry at the intersection of the row for 10 and the column for .05. This value is 18.307.
Example 2: Find the value of chi-square for 20 degrees of freedom and an area of .10 in the left tail of the chi-square distribution curve.
Solution: This example is different from the above one: here the area in the left tail of the chi-square distribution curve is given. In such a case, we have to first find the area in the right tail, obtained as follows: area in the right tail = 1 − area in the left tail = 1 − .10 = .90. Now we can find, in the same manner as used earlier, the value of χ2. We locate 20 in the column for df and .90 in the top row of the table of chi-square. The required value of χ2 is 12.443.
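Both table look-ups can be verified with the chi-square distribution in scipy. A minimal sketch, assuming scipy is available; ppf is the inverse of the cumulative distribution, so a right-tail area must first be converted into a cumulative (left-tail) area.

from scipy.stats import chi2

# Example 1: df = 10, area .05 in the right tail -> cumulative area .95
print(round(chi2.ppf(0.95, df=10), 3))    # 18.307

# Example 2: df = 20, area .10 in the left tail -> cumulative area .10
print(round(chi2.ppf(0.10, df=20), 3))    # 12.443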
1.5 APPLICATION OF CHI-SQUARE TEST
Chi-square is used for categorical data, that is, for data comprising unordered qualitative categories such as colours, political affiliation, etc. The important applications of the χ2 test are as follows:
- test of goodness of fit;
- test of independence;
- test of homogeneity.
1.5.1 Test of Goodness of Fit
On several occasions a decision-maker needs to know whether an actual sample distribution matches a known theoretical distribution such as the binomial, Poisson, normal, and so on. Similarly, a researcher sometimes compares the observed (sample) frequencies characterising the several categories of a distribution with the frequencies expected according to his or her hypothesis.
The χ2 test for goodness of fit is a statistical test of how well given data support an assumption about the distribution of a population, that is, about a random variable of interest. In other words, the test determines how well an assumed distribution fits the given data.
To apply this test, a particular theoretical distribution is first hypothesised for the given population, and then the test is carried out to determine whether or not the sample data could have come from a population with the hypothesised theoretical distribution. The observed frequencies come from the observation of the sample, and the expected frequencies come from the hypothesised theoretical distribution. The goodness of fit test focuses on the difference between the observed frequencies and the expected frequencies.
This can be explained with the help of the following example.
In a particular study, a researcher wants to find out whether, among four brands of glucose biscuits, there is a difference in the proportion of consumers who prefer the taste of each. We formulate the null hypothesis that there is no differential preference among consumers for the four brands of glucose biscuits. Here we test the hypothesis that there is an equal probability of individuals preferring each brand, i.e. 1/4 = .25; or, if we take 100 persons to test this hypothesis, theoretically 25 will prefer each of the 4 brands of biscuits.
The alternative hypothesis is that a preference exists for the first brand of biscuits and the remainder are equally less preferred, or that the first two are preferred more than the second two, and so on.
The use of chi-square tells us whether the relative frequencies observed in the several categories of our sample frequency distribution are in accordance with the set of frequencies hypothesised.
To test the null hypothesis in the above study, we might allow subjects to taste each of the four brands of glucose biscuits and then find their preference, controlling possible extraneous influences such as knowledge of brand names and order of presentation. Suppose we had, as mentioned earlier, randomly selected 100 subjects and that our observed frequencies of preference are as given in the table below:

                      Brand A   Brand B   Brand C   Brand D
Observed frequency      20        18        30        32
We calculate the expected frequency for each category by multiplying the proportion hypothesised to characterise that category in the population by the sample size. According to the null hypothesis, the expected proportionate preference for each biscuit is 1/4, and the expected frequency of preference for each is therefore (1/4)(100) = 25.
In any experiment, we anticipate that the observed frequencies of choice will vary from the expected frequencies. We do not use these differences directly. One reason is that some differences are positive and some are negative, so they would cancel each other out; to get around this, we square each difference. But how much variation should we expect? Some measure of discrepancy is required to test the null hypothesis, and this is provided by chi-square. We calculate the chi-square (the method of computation will be explained in the next unit). If the obtained chi-square value is more than the critical value at the .05 or .01 level, then we reject the null hypothesis; if the obtained value of chi-square is less than the critical value, then we retain the null hypothesis.
The obtained value of chi-square depends on the size of the discrepancy between observed and expected frequencies: the larger the differences between observed and expected frequencies, the larger the chi-square.
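For the biscuit example above, the computation can be sketched as follows. A minimal sketch, assuming scipy is available.

from scipy.stats import chisquare

observed = [20, 18, 30, 32]      # brands A, B, C, D
expected = [25, 25, 25, 25]      # equal preference under the null hypothesis

result = chisquare(observed, f_exp=expected)
print(result.statistic)          # 5.92
# With df = 4 - 1 = 3 the .05 critical value is 7.81; since 5.92 < 7.81,
# the null hypothesis of no differential preference is retained.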
The size of the discrepancy relative to the magnitude of the expected frequency contributes to the determination of the value of chi-square. For example, suppose we toss a number of coins and find out how many times heads appear. To take a numerical example, suppose we toss a coin 12 times and then 1000 times. According to the null hypothesis, our expected frequencies of heads will be 6 and 500 respectively. If we obtain 11 heads in 12 tosses, the discrepancy is 5; if we obtain 505 heads in 1000 tosses, the discrepancy is also 5.
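The point can be sketched numerically (a minimal sketch assuming scipy is available): the same absolute discrepancy of 5 yields very different chi-square values in the two cases.

from scipy.stats import chisquare

# 11 heads and 1 tail in 12 tosses; 6 of each expected
print(chisquare([11, 1], f_exp=[6, 6]).statistic)          # about 8.33

# 505 heads and 495 tails in 1000 tosses; 500 of each expected
print(chisquare([505, 495], f_exp=[500, 500]).statistic)   # 0.1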
The value of chi-square also depends on the number of discrepancies involved in its calculation. For example, if we use three brands of biscuits instead of four, there will be fewer discrepancies to contribute to the total chi-square. This influences the degrees of freedom: when the number of discrepancies is 4, the degrees of freedom will be 3, and when the number of discrepancies is 3, the degrees of freedom will be 2. When the degrees of freedom are 4 and the obtained chi-square is more than 9.48, we reject the null hypothesis at the .05 level. On the other hand, when the degrees of freedom are 4 and the obtained chi-square is 7.81, we retain the null hypothesis at the .05 level.
1.5.2 Test of Independence
In the discussion so far, we have considered the application of chi-square only to the one-variable case: we have considered testing whether the categories in an observed frequency distribution differ significantly from one another. The chi-square test has a much broader use in social science research: to test whether one observed frequency distribution differs significantly from another observed frequency distribution. In other words, chi-square can be used for the analysis of bivariate frequencies.
For example, a social researcher is interested in surveying the attitudes of high school students concerning the importance of getting a college degree. She questions a sample of 60 senior high school students about whether they
believe that a college education is becoming more important, less important, or staying the same, and whether male students respond differently from female students.
As we know, nominal and ordinal variables are generally presented in the form of a cross-tabulation. Specifically, cross-tabulations are used to compare the distribution of one variable, often called the dependent variable, across categories of some other variable, the independent variable. In a cross-tabulation the focus is on the difference between groups, such as between males and females, in terms of the dependent variable, for example, opinion about the changing value of a college education. In the above example we want to study whether gender differences in belief regarding the importance of a college degree are statistically significant.
To study this question, we classify the data (observed frequencies) in a bivariate distribution. For each variable, the categories are mutually exclusive. The data are classified in the following table:
          More important   Less important   About the same
Male            25               6                 8
Female          10               4                 7
Bivariate frequency distributions of this type are known as contingency tables. From such a table we may inquire what cell frequencies would be expected if the two variables were independent of each other in the population. The chi-square test may be used to compare the observed cell frequencies with those expected under the null hypothesis of independence. If the (fo − fe) discrepancies are small, chi-square will be small, suggesting that the two variables may be independent; conversely, a large chi-square points toward a contingency relationship.
As in the case of the t ratio, there is a sampling distribution for chi-square that can be used to estimate the probability of obtaining a significant chi-square value by chance alone rather than by an actual population difference. The t test is used to study the significance of the difference between means; chi-square, on the other hand, is used to make comparisons between frequencies rather than mean scores. As a result, the null hypothesis for the chi-square test states that the populations do not differ with respect to the frequency of occurrence of a given characteristic. In general, the null hypothesis of independence for a contingency table is equivalent to hypothesising that in the population the relative frequencies for any row (across the categories of the column variable) are the same for all rows, or that in the population the relative frequencies for any column (across the categories of the row variable) are the same for all columns.
If the null hypothesis of independence is true at the population level, we should expect random sampling to produce obtained values of chi-square that are in accord with the tabulated distribution of that statistic. If the null hypothesis is false in any way, the obtained value of chi-square will tend to be larger than when it is true; the alternate hypothesis states that there is a significant relationship between the variables.
In the case of a 2 × 2 contingency table, we can test for a difference between two frequencies or proportions from independent samples (just as with the 1 × 2 table, where we can test a hypothesis about a single proportion).
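For the survey example above, the test of independence can be sketched as follows. A minimal sketch, assuming scipy is available.

from scipy.stats import chi2_contingency

#           more important, less important, about the same
observed = [[25, 6, 8],     # male
            [10, 4, 7]]     # female

chi2_value, p, df, expected = chi2_contingency(observed)
print(chi2_value, p, df)    # df = (2 - 1) * (3 - 1) = 2
# A p value above .05 would retain the null hypothesis that opinion
# is independent of gender.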
1.5.3 Test of Homogeneity
The test of homogeneity is useful when we are interested in verifying whether several populations are homogeneous with respect to some characteristic of interest. For example, a movie producer is bringing out a new movie. In order to develop an advertising strategy, he wants to determine whether the movie will appeal most to a particular age group or whether it will appeal equally to all age groups. We formulate the null hypothesis that the opinion of all age groups about the new movie is the same. Hence, the test of homogeneity is useful in testing the null hypothesis that several populations are homogeneous with respect to a characteristic. This test differs from the test of independence on account of the following reasons:
i) Instead of knowing whether two attributes are independent or not, we may like to know whether different samples come from the same population.
ii) Instead of taking only one sample, two or more independent samples are drawn, one from each population.
iii) To apply this test, first a random sample is drawn from each population, and then in each sample the proportion falling in each category is determined. The sample data so obtained are arranged in a contingency table. The procedure for testing the hypothesis is the same as for the test of independence.
Self Assessment Questions
1) What are the properties of the chi-square distribution? Give examples.
..................................................................................................................
2) What are the applications of the chi-square test?
..................................................................................................................
3) Describe the goodness of fit test. Why do we say that chi-square is a goodness of fit test?
..................................................................................................................
4) What is meant by the test of independence? Give suitable examples.
..................................................................................................................
5) Why is the chi-square test called the test of homogeneity? Explain with examples.
..................................................................................................................
1.6 PRECAUTIONS ABOUT USING THE CHI-SQUARE TEST
In order to use a chi-square test properly, one has to be extremely careful and keep in mind certain precautions:
i) The sample size should be large enough. If the expected frequencies are too small, the value of chi-square gets overestimated. To overcome this problem we must ensure that the expected frequency in any cell of the contingency table is not less than 5.
ii) When the calculated value of chi-square turns out to be more than the critical or theoretical value at a predetermined level of significance, we reject the null hypothesis. In contrast, when the chi-square value is less than the critical or theoretical value, the null hypothesis is not rejected. However, when the chi-square value turns out to be zero, we have to be extremely careful to confirm that there really is no difference between the observed and expected frequencies; such a situation sometimes arises on account of a faulty method used in the collection of data.
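The sketch below illustrates precaution (i) in Python on hypothetical counts: scipy reports the expected frequencies, which we can inspect before trusting the chi-square value, and it applies Yates' correction for a 2 × 2 table.

```python
# Checking expected cell frequencies before relying on chi-square.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[2, 8],
                     [7, 3]])   # hypothetical 2 x 2 counts

# For a 2 x 2 table scipy applies Yates' correction when correction=True.
chi2, p, df, expected = chi2_contingency(observed, correction=True)

if (expected < 5).any():
    print("Warning: some expected frequencies are below 5; "
          "the chi-square value may be overestimated.")
print(f"chi-square (Yates corrected) = {chi2:.3f}, p = {p:.3f}")
```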
1.7 LET US SUM UP
• The chi-square test is the most commonly used non-parametric test. Non-parametric techniques are used when a researcher works with ordinal or nominal data, or with a small number of cases representing a highly asymmetrical underlying distribution.
• The chi-square test is used for two broad purposes: firstly as a test of goodness of fit, and secondly as a test of independence.
• As a test of goodness of fit, chi-square tells us whether the frequencies observed among the categories of a variable differ from a set of expected hypothetical frequencies.
• As a test of independence, chi-square is usually applied for testing the relationship between two variables in two ways: first by testing the null hypothesis of independence, and second by computing the value of the contingency coefficient, a measure of the relationship existing between the two variables.
1.8 UNIT END QUESTIONS
1) Describe the properties of the chi-square test.
2) What do you understand by the goodness of fit test? Describe with examples.
3) What is the chi-square test for independence? Explain with examples.
4) What precautions would you keep in mind while using chi-square?
5) Write short notes on: Yates' correction, and expected frequencies in a contingency table.
1.9 ANSWERS TO SELF ASSESSMENT QUESTIONS
1) i) F (ii) T (iii) T (iv) T (v) F
2) i) Negative (ii) Yates' Correction (iii) Reject
1.10 GLOSSARY
Chi-square Distribution: A distribution with degrees of freedom as the only parameter. It is skewed to the right for small degrees of freedom.
Chi-square Test: A statistical technique used to test significance in the analysis of frequency distributions.
Contingency Table: A table having rows and columns wherein each row corresponds to a level of one variable and each column to a level of another variable.
Expected Frequencies: The frequencies for different categories which are expected to occur on the assumption that the given hypothesis is true.
Observed Frequencies: The frequencies actually obtained from the performance of an experiment.
1.11 SUGGESTED READINGS
Garrett, H.E. (1971). Statistics in Psychology and Education. Bombay: Vakils, Feffer & Simons Ltd.
Guilford, J.P. (1973). Fundamental Statistics in Psychology and Education. New York: McGraw-Hill.
UNIT 3 SIGNIFICANCE OF THE DIFFERENCES BETWEEN MEANS (T-VALUE)
Structure
3.0 Introduction
3.1 Objectives
3.2 Need and Importance of the Significance of the Difference between Means
3.3 Fundamental Concepts in Determining the Significance of the Difference between Means
3.3.1 Null Hypothesis
3.3.2 Standard Error
3.3.3 Degrees of Freedom
3.3.4 Level of Significance
3.3.5 Two Tailed and One Tailed Tests of Significance
3.3.6 Errors in Making Inferences
3.4 Methods to Test the Significance of Difference between the Means of Two Independent Groups (t-test)
3.4.1 Testing Significance of Difference between Uncorrelated or Independent Means
3.5 Significance of the Difference between Two Correlated Means
3.5.1 The Single Group Method
3.5.2 Difference Method
3.5.3 The Method of Equivalent Groups
3.5.4 Matching by Pairs
3.5.5 Groups Matched for Mean and SD
3.6 Let Us Sum Up
3.7 Unit End Questions
3.8 Glossary
3.9 List of Formulae
3.10 Suggested Readings
3.0 INTRODUCTION
In psychology we are sometimes interested in research questions like: Do AIDS patients who are given the drug AZT show higher T-4 blood cell counts than patients who are not given that drug? Is the error rate of typists the same when work is done in a noisy environment as in a quiet one? Is the lecture method of teaching more effective than the lecture-cum-discussion method?
Consider the question of whether the lecture method of teaching is more effective than the discussion method. The investigator divides a class into two groups: one group is taught by the lecture method and the other by the discussion method. After a few months the researcher administers an achievement test to both groups and finds the mean achievement scores of the two groups,
say M1 and M2. The difference between these two means is then calculated. Now the question is whether the difference is a valid difference or whether it is due to sampling fluctuation or error of sampling. Is this difference significant or not significant? On the basis of this difference, could we conclude that one method of teaching is more effective than the other?
These questions can be answered by the statistical measures which we are going to discuss in this unit. To test the significance of a difference between means we can use either the t-test or the Z test. When the sample size is large, we employ the Z test; when the sample is small, we use the t test. In this unit we are concerned with the t test. We will get acquainted with the various concepts related to the computation and description of the t test.
3.1 OBJECTIVES
After reading this unit, you will be able to:
• describe the need and importance of the significance of the difference between means;
• define the fundamental concepts involved: null hypothesis, standard error, degrees of freedom, level of significance, two tailed and one tailed tests of significance, type I error and type II error; and
• test the significance of the difference between means (t-test) when groups are independent, when there are correlated groups, when groups are matched by pairs, and when groups are matched by mean and standard deviation.
3.2 NEED AND IMPORTANCE OF THE SIGNIFICANCE OF THE DIFFERENCE BETWEEN MEANS
In psychology we are sometimes interested in knowing about the significance of differences between populations. For example, we may want to discover whether ten year old boys and ten year old girls differ in their linguistic ability, or to find out if children from high SES perform and score better academically than children from low SES. We may also try to find out if two groups of persons coming from different backgrounds differ in their agility. Many such questions are asked and answered in psychology, and one of the measures we use to answer them is the mean.
Let us take the first question, on the linguistic ability of boys and girls. First we randomly select a large sample of boys and girls (a large sample means the group comprises 30 or more persons). Then we administer a battery of verbal tests to measure the linguistic ability of the two groups and compute the mean scores of the two groups on the linguistic ability test. Let us say the obtained mean scores for boys and girls are M1 and M2. Now we find the difference between the two means. If we get a large difference (M1 − M2) in favour of the girls, then we can confidently say that girls of 10 years of age are significantly more able linguistically than 10 year old boys. On the contrary, if we find a small difference between the two means, then we would conclude that ten year old girls and boys do not differ in linguistic ability.
An obtained mean is influenced by sampling fluctuation or error of sampling, and whatever difference is obtained between the means may be due to this sampling error. Even if the mean of population 1 and the mean of population 2 are the same, because of sampling error we may find a difference between two samples drawn from the two populations. In order to test the significance of an obtained difference we must first have a standard error (SE) of the difference. Then, from the difference between the sample means and the standard error of the difference, we can say whether the difference is significant or not. Now the question arises: what do we mean by a significant difference? According to Garrett (1981), a difference is called significant when the probability is high that it cannot be attributed to chance (temporary and accidental factors) and hence represents a true difference between the population means.
A difference is non-significant when it appears reasonably certain that it could easily have arisen from sampling fluctuation, and hence implies no real or true difference between the population means.
3.3 FUNDAMENTAL CONCEPTS IN DETERMINING THE SIGNIFICANCE OF THE DIFFERENCE BETWEEN MEANS
3.3.1 Null Hypothesis
This is a useful tool in testing the significance of differences. The null hypothesis asserts that there is no true difference between the two population means, and that the difference found between the sample means is therefore accidental or unimportant (Garrett, 1981). In the course of a study or an experiment, the null hypothesis is stated so that it can be tested for possible rejection. For example, to study the difference in linguistic ability of 8 year old girls and boys, we select random samples of girls and boys, administer a battery of verbal tests, and compute the means of the two groups. In this study the null hypothesis may be stated thus: there exists no significant difference between the linguistic ability of boys and girls. If this null hypothesis is rejected, we can say that one group is superior to the other.
3.3.2 Standard Error
The primary objective of statistical inference is to make generalisations from a sample to some population of which the sample is part. The standard error measures (1) error of sampling and (2) error of measurement. Suppose we know the true mean of the population. If we randomly select 100 representative samples from the population and compute their means, the standard deviation of these sample means is known as the standard error of the mean. The standard error of the mean can be calculated by the following formula:
SEM or σM = σ/√N
where σ = the standard deviation of the sample and N = the number of cases in the sample. If the standard error of the mean is large, it indicates considerable sampling error.
3.3.3 Degrees of Freedom
When a statistic is used to estimate a parameter, the number of degrees of freedom (df) available depends upon the restrictions placed upon the observations; one df is lost for each restriction imposed. For example, suppose we have five scores: 5, 6, 7, 8 and 9. The mean is 7, and the deviations of our scores from 7 are −2, −1, 0, 1 and 2. The sum of these deviations is zero. In consequence, if any four deviations are known, the remaining deviation is automatically determined. In this way, of the five deviations only four (N − 1) are free to vary, as the condition that the sum equals zero imposes a restriction on the independence of the fifth deviation. Originally there were 5 (N = 5) degrees of freedom in computing the mean, because all the observations or scores were independent. But when we make use of the mean for computing the standard deviation, we lose one degree of freedom. Degrees of freedom vary with the nature of the population and the restrictions imposed. For example, in the case of a value calculated between the means of two independent groups, where we need to compute deviations from two means, the number of restrictions imposed goes up to two; consequently the df is (N1 − 1) + (N2 − 1).
3.3.4 Level of Significance
Whether a difference between means is to be taken as statistically significant or not depends upon the probability that the given difference could have arisen by chance. The researcher has to take a decision about the level of significance at which he will test his hypothesis. In the social sciences the .05 and .01 levels of significance are most often used. When we decide to use the .05 or 5% level of significance for rejecting a null hypothesis, it means that the chances are 95 out of 100 that the null hypothesis is not true, and only 5 chances out of 100 that it is true.
In certain types of data the researcher may prefer to make the test more exact and use the .01 or 1% level of significance. If the hypothesis is rejected at this level, the chances are 99 out of 100 that the hypothesis is not true, and only 1 chance out of 100 that it is true. The level of significance which the researcher will accept should be set before collecting the data.
3.3.5 Two Tailed and One Tailed Tests of Significance
In many situations we are interested in finding the difference between an obtained mean and the population mean. Our null hypothesis states that M1 and M2 do not vary and that the difference between them is zero (i.e. H0: M1 − M2 = 0). Whether this difference is positive or negative, we are not interested in its direction; all we are interested in is whether there is a difference. For example, if we hypothesise that two groups will differ from each other, but we don't know which group will have the higher mean score and which the lower, this is a non-directional hypothesis and it gives rise to a two-tailed test. In other words, the difference may be in either direction and is said to be non-directional.
In many experiments our primary concern is with the direction of the difference rather than with its existence in absolute terms. For example, we may want to determine the gain in vocabulary resulting from additional weekly reading assignments; here we are interested in finding out the gain in vocabulary. To take another example, if we say that training in yoga will
reduce the degree of tension in persons, then we are clearly stating that there will be a reduction in tension. In cases like these we make use of the one tailed, or directional, test to test the significance of the difference between the means.
3.3.6 Errors in Making Inferences
If the null hypothesis is true and we retain it, or if it is false and we reject it, we have made a correct decision. But sometimes we make errors. There are two types of errors: Type I error and Type II error.
A Type I error, also known as an alpha error, is committed when the null hypothesis (H0) is rejected when in fact it is true.
A Type II error, also known as a beta error, is committed when the null hypothesis is retained when in fact it is not true. For example, suppose the difference between two population means (μ1 − μ2) is actually zero; if our test of significance, when applied to the sample means, shows that the difference in population means is significant, we make a Type I error. On the other hand, if there is a true difference between the two population means and our test of significance based on the sample means shows that the difference in population means is not significant, we commit a Type II error.
3.4 METHODS TO TEST THE SIGNIFICANCE OF DIFFERENCE BETWEEN MEANS OF TWO INDEPENDENT
GROUPS (t-test)
3.4.1 Testing Significance of Difference Between Uncorrelated or Independent Means
The steps used to find out the significance of the difference between independent means are as below:
Step 1. Compute the mean of each group.
Step 2. Compute the combined (pooled) standard deviation using the following formula, where x1 = X1 − M1 (deviation of each score of the first sample from its mean) and x2 = X2 − M2 (deviation of each score of the second sample from its mean):
SD = √[(Σx1² + Σx2²) / ((N1 − 1) + (N2 − 1))]
Step 3. Compute the standard error of the difference between the two means by the following formula:
SED = SD × √(1/N1 + 1/N2)
Step 4. Compute the t value for the difference between the two independent sample means. The following formula is used to determine the t value:
t = (M1 − M2 − 0)/SED
Step 5. Find the degrees of freedom (df), calculated using the following formula:
df = (N1 − 1) + (N2 − 1)
Step 6. We then refer to a table of the t distribution (found in any statistics book) with the calculated degrees of freedom df and read the t value
given under the .05 and .01 columns of the two-tailed test. If our computed t value is equal to or greater than the critical t value given in the table, we say that t is significant. If the computed value is less than the tabled value, we say that it is non-significant. Let us illustrate the whole process with the help of an
example.
Example: An interest test was administered to 6 boys and 10 girls. They obtained the following scores. Is the mean difference between the two groups significant?
Table: Scores of boys and girls and the t value calculation

Scores of boys            Scores of girls
X1    x1    x1²           X2    x2    x2²
28    −2      4           20    −4     16
35     5     25           16    −8     64
32     2      4           25     1      1
24    −6     36           34    10    100
26    −4     16           20    −4     16
35     5     25           28     4     16
                          31     7     49
                          24     0      0
                          27     3      9
                          15    −9     81
ΣX1 = 180   Σx1² = 110    ΣX2 = 240   Σx2² = 352

Calculation:
M1 = ΣX1/N1 = 180/6 = 30
M2 = ΣX2/N2 = 240/10 = 24
SD = √[(Σx1² + Σx2²)/((N1 − 1) + (N2 − 1))] = √[(110 + 352)/((6 − 1) + (10 − 1))] = 5.74
SED = SD × √[(N1 + N2)/(N1 × N2)] = 5.74 × √(16/60) = 5.74 × .5164 = 2.96
t = (M1 − M2 − 0)/SED = (30 − 24 − 0)/2.96 = 2.03
df = (N1 − 1) + (N2 − 1) = (6 − 1) + (10 − 1) = 14
Entering the table at 14 d.f., we get 2.14 at the .05 level and 2.98 at the .01 level. Since our t of 2.03 is less than 2.14, we say that the mean difference between boys and girls is non-significant.
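The same conclusion can be checked in Python with scipy's independent-samples t-test (which, with its default pooled-variance setting, matches the hand calculation above); the scores are those from the table.

```python
# Re-checking the worked example: interest-test scores of 6 boys and 10 girls.
from scipy.stats import ttest_ind

boys  = [28, 35, 32, 24, 26, 35]                   # M1 = 30
girls = [20, 16, 25, 34, 20, 28, 31, 24, 27, 15]   # M2 = 24

t, p = ttest_ind(boys, girls)        # pooled-variance t-test, df = 14
print(f"t = {t:.2f}, p = {p:.3f}")   # t about 2.02 (text's 2.03), p > .05
# t falls below the critical value of 2.14 at the .05 level,
# so the mean difference is not significant.
```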
Let us take another example.
Example: On an academic achievement test, 31 boys and 42 girls obtained the following scores.
Table: Mean scores and standard deviations

        Mean    SD     N    df
Boys    40.39   8.69   31   30
Girls   35.81   8.33   42   41

Is the mean difference in favour of boys significant at the .05 level? First we compute the pooled standard deviation by the following formula:
SD = √{[SD1²(N1 − 1) + SD2²(N2 − 1)] / [(N1 − 1) + (N2 − 1)]}
where SD1 = standard deviation of group 1, i.e. 8.69; N1 = number of subjects in group 1, i.e. 31; SD2 = standard deviation of group 2, i.e. 8.33; N2 = number of subjects in group 2, i.e. 42.
SD = √{[(8.69)²(31 − 1) + (8.33)²(42 − 1)] / [(31 − 1) + (42 − 1)]} = 8.48
SED = SD × √[(N1 + N2)/(N1 × N2)] = 8.48 × √[(31 + 42)/(31 × 42)] = 2.01
t = (M1 − M2 − 0)/SED = (40.39 − 35.81 − 0)/2.01 = 2.28
df = (N1 − 1) + (N2 − 1) = (31 − 1) + (42 − 1) = 71
Entering the table with 71 df we find t entries of 2.00 at the .05 level and 2.65 at the .01 level. The obtained t of 2.28 is significant at the .05 level but not at the .01 level. We may say that boys' academic achievement is better in comparison to that of girls.
3.5 SIGNIFICANCE OF THE DIFFERENCE BETWEEN TWO CORRELATED MEANS
3.5.1 The Single Group Method
In the previous section we discussed the problem of determining the significance of the difference between means obtained by two independent groups of boys and girls. Sometimes we have a single group and administer the same test twice. For example, if we intend to find out the effect of training on students' educational achievement, we first measure the subjects' educational achievement before training, then introduce the training programme, and then measure educational achievement again. Here we have a single group and administer the educational achievement test twice. Such a design is known as a single group design. In order to get the significance of the
difference between the means obtained before training and after training, we use the following method:
SED or σD = √(σ²M1 + σ²M2 − 2r·σM1·σM2)
where σM1 = standard error of the mean of the initial test, σM2 = standard error of the mean of the final test, and r = coefficient of correlation between scores on the initial test and the final test.
t = (M1 − M2 − 0)/σD
Let us illustrate the above formula with the help of the following example.
Example: At the beginning of the session an educational achievement test in maths was given to 100 IX grade students. Their mean was 55.4 and SD was 7.2. After six months an equivalent form of the same test was given; the mean was 56.9 and SD was 8.0. The correlation between scores made on the first testing and the second testing was .64. Has the class made significant progress in maths during the six months? We may tabulate our data in the following manner.
Table: Scores in the initial and final test of students

                               Initial Test   Final Test
No. of students                100            100
Mean                           55.4           56.9
SD                             7.2            8.0
Standard error of the mean     0.72           0.80
r12                            .64

Calculation:
SED or σD = √(σ²M1 + σ²M2 − 2r·σM1·σM2) = √[(.72)² + (.80)² − 2 × .64 × .72 × .80]
= √[.5184 + .6400 − .7373] = .649
t = (M1 − M2 − 0)/SED = 1.5/.649 = 2.31
df = (N − 1) = (100 − 1) = 99
From the table at df = 99 we find that the value at the .05 level is 1.98 and at the .01 level is 2.63. Our obtained value of 2.31 is therefore significant at the .05 level but not at the .01 level. We can say that the class made substantial improvement in mathematical achievement in six months.
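A small Python sketch of the single group calculation above, starting from the summary figures reported in the table:

```python
# Significance of the difference between two correlated means (single group).
import math

M1, M2 = 55.4, 56.9        # initial and final means
SEm1, SEm2 = 0.72, 0.80    # standard errors of the two means (SD / sqrt(N))
r = 0.64                   # correlation between initial and final scores

# Standard error of the difference between two correlated means.
SED = math.sqrt(SEm1**2 + SEm2**2 - 2 * r * SEm1 * SEm2)
t = (M2 - M1) / SED
print(f"SED = {SED:.3f}, t = {t:.2f}")   # SED about 0.649, t about 2.31
```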
3.5.2 Difference Method
When groups are small we prefer the difference method to the one given above. Let us illustrate the use of this method with the help of the following example.
Example: Twelve subjects were tested on an attitude scale. Then they were made to read some literature in order to change their attitude. Their attitudes were again measured by the same scale. The results of the initial and final testing are as under.
Table: Results of initial and final testing of attitude

Initial     Final       Difference D          x = D − MD      x²
condition   condition   (cond 2 − cond 1)     (MD = 8)
50          62          12                      4 (12 − 8)     16
42          40          −2                    −10 (−2 − 8)    100
35          30          −5                    −13 (−5 − 8)    169
51          61          10                      2 (10 − 8)      4
42          52          10                      2 (10 − 8)      4
26          35           9                      1 (9 − 8)       1
41          51          10                      2 (10 − 8)      4
42          52          10                      2 (10 − 8)      4
60          68           8                      0 (8 − 8)       0
70          84          14                      6 (14 − 8)     36
55          63           8                      0 (8 − 8)       0
38          50          12                      4 (12 − 8)     16
                        ΣD = 96                               Σx² = 354

Note: The sum of the scores in the final condition is more than the sum in the initial condition, therefore we subtract the initial scores from the final scores (final condition − initial condition) and add them to find ΣD.
MD = ΣD/N = 96/12 = 8
SDD = √[Σx²/(N − 1)] = √(354/11) = 5.67
SEMD = SDD/√N = 5.67/√12 = 1.64
t = (MD − 0)/SEMD = 8/1.64 = 4.88
d.f. = 12 − 1 = 11
Entering the table with 11 df, we find t entries of 2.20 and 3.11 at the .05 and .01 levels. Our t of 4.88 is far above the .01 level. We can conclude that subjects' attitude changed significantly from the initial to the final condition.
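The difference method is exactly what a paired t-test computes; here is a Python sketch using the twelve pairs of scores from the table above.

```python
# Verifying the difference-method example with a paired t-test.
from scipy.stats import ttest_rel

initial = [50, 42, 35, 51, 42, 26, 41, 42, 60, 70, 55, 38]
final   = [62, 40, 30, 61, 52, 35, 51, 52, 68, 84, 63, 50]

t, p = ttest_rel(final, initial)      # df = 12 - 1 = 11
print(f"t = {t:.2f}, p = {p:.4f}")    # t about 4.88, p well below .01
```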

3.5.3 The Method of Equivalent Groups
In experiments where we want to compare the relative effect of one method of treatment over another, we generally take two groups: one known as the experimental group and the other as the control group. Here we have two groups, not a single group. For valid results these two groups need to be made equivalent. This can be done by (i) the matched pair technique or (ii) the matched groups technique. These are explained below.
i) Matched pair technique: In this technique matching is done by pairs. Matching is done on variables which would otherwise affect the results of the study, such as age, intelligence, interest and socio-economic status.
ii) Matched groups technique: In this technique, instead of person-to-person matching, groups are matched in terms of mean and SD.
3.5.4 Matching by Pairs
The formula for the standard error of the difference between means is:
SED or σD = √(σ²M1 + σ²M2 − 2r·σM1·σM2)
Here σM1 = standard error of mean 1, σM2 = standard error of mean 2, and r = correlation between the two groups' scores.
t = (M1 − M2 − 0)/SED
Example: Two groups X and Y of children, 72 in each group, are paired child to child for age and intelligence. Both groups were given a group intelligence scale and scores were obtained. After three weeks the experimental group subjects were praised for their performance and urged to try to do better; the control group did not get this incentive. The group intelligence scale was administered again to both groups. The data obtained were as follows. Did the praise affect the performance of the group, i.e. is there a significant difference between the two groups?
Table:
                                           Experimental group   Control group
No. of children in each group              72                   72
Mean score of final test                   88.63                83.24
SD of final test                           24.36                21.62
Standard error of the mean of final test   2.89                 2.57
Correlation between experimental and control group scores: .65

SED or σD = √(σ²M1 + σ²M2 − 2r·σM1·σM2) = √[(2.89)² + (2.57)² − 2 × .65 × 2.89 × 2.57] = 2.30
t = (88.63 − 83.24 − 0)/2.30 = 2.34
d.f. = 72 − 1 = 71
Looking at the table we find that at 71 d.f. the value at .05 is 2.00 and at .01 is 2.65. The obtained t of 2.34 is therefore significant at the .05 level but not at the .01 level. On the basis of these results it can be said that praise did have a significant effect in stimulating the performance of the children.
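A Python sketch of the matched-pairs calculation above, computing the standard errors of the means from the reported SDs:

```python
# Matched-pairs t from summary statistics.
import math

N = 72
SD1, SD2 = 24.36, 21.62
r = 0.65
M1, M2 = 88.63, 83.24

SEm1, SEm2 = SD1 / math.sqrt(N), SD2 / math.sqrt(N)   # about 2.87 and 2.55
SED = math.sqrt(SEm1**2 + SEm2**2 - 2 * r * SEm1 * SEm2)
t = (M1 - M2) / SED
# Prints t of about 2.36; the text's 2.34 uses the rounded SEs 2.89 and 2.57.
print(f"t = {t:.2f} on df = {N - 1}")
```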
3.5.5 Groups Matched for Mean and SD
When groups are matched in terms of mean and SD, the following formula is used to calculate t:
SED = √[(σ²M1 + σ²M2)(1 − r²)]
t = (M1 − M2 − 0)/SED
where SED = standard error of the difference, σ²M1 = squared standard error of mean 1, σ²M2 = squared standard error of mean 2, and r = correlation between scores on the matching variable and the test being compared.
The above formula can be illustrated by the following example.
Example: The 58 students of an academic college and 72 students of a technical college were matched for mean and SD on a general intelligence test. Their achievement on a mechanical aptitude test was then compared. The question is: do the two groups, enrolled in different courses, differ in mechanical ability?

                                  Academic   Technical
No. of students in each group     58         72
Mean on Intelligence Test (Y)     102.50     102.80
SD on Intelligence Test (Y)       33.65      31.62
Mean on Mechanical Aptitude (X)   48.52      53.51
SD on Mechanical Aptitude (X)     10.60      15.36
r12                               .50

SED or σD = √{[(10.60)²/58 + (15.36)²/72] × [1 − .25]} = 1.98
t = (53.51 − 48.52 − 0)/1.98 = 2.52
d.f. = (N1 − 1) + (N2 − 1) = (58 − 1) + (72 − 1) = 128
Entering the table we find that at 125 df (nearest to 128) the values are 1.98 at the .05 level and 2.62 at the .01 level. Our obtained value of 2.52 is significant at the .05 level. We may say that the two groups differ in mechanical aptitude.
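A Python sketch of this last formula, using the figures from the example:

```python
# Groups matched for mean and SD.
import math

N1, N2 = 58, 72
SD1, SD2 = 10.60, 15.36     # SDs on the mechanical aptitude test
r = 0.50                    # correlation with the matching variable
M1, M2 = 48.52, 53.51

SED = math.sqrt((SD1**2 / N1 + SD2**2 / N2) * (1 - r**2))
t = abs(M2 - M1) / SED
print(f"SED = {SED:.2f}, t = {t:.2f}")    # SED about 1.98, t about 2.52
```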
UNIT 2 INTRODUCTION TO CORRELATION: PRODUCT MOMENT COEFFICIENT OF CORRELATION
After reading this unit, you will be able to:
• Understand how to interpret the correlation coefficient;
• Describe the direction and strength of a correlation;
• Test the significance of r and state the related assumptions; and
• Know how and when to use other methods of correlation when r is not appropriate to use.
2.2 BUILDING BLOCKS OF CORRELATION
Understanding the product moment correlation coefficient requires an understanding of mean, variance and covariance. We shall review them once again in order to understand correlation.
2.2.1 Mean
The mean of variable X (symbolised as M) is the sum of scores (ΣX) divided by the number of observations (N). The mean is calculated in the following way:
M = ΣX / N
You have learned this in the first block. We will use this as a basic element in computing correlation.
2.2.2 Variance
The variance of a variable X (symbolised as V) is the sum of squares of the deviations of each X score from the mean of X, divided by the number of observations (N):
V = Σ(X − M)² / N = Σx² / N
You have already learned that the standard deviation of variable X (σx) is the square root of the variance of X: σx = √(Σx² / N).
2.2.3 Covariance
The covariance between X and Y (CVxy) can be stated as:
CVxy = Σxy / N
where xy = product of the deviations of the X and Y scores from their respective means, x = deviation from the mean of X, y = deviation from the mean of Y, and N = the total number of subjects.
Covariance is a number that indicates the association between the two variables X and Y. To compute covariance, first calculate the deviation of each score on X from its mean (M1) and the deviation of each score on Y from its mean (M2). Multiply these deviations to obtain their products, sum these products, and divide the sum by the number of observations (N). The resulting number is the covariance. Let's quickly learn to compute the covariance. We shall use the data from table 1.5 for this purpose, calling duration of practice X and time taken Y.
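Since table 1.5 is not reproduced here, the short Python sketch below uses hypothetical practice/time values simply to show the computation:

```python
# Computing covariance from deviations about the means.
def covariance(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Sum of products of deviations from the means, divided by n.
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

X = [2, 4, 6, 8, 10]    # hypothetical duration of practice
Y = [11, 9, 8, 6, 5]    # hypothetical time taken
print(covariance(X, Y))  # negative: more practice goes with less time taken
```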
2.3 PEARSON'S PRODUCT MOMENT COEFFICIENT OF CORRELATION
Now we shall understand the formula for computing Pearson's product moment correlation coefficient. We will also solve an example.
2.3.1 Formula
Since you have learned to compute the covariance between X and Y, we shall now learn to compute Pearson's product moment correlation coefficient (r). It can be defined as:
r = Σxy / (N·σx·σy)          (equation 2.4)
The numerator is the covariance between X and Y; σx and σy are the standard deviations of X and Y. It can be shown that 1 is the maximum absolute value this ratio can take, so the correlation coefficient is bound between −1 and +1. The sign of Pearson's r depends on the sign of the sum of the products of the deviations x and y: if this sum is negative, r will be negative, and if it is positive, r will be positive. The denominator of the formula is always positive. This is the reason for the −1 to +1 range of the correlation coefficient. By substituting the covariance equation for the covariance we can rewrite equation 2.4 as:
r = Σ(X − X̄)(Y − Ȳ) / (N·Sx·Sy)          (equation 2.5)
This is the deviation score formula for computing Pearson's correlation. Let us look at an example of computing the product moment correlation coefficient. A researcher was interested in studying the relationship between personality and creativity. She chose the Five-Factor model as the personality conceptualisation. The Five-Factor theory proposes that one of its five main dimensions, openness to experience (O), is uniquely related to creativity (McCrae, 1987). So she chose the 12-item NEO-FFI Openness scale to measure openness, and the Torrance Test of Creative Thinking (Figural) for the measurement of creativity; the originality subscale of the TTCT fits the definition of creativity as uniqueness. Finally, she administered both instruments to ten subjects (in real research we should take a larger sample than this, roughly around 50 to 100, to have sufficient statistical power). The data she obtained are given in table 2.2. Now we shall compute the Pearson's product moment correlation coefficient on these data.
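Table 2.2 is not reproduced here, so the Python sketch below applies the deviation score formula to hypothetical openness and creativity scores for ten subjects:

```python
# Pearson's r by the deviation score formula: r = cov / (sd_x * sd_y).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

X = [8, 10, 12, 13, 14, 15, 15, 16, 17, 18]   # openness (hypothetical)
Y = [22, 24, 23, 26, 25, 28, 27, 29, 30, 31]  # creativity (hypothetical)
print(round(pearson_r(X, Y), 3))              # about 0.94, high and positive
```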

2.4 INTERPRETATION OF CORRELATION


Now we shall learn to interpret correlation coefficient. We will interpret the direction and strength of
the correlation.
2.4.1 Understanding Direction The correlation between openness to experience (X) and creativity (Y) is
+0.803. The direction of this relationship is positive. This means that as the openness to experience
increases, creativity also increases and vice-versa. The direction of the correlation is determined by the
cross-product of . If most of these cross-products are positive then correlation is would be positive. It
will happen if a pair of deviations has same signs. Which means that when a given score of X deviates
from negatively, then the corresponding Y score has to deviate from negatively and vice-versa. The
figure 2 shown below indicates a positive relationship.
[Figure] Fig. 2.1: Scatter of the relationship between openness (x-axis) and creativity (y-axis) for the data of table 2.2
2.4.2 Understanding Strength
The strength of the association between two variables can be understood from the value of the coefficient. The coefficient of 0.80 indicates a high correlation. It shows that the relationship between the two variables is certainly a close one, but it is still far from perfect. A low score on openness to experience decreases the chances of being creative; similarly, a high score on openness increases one's chances of being creative. The common variance between openness and creativity can be calculated:
CV = r² × 100 = (0.80)² × 100 = 0.64 × 100 = 64.00%
The variance in creativity that is shared by openness, and vice versa, is 64.00%. This is certainly a high percentage of variance for one variable to explain in another. It must be kept in mind that this correlation is computed on data from ten individuals; more representative data might change the picture. You would also realise that no assumptions were required for computing the correlation.
2.4.3 Issues in Interpretation of Correlation
Interpreting the correlation on the basis of strength and direction looks like a very straightforward exercise. One should keep in mind, however, that this interpretation of the correlation coefficient is subject to other aspects of the data: the range of the scores, the reliability of measurement, and the presence of outliers. Let's look at them.
Restricted Range
While correlating X and Y, it is expected that both variables are measured over their full range. For example, suppose we want to study the correlation between hours spent in studies and marks. We should select students with varying hours of study, from those who have spent very little time in studies to those who have spent a great deal of time. Then we will obtain the true value of the correlation coefficient. But if we take a very restricted range, the value of the correlation is likely to be reduced. Look at figures 2.2a and 2.2b below.
Figure 2.2: Scatters showing the effect of range on correlation.
[Fig. 2.2a: Scatter showing the full range on both variables (hours studied vs. marks), r = .96]
[Fig. 2.2b: Scatter with a restricted range on hours studied, r = .50]
Figure 2.2a is based on the complete range. Figure 2.2b is based on the data of students who have studied for longer durations only. The scatters show that
when the range was full, the correlation coefficient was positive and high. When the range was restricted, the correlation was drastically reduced.
You can think of some such examples. Suppose a sports teacher selects 10 students from a group of 100 students on the basis of a selection criterion: their athletic performance. The actual performance of these ten selected students in the game is then correlated with the selection criterion, and a very low correlation between selection criterion and actual game performance is obtained. The teacher would believe that the selection criterion is not related to actual game performance. Is it true? Why so? You will now realise that the range of the scores on the selection criterion is extremely restricted (because these ten students were all high scorers), and hence the relationship is weak. So whenever you interpret correlations, make sure that the range of the variables is full; otherwise the interpretations will not be valid. Figure 2.2 above shows the effect of range on correlation.
Unreliability of Measurement
This is an important issue in psychological research. A psychological test has to have reliability. Reliability refers to the consistency of the measurement: if the measurement is consistent, the test has high reliability. At times, one or both of the variables may have low reliability, and the correlation between two less reliable variables is reduced. Generally, while interpreting the correlation, the reliability is assumed to be high; the usual interpretations of correlations are not valid if the reliability is low. The reduction in the correlation can be adjusted for the reliability of the psychological test; more advanced procedures are available in books on psychological testing and statistics.
Outliers
Outliers are extreme scores on one or both of the variables. The presence of outliers has a distorting impact on correlation values: both the strength and the direction of the correlation are affected. Suppose you want to compute the correlation between height and weight, which are known to correlate positively. One of the observations has a low score on weight and a high score on height (perhaps an anorexia patient). This extreme score is called an outlier. Let us see the impact of an outlier observation on correlation: without the outlier the correlation is 0.95; the presence of the outlier drastically reduces the correlation coefficient to 0.45, as the figure below shows.
[Figure: Scatter of weight against height with one outlier, r = +.45]
Curvilinearity
We have already discussed the issue of linearity of the relationship. Pearson's product moment correlation is appropriate if the relationship between the two variables is linear. If the relationship is curvilinear, other techniques need to be used. If the degree of curvilinearity is not very high (high scores on both variables go together and low scores go together, but the pattern is not linear), a useful option is Spearman's rho. We shall discuss this technique in the next unit.
2.5 USING THE RAW SCORE METHOD FOR CALCULATING r
Apart from the method we learned in the earlier section, we can use another variation of the formula for Pearson's r, called the raw score method. First we will see how the two formulas are related; then we can solve a numerical example with the raw score method.
2.5.1 Formulas for Raw Scores
We have already learnt the deviation score formula of correlation. Its denominator can be written as N·σx·σy and its numerator as Σxy, so r can be calculated as:
r = Σxy / (N·σx·σy)
The raw score formula, which avoids computing deviations, reads as follows:
r = [ΣXY − (ΣX)(ΣY)/N] / √{[ΣX² − (ΣX)²/N] × [ΣY² − (ΣY)²/N]}
Students may find one of the methods easier. There is nothing special about either method; one should simply be able to compute the value of the correlation correctly.
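A Python sketch of the raw score method; with the same hypothetical data used earlier it returns the same r as the deviation score formula:

```python
# Pearson's r by the raw score formula (no deviations computed).
import math

def pearson_r_raw(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    num = sxy - (sx * sy) / n
    den = math.sqrt((sxx - sx**2 / n) * (syy - sy**2 / n))
    return num / den

X = [8, 10, 12, 13, 14, 15, 15, 16, 17, 18]   # hypothetical, as before
Y = [22, 24, 23, 26, 25, 28, 27, 29, 30, 31]
print(round(pearson_r_raw(X, Y), 3))          # same value, about 0.94
```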
2.6 SIGNIFICANCE TESTING OF r
Statistical significance testing refers to testing a hypothesis about a population parameter by using sample statistics. When we calculate correlation as a descriptive index of a sample, we need not test its statistical significance. Significance needs to be tested when we want to know whether the obtained value of the correlation is greater than a chance finding. One may obtain a correlation of 0.22 between health and income on a sample of 30 individuals; if you take another sample you may get a different value. This would simply be a sample-specific finding. Researchers are always interested in knowing whether obtained findings are due to sample-specific errors or are a correct representation of the population. To decide, one needs to test the statistical significance of the correlation coefficient. Testing the significance of a correlation coefficient is a complex issue; the reason for the complexity lies in the distribution of the population correlation. We will not enter into the complexities of this issue here. The t-distribution and z-distribution are used to test the statistical significance of r.
As you have learned, we need to write a null hypothesis (H0) and an alternative hypothesis (HA) for this purpose. The typical null hypothesis states that the population correlation coefficient between X and Y (ρxy) is zero:
H0: ρxy = 0    HA: ρxy ≠ 0
If we reject H0 then we accept the alternative (HA) that the population correlation coefficient is other than zero. It implies that the finding we obtained is not a sample-specific error. Sir Ronald Fisher developed a method of using the t-distribution for testing this null hypothesis. The degrees of freedom (df) for this purpose are n − 2, where n refers to the number of observations. We can use Appendix C for testing the significance of the correlation coefficient; it provides critical values of correlation coefficients for various degrees of freedom. Let us learn how to use Appendix C. We shall continue with the example of health and income; once we learn it, we shall use it for the creativity and openness example.
The correlation between health and income is +.22, obtained on 30 individuals. We decide to test at the 0.05 level of significance, so our α = .05. We also decide to apply a two-tailed test. A two-tailed test is used if the alternative hypothesis is non-directional, i.e. it does not indicate the direction of the correlation coefficient (it can be positive or negative); a one-tailed test is used when the alternative is directional (it states that the correlation is either positive or negative). Let's write the null and alternative hypotheses:
H0: ρ(health, income) = 0    HA: ρ(health, income) ≠ 0
Now we calculate the degrees of freedom for this example:
df = n − 2 = 30 − 2 = 28
So the df for this example is 28. Now look at Appendix C. Look down the leftmost df column till you reach df = 28, then look across to find the correlation coefficient in the column for a two-tailed test at the 0.05 level of significance. You will reach the critical value of r:
r(critical) = 0.361
Because the obtained correlation value of +0.22 is less than the critical (tabled) value, we accept the null hypothesis that there is no significant correlation between health and income in the population. This method is used regardless of the sign of the correlation coefficient; we use the absolute value (ignore the sign) of the correlation.
Now let's do the significance testing of the correlation for our earlier example of openness and creativity. The obtained correlation coefficient is 0.803 with an n of 10. The null hypothesis is that there is no correlation between creativity and openness in the population; the alternative is that there is a correlation between them in the population:
H0: ρ(openness, creativity) = 0    HA: ρ(openness, creativity) ≠ 0
Now we calculate the degrees of freedom for this example:
df = n − 2 = 10 − 2 = 8
Now look at Appendix C. For df = 8 and a two-tailed α = 0.05, we find the critical value of r:
r(critical) = 0.632
Because the obtained correlation value of +0.803 is greater than the critical value of 0.632, we reject the null hypothesis that there is no correlation between openness and creativity in the population. We accept that there exists a correlation between openness and creativity in the population.
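The t behind these critical values can be reproduced by hand, as sketched below in Python; with raw scores, scipy.stats.pearsonr would return r and the two-tailed p-value directly.

```python
# t statistic for H0: rho = 0, given a reported r and sample size n.
import math

def t_for_r(r, n):
    """t on n - 2 degrees of freedom for testing rho = 0."""
    return r * math.sqrt((n - 2) / (1 - r * r))

print(round(t_for_r(0.22, 30), 2))    # about 1.19 on df = 28: not significant
print(round(t_for_r(0.803, 10), 2))   # about 3.81 on df = 8: significant at .05
```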
2.6.1 Assumptions for Significance Testing
One may recall that simple descriptive use of the correlation coefficient does not involve any assumption about the distribution of either variable. However, using correlation as an inferential statistic requires assumptions about X and Y. Since we are using the t-distribution, the assumptions are similar to those of t. They are as follows:
Independence among the pairs of scores. This assumption implies that the scores of any two observations (subjects, in the case of most psychological data) are not influenced by each other; each pair of observations is independent. This is assured when different subjects provide different pairs of observations.
Bivariate normality. The population of X and the population of Y follow normal distributions, and the population of pairs of X and Y scores has a bivariate normal distribution. This assumption can be tested by using statistical tests for normality.
It should be remembered that r is a robust statistic. This means that some violation of the assumptions would not greatly influence the distributional properties of t or the probability judgments associated with the population correlation.
2.7 OTHER TYPES OF PEARSON'S CORRELATIONS
So far we have discussed Pearson's correlation for continuous measurements of X and Y. We can also calculate Pearson's correlation when one or both of the variables are dichotomous (having two levels). These correlations are popularly known as the Point-Biserial and Phi coefficients. Dichotomous variables are those that take two levels, for example male-female, pass-fail, urban-rural, etc. We shall introduce ourselves to these correlations without solving numerical examples.
2.7.1 Point-Biserial Correlation (rpb)
When one of the variables is dichotomous and the other variable is continuous, the Pearson's correlation calculated on the data is called the Point-Biserial Correlation (rpb). Suppose we want to correlate marital status with satisfaction with life. We take marital status at two levels, married and unmarried, and measure satisfaction with life using a standardised Satisfaction with Life test. Now we have satisfaction with life as a continuously measured variable and marital status as a dichotomous variable; in this case we use the Point-Biserial Correlation (rpb).
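A Python sketch with hypothetical data; scipy provides the point-biserial correlation directly:

```python
# Point-biserial correlation: dichotomous marital status vs. a continuous scale.
from scipy.stats import pointbiserialr

married      = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # 1 = married, 0 = unmarried
satisfaction = [24, 27, 18, 20, 26, 17, 25, 21, 28, 19]   # hypothetical scores

r_pb, p = pointbiserialr(married, satisfaction)
print(f"r_pb = {r_pb:.2f}, p = {p:.3f}")   # large positive r_pb for these data
```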
2.7.2 Phi Coefficient (φ)
The Point-Biserial Correlation (rpb) is useful when one of the variables is dichotomous. If both variables (X and Y) are dichotomous, the Pearson's Product Moment Correlation calculated is called the Phi coefficient (φ).
Suppose you are interested in investigating the relationship between employment status and marital status. Employment status is dichotomous, having two levels: employed and unemployed. Similarly, marital status can be dichotomised into two levels: married and unmarried. Now both variables are dichotomous, and the correlation that is useful in such instances is the Phi coefficient (φ).
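A Python sketch with hypothetical 0/1 codings; applying Pearson's formula (here via numpy) to two dichotomous variables yields phi:

```python
# Phi coefficient: Pearson's r on two 0/1-coded variables.
import numpy as np

employed = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])   # hypothetical codings
married  = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 0])

phi = np.corrcoef(employed, married)[0, 1]
print(round(phi, 2))   # a moderate positive phi (about 0.41) for these data
```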
Nerve Impulses
Neurons send messages electrochemically; this means that chemicals (ions) cause an electrical impulse. Neurons and muscle cells are electrically excitable cells, which means that they can transmit electrical nerve impulses. These impulses are due to events in the cell membrane, so to understand the nerve impulse we need to revise some properties of cell membranes.
All cells in animal body tissues are electrically polarized; in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical
polarization results from a complex interplay between protein structures embedded in the membrane
called ion pumps and ion channels. In neurons, the types of ion channels in the membrane usually vary
across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties.
As a result, some parts of the membrane of a neuron may be excitable (capable of generating action
potentials), whereas others are not. Recent studies have shown that the most excitable part of a neuron
is the part after the axon hillock (the point where the axon leaves the cell body), which is called the
initial segment, but the axon and cell body are also excitable in most cases.
Each excitable patch of membrane has two important levels of membrane potential: the resting potential, which is the value the membrane potential maintains as long as nothing perturbs the cell, and a higher value called the threshold potential. At the axon hillock of a typical neuron, the resting potential is around −70 millivolts (mV) and the threshold potential is around −55 mV.
The Resting Membrane Potential
When a neuron is not sending a signal, it is "at rest". The membrane is responsible for the different events that occur in a neuron. All animal cell membranes contain a protein pump called the sodium-potassium pump (Na+K+ATPase). This uses the energy from ATP splitting to simultaneously pump 3 sodium ions out of the cell and 2 potassium ions in.

The Sodium-Potassium Pump (Na+K+ATPase)
(Provided by: Doc Kaiser's Microbiology Website)
Three sodium ions from inside the cell first bind to the transport protein. Then a phosphate group is transferred from ATP to the transport protein, causing it to change shape and release the sodium ions outside the cell. Two potassium ions from outside the cell then bind to the transport protein and, as the phosphate is removed, the protein assumes its original shape and releases the potassium ions inside the cell.

If the pump were to continue unchecked there would be no sodium or potassium ions left to pump, but there are also sodium and potassium ion channels in the membrane. These channels are normally closed, but even when closed they "leak", allowing sodium ions to leak in and potassium ions to leak out, down their respective concentration gradients.

Concentration of ions inside and outside the neurone at rest:


Ion

Concentration inside
cell/mmol dm-3

Concentration outside
Why dont the ions move down their concentration gradient?
cell/mmol dm-3

K+

150.0

2.5

Na+ 15.0

Cl-

9.0

145.0

101.0

K+ ions do not move out of the neurone down their


concentration gradient due to a build up of positive charges
outside the membrane. This repels the movement of any
more K+ ions out of the cell.
The chloride ions do not move into the cytoplasm as the
negatively charged protein molecules that cannot cross the
surface membrane repel them.

The combination of the Na+K+ATPase pump and the leak channels causes a stable imbalance of Na+ and K+ ions across the membrane. This imbalance of ions causes a potential difference (or voltage) between the inside of the neurone and its surroundings, called the resting membrane potential. The membrane potential is always negative inside the cell, and varies in size from −20 to −200 mV (millivolts) in different cells and species (in humans it is −70 mV). The Na+K+ATPase is thought to have evolved as an osmoregulator to keep the internal water potential high and so stop water entering animal cells and bursting them. Plant cells don't need this as they have strong cell walls to prevent bursting.
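The ion concentrations in the table above determine, via the Nernst equation, the voltage at which each ion would be at equilibrium. The Python sketch below assumes body temperature (37°C); note that the resting potential of about −70 mV lies closest to the K+ equilibrium value, since the resting membrane is most permeable to K+.

```python
# Nernst equilibrium potentials from the ion concentrations tabled above.
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant, body temperature (K), Faraday

def nernst(z, c_out, c_in):
    """Equilibrium potential in mV for an ion of charge z."""
    return (R * T) / (z * F) * math.log(c_out / c_in) * 1000

print(f"E_K  = {nernst(+1, 2.5, 150.0):6.1f} mV")   # about -109 mV
print(f"E_Na = {nernst(+1, 145.0, 15.0):6.1f} mV")  # about  +61 mV
print(f"E_Cl = {nernst(-1, 101.0, 9.0):6.1f} mV")   # about  -65 mV
```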
Check Point
• The resting membrane potential is always negative (−70 mV).
• K+ passes easily into the cell.
• Cl− and Na+ have a more difficult time crossing.
• Negatively charged protein molecules inside the neurone cannot pass the membrane.
• The Na+K+ATPase pump uses energy to move 3 Na+ out for every 2 K+ into the neuron.
• The imbalance in charge causes a potential difference across the cell membrane, called the resting potential.
The Action Potential
The resting potential tells us what happens when a neurone is at rest. An action potential occurs when a neurone sends information down an axon. This involves an explosion of electrical activity, in which the nerve or muscle cell's resting membrane potential changes.
In nerve and muscle cells the membranes are electrically excitable, which means they can change their membrane potential, and this is the basis of the nerve impulse. The sodium and potassium channels in these cells are voltage-gated, which means that they can open and close depending on the voltage across the membrane.
The normal membrane potential inside the axon of nerve cells is −70 mV, and since this potential can change in nerve cells it is called the resting potential. When a stimulus is applied, a brief reversal of the membrane potential, lasting about a millisecond, occurs. This brief reversal is called the action potential.
An action potential has two main phases, called depolarisation and repolarisation:

• At rest, the inside of the neuron is slightly negative due to a higher concentration of positively charged sodium ions outside the neuron.
• When stimulated past threshold (about −30 mV in humans), sodium channels open and sodium rushes into the axon, causing a region of positive charge within the axon. This is called depolarisation.
• The region of positive charge causes nearby voltage-gated sodium channels to close. Just after the sodium channels close, the potassium channels open wide, and potassium exits the axon, so the charge across the membrane is brought back to its resting potential. This is called repolarisation.
• This process continues as a chain reaction along the axon. The influx of sodium depolarises the axon, and the outflow of potassium repolarises the axon.
• The sodium/potassium pump restores the resting concentrations of sodium and potassium ions.
(Provided by: Markham)


As an action potential travels down the axon, there is a change in polarity across the membrane. The Na+ and K+ gated ion channels open and close as the membrane reaches the threshold potential, in response to a signal from another neuron.
At the beginning of the action potential, the Na+ channels open and Na+ moves into the axon, causing depolarization. Repolarization occurs when the K+ channels open and K+ moves out of the axon. This creates a change in polarity between the outside of the cell and the inside.
The impulse travels down the axon in one direction only, to the axon terminal where it signals other neurons.
In physiology, an action potential is a short-lasting event in which the electrical membrane potential of
a cell rapidly rises and falls, following a consistent trajectory.
Action potentials occur in several types of animal cells, called excitable cells, which
include neurons, muscle cells, and endocrine cells, as well as in some plant cells.

In neurons, they play a central role in cell-to-cell communication. Action potentials in neurons are also
known as "nerve impulses" or "spikes", and the temporal sequence of action potentials generated by a
neuron is called its "spike train". A neuron that emits an action potential is often said to "fire".
In other types of cells, their main function is to activate intracellular processes. In muscle cells, for
example, an action potential is the first step in the chain of events leading to contraction.
In beta cells of the pancreas, they provoke release of insulin.[1]
Action potentials are generated by special types of voltage-gated ion channels embedded in a
cell's plasma membrane.[2]
These channels are shut when the membrane potential is near the resting potential of the cell, but they
rapidly begin to open if the membrane potential increases to a precisely defined threshold value.
When the channels open (by detecting the depolarization in transmembrane voltage[2]), they allow an
inward flow of sodium ions, which changes the electrochemical gradient, which in turn produces a
further rise in the membrane potential.
This then causes more channels to open, producing a greater electric current across the cell membrane,
and so on. The process proceeds explosively until all of the available ion channels are open, resulting in a
large upswing in the membrane potential.
The rapid influx of sodium ions causes the polarity of the plasma membrane to reverse, and the ion
channels then rapidly inactivate.
As the sodium channels close, sodium ions can no longer enter the neuron, and they are actively
transported back out across the plasma membrane.
Potassium channels are then activated, and there is an outward current of potassium ions, returning the
electrochemical gradient to the resting state.
After an action potential has occurred, there is a transient negative shift, called
the afterhyperpolarization, due to additional potassium currents. Together with sodium-channel
inactivation, this refractoriness is the mechanism that prevents an action potential from traveling back
the way it just came.
In animal cells, there are two primary types of action potentials, one type generated by voltage-gated
sodium channels, the other by voltage-gated calcium channels. Sodium-based action potentials usually
last for under one millisecond, whereas calcium-based action potentials may last for 100 milliseconds or
longer. In some types of neurons, slow calcium spikes provide the driving force for a long burst of rapidly
emitted sodium spikes. In cardiac muscle cells, on the other hand, an initial fast sodium spike provides a
"primer" to provoke the rapid onset of a calcium spike, which then produces muscle contraction.
A cell membrane consists of a layer of lipid molecules with larger protein molecules embedded in it. The
lipid layer is highly resistant to movement of electrically charged ions, so it functions mainly as an
insulator. The large membrane-embedded molecules, in contrast, provide channels through which ions
can pass across the membrane, and some of the large molecules are capable of actively moving specific
types of ions from one side of the membrane to the other.
Process in a typical neuron

[Figure: approximate plot of a typical action potential, showing its various phases as the action potential passes a point on a cell membrane.]
The membrane potential starts out at -70 mV at time zero. A stimulus is applied at time = 1 ms, which
raises the membrane potential above -55 mV (the threshold potential).
After the stimulus is applied, the membrane potential rapidly rises to a peak potential of +40 mV at time
= 2 ms.
Just as quickly, the potential then drops and overshoots to -90 mV at time = 3 ms, and finally the resting
potential of -70 mV is reestablished at time = 5 ms.
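
To make this time course concrete, here is a minimal Python sketch of the trace just described. It assumes simple straight-line interpolation between the quoted (time, voltage) landmarks; a real action potential is a smooth curve, so this is illustrative only.

    # Landmarks quoted above: rest, threshold, peak, undershoot, recovery.
    landmarks = [(0.0, -70.0), (1.0, -55.0), (2.0, 40.0), (3.0, -90.0), (5.0, -70.0)]

    def membrane_potential(t_ms):
        """Piecewise-linear interpolation of the membrane potential (mV)."""
        for (t0, v0), (t1, v1) in zip(landmarks, landmarks[1:]):
            if t0 <= t_ms <= t1:
                return v0 + (v1 - v0) * (t_ms - t0) / (t1 - t0)
        return -70.0  # resting potential outside the plotted window

    for t in (0.0, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0):
        print(f"t = {t:3.1f} ms   V = {membrane_potential(t):6.1f} mV")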

Check Point

An action potential has two main phases:

- Depolarisation. A stimulus can cause the membrane potential to change a little. The voltage-gated ion channels can detect this change, and when the potential reaches about -30 mV the sodium channels open for 0.5 ms. This causes sodium ions to rush in, making the inside of the cell more positive. This phase is referred to as a depolarisation, since the normal voltage polarity (negative inside) is reversed (becomes positive inside).
- Repolarisation. At a certain point, the depolarisation of the membrane causes the sodium channels to close. As a result the potassium channels open for 0.5 ms, causing potassium ions to rush out, making the inside more negative again. Since this restores the original polarity, it is called repolarisation. As the polarity becomes restored, there is a slight overshoot in the movement of potassium ions (called hyperpolarisation). The resting membrane potential is restored by the Na+/K+-ATPase pump.
Voltage-gated ion channels are capable of producing action potentials because they can give rise
to positive feedback loops: The membrane potential controls the state of the ion channels, but the state
of the ion channels controls the membrane potential. Thus, in some situations, a rise in the membrane
potential can cause ion channels to open, thereby causing a further rise in the membrane potential. An
action potential occurs when this positive feedback cycle proceeds explosively.
Voltage-gated sodium channels are responsible for the fast action potentials involved in nerve
conduction. Slower action potentials in muscle cells and some types of neurons are generated by
voltage-gated calcium channels.
The most intensively studied type of voltage-dependent ion channels comprises the sodium channels
involved in fast nerve conduction. These are sometimes known as Hodgkin-Huxley sodium channels
because they were first characterized by Alan Hodgkin and Andrew Huxley in their Nobel Prize-winning
studies of the biophysics of the action potential, but can more conveniently be referred to
as NaV channels. (The "V" stands for "voltage".)
An NaV channel has three possible states, known as deactivated, activated, and inactivated. The channel
is permeable only to sodium ions when it is in the activated state. When the membrane potential is low,
the channel spends most of its time in the deactivated (closed) state. If the membrane potential is raised
above a certain level, the channel shows increased probability of transitioning to the activated (open)
state. The higher the membrane potential, the greater the probability of activation.
Once a channel has activated, it will eventually transition to the inactivated (closed) state. It tends then
to stay inactivated for some time, but, if the membrane potential becomes low again, the channel will
eventually transition back to the deactivated state. During an action potential, most channels of this
type go through a cycle: deactivated → activated → inactivated → deactivated.
However, the likelihood of a channel's transitioning from the inactivated state directly to
the activated state is very low: A channel in the inactivated state is refractory until it has transitioned
back to the deactivated state.
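
The three channel states described above can be caricatured as a small state machine. The Python sketch below is a toy model: the transition rules follow the prose (open only past threshold, then inactivate, then reset at low voltage), not real channel kinetics, and the -55 mV threshold is an assumed figure.

    def nav_step(state, voltage_mv, threshold_mv=-55.0):
        """Advance one step; states: 'deactivated', 'activated', 'inactivated'."""
        if state == "deactivated":
            # Opens only if the membrane is depolarised past threshold.
            return "activated" if voltage_mv >= threshold_mv else "deactivated"
        if state == "activated":
            # An open channel inactivates after a short time (here: one step).
            return "inactivated"
        # Inactivated channels are refractory; they reset only at low voltage.
        return "deactivated" if voltage_mv < threshold_mv else "inactivated"

    state = "deactivated"
    for v in (-70, -50, -50, -70, -70):   # a depolarising pulse, then rest
        state = nav_step(state, v)
        print(f"V = {v:4d} mV -> {state}")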
As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions
into the cell.
This is followed by the opening of potassium ion channels that permit the exit of potassium ions from
the cell.
The inward flow of sodium ions increases the concentration of positively charged cations in the cell and
causes depolarization, where the potential of the cell is higher than the cell's resting potential. The
sodium channels close at the peak of the action potential, while potassium continues to leave the cell.
The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell.
For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage
returns to its normal resting value, typically -70 mV.[3] However, if the voltage increases past a critical
threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a
runaway condition whereby the positive feedback from the sodium current activates even more sodium
channels. Thus, the cell "fires," producing an action potential.[4][5]
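
One rough way to see this runaway behaviour is a toy update rule: above threshold, each step of sodium entry pulls the voltage further toward the sodium peak, opening still more channels; below threshold, the potassium current pulls it back to rest. All rate constants in this Python sketch are invented for illustration.

    def evolve(v, rest=-70.0, threshold=-55.0, steps=8):
        """Iterate a caricature of the membrane voltage (mV) for a few steps."""
        for _ in range(steps):
            if v > threshold:
                v += 0.8 * (40.0 - v)   # Na+ inflow pulls V toward the +40 mV peak
            else:
                v += 0.5 * (rest - v)   # K+ current pulls V back toward rest
        return v

    print(evolve(-60.0))   # subthreshold nudge decays back toward -70 mV
    print(evolve(-50.0))   # suprathreshold nudge "fires": V races toward +40 mV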
Although action potentials are generated locally on patches of excitable membrane, the resulting
currents can trigger action potentials on neighboring stretches of membrane, precipitating a domino-like
propagation. In contrast to passive spread of electric potentials (electrotonic potential), action
potentials are generated anew along excitable stretches of membrane and propagate without decay.[6]
Myelinated sections of axons are not excitable and do not produce action potentials; there the signal is
propagated passively as an electrotonic potential.
Regularly spaced unmyelinated patches, called the nodes of Ranvier, generate action potentials to boost
the signal. Known as saltatory conduction, this type of signal propagation provides a favorable tradeoff
of signal velocity and axon diameter.
Depolarization of axon terminals, in general, triggers the release of neurotransmitter into the synaptic
cleft. In addition, backpropagating action potentials have been recorded in the dendrites of pyramidal
neurons, which are ubiquitous in the neocortex.[7] These are thought to have a role in spike-timing-dependent plasticity.
All or Nothing Law
The action potential only occurs if the stimulus causes enough sodium ions to enter the cell to change
the membrane potential to a certain threshold level. At the threshold, sodium gates open in the
membrane and allow a sudden flood of sodium ions to enter the cell. If the depolarisation is not great
enough to reach the threshold, then an action potential (and hence an impulse) will not be produced.
This is called the all-or-nothing law. It means that the ion channels are either open or closed; there is
no half-way position. As a result, the action potential always reaches +40 mV as it moves along an axon,
and it is never attenuated (reduced) by long axons. Action potentials are always the same size; instead,
the frequency of the impulses carrying the information conveys the intensity of the stimulus, i.e.
strong stimulus = high frequency.
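
This rate coding can be sketched as follows. The spike amplitude is held fixed while only the firing rate tracks stimulus strength; the 20-per-unit scaling and the 100 Hz ceiling are illustrative figures, the ceiling anticipating the refractory-period limit discussed later.

    PEAK_MV = 40.0        # every spike peaks at the same +40 mV
    MAX_RATE_HZ = 100.0   # ceiling set by the refractory period

    def response(stimulus, threshold=1.0):
        """Return (spike amplitude in mV, firing rate in Hz)."""
        if stimulus < threshold:
            return 0.0, 0.0                        # no spike at all
        return PEAK_MV, min(MAX_RATE_HZ, 20.0 * stimulus)

    for s in (0.5, 1.0, 2.0, 5.0, 10.0):
        amp, rate = response(s)
        print(f"stimulus {s:4.1f}: amplitude {amp:4.1f} mV, rate {rate:5.1f} Hz")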
How do Nerve Impulses Start?

We and other animals have several types of receptors of mechanical stimuli, each of which initiates
nerve impulses in sensory neurons when it is physically deformed by an outside force. Mechanoreceptors
enable us to:
- detect touch
- monitor the position of our muscles, bones, and joints (the sense of proprioception)
- detect sounds and the motion of the body
E.g. Touch
Light touch is detected by receptors in the skin. These are often found close to a hair follicle so even if
the skin is not touched directly, movement of the hair is detected.
Touch receptors are not distributed evenly over the body. The fingertips and tongue may have as many
as 100 per cm²; the back of the hand fewer than 10 per cm².
Proprioception
Proprioception is our "body sense".
It enables us to unconsciously monitor the position of our body.
It depends on receptors in the muscles, tendons, and joints.
If you have ever tried to walk after one of your legs has "gone to sleep", you will have some appreciation
of how difficult coordinated muscular activity would be without proprioception.
The Pacinian Corpuscle
Pacinian corpuscles are pressure receptors. They are located in the skin and also in various internal
organs. Each is connected to a sensory neuron. Pacinian corpuscles are fast-conducting, bulb-shaped
receptors located deep in the dermis. They consist of the ending of a single neurone surrounded
by lamellae. They are the largest of the skin's receptors and are believed to provide instant information
about how and where we move. They are also sensitive to vibration. Pacinian corpuscles are also located
in joints and tendons and in tissue that lines organs and blood vessels.
Pressure on the skin changes the shape of the Pacinian corpuscle. This changes the shape of
the pressure sensitive sodium channels in the membrane, making them open. Sodium ions diffuse in
through the channels leading to depolarisation called a generator potential. The greater the pressure
the more sodium channels open and the larger the generator potential. If a threshold value is reached,
an action potential occurs and nerve impulses travel along the sensory neurone. The frequency of the
impulse is related to the intensity of the stimulus.
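
A hypothetical mapping from pressure to generator potential, following the description above; the 2 mV-per-unit figure and the pressure units are invented, and the point is only that the generator potential is graded while the resulting action potential is all-or-nothing.

    REST_MV = -70.0
    THRESHOLD_MV = -55.0

    def generator_potential(pressure, mv_per_unit=2.0):
        """Graded (not all-or-nothing) depolarisation of the corpuscle ending."""
        return REST_MV + mv_per_unit * pressure

    for pressure in (2, 5, 8, 12):
        v = generator_potential(pressure)
        print(f"pressure {pressure:2d}: generator potential {v:6.1f} mV, "
              f"action potential: {'yes' if v >= THRESHOLD_MV else 'no'}")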
Adaptation
When pressure is first applied to the corpuscle, it initiates a volley of impulses in its sensory neuron.
However, with continuous pressure, the frequency of action potentials decreases quickly and soon
stops. This is the phenomenon of adaptation.
Adaptation occurs in most sense receptors. It is useful because it prevents the nervous system from
being bombarded with information about insignificant matters like the touch and pressure of our
clothing.
Stimuli represent changes in the environment. If there is no change, the sense receptors soon adapt. But
note that if we quickly remove the pressure from an adapted Pacinian corpuscle, a fresh volley of
impulses will be generated.

The speed of adaptation varies among different kinds of receptors. Receptors involved in proprioception
- such as spindle fibres - adapt slowly if at all.
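
Adaptation can be caricatured as a firing rate that decays exponentially under constant pressure; the initial rate and the time constant in this sketch are invented for illustration, and a change in the stimulus (such as releasing the pressure) would reset the rate.

    import math

    def adapted_rate(t_ms, initial_rate=100.0, tau_ms=50.0):
        """Firing rate (impulses/s) t_ms after the onset of a constant pressure."""
        return initial_rate * math.exp(-t_ms / tau_ms)

    for t in (0, 25, 50, 100, 200):
        print(f"{t:3d} ms after onset: {adapted_rate(t):5.1f} impulses/s")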

How are Nerve Impulses Propagated?


Once an action potential has started it is moved (propagated) along an axon automatically. The local
reversal of the membrane potential is detected by the surrounding voltage-gated ion channels, which
open when the potential changes enough.

The ion channels have two other features that help the nerve impulse work effectively:

For an action potential to begin, the depolarisation of the neurone must reach the threshold value,
i.e. the all-or-nothing law.
After an ion channel has opened, it needs a rest period before it can open again. This is called
the refractory period, and lasts about 2 ms. This means that, although the action potential affects all
other ion channels nearby, the upstream ion channels cannot open again since they are in their
refractory period, so only the downstream channels open, causing the action potential to move one-way
along the axon.
The refractory period is necessary as it allows the proteins of the voltage-sensitive ion channels to
return to their resting state.
The absolute refractory period: during the action potential, a second stimulus will not cause a new
action potential.
Exception: there is an interval in which a second action potential can be produced, but only if the
stimulus is considerably greater than the threshold; this is the relative refractory period.
The refractory period limits the number of action potentials that can be produced in a given time,
to an average of about 100 action potentials per second.
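
The one-way travel enforced by the refractory period can be seen in a toy discrete model: each membrane patch fires when a neighbour fired on the previous step, unless it is still refractory. This Python sketch uses simplified assumptions (discrete time steps, a two-step refractory period); it is not a biophysical simulation.

    REFRACTORY_STEPS = 2

    def propagate(n_patches=8, start=0, steps=8):
        last_fired = [-10] * n_patches     # step at which each patch last fired
        last_fired[start] = 0              # the stimulus fires patch 0 at step 0
        for step in range(1, steps):
            fired_now = []
            for i in range(n_patches):
                neighbours = [j for j in (i - 1, i + 1) if 0 <= j < n_patches]
                excited = any(last_fired[j] == step - 1 for j in neighbours)
                refractory = step - last_fired[i] <= REFRACTORY_STEPS
                if excited and not refractory:
                    fired_now.append(i)
            for i in fired_now:
                last_fired[i] = step
            print(f"step {step}: patches firing {fired_now}")

    propagate()   # the wave moves one way; it never doubles back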
How fast are Nerve Impulses?
Action potentials can travel along axons at speeds of 0.1-100 m/s. This means that nerve impulses can
get from one part of a body to another in a few milliseconds, which allows for fast responses to stimuli.
(Impulses are much slower than electrical currents in wires, which travel at close to the speed of light,
3×10^8 m/s.) The speed is affected by three factors:
Temperature - The higher the temperature, the faster the speed, so homeothermic (warm-blooded)
animals have faster responses than poikilothermic (cold-blooded) ones.
Axon diameter - The larger the diameter, the faster the speed, so marine invertebrates, which live at
temperatures close to 0 °C, have developed thick axons to speed up their responses. This explains why
squid have their giant axons.
Myelin sheath - Only vertebrates have a myelin sheath surrounding their neurones. The voltage-gated
ion channels are found only at the nodes of Ranvier, and between the nodes the myelin sheath acts as a
good electrical insulator. The action potential can therefore jump large distances from node to node
(1mm), a process that is called saltatory propagation. This increases the speed of propagation
dramatically, so while nerve impulses in unmyelinated neurones have a maximum speed of around
1 m/s, in myelinated neurones they travel at 100 m/s.
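
As a worked example with the speeds quoted above, the time for an impulse to cross 1 m of axon (a rough limb-to-spinal-cord distance, chosen only for illustration) can be computed directly:

    DISTANCE_M = 1.0

    for label, speed_m_per_s in (("unmyelinated (~1 m/s)", 1.0),
                                 ("myelinated (~100 m/s)", 100.0)):
        t_ms = 1000.0 * DISTANCE_M / speed_m_per_s
        print(f"{label}: {t_ms:6.1f} ms to travel {DISTANCE_M} m")

This prints 1000 ms for the unmyelinated case and 10 ms for the myelinated one, which is why myelination matters for fast reflexes.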

Action potential

Neurotransmission
Several types of cells support an action potential, such as plant cells, muscle cells, and the specialized
cells of the heart (in which occurs the cardiac action potential). However, the main excitable cell is the
neuron, which also has the simplest mechanism for the action potential.
Neurons are electrically excitable cells composed, in general, of one or more dendrites, a single soma, a
single axon and one or more axon terminals.
Dendrites are cellular projections whose primary function is to receive synaptic signals. Their
protrusions, or spines, are designed to capture the neurotransmitters released by the presynaptic
neuron. They have a high concentration of ligand-gated ion channels. These spines have a thin neck
connecting a bulbous protrusion to the dendrite. This ensures that changes occurring inside the spine
are less likely to affect the neighboring spines.
The dendrites extend from the soma, which houses the nucleus, and many of the "normal"
eukaryotic organelles. Unlike the spines, the surface of the soma is populated by voltage activated ion
channels. These channels help transmit the signals generated by the dendrites.
Emerging out from the soma is the axon hillock. This region is characterized by having a very high
concentration of voltage-activated sodium channels. In general, it is considered to be the spike initiation
zone for action potentials.[8] Multiple signals generated at the spines, and transmitted by the soma all
converge here.
Immediately after the axon hillock is the axon. This is a thin tubular protrusion traveling away from the
soma. The axon is insulated by a myelin sheath. Myelin is composed of either Schwann cells (in the
peripheral nervous system) or oligodendrocytes (in the central nervous system), both of which are types
of glial cells. Although glial cells are not involved with the transmission of electrical signals, they
communicate and provide important biochemical support to neurons.[9]
To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that
prevents ions from entering or escaping the axon. This insulation prevents significant signal decay as
well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be
present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which
have no insulation. These nodes of Ranvier can be considered to be "mini axon hillocks", as their
purpose is to boost the signal in order to prevent significant signal decay.
At the furthest end, the axon loses its insulation and begins to branch into several axon terminals. These
presynaptic terminals, or synaptic boutons, are a specialized area within the axon of the presynaptic cell
that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles.
Initiation
Before considering the propagation of action potentials along axons and their termination at the
synaptic knobs, it is helpful to consider the methods by which action potentials can be initiated at
the axon hillock. The basic requirement is that the membrane voltage at the hillock be raised above the
threshold for firing.[10] There are several ways in which this depolarization can occur.

[Figure: when an action potential arrives at the end of the pre-synaptic axon, it causes the release of neurotransmitter molecules that open ion channels in the post-synaptic neuron. The combined excitatory and inhibitory postsynaptic potentials of such inputs can begin a new action potential in the post-synaptic neuron.]
In fact, the generation of action potentials in vivo is sequential in nature, and these sequential spikes
constitute the digital codes in the neurons. Although previous studies indicate an axonal origin of a
single spike evoked by short-term pulses, physiological signals in vivo trigger the initiation of sequential
spikes at the cell bodies of the neurons.[11][12]
Dynamics
Action potentials are most commonly initiated by excitatory postsynaptic potentials from a presynaptic
neuron.[13]
Typically, neurotransmitter molecules are released by the presynaptic neuron.
These neurotransmitters then bind to receptors on the postsynaptic cell. This binding opens various
types of ion channels.

This opening has the further effect of changing the local permeability of the cell membrane and, thus,
the membrane potential. If the binding increases the voltage (depolarizes the membrane), the synapse
is excitatory. If, however, the binding decreases the voltage (hyperpolarizes the membrane), it is
inhibitory.
Whether the voltage is increased or decreased, the change propagates passively to nearby regions of
the membrane (as described by the cable equation and its refinements).
Typically, the voltage stimulus decays exponentially with the distance from the synapse and with time
from the binding of the neurotransmitter.
Some fraction of an excitatory voltage may reach the axon hillock and may (in rare cases) depolarize the
membrane enough to provoke a new action potential.
More typically, the excitatory potentials from several synapses must work together at nearly the same
time to provoke a new action potential. Their joint efforts can be thwarted, however, by the
counteracting inhibitory postsynaptic potentials.
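
Summation at the axon hillock can be sketched as simple addition of postsynaptic potentials against a threshold. The millivolt values in this Python sketch are invented for illustration; real EPSPs and IPSPs sum in a more graded, time-dependent way.

    REST_MV = -70.0
    THRESHOLD_MV = -55.0

    def hillock_fires(psp_mv_list):
        """True if the summed postsynaptic potentials reach threshold."""
        return REST_MV + sum(psp_mv_list) >= THRESHOLD_MV

    print(hillock_fires([4.0, 5.0]))             # two EPSPs alone: subthreshold
    print(hillock_fires([4.0, 5.0, 7.0]))        # three together: fires
    print(hillock_fires([4.0, 5.0, 7.0, -6.0]))  # an IPSP thwarts the firing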
Neurotransmission can also occur through electrical synapses.[14] Due to the direct connection between
excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell
to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated
transmission. Rectifying channels ensure that action potentials move only in one direction through an
electrical synapse. Electrical synapses are found in all nervous systems, including the human
brain, although they are a distinct minority.[15]
"All-or-none" principle[edit]
The amplitude of an action potential is independent of the amount of current that produced it. In other
words, larger currents do not create larger action potentials. Therefore, action potentials are said to
be all-or-none signals, since either they occur fully or they do not occur at all.[16][17][18] The frequency of
action potentials is correlated with the intensity of a stimulus. This is in contrast to receptor potentials,
whose amplitudes are dependent on the intensity of a stimulus.[19]
Sensory neurons
Main article: Sensory neuron
In sensory neurons, an external signal such as pressure, temperature, light, or sound is coupled with the
opening and closing of ion channels, which in turn alter the ionic permeabilities of the membrane and its
voltage.[20] These voltage changes can again be excitatory (depolarizing) or inhibitory (hyperpolarizing)
and, in some sensory neurons, their combined effects can depolarize the axon hillock enough to provoke
action potentials. Examples in humans include the olfactory receptor neuron and Meissner's corpuscle,
which are critical for the sense of smell and touch, respectively. However, not all sensory neurons
convert their external signals into action potentials; some do not even have an axon![21] Instead, they
may convert the signal into the release of a neurotransmitter, or into continuous graded potentials,
either of which may stimulate subsequent neuron(s) into firing an action potential. For illustration, in
the human ear, hair cells convert the incoming sound into the opening and closing of mechanically gated
ion channels, which may cause neurotransmitter molecules to be released. In similar manner, in the
human retina, the initial photoreceptor cells and the next layer of cells (comprising bipolar
cells and horizontal cells) do not produce action potentials; only some amacrine cells and the third layer,
the ganglion cells, produce action potentials, which then travel up the optic nerve.
Pacemaker potentials

[Figure: in pacemaker potentials, the cell spontaneously depolarizes (a straight line with upward slope) until it fires an action potential.]
In sensory neurons, action potentials result from an external stimulus. However, some excitable cells
require no such stimulus to fire: They spontaneously depolarize their axon hillock and fire action
potentials at a regular rate, like an internal clock.[22] The voltage traces of such cells are known
as pacemaker potentials.[23] The cardiac pacemaker cells of the sinoatrial node in the heart provide a
good example.[24] Although such pacemaker potentials have a natural rhythm, this rhythm can be
adjusted by external stimuli; for instance, heart rate can be altered by pharmaceuticals as well as by
signals from the sympathetic and parasympathetic nerves.[25] The external stimuli do not cause the cell's repetitive
firing, but merely alter its timing.[23] In some cases, the regulation of frequency can be more complex,
leading to patterns of action potentials, such as bursting.
Phases
The course of the action potential can be divided into five parts: the rising phase, the peak phase, the
falling phase, the undershoot phase, and the refractory period. During the rising phase the membrane
potential depolarizes (becomes more positive). The point at which depolarization stops is called the
peak phase. At this stage, the membrane potential reaches a maximum. Subsequent to this, there is a
falling phase. During this stage the membrane potential becomes more negative, returning towards
resting potential. The undershoot, or afterhyperpolarization, phase is the period during which the
membrane potential temporarily becomes more negatively charged than when at rest (hyperpolarized).
Finally, the time during which a subsequent action potential is impossible or difficult to fire is called
the refractory period, which may overlap with the other phases.[26]
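
As a rough illustration, the phase names can be attached to samples of a voltage trace using the figures quoted earlier in this article (rest -70 mV, peak +40 mV). The classification rule in this Python sketch is a crude caricature, not a physiological algorithm.

    REST_MV = -70.0

    def phase(v_mv, rising):
        """Classify one sample by its voltage (mV) and direction of change."""
        if v_mv >= 39.0:
            return "peak"
        if v_mv < REST_MV:
            return "undershoot (afterhyperpolarization)"
        return "rising (depolarizing)" if rising else "falling (repolarizing)"

    for v, rising in ((-60, True), (-20, True), (40, True), (0, False), (-90, False)):
        print(f"{v:4d} mV -> {phase(v, rising)}")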
[Figure: in saltatory conduction, an action potential at one node of Ranvier causes inward currents that depolarize the membrane at the next node, provoking a new action potential there; the action potential appears to "hop" from node to node.]
Myelin and saltatory conduction
Myelin is a multilamellar membrane that enwraps the axon in segments separated by intervals known
as nodes of Ranvier. It is produced by specialized cells: Schwann cells exclusively in the peripheral
nervous system, and oligodendrocytes exclusively in the central nervous system. Myelin sheath reduces
membrane capacitance and increases membrane resistance in the inter-node intervals, thus allowing a
fast, saltatory movement of action potentials from node to node. Not all neurons in vertebrates are
myelinated; for example, axons of the neurons comprising the autonomic nervous system are not, in
general, myelinated.
Myelin prevents ions from entering or leaving the axon along myelinated segments. As a general rule,
myelination increases the conduction velocity of action potentials and makes them more energy-efficient.
Whether saltatory or not, the mean conduction velocity of an action potential ranges from
1 meter per second (m/s) to over 100 m/s, and, in general, increases with axonal diameter.[54]
Action potentials cannot propagate through the membrane in myelinated segments of the axon.
However, the current is carried by the cytoplasm, which is sufficient to depolarize the first or second
subsequent node of Ranvier. Thus, the ionic current from an action potential at one node of
Ranvier provokes another action potential at the next node; this apparent "hopping" of the action
potential from node to node is known as saltatory conduction. Although the mechanism of saltatory
conduction was suggested in 1925 by Ralph Lillie,[55] the first experimental evidence for saltatory

conduction came from Ichiji Tasaki[56] and Taiji Takeuchi[57] and from Andrew Huxley and Robert
Stämpfli.[58] By contrast, in unmyelinated axons, the action potential provokes another in the membrane
immediately adjacent, and moves continuously down the axon like a wave.
Some diseases degrade myelin and impair saltatory conduction, reducing the conduction velocity of
action potentials.[63] The most well-known of these is multiple sclerosis, in which the breakdown of
myelin impairs coordinated movement.[64]
Termination
Chemical synapses
In general, action potentials that reach the synaptic knobs cause a neurotransmitter to be released into
the synaptic cleft.[70] Neurotransmitters are small molecules that may open ion channels in the
postsynaptic cell; most axons have the same neurotransmitter at all of their termini. The arrival of the
action potential opens voltage-sensitive calcium channels in the presynaptic membrane; the influx of
calcium causes vesicles filled with neurotransmitter to migrate to the cell's surface and release their
contents into the synaptic cleft.[71] This complex process is inhibited by
the neurotoxins tetanospasmin and botulinum toxin, which are responsible for tetanus and botulism,
respectively.[72]
Electrical synapses
Some synapses dispense with the "middleman" of the neurotransmitter, and connect the presynaptic
and postsynaptic cells together.[73] When an action potential reaches such a synapse, the ionic currents
flowing into the presynaptic cell can cross the barrier of the two cell membranes and enter the
postsynaptic cell through pores known as connexons.[74] Thus, the ionic currents of the presynaptic
action potential can directly stimulate the postsynaptic cell. Electrical synapses allow for faster
transmission because they do not require the slow diffusion of neurotransmitters across the synaptic
cleft. Hence, electrical synapses are used whenever fast response and coordination of timing are crucial,
as in escape reflexes, the retina of vertebrates, and the heart.
Neuromuscular junctions
A special case of a chemical synapse is the neuromuscular junction, in which the axon of a motor neuron
terminates on a muscle fiber.[75] In such cases, the released neurotransmitter is acetylcholine, which
binds to the acetylcholine receptor, an integral membrane protein in the membrane (the sarcolemma)
of the muscle fiber.[76] However, the acetylcholine does not remain bound; rather, it dissociates and
is hydrolyzed by the enzyme acetylcholinesterase, located in the synapse. This enzyme quickly reduces
the stimulus to the muscle, which allows the degree and timing of muscular contraction to be regulated
delicately. Some poisons inactivate acetylcholinesterase to prevent this control, such as the nerve
agents sarin and tabun,[77] and the insecticides diazinon and malathion.[78]
