CHAPTER ONE
INTRODUCTION
with basic preparation for adult life. Also, mathematics is used for analysing
tasks and real-life problems (Gray and Tall, 1999). Again, employers across sectors
all expressed their continuing need for people with appropriate mathematical
skills (Smith, 2005). This situation demands that every child should be
There is ample evidence to show that all over the world, the majority of
National Research Council in the late 1980s is of the view that students
study of mathematics is getting worse worldwide especially with regard to
2000; Raimi, 2001; Igbo, 2004; Aguele, 2004). It is unfortunate that the
poor (Agwagah, 2000; Ekele, 2002; Kurume, 2004). This situation cannot be
including Usman and Harbor-Peters (1998), Harbor-Peters (2001), Ikeazota
(2002) and Igbo (2004), have offered reasons for these consistent poor
fingers have been pointed at the way mathematics is taught in schools, and
mathematics, suggesting that the students are not working hard enough or
change to a thinking mode suitable for the particular problem, for example,
factors according to Harbor-Peters and Ugwu (1995) and Aguele (2004)
indicates the frequency of these process errors, from which one can find out
the extent to which students entering the senior secondary school possess the
that can enable them to cope with SS1 mathematics work. Okonkwo (1998)
Both studies were centred on pupils of primary six intending to resume new
mathematics programme at the JS1 level. This, and the paucity of instruments for
school students.
subject-matter readiness, for further learning (Ausubel et al., 1978). Lack
ready for it, he not only fails to learn the task in question (or learns it with
undue difficulty), but also learns from this experience to fear, dislike, and
avoid the task (Ausubel et al., 1978). Thus, readiness becomes an essential
1979), the sequential procedure used in its instruction (Gagne, 1962; 1968)
and the hierarchical pattern of its organization (Igbo, 2004). Thus, effective
learning task (Ausubel et al., 1978). A diagnostic test has been defined as a test
that analyzes and locates specific strengths and weaknesses and
sometimes suggests causes (Burns, Roe and Ross, 1988). An achievement test,
on the other hand, measures what students have learned (Annie and
they committed. Process skills are thought processes that are related to
or wrong use of these skills is referred to as process errors (Payne and
Squibb, 1990). Harbor-Peters and Ugwu (1995) classified these errors
series (Usman and Harbor-Peters, 1998) and simultaneous linear equations
(Unodiaku, 1998).
Peters (1998), Aguele (2004) and Ezugwu (2006) could be influenced by the
sex and school location of the students. Hence, the need to investigate
the sex and school location of the students. Such investigation focused on
whether urban or rural, and the type of school attended, considering
whether public or privately owned. Rural inhabitants work with people they
Urban dwellers know each other in narrow segmented ways that have little
such places are accustomed to relationships of great intimacy and work with
people they know well are classified as rural schools. Moreover, urban schools
will be classified as schools located where the dwellers know each other in
private school was defined as one privately owned and cared for by an
institutions, army, police or road safety. A public school was defined as one
owned and cared for by a government, normally through its agency charged
system.
Research reports over the years have not only indicated that senior
that the process errors which students commit while solving mathematics
mathematics learning.
Some recent mathematics literature indicated that mathematics
readiness tests were developed and used to determine the readiness levels
of pupils advancing from primary six to Junior secondary school one (JSS1)
may therefore ask the following questions. To what extent can readiness test
are ‘ready’, ‘fairly ready’, or ‘not ready’, for the senior secondary
mathematics programme?
entrants.
Obollo-Afor education zones. It was focused on junior secondary three (JS3)
students’ mathematics curriculum. The scope of this study is such that the
are all the content areas that are covered by the National curriculum in
mathematics for junior secondary schools (JS3). The content areas covered
students, and the dire need to improve upon that, and get the JS3 students
intending to gain entrance into the senior secondary school (SS1) level, it is
now necessary to provide a clear tool which could be used to assess the
readiness level of JS3 students at the point of entry into SS1. This situation
one level (JS3) to another (SS1), based on the frequency of the process errors
the students committed on the MATHRET at the point of entry into senior
secondary one, will determine the depth of coverage or mastery of the junior
Teachers can then use it in determining the readiness levels of their students
can now carry out remedial instruction for the students found ‘not ready’.
Identification of the weaknesses that affect the level of
being ‘ready’ or ‘fairly ready’ and then subjecting the students identified as
three (JS3) students at the point of admission into senior secondary school
decisions will be based on how ‘ready’, ‘fairly ready’ or ‘not ready’ the
students were.
will indicate their specific areas of strengths and weaknesses as well as the
then stress more on those specific areas that students exhibit difficulties,
reveal how prepared the junior secondary school mathematics teachers are
because those concepts/skills in which students exhibit weaknesses imply that
skills of the teachers. This will enhance the readiness level of the students as
administrators for the deficient students based on the areas that students
others).
used. They should also include possible methods teachers can use in
teaching the topic or units that students show evidence of lack of mastery or
weaknesses.
students in understanding and mastery of the areas in which they are deficient.
This will assist the students to be ‘ready’ or at least ‘fairly ready’ for the
Research Questions
MATHRET?
6. To what extent do school types influence the subjects’ mathematical
Hypotheses
male and female students that influences their degree of readiness for
rural students.
private students.
CHAPTER TWO
LITERATURE REVIEW
A. Theoretical Framework:
- Concept of Readiness
B. Empirical Framework
Theoretical Framework
education. One such area has been determining the readiness levels of
their performance in the subject. This draws strength from Piaget’s theory
set out in The Child’s Conception of Number (1952). Piaget claims that
appear to understand the words ‘two and six makes eight’, but will not
understand what this means until they understand how the set ‘eight’ can be
broken down into its subsets ‘two’ and ‘six’ and then reconstituted again. It
they committed) with which they can understand the SS1 mathematics
work, that the mathematics readiness test for senior secondary school
students (SS1) can be sought for. This is the main thrust of this study.
Concept of Readiness
(2000) in Ackerman and Barnett (2005) pointed out that at its broadest,
academic tasks are all implicated as factors that contribute to and define
for its nurture; one does not simply wait for it. He concluded that readiness,
in these terms, consists of mastery of those simple skills that permit one to
or knowledge say, in learning. When the mastery is not in the learner, the
level of what is learned (if any) is low. Similarly David and Flavell (1989)
asserted that the problem of the child’s “readiness to learn’ can in fact be
reduced to the question of whether he has mastered all the steps in the
sequence that precede and are prerequisite for the concept to be learned.
state of the person that makes it possible for him to engage profitably in a
in learning. Burns, Roe and Ross (1988) described readiness as the notion
that a person needs to be in a state of preparedness (i.e. not just ready and
willing, but also intellectually and physically able), before he can learn new
knowledge or skills. Similarly, Meisels (2002) noted that waiting for children
learn? Similarly, Hind (1970) shared the view that readiness to learn has
health status such as ability to see and hear and physical abilities such as
arithmetic processes, do not learn a new process properly until they have
from learning activities and without which he may not likely achieve success.
preparedness” (Burns et al., 1988) before he/she can learn new knowledge
new knowledge or skills. This is associated with the fact that one might be
external stimuli which may pose a hindrance in the process, he will not
enjoy the activity. If for any reason, he is forced to enjoy the activity
without readiness, it will lead to dissatisfaction or frustration. These
Similarly, early signs of cognitive ability and maturity have been shown to be
linked to children’s performance in school, and for this reason, this highly
admit that readiness has to do with ability to profit from practice or learning
suggests a gradation and has been looked at as a “degree” to which one has
acquired some prior knowledge essential for learning a new skill or acquiring
new knowledge (Udegboka, 1987). In addition, it could be argued that if
assuming all things being equal. Downs and Parking (1958) considered
insufficient experience in the subject-matter and the period his mind can
now cope with the work. This conception of “readiness” suggests a stage of
learning. Similarly, Gibson (1972) noted that the term “readiness” is used
cultural, the personal, the cognitive and the motivational components. For
from various cultures and with various experiences will express their
that these cultural factors and experiences vary from culture to culture and from
one individual to another that degree of readiness for any learning task
Goal Team Report, 2006). How, for example, can we classify someone who
Mathematics but has mastered some aspects of it? Judging from Bruner’s
Such requirements for personal factors that can lead to adequate progress in
not to another. A child may as well be ready for one content area or unit
in a Mathematics curriculum but not for other content areas or units in the same
Mathematics curriculum.
necessary for one to tackle the next harder work successfully, but often such
that a child’s state or condition is devoid of these factors, to that extent will
the child’s readiness fall short of the “absolute”. And since this is observable
appropriate entry behaviour. This may be related to the statement that the
becomes the standard solution of the problems with the child having little or
reason or analyse them. This may be the reason why Lassa and Paling
statement is in line with the most popular opinions which suggest that
suggest the inclusion of such basic ideas like acquisition of information about
terms for, and meanings of, numbers, operations as well as other techniques
Hiebert and Carpenter (1982) have argued that a readiness test useful in
a range of concepts or skills and not just readiness to learn only one
should be broad based. Again, such a test should also provide information
readiness test should not only provide information concerning the child’s
Mathematics learning.
Another study done by Call and Wiggin (1973) using second year
control group was taught by Call, a trained Mathematics teacher. The result
of the study revealed that the experimental group performed better in the
criterion test even when initial differences in reading and Mathematics test
80 Piagetian studies. In it, Romberg noted that the findings were related to
Mathematics concepts and ability to apply such concepts. And yet more, it
shown that students distribute their study time and apportion their learning
in particular ways by the kinds of testing devices they use, they must
students, and construct reliable and valid measuring instrument that test the
degree to which these objectives are realized. For if a test is to be useful, its
scores must be both reliable and valid (Atkinson and Atkinson, 1993).
readiness for some Mathematics courses and may be useful in preparing for
the tendency is that they will profit from the senior secondary school
Mathematics programme.
Furthermore, examination itself is an essential learning experience,
student to confirm, clarify and correct ideas, and identifies areas that need
further thought and study. Each on-line test includes a diagnostic scoring
choice test significantly increases retest scores a week later (Plowman and
since students often feel “certain” about incorrect answers (Ausubel, 1978).
separate the pupils into two groups, those that have achieved at least as
high as a certain level and those who have not (Ahman and Glock, 1971).
effectively teachers present and organize materials, how clearly they explain
ideas, how well they communicate with less sophisticated individuals, and
difficulties both individual and group (Ausubel, 1968 in Ezeife, 2002). Just
like every other test, feedback from it indicates curriculum contents which
students have mastered, and those areas that they lack prerequisite skills or
using basal textbooks early in the term, readiness tests help the teacher
screen out at the beginning of the term those pupils who would almost
certainly fail if they were to undertake the difficult work to come (Hildreth, in
applying the knowledge and skills that they have acquired in the first two
administered to the beginning SS1 students within their first week or two of
entrance into SS1 level. Students who exhibit mastery will be retained in
from the fact that MATHRET possesses diagnostic potential (UCSD, 2006) as
well as being suitable for assessing individualized instruction or groups of
adequacy of subject-matter knowledge that will enable them profit from the
good readiness test is one helpful basis for making necessary adjustments in
classes for you to take as you start your education there (Student Success
a student’s scores on achievement tests tell something about how well he will
might judge that a fifth grade pupil with grade-equivalent scores in reading
and Mathematics at the 7.8 level. Using local norms would, in terms of
the seventh grade (Ausubel, 1963). Readiness assessment would place you
in classes that are neither too hard nor too easy for you (Student Success
Centre, 2006). That is to say that the JS3 students that will score a cut-off
while students that will score below the cut-off point will be classified “not
ready”. Ahman et al. (1971) have said as much, that the purpose of a
mastery test is to separate the pupils into two groups, those who have
achieved at least as high as a certain level and those who have not. For
want to infer that because a youngster has a higher readiness test score on
verbal than non-verbal material, he would be more successful in a verbal than a non-verbal
tests are used not only for educational purposes but also in the selection of
are most likely to perform well later on (Ingule, Ruthie and Ndombuk,
scarcely a type of job for which some kind of psychological test has not
recognize the need for tests of special aptitudes to supplement the global
United States, the civil service found in one study that 93 per cent of the
individuals, but also assign value to the content being tested (ACES, 2006).
MATHRET can as much be used to select JS3 students as they are advancing
programme. Such students can then profit from senior secondary school
Mathematics programme, since they are ‘ready’ for the senior secondary
(Goldman, 1971). This suggests that a readiness test can be used to detect
knowledge they previously acquired. The form of test results also vary from
convey minute difference in behaviour (ACES, 2006). The test results thus
the group and they provide an objective basis for starting differentiated
will assist the teacher in preparing teaching materials and methods to suit
the ability levels of the learners. And more importantly for individualized
know the current aptitude levels of pupils and the current state of their
and the content to be learned” (Adkins, 1958). It is test result that can give
“not ready”. Those that are “ready” can profit from the next higher level of
school, and so should be promoted, while those grouped “not ready’ need to
helpful basis for making necessary adjustments in the first grade programme
comes (or is sent) seeking help with a particular problem. The original
matter.
problems. For instance, examples may be drawn from studies on the nature
cultural factors associated with behavioural differences. For all such areas of
are involved. In this kind of situation, Kirk (1961) suggested that tests such
So far, the review suggests that readiness test can and need be used
Validity of a test.
this statement, a ruler may be a valid measuring device for length, but isn’t
very valid for measuring volume. A readiness test has been looked at as a
test, which should separate those who are capable of learning a particular
concept and those who are not (Hiebert and Carpenter, 1982). And in the
classroom situation, a readiness test can be useful only to the point that it is
relatively free from both type I and type II errors. Type I and type II errors
here mean classifying as ready one that is not ready and classifying as not
ready one that is ready. To appraise these
The basic concept in test theory states that the score obtained by any
individual on a test has two components, namely, his true score (the exact
measure of his ability) plus the error associated with his score on this
particular test (Walter and Nancy, 1971). Similarly, Guilford (1954) stated
that among the group of testees, all test scores are partly due to error
variance, and that the aim is to minimize this error variance, not to eliminate its occurrence entirely. From a theoretical point
of view, one way of minimizing error variance is to maximize reliability
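In symbols, this classical test-theory statement can be written compactly; the following is a standard rendering consistent with the sources cited above, not a reproduction of their notation:

\[
X = T + E, \qquad \sigma_X^2 = \sigma_T^2 + \sigma_E^2, \qquad
r_{xx} = \frac{\sigma_T^2}{\sigma_X^2} = 1 - \frac{\sigma_E^2}{\sigma_X^2},
\]

so that maximizing the reliability \(r_{xx}\) is the same thing as minimizing the share of error variance in the observed scores.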
Georgetown (2006) upheld that validity is the extent to which a test actually
perhaps more important that a test serves the purpose for which it is
of separating students into two distinct groups – those that have acquired
the prerequisite knowledge (ready) for the next higher tasks and those that
ready’. To the extent that a readiness test can do this, to that extent it is a
test has to do with what a test measures and how well it does so (Anastasi,
test validity should be made in terms of some specific functions the test is
Test theory suggests that the concept of validity assumes that test
score variance can be broken into variance in some trait and error variance
view of this theory if two tests share a common factor, the implication is that
scores in them are inter-correlated and that a score in one test can be used to
predict a score in the other test. However, each test possesses another factor,
which is specific to it. How valid a test is depends on its purpose – for
example a ruler may be a valid measuring device for length, but isn’t very
valid for measuring volume (Georgetown, 2006). Therefore, each test score reflects
common factors together with an error factor and a unique factor (Guilford, 1954). Test validity may be defined
in terms of common factor variance, while the reliable variance
is equal to the sum of specific factor variance and common factor variance.
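A standard way of writing this factor-analytic decomposition, consistent with Guilford (1954) though not his exact notation, is:

\[
\sigma_X^2 = \sigma_{co}^2 + \sigma_{sp}^2 + \sigma_e^2, \qquad
r_{xx} = \frac{\sigma_{co}^2 + \sigma_{sp}^2}{\sigma_X^2}, \qquad
\text{validity} = \frac{\sigma_{co}^2}{\sigma_X^2},
\]

where \(\sigma_{co}^2\) is common-factor variance shared with other measures, \(\sigma_{sp}^2\) is variance specific to the test, and \(\sigma_e^2\) is error variance.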
of reliabilities .81 and .89 said to measure alphabetizing ability have been
found to correlate .09 and .00, with a criterion of alphabetizing on the job
(Mosier, 1947). Criterion reliability was found to be .40. This reveals that
even though the two tests measured whatever they claim to measure
The reliability estimates of both tests were less than .90, probably due to
poor construction of the tests. For well constructed tests usually have a
1954; Thorndike and Hagen, 1977; Biehler and Snowman, 1990; Ibecker,
on the test are related to other behaviours (Ibecker, 2006). This is usually
predict future behaviour provided you want to predict, that is you need to
scores. Predictive validity, that is ability to predict, is the crux of the matter
in any measure of readiness as well as all other tasks designed for
standards are needed. This suggests that although application of theory and
validating a test on purely rational grounds. For a test can only be as valid
as the criterion against which it has been validated. A study aimed at
by Travers (1951) in Hill (2001). The study revealed that families of skilled
and retail merchant families. Although the result of the study was empirical,
yet the finding was not meaningful because it was discovered
that the judges who rated the success of the experimental groups had an
interest to protect. The use of such ratings as a criterion for validating a
test would lead to perpetuating bias, because this would yield systematic
Mosier (1947) and Travers (1951) in Hill (2001) suggest that validation by
rationalization alone or empiricism alone is subject to error.
dropped and used only after its validity must have been rationalized. This is
The findings of Travers (1951) in Hill (2001) call for the need for good
design in empirical study. For a good design hardly introduces bias into one’s
findings. But poor design does. This point is worth mentioning especially in
However, the studies (such as Ali and Akubue, 1988; Iwuala, 1988) have
some of the researchers that showed that continuous assessment scores are
valid scores. For instance, Arowosegbe (1990) worked with a sample
the Odili study involved integrated science scores. The study revealed
studies had correlated continuous assessment scores with only the final
assessment scores to provide the published result which the researchers had
used, provided of course, that the said examinations were themselves valid
measures of achievement.
Okedera’s (1980) approach was not consistent with the above result.
adult learners (10 female and 20 male) in an adult literacy class in Ibadan
two weeks before they sat for the final school leaving certificate
examination. The same test was administered to the same subjects a week
later. Correlation of test and retest scores were made with the final leaving
certificate examination scores and the result was found significantly
positive. The researcher concluded that the result of the study was an
One of such flaws is that using a small sample that was not
randomly drawn, with only 30 subjects selected from one classroom, cannot
but threaten the external validity of the results (Campbell and Stanley, 1966
in Zuriel, 2004). Another flaw is that the two administrations of the test
came not more than two weeks before the certificate examination, which makes the
2006). On this ground, this researcher argues that for practical significance,
the interval should be at least a school year. This may suggest that school heads and
1981/82 and 1982/83) were used, tends to be a better designed study than
that of Okedera (1980). For the study correlated the scores of the
from one student to another, yet correlations markedly varied from school to
in teacher-made tests. The study failed to specify the intervals between the
different schools sampled for the study. Again, this situation places some
MATHRET, like any other test, may also be validated against ratings,
but ratings are prone to error. Kissane (1986) cited Endean and Cares who found that
scale of the Scholastic Aptitude Test (SAT-M) with some bright students
(using SAT-M as criterion) overlooked. Still on teacher nomination,
Stanley (1976) reported that SAT-M scores obtained several years earlier
much better than teacher nominations. This reveals that ratings may not
pass the “relevance” test for valid criteria (Thorndike and Hagen, 1977).
Apart from failing the relevance test, they also fail the “freedom from bias” test
(Travers, 1951).
needless to validate every test. For instance, Ebel (1979) has wondered why
criterion) measure, used in the process ends up being only rational. In other
in validating the criterion test initially to directly validate the test instead,
since the validity of a test cannot exceed that of its criterion measure
(Guilford, 1954). With the same expert judgment used in the first instance
in directly validating the test, the length of the validation process chain will
hardly flaw Ebel’s argument. For if it does, the flaw would as well occur in
influenced by too many interacting variables that change in ways too fast for
necessarily a quality of a test but depends on the use to which it is put.
This suggests that Ebel wished the concern of validity to be on the
use a test is put to and not on the test itself per se. Third, Ebel pointed out
that criterion measures do not exist for most tests but have to be made only
by the same process that the tests of interest are made by and therefore
should as well need first of all their own validation. Ebel therefore
including construct validity, except for such tests whose criteria are
Although the point has been made that validity of a test is not a
principal basis of the quality of a test but has to do with the use the test is put
to, one need not conclude that this nullifies the importance of empirically
determining the indices of validity. Regardless of the form a test takes, its
most important aspect is how the results are used and the way those results
needs to stress that the test user should have some idea of the degree of
matter of degree, rather than all or none (ACES, 2006). This degree of
confidence is given by a combination of sufficient standardization
information as well as a validity index associated with it. For one reason to
(1986) insisted that any good test manual should be composed of these.
Moreover, Ebel’s (1961) claim that criterion measures for most tests do not
measure exists (unless of course the test scores themselves), any trait
scores and test. Yet, this does not constitute evidence that the test
that it provides enough evidence that the tests measure the same construct
they measure it as well as the degree of confidence one can have in using
one as predictor of the other. The construct, which the test actually
a difficult time defining the construct, we are going to have an even more
evidence of validity is nullified. This situation only demands from the test
constructor to claim nothing more than that his test measures testees’
competence in carrying out the type of tasks that make up his test. The
reason being that test scores constitute the best criterion we can get of what
a test is intended to measure (Ebel, 1979). One major problem here could
be that this approach may make it very difficult to compare tests. For
rationalized tests measuring the same construct may not share any common
variance after all (see Mosier, 1947), it is not certain that any two given
tests really measure similar or the same constructs, an assurance which
unforeseen and unmeasured variables affect them so much that our best
predictions can only be imprecise and crude. Any disagreement with Ebel
degree of confidence (or risk) in using such tests for such purposes than
such cases in which predictions are made without any knowledge of degree
of confidence (or risk) involved. The researcher is of the opinion that
empirical approaches with the rational and provide test users with the degree of
Reliability of a test.
trait are true differences and constitute true variance among the individuals.
individuals differently, leading to a tag of low reliability for the test. Such
The reliability of a test reflects the extent to which this error variance has
difference) among the testees on the trait that the test measures.
a test is of high quality. It is only to the extent that test scores are reliable
that they can be useful for any purpose at all (Ebel, 1968). It is also to that
extent that the test measures whatever it measures with precision.
Reliability is essential to both the testee and the test user. To the testee,
decision about him depends on the test scores. And to the test user who
scores generated from the test as he makes his decision based on them
(Ebel, 1979).
the level of ability of the testees and the range of their talents as well.
Reliability computed via coefficient alpha usually takes values from 0.00 to
1.00, with 1.00 indicating identical ordering between the test and the hypothetical
equivalent form; coefficient alpha may also take values less than zero (WISC,
involved using deviations and correlation of two sets of scores from the
same test and sample. Among such different approaches identified in
Richardson (Ebel, 1979; Cronbach, 1957; Horst, 1953; Guilford, 1965) and
estimate yields the most reliable coefficient because of its sensitivity to all
using two parallel forms of the test that will test the same material and give
the same result (Georgetown, 2006). One major problem with this
much larger initial pool of items and demand more time for their trial on the
sampled subjects. Such demand of more time on the part of the subjects
may not tolerate such. As a result, observation from Thorndike and Hagen
implies that they might refuse permission for such a project. Such refusal is
the scores of the two measures taken by the same group of subjects
resulted from their tendency to make certain error variance, the approaches
prove the only option left for some test constructors. Specifically, we may
in general, that the bigger the reliability coefficient of the test, the better the
Thorndike and Hagen (1977) strongly opined that the coefficient that is
(Anastasi, 1976). The idea being that the more heterogeneous a sample is
(i.e. the wider the range of the talents of the testees), the higher the
consisting of only subjects that belong to the same class. And the ability
level of subjects used in the determination of the reliability of a test has also
(Ebel, 1979; Georgetown, 2006). If items of a test are not too difficult (i.e.
appropriate) for the ability of the subjects, the test, it has been argued, is
1954; Statsoft, 2003). For those that the test is too difficult for, their scores
in the test are likely to be influenced by guess work and would lead to high
error variance and consequently yield low reliability. And for those the test
is too easy for, discrimination would not be effective, resulting also in low
subjects it is designed for and this group should be reported for appropriate
utilization of the test by the consumer (Lyman, 1986). The higher the
So far, the review has shown the need for considering validity and
been looked into. And more interestingly what meaning to make of the
that a high validity and reliability should be built into its development.
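Coefficient alpha, referred to in this review, can be computed directly from a subjects-by-items score matrix. The following Python sketch uses hypothetical data and illustrative names; it is offered for illustration only and is not part of the MATHRET procedures:

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a (subjects x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores of 5 testees on 4 dichotomously scored items.
scores = [[1, 1, 1, 0],
          [1, 1, 0, 0],
          [1, 0, 1, 1],
          [0, 0, 0, 0],
          [1, 1, 1, 1]]
print(round(cronbach_alpha(scores), 3))  # about 0.696
```

Note that, as observed above, the formula can return a value below zero when items covary negatively.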
Evaluation.
using different indices, one may rightly suggest that a test constructor needs
to choose those indices that call for least amount of labour. In consideration
of this, then Ferguson’s (1942) method and the analysis of variance need to
pointed out that for an achievement test, where the criterion is usually interval
(i.e. total score), the discrimination index D would be the best option considering
with a blueprint. A test blueprint has been described as a two-dimensional
(Ebegbulem, 1982; Ohuche and Akeju, 1988; STI, 2006). From this
reliability and validity of the test. Anastasi (1976) noted that a high validity
as well as a desired distribution of final test scores can be built into a test
when developing it. This suggests that assumptions of different indices need
to be borne in mind by the test maker. To Lemka and Wiersma (1976) most
approaches to item analysis are more suitable for power tests and, therefore, are
unsuitable for speeded tests. The contention being that speeded tests yield
item difficulty indices whose values depend more on the position of an item on the
test than on its intrinsic quality (Guilford, 1954). In addition, Wesman’s (1949)
speeded test is dependent on the position of the item. All in all, these
results suggest that item analysis of a speeded test would lead to unreliable
Allen and Yen (1979) have suggested that an item pool of at least a
range of one and a half to three times the number of items expected to
compose the final form of the test is required in initiating item analysis.
recommended that an item pool of at least double the expected final length
examinees are required for pre-testing and that number in any case should
not be less than five times the number of items. Guilford (1954) cited
final form of the test would be administered for purposes of determining the
test reliability. The standards set by these authors appear quite stringent
and are unlikely to be the minimum for good results. In view of this,
entire, arguing that the amount of and kind of pre-testing required for a test
in estimating item difficulty and validity indices for the whole sample from
which enabled the use of upper and lower 50, 33, 27, or 25 percents.
Guilford (1954) noted that the more extreme these groups are the sharper is
the discrimination between them and the less likely that a chance reversal
point of split that would minimize standard error and yet maximize
pointed out that this optimum point is attained with the upper and lower
27%. However, for flatter than normal distributions, the optimum point of
split results from the use of extreme 33% (Cureton, 1957). It seems that the
27% rule might be quite robust. This is because some studies (e.g. Ely,
item analysis using tail proportions that ranged from 10% to 50%. No
consistent significant difference was found. This implies that any suitable
tail proportion might serve as well as any other. In such situation therefore,
the test constructor should aim at a normal distribution of total scores in his
other hand, negative skewness results from injection of more difficult items
to effect discrimination at the upper end of the range. From the foregoing, it
appears that level of difficulty of test items determines the range of scores.
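The upper-and-lower-group method discussed above can be sketched in a few lines. The tail proportion defaults to the 27 per cent rule, but any of the proportions mentioned (50, 33, 27 or 25 per cent) may be passed in; the function name and data layout are illustrative assumptions:

```python
import numpy as np

def item_analysis(scores, tail=0.27):
    """Difficulty p and discrimination D for each item of a
    (testees x items) 0/1 score matrix, using the upper and lower
    `tail` proportions of testees ranked by total score."""
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(round(tail * scores.shape[0])))
    order = np.argsort(scores.sum(axis=1))       # rank testees by total score
    lower, upper = scores[order[:k]], scores[order[-k:]]
    p = scores.mean(axis=0)                      # proportion passing each item
    D = upper.mean(axis=0) - lower.mean(axis=0)  # discrimination index D
    return p, D
```

Items with D near zero (or negative) discriminate poorly between the extreme groups and are candidates for revision.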
found that level of difficulty of items also affects test validity. He also found
that if all items are of equal difficulty, maximum validity is obtained when
the level of difficulty is .50. In addition, he found that increasing the range
of item difficulty decreases test validity provided that the tetrachoric item
inter-correlations are not more than .40 and the number of items in the
test does not exceed 150. Gulliksen (1945) had earlier confirmed that
increases. The results of these two studies therefore suggest that similar
items he worked with had tetrachoric intercorrelations that had only one
retain the assumption of only one factor accounting for tetrachoric inter-
correlation and make his findings comparable with Boraden’s, Lord found
tetrachoric item intercorrelations range between .10 and .30) the optimal
level for two-choice items is an uncorrected difficulty level of .85, for three-
choice items, .77 while it is .74 and .69 for four and five-choice items
opined that multiple-choice tests discriminate best when items are at a level
easier than median difficulty after correction for chance. One would then
wonder how use of such easy items could yield optimal discrimination when
Anastasi (1976) has suggested that use of easier items reduces the
Cronbach and Warrington’s work suggests that the effect of such reduction for
the most able testees can only be serious when item inter-correlation is very
high.
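The “correction for chance” invoked in these studies is conventionally the guessing correction for k-choice items; in standard notation (not necessarily Lord’s own):

\[
S_c = R - \frac{W}{k-1}, \qquad p_c = \frac{p - 1/k}{1 - 1/k},
\]

where \(R\) and \(W\) are the numbers of right and wrong responses and \(p\) is the uncorrected proportion passing an item. On this formula, Lord’s uncorrected difficulty of .85 for two-choice items corresponds to a corrected difficulty of \((.85 - .50)/(1 - .50) = .70\), that is, an item easier than the median .50 level.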
that the increase in item-total correlation can be predicted using the
responses for ratio of test lengths. The authors found it true for two to five
and third grade arithmetic test. However, when seven alternatives were
used, the formula seems to over predict the correlation. The situation in
these studies was that the alternative distracters added to the items were as
studies were noted. Plumlee (1952) studied the loss in item-total biserial
tests of parallel content. She found that the mean biserial coefficient for
completion tests were actually .08 greater than for the multiple-choice forms
instead of .13, which a prediction formula would suggest. Thus,
considering these conditions one may conclude that such discrepancies
occur from the levels of difficulty of items used in composing the tests for
the studies. It seems that with relatively easy multiple-choice items, need
total score or item mean for chance on the size of item-total biserial
correction of total score only, did not change the average item-test
correlation. But these coefficients were somewhat lower in the more difficult
the average item validity indices. The correlation of MATHRET scores using
Appendix D).
authors looked at it from educational point of view while some insisted that
it is entirely a medical term. For instance, teachers may use the sensory
tests for screening, but diagnosis and treatment would require a medical or
by students of beginning SS1 without calculator so that they can show their
tests are used to measure skills (Glossary, 2007). That is to say that
error. Glossary (2007) further revealed that such tests yield measures of
subject, and thus, provide a basis for remedial instruction. MATHRET was
one can use the cut-off point of 29 process errors or fewer to classify
students as ‘ready’, and 30 or more as ‘not ready’ and in need of
remedial instruction.
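The classification rule implied by this cut-off takes only a few lines of code. This is a minimal sketch assuming the fixed cut-off of 29 process errors described above; the function name is illustrative, not MATHRET’s:

```python
def readiness_status(process_errors, cutoff=29):
    """'ready' if the error count is at or below the cut-off, else 'not ready'."""
    return "ready" if process_errors <= cutoff else "not ready"

for errors in (12, 29, 30, 47):
    print(errors, "->", readiness_status(errors))
```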
Mathematics students, pointed out that the need to assess the current
achieve a common core of Mathematical skills. Harling (1991) noted that the
national assessment system provides teachers and others with the means of
identifying the need for further diagnostic assessments for particular pupils
and Mildred (1999) advocated for learning style inventories. The use of
further hinted that ATI and Trait-treatment concepts of learning are referred
the diagnosis of the students’ learning abilities is investigated via the analysis
of their error patterns, there would be clear indication of those that are
“ready” and those that are “not ready” for senior secondary school
and weaknesses.
Diagnostic information concerning a student lacking readiness is a
the presence or absence of the student and how such problem/threat can be
client’s needs are within the purview of his services. The threat being those
prerequisite skills the students have not been able to master which made
students in the form of a group test, and scores obtained were used in the diagnosis
of the students’ learning abilities. In this regard, Annie and Mildred noted
that for diagnosis, the primary types of data needed are samples of
entrant into SS1 level was determined by the cut-off point of 29 frequency
readiness and that primary data needed for diagnosis are samples of
to a given test score. For assignment of a score is not an end but a means
to an end in testing (Schofield, 1973). From the test, one gets an idea about
identifiable (Popham and Husek, 1969; Ebel, 1962; Engelhart, 1972). One
absolute standard. In this regard, Lyman (1986) noted that this “absolute”
standard is the maximum obtainable score in the test. Other scores of the
testees play no role in determining the score of any given testee.
“content” which the test is designed to measure and is evaluated against the
dependent on how easy or difficult the test items are to the testee, even for
a given content area. Again, Ebel (1962) was of the opinion that the
(Lyman, 1986).
The second approach to the interpretation of scores compares
each testee’s score with the performance of others in the group. In this
case, one’s score does not depend on how difficult the test is but on how the
group performs (Engelhart, 1972). The group whose scores are used as the
(Lyman, 1980).
preparedness qualifies one for designation as “ready” and below which one
cut-off score.
say, the minimum essentials of competence to qualify one for being “ready”.
theoretical point of view, Ebel (1979) insisted that the ideal cut-off score
here should be 100 percent. The problem that appears to exist here is that
test cannot have perfect reliability. This, therefore, calls for need to have
essential fundamentals.
arbitrarily fixing cut-off scores without presenting rationales for their choice
(Ebel, 1979). However, the rational demanded has been provided by Ebel
(1979) himself in his attempt to determine the cut-off score. Ebel (1979)
contended that in a well constructed test, no testee should score less than
the expected chance score and that the best testees should score near the
maximum score. Accordingly, he defined the ideal mean score as midway
between the chance score and the maximum score, and the cut-off
score as midway between the ideal mean and the expected chance scores.
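Ebel’s prescription can be written compactly. With \(C\) the expected chance score and \(X_{max}\) the maximum score (standard symbols, not Ebel’s own):

\[
\bar{X}_{ideal} = \frac{C + X_{max}}{2}, \qquad
X_{cut} = \frac{\bar{X}_{ideal} + C}{2} = \frac{3C + X_{max}}{4}.
\]

For instance, on a hypothetical 100-item four-option multiple-choice test, \(C = 25\), the ideal mean is 62.5, and the cut-off falls at about 44.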
One could argue that most tests are not ideal, being either easier or
more difficult than even the constructor wanted to make it. And that
its dependence on the group (Bruner, 1964). It then follows that a purely
not be appropriate. However, this may be a different issue. For
having a cultural component only suggests that level of readiness may vary
from “culture” to “culture” and not that prerequisite skills for any topic in
Mathematics vary; these remain the same everywhere, irrespective of the fact that mastery
may exist in one classroom but not in another. To determine what should be
the cut-off score for readiness in particular should thus not be a “cultural”
matter. It suggests therefore that a criterion – referenced approach to
although an ideal test may not exist, if a test becomes easy or difficult (to a
particular group) because the skills it tests are mastered by most or few in
the population of interest, the skewness of the distribution should not be the
(e.g. Allen and Yen, 1979; Cronbach and Warrington, 1952). The approach
problem that could be perceived here is that it is possible that many of those
that fall within the group that “qualifies” may not really possess the
prerequisite Mathematical and reasoning skills that make for true readiness
population, it appears that the problem is more with the instrument and not
with the true distribution of the trait in the population. It may not be in
for situations whereby a test is either easier or more difficult than there is
standard is combined with a group-related standard to find out the cut-off
score so suggested:
i. The determination of the average of the actual mean and the ideal
mean;
ii. The determination of the average of the lowest score obtained and
iii. Fixing the cut-off score midway between those two averages.
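A sketch of this combined procedure in Python follows. Step (ii) above is truncated in the text; the sketch assumes it averages the lowest and highest scores actually obtained, which is an assumption rather than a confirmed reading, and all figures are hypothetical:

```python
def combined_cutoff(actual_mean, ideal_mean, lowest, highest):
    avg_means = (actual_mean + ideal_mean) / 2   # step (i)
    avg_extremes = (lowest + highest) / 2        # step (ii), as assumed above
    return (avg_means + avg_extremes) / 2        # step (iii): midway between

# Hypothetical 100-point test with chance score 25, so ideal mean = 62.5.
print(combined_cutoff(actual_mean=55, ideal_mean=62.5,
                      lowest=18, highest=92))    # -> 56.875
```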
The advantage of this approach is that it is combining the ideal with the
to this combines a criterion score with the need to satisfy certain situational
proportion of the testees. If a number less than this minimum meets the cut-
off standard, a new cut-off score is defined midway between the former cut-
off and minimum score got by the proportion initially stipulated. If the initial
cut-off score is got by more than the maximum number stipulated, the cut-
off score is increased to a position midway between the initial cut-off and
lay their hands on. Therefore, every user has options to consider so as to
discrimination between those that are “ready” and those that are “not
ready”.
Apart from using a readiness test for selection purpose which forces a
testees, there is as well the possible use of a readiness test for counselling
purpose. This, like in all standardized tests, compares the testee with a
with among other testees in the test. This approach “defines” a testee’s
how easy or difficult the test may be. Moreover, it has been suggested that
overall performance within some reference group. In using a readiness test for
the purpose of selection, the need more often arises for a cut-off score that
separates those regarded “ready” for a given set of activities from those “not
in which 55% and above of the maximum frequency of errors is regarded
as “not ready” and vice versa. What is left for the test user is to determine
for himself which approach or cut-off best fits his particular situation. More
so, he may further test the validity of his choice empirically. And for
Empirical Framework:
On Readiness Testing
schools. However, the basic principles concerning all readiness testing are
the same no matter the stratum of the school system one might be
such tests with strict restriction to a particular stratum of the school system.
have been found in the average ages at which children become able to
and socio-economic factors, the order of progression appears the same for
all children (Duruji, 1975; Hilgard, Atkinson and Atkinson, 1975 and Hill
coefficients of .52 and .57 respectively. His conclusion was that Piagetian
(1975) confirmed this report. The two researchers found a high correlation
achievement scores obtained at the end of first grade (i.e. primary one).
some Piagetian tasks (Bruner, 1964; Gibson, 1972; Ann, Bill, and Wilber
a longitudinal study using some children over a four-year period. When the
conservers. Some of the other non-conservers were trained so that they could
conserve liquid quantity while the others were left on their own. Then
groups of children was made. One of the findings Bearison made revealed
difference was not found between performance of the trained conservers and
their later conserving peers. The finding suggests that there is relationship
particular Mathematical ability might be the factor that accounted for such
at as evidence that those who fail the logical reasoning test are not likely to
perform well in, say, first grade Mathematics. On the other hand, Michael
(1977) and Broody (1979) have found that young children acquire the
addition and subtraction, yet some children that have not acquired number
conservation could still add and subtract accurately. Similar finding was
use of performance in class inclusion was noted in studies such as one done
addend test. Earlier study by Dodwell (1962) contradicted this finding, since
Dodwell (1962) did not find any clearly defined relationship between class
might not have resulted from logical reasoning on the part of the children as
acquired. The authors remarked that while some studies suggested that
size and number of units a given quantity measured, some others found that
Carpenter (1982) also revealed that transitive reasoning tasks were often
measurement tasks, Bailey (1974) has earlier reported that children gained
In other words, not all Piagetian tasks are necessary in learning concepts in
simple affair. This provides further justification for the demand for indices of
correlation that can define the nature of the relationship between specific
standard approaches for solving them and for which invariably need logical
more techniques.
Case (1978) reviewed some studies in which it was revealed that the real
school is their inability to deal with more than one aspect of a problem
might suggest if such problems are capable of being segmented and solved
matter its complexity provided such task could be broken into bits he can
blocks. It then follows that insofar each piece of a problem can be handled
independently of the others and not as integrated with other parts of the
problem, any limitation of the hypothesis will not pose severe problems.
abilities are often not a factor responsible for some children’s performances
addend and those requiring the use of the inverse relation between unit size
and number of units are among such tasks
and this is more reason why they correlate highly with Piagetian tasks (see
transitivity are critical for acquiring logical Mathematical knowledge but not
readiness to perform logical Mathematical tasks but may not be good tests
and Zuriel (2002) have carried out prognosis tests on this approach. In the
(Zuriel and Galinka, 1999; Zuriel, 2002) simple materials in CCPAM are
provided for the subjects to learn and a test on the material follows
immediately. The overall scores from this test are correlated with the scores
test developed for this study (Zulber, 2000) serve as evidence of validity.
Anastasi’s (1976) work, some of the test subscales contain materials similar
(1977) noted that the authors of tests using these approaches have offered
some of these approaches were valid but some were not. The empirical
support exists for the use of Piagetian tasks only as measures of readiness
for aspects of Mathematics that require logical reasoning in order to solve them.
knowledge level. The test included an open, coded questionnaire for the
guidelines for the teacher, various learning aids, and a coding key. It is
children selected from both public and religious kindergartens, average age
of 70.97 months and SD = 4.72. The 314 items were divided into four
independent work, the child receives 3 points, for performance after verbal
aids, 1 point, and 0 is given if the child cannot answer the question. The
child’s Mathematical readiness score is the sum of points received for the
the validity and reliability estimates of the subtests and the teachers’ ratings
preschool children.
be used with children of 4 to 6 years. Johnson (1976) pointed out that the
turning pages, following directions, and so on. The test yielded raw scores,
and validity of this test. It becomes difficult, therefore, to evaluate this test.
Even so, it is difficult to see how scores yielded by the test can be used to identify
Freeband and B. Lehrer. The ACIT is designed for use on individual primary
arithmetic readiness skills (Melnick and Freedland, 1972). The test is based
the test attempts to assess how testees tackle quantitative relation and so
years, mental age 6.8 years and I.Q. 66.3 were used in establishing the
validity of the ACIT. There was no evidence of reliability of the ACIT. It was
subscales of the test range from low through moderate to high (between
reported that correlations between the subscales of the ACIT and the
the best predictor of arithmetic skill performance with both correlating .70.
Mental age and I.Q. follow in that order and correlate .68 and .50
composition and small size of the validation sample and reliability of the
reliability might not be too low. Even at that, the ACIT can only be used
ZIP Test
of the ZIP test (Scott, Jr, 1970) designed for use with migrant children aged
skills, for the purpose of placement. The Mathematics section of the ZIP
part of the ZIP test. No information was reported on reliability and validity
of the readiness part, except only for the whole Mathematics subtest.
(1986) suggest that the validity of the approach is low. Again, there was no
report on the rater reliability in the ZIP test to enhance proper assessment
of the ratings.
developed this test and described it as a diagnostic test. The test was
noted to be contained in that test. Moreover, the authors maintained that the
test was equally designed to control for extraneous factors which cause
less, virtual clustering, before and after and identification of symbols. The
test also covered reversibility of addends, money concepts, rule of
validating the test, mean mental age of 6 years 5 months with standard
deviation of 1.0 year and mean I.Q. of 66.9 with standard deviation of 7.6.
it was reported that items within each of the 5 subtests yielded Kuder-
Richardson estimates of .74, .82, .90, .89, and .95. Moreover, it was reported
that content validity was estimated from various sources and is also
levels confirming the hierarchical nature of the abilities upon which the tests
were based. Again, it was reported that independent item analysis for the
(ranging from 1.56 to 1.98) and relatively high mean discrimination indices
(ranging from .37 to .67). More still, the authors reported that except for
one mixed factor, varimax rotation factor analysis for each subtest yielded
also reported that intercorrelation of ACST scale scores with mental age and
Reports on its validity suggest that this part might have a measure of
Thorndike and Hagen (1977) have warned that factorial validity need to be
checked against an external criterion. It can be argued that it is difficult
Were it not for the small size and lack of information on the method of composition
of the validation sample, the validity and reliability figures reported would
useful one.
Johnson (1976) reported that this test was designed for children
whose ages fall between 2 and 9 years but with emphasis on 4½ to 6 years
hearing and sight). The author noted that it has five sections namely:
correlation was found significant at the .001 level of significance. It was also
sample of 57 students selected from the testing population is .78, also found
significant at the .001 level. And yet more, this test is reported to have a
formula for predicting mental age correctly to within 20 I.Q. points in
98.6% of cases.
and first grade. ARS is a 14-item, 5-point rating scale designed to sample
social, and emotional aspects of behaviour. The rating of the scale was
of 10 days between ratings, were used to validate the scale. The test-retest
reliability estimates for the different categories yield a range from .64 to
.83. Academic Readiness scale scores got at the beginning of the session
were correlated with end of year scores on the Stanford Achievement Test
ARS and the word recognition and reading comprehension sections of the
most at .01 level (Burk, 1968). It was reported that nine out of the fourteen
items showed significant difference between two schools. One of the schools
been found significant, but their values are not known and the best
between the correlated measures. Specific figures were not reported. This if
items only. This situation will end up yielding very unreliable measures. This,
even the reported moderately high reliability indices for categories within
this instrument might not be quite reliable. Thus, if one should use this
year olds. The validity of the test was established and provided in the form
grade children at the beginning of the school session and Wide Range
Achievement Test (WRAT) scores and teachers’ ratings at the end of the
session. ART scores correlated .69 with WRAT scores and .66 with teachers’
ratings. WRAT scores correlated .84 with teachers’ ratings. One factor only
(ie reversal) was measured by the test in the process of learning to read.
evaluate the validity figures’ with confidence. Considering its face validity,
the validity figures appear reasonable and teacher ratings used also appear
consistent with WRAT scores in view of the size of their correlation. There
the authors, Rosalie C. Johnson and Rose Kenney for use in education
months and 6 years 6 months because tasks on the test are those most
children in the age bracket can solve easily. Clearly, the test is an academic
schools in Marin country in California, USA, the J-K Screening Test scores
obtained at the beginning of the school year were correlated with teachers’
using 375 first graders in San Francisco using an identical procedure as the first
in a cross validation study. Using the same data, a biserial coefficient was
computed which yielded identical result, .65, also found significant at .001
level. Another report by Seitz, Johnson and Kenney (1973) remarked that
values ranging from .16 to .99 with a median of .68 were also reported.
Screening Test whose criterion measure the ratings were, cannot be given
with confidence except when more information is available. More still,
some reported subtest reliabilities are very low. Obviously scores from such
This approach would have defined the nature of relationship between test
and criterion scores, and possible fluctuations among them which apparently
samples.
single concept and should be free from both type I and type II errors.
Appraisal was also done on some readiness tests for usefulness on the basis
students’ sex, location and type of junior secondary school attended and
evidence from some studies can be found with implications for such
as well.
younger girls and still found in the study; boys performing significantly
1849 subjects over a period of four years to find out the possible trend in
Process (STEP) and the School and College Ability Test (SCAT). In STEP,
there was no significant sex difference found in grade 5 but males score
appears to increase with age, being statistically greatest in grade 11. In non-
males scored higher. Such apparent superiority of females over their male
counterpart in this particular grade might not be the first or only report on
that females were consistently achieving higher scores than their male
males against their female counterpart. Even in their own study using 360
fifth form subjects, Obioma and Ohuche (1980) found boys superior to girls
Anambra, Imo, Rivers and Cross River found that girls were more deficient
in Mathematics than boys (Obioma, 1985). This suggests that males are
(1980). Using a random sample of 186 male and 182 female
writing. Analysis of the Mathematics scores revealed that
mean performance of male subjects being greater than that of the female
subjects.
suggest the existence of sex differences. For instance, Kostick (1994) did a
study on a reasoning test. In the study, 900 subjects were assessed on
there was a possible sex effect in ability to recall orally given
is the source of his contradictory finding, because an American test might not
So far, the review suggests that sex might not be a factor responsible
different situations.
finding when considering that different schools are differently equipped with
resources, both material and administrative, which can be brought into the
investigation.
one could as well expect that products of such different junior secondary
school types should move into senior secondary school with significantly
form of relationship might exist between these variables. There is need for
with which to tackle the next harder work successfully. The review revealed
that readiness tests can be used for various purposes in public and private
review that development of any useful test requires that a high validity and
readiness can be determined from either of the two aspects of diagnosis (i.e.
tests studied were on pupils advancing from one primary school level to
another primary school level. Also studies were carried out on pupils
advancing from primary school (primary six) to junior secondary school class
one (JS1). No study was conducted for students advancing from junior
secondary school to senior secondary school.
RESEARCH METHODOLOGY
The study adopted a survey research design. The instrument was used
work with which the influence of gender, school type and location can be
common test to all those students included in the sample (Obienyem, 1998).
into difficult areas of ordinary level mathematics for the secondary schools.
The area of this study is Enugu state. This study was carried out in
Nsukka and Obollo-Afor education zones. The two education zones (Nsukka
Nsukka urban has four (4) boys schools, three (3) girls schools,
eighteen (18) mixed and six (6) private schools. Rural locality has three (3)
boys schools, three girls schools (3), thirteen (13) mixed schools and eight
Urban schools in Obollo-Afor zone consists of two (2) boys, two (2)
girls, seven (7) mixed schools and five (5) private schools. Rural schools in
Obollo-Afor have two (2) boys, three (3) girls, nine (9) mixed schools and
nine (9) private schools. These gave a total of sixty-nine (69) public and
Appendix : B).
The population of this study is 54031. This figure comprised all the
newly admitted senior secondary class one students in 69 public and private
sampled from the six education zones in Enugu State. Specifically, the total
zone is 22126. Out of this figure (22126), the male students numbered 9850
A sample of 300 students was used for the study. The choice of this
figure (300) was based on the fact that the instrument (MATHRET) was
composed of thirty (30) essay questions. Each of the students' scripts needs to be
marked in such a way that one can identify all the skills that students missed or
failed (process errors). Marking essay questions for such a large number (300
students' scripts) is quite tedious and painstaking, and may take more time than
required to accomplish the study if an attempt is made to increase this sample
size. This study adopted a multi-stage sampling procedure.
Nsukka, Udi, Agbani, Awgu and Obollo-Afor. The next stage involved the use
selecting school type (in terms of whether public or private school). This
resulted in 102 schools sampled. The next stage involved using proportionate
sampling to sample school types. The next stage involved using the same
proportionate sampling technique to sample 17 intact classes. The last stage
involved using simple
random sampling to select the students from each sampled class. In this case, for
instance, in a boys' school with 20 boys in a class, from which the researcher
intended to select 7 students, the researcher simply cut out 20 pieces of paper of
equal size. He wrote 'yes' on seven pieces and 'no' on the remaining thirteen
pieces, and crumpled the pieces of paper. He then put the pieces of paper in a
bag and shuffled them. The researcher then requested the students to pick just
once each (without replacement). Those that picked the 7 'yes' pieces formed the
effective sample from this class.
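A minimal simulation of this lottery procedure may make the steps concrete. The sketch below is illustrative only: the roster names are hypothetical, and only the class size of 20 and the quota of 7 come from the example above.

```python
import random

def lottery_sample(roster, quota, seed=None):
    # Simulate the hat-and-draw (lottery) method described above: one 'yes'
    # slip per required student, 'no' slips for the rest, shuffled and then
    # drawn once each without replacement.
    rng = random.Random(seed)
    slips = ["yes"] * quota + ["no"] * (len(roster) - quota)
    rng.shuffle(slips)  # the crumpling and shaking of the bag
    return [student for student, slip in zip(roster, slips) if slip == "yes"]

# The example from the text: a boys' class of 20 from which 7 are needed.
boys = ["boy_%d" % i for i in range(1, 21)]
print(lottery_sample(boys, 7, seed=1))
```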
In the case of a mixed school (boys and girls), each class was grouped into boys
and girls, and a similar technique was applied in selecting the number of boys
and girls required in the mixed class. This simple random sampling yielded
148 boys and 152 girls. These figures (148 and 152) brought the effective
sample to 300 students.
create a test blueprint that specifies your outcomes and the types of items
you plan to use to assess those outcomes (STI, 2006). In the national
experience and two lecturers, one in mathematics education and the other in
measurement and evaluation, were requested to independently suggest
weights that they considered appropriate for the different sections of the JS3
curriculum, as shown below, where A represents the researcher, B and C the
two teachers, and D and E the two lecturers.
The mean weights in the respective content areas were summed, which
gave 30% for Number and Numeration. More so, the proportion of 7.8:60.6
increase their chances of identifying an item pool that will represent their
the weights assigned to each content area by the “Judges” as the weight of
the corresponding content area of the test blueprint (see table 2 below).
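The averaging of the judges' suggested weights can be sketched as follows. The individual weights below are placeholders (the judges' actual figures are not reproduced here); only the five judges A to E and the resulting 30% for Number and Numeration come from the text above.

```python
# Hypothetical weights suggested by judges A..E for each content area.
weights = {
    "Number and Numeration": [30, 28, 32, 30, 30],
    "Algebraic Processes":   [20, 22, 18, 20, 20],
    "Geometry/Mensuration":  [35, 33, 35, 36, 35],
    "Everyday Statistics":   [15, 17, 15, 14, 15],
}
# The mean weight per content area becomes that area's weight in the blueprint.
blueprint = {area: sum(w) / len(w) for area, w in weights.items()}
for area, mean_w in blueprint.items():
    print("%s: %.1f%%" % (area, mean_w))
```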
laying a solid foundation for life-long learning. These guidelines suggest that
Bloom's (1956) Taxonomy was used in conjunction with Ohuche and Akeju (1988).
The cognitive levels adopted were knowledge (K); understanding and use of
concepts and processes (UCP); and Decision Making (DM).
102
Table 2: A Blueprint of the MATHRET for trial testing
Content No of Items
K UCP DM Total
Total 36
scale.
A B C D E F G H Mean (x)
content was 5, and the mean rating for the coverage of the cognitive
domain was 4.8. The mean ratings of 5 and 4.8 suggested a high degree of
agreement among the raters. It also suggested that the coverage of both
Subsequently, a pool of 36 essay items covering all the content areas in
table 2 was produced. The 36 items on the blueprint were used to produce
the Mathematics Readiness Test (MATHRET) for trial testing and item
represents the percentage of a given group that fails an item (WISC, 2006).
difficulty. The difficulty levels and discrimination indices of the items were
used to establish the item difficulty (ID) and discrimination index (DI) of each item.
The validity of the item pool was measured using discrimination index
compose the final form of the MATHRET, three criteria were adopted, in line
with literature.
i. In case more items qualified than were specified in the blueprint, priority
should be given to those with indices near .50 (Guilford, 1954).
ii. Item D must not be less than .20 (Lemke and Wiersma, 1976).
iii. Item validity index P should range between .25 and .75 (see Allen and
Yen, 1979), where P stands for the proportion of the students that pass each
item (see Appendix P).
The 36 items were made up of four content areas: Number and Numeration,
Algebraic Processes, Geometry and Mensuration, and Everyday Statistics. The
developed items were pilot tested
to establish the reliability of the instrument and its subscales. The MATHRET
was trial tested on 110 JS3 students sampled from Enugu Education Zone of
Enugu State, during their first week of the session. The mean and standard
deviation of the distribution of their scores on the MATHRET are 67.14 and 4.64
respectively (see Appendix S). The four content areas of the MATHRET were
used to derive its subscales; the reliability estimates of NUMAP, COMPUS and
MACOPS are .91, .88 and .81
After the trial testing and item analysis, six (6) items out of the thirty-six
(36) items used in the pilot testing were found to be bad. The 6 items were
dropped, leaving thirty (30) items in the final instrument (MATHRET) for data
collection. In this regard, the reliability and
K UCP DM Total
Total 30
experience alongside three lecturers (V, W and X), one in mathematics
education and the other two in measurement and evaluation, were
The results of the rating of the second version of the MATHRET by twelve
raters on a 5-point scale:
Rater:                           M N O P Q R S T U V W X   Mean (x)
Coverage of curriculum content:  5 5 5 5 5 5 5 5 5 5 5 5   5
The mean rating for the coverage of the cognitive domain of the second
version of the MATHRET blueprint was 4.9, and the mean rating for the
coverage of the curriculum content was 5. The mean ratings of 4.9 and 5
indicated a high degree of agreement among the raters. It also shows that
2006). Moreso, the pilot testing was aimed at identifying problems that
could be experienced during the main study, with a view to finding practical
solutions to such problems. For instance, the researcher may find that some
questions were either too hard or too easy, or variably interpretable by the
testees (Ali, 2006). Test developers also pre-test the items to identify those which are
too difficult or too easy or which fail to discriminate clearly between high and
The MATHRET was administered to all the 110 JS3 students of four schools in the
state, during the second week of the 2007/2008 session. Data obtained from the
pilot testing was used to provide the item analysis data of the instrument.
The item discriminating index of the MATHRET was found to range from .44
to .74 with a mean of .61 (see Appendix: H). All the 30 items of the
The MATHRET is a 30-item essay group test and is made up of four content
areas grouped into three subscales (NUMAP, COMPUS and MACOPS). The
NUMAP has 6 items, COMPUS 15 items, and MACOPS has 9 items. The
Kuder-Richardson (KR-20) method was used in establishing the internal
consistency of the subscales; the reliability estimates are 0.91, 0.90, and 0.97
respectively (see Appendix D).
numbers. All the items are numerical and involve both positive and negative
numbers. Considering the nature of this subscale, all items in this section
data.
classification.
The instrument for data collection for this study was the Mathematics Readiness
Test (MATHRET), which was composed of thirty (30) essay test items that were
designed to elicit the
an error. The total frequency of errors committed by each student was later
levels of the students was concerned with measuring a given score against
this 'absolute' standard is the maximum obtainable score in the test. The
criterion is that if a student scores from 55% and above, he will be classified as
having obtained credit level.
The above criteria were adopted in this work but in reverse order. A student
whose error frequency ranges from 55% and more of the maximum obtainable
frequency of errors (17700) (see Appendix E) was classified as 'not ready' (fail).
If his error frequency ranges from 40% to 54% he was classified as 'fairly ready'
(weak pass), and if his error frequency ranges from 0% to 39% he was classified
as 'ready'.
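A minimal sketch of this reversed classification rule, assuming the per-script maximum of 59 error points given in Appendix J:

```python
def readiness(error_count, max_errors=59):
    # Classify one student's script by the share of the maximum obtainable
    # error frequency: 55% and above -> 'not ready' (fail); 40-54% ->
    # 'fairly ready' (weak pass); 0-39% -> 'ready'.
    pct = 100.0 * error_count / max_errors
    if pct >= 55:
        return "not ready"
    elif pct >= 40:
        return "fairly ready"
    return "ready"

print(readiness(20))  # 33.9% of 59 -> 'ready'
print(readiness(35))  # 59.3% of 59 -> 'not ready'
```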
110 SS1 entrants of 2007/08 session in Udi education Zone within their first
two weeks in first term. The data collected with the instrument during trial
testing were used to find the internal consistency of the instrument. The
subscales, namely NUMAP, MACOPS and COMPUS were .91, .97 and .90
The data obtained with the instrument (MATHRET) were analyzed using
c) The t-test statistic was used to test the significance of the difference
The content validity was established from the high degree of agreement
among the raters. They were requested to assess the weights independently in
adherence to the final result of the blueprint. The mean ratings of the raters
on a 5-point scale were 5 and 4.8 for the coverage of JS3 mathematics
5-point scale:
Rater:                           M N O P Q R S T U V W X   Mean (x)
Coverage of curriculum content:  5 5 5 5 5 5 5 5 5 5 5 5   5
The collected answer scripts of the 110 students during the trial testing were
scored, and the Kuder-Richardson formula (KR-20) was used in testing the
internal consistency of the items used in the trial testing. A reliability
coefficient of .91 was obtained
‘fairly ready’ or ‘not ready’ (in terms of their mean error scores on the
school entrants that are ‘ready’, ‘fairly ready’, and ‘not ready’ for senior
students (300) used for the study were found ‘ready’ for the
the 300 students used for the study were found ‘ready’ and
iii. The mean error difference was 6.53 in favour of the male students;
iv. The males were more 'ready' than their female counterparts for senior
by students
MATHRET
7.60
i. The mean of the errors committed by the urban students was 44.41
iii. The difference in means of errors committed by the urban and rural
iv. The urban students were found to be more ‘ready’/‘fairly ready’ than
as measured by MATHRET.
3.13
schools.
Hypothesis one:
male and female students that influence their degree of readiness for senior
Table 11: t-table for difference in mean errors made by male and female
From the Student's t-distribution table, with df (v) = 284 and α = 0.05,
the t-critical value is 1.645. Since the t-calculated value (3.07) is greater than
the t-critical value (1.645), the hypothesis was therefore rejected. This implies
a significant difference in favour of the males over their female
counterparts, meaning that males are more 'ready' for senior secondary
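The mechanics of the test can be sketched from group summary statistics. The function below implements the standard unpooled two-sample t statistic; only the female figures (mean 51.58, variance 91.97, N = 152, from Appendix L) appear in the document, so the male values are placeholders and the printed t will not match the 3.07 reported above.

```python
import math

def t_from_summary(mean1, var1, n1, mean2, var2, n2):
    # Two-sample t statistic computed from summary data (unpooled form).
    se = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / se

# Female figures from Appendix L; male mean, variance and N are hypothetical.
t = t_from_summary(51.58, 91.97, 152, 45.05, 90.0, 148)
print(round(t, 2), "reject H0" if t > 1.645 else "fail to reject H0")
```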
Hypothesis Two
and rural SS1 entrants that influence their degree of readiness for senior
Table 12: t-table for difference in means of errors made by urban and rural
(See Appendix: M)
From the Student's t-distribution table with α = 0.05 and df (v) = 284,
the t-critical value is 1.645. The t-calculated value (26.48) is greater than the
t-critical value (1.645). The hypothesis was therefore rejected. This means
in the means of errors committed by urban and rural students suggests that
mathematics learning.
Hypothesis Three
and private senior secondary class one entrants that influence their degree
MATHRET.
Table 13: t-table for difference in means of errors made by public and
(see Appendix: N)
From the Student's t-distribution table with α = 0.05 and df (v) = 284, the
t-critical value is 1.645. The t-calculated value (1.83) is greater than the
t-critical value (1.645). The hypothesis was therefore rejected. This means
and private senior secondary school class one (SS1) entrants as measured
CHAPTER FIVE
Discussion of results
The results of this study are discussed around the findings in both the
Out of this figure (14508), 863 and 1267 frequencies of errors were actually
was 38 students out of the total sample size of 300 students used for the
study. Moreso, the number of students that committed between 40% and 54%
of the maximum obtainable frequency of errors was 47, while the students that
committed 55% and above of the maximum obtainable frequency of errors
(17700) were 215 out of a total number of 300 subjects used for the study. The
percentage found to be ready was 12.67 percent.
The percentage found to be 'fairly ready' was 15.67%, while the percentage
found to be 'not ready' was 71.67%. It was found that students committed
was discovered that the least frequency of errors occurred on skill w, with 459
frequencies of errors (see Appendix J and O). This result suggests that the
‘not ready’ were not encouraging. Only 12.67% and 15.67 percent of the
the junior secondary school students lacked evidence of readiness for senior
work (Onugwu, 1991; Unodiaku, 1998). The result also suggested that the
students were most deficient in the ability to write down the answers correctly
(or with the appropriate signs, where necessary), in which they committed
989 frequencies of errors. The next skill in which the students were found most
frequencies of errors. That means students have not mastered writing down
answers correctly (or with the appropriate signs, where necessary), and
123
using a number to divide another, thereby exhibiting procedural errors as
The above findings may be associated with the quality and quantity of
mathematics taught and learned at the junior secondary school level. Senior
secondary class one students (SS1) was dependent upon previous junior
public schools with mean error difference of 3.13 in favour of public schools.
schools was 49.55, while the mean frequency of errors committed by the
students in the private schools was 46.42. Obviously, JS3 students in the
private and public junior secondary school students (JS3) for senior
revealed that the beginning SS1 students were generally ‘not ready’ for
MATHRET (scores)
This study sought to investigate how far the male and female students
results. The mean error difference of 6.53 was in favour of the male students;
however, males committed more errors than females in the following skills: v,
w, and y (see
obvious that:
students;
iii. Female students gained more ability to write down the answers
correctly (or with the appropriate signs, where necessary) than the
male students;
iv. However, the male students appeared to be more ‘ready’ than their
This shows that male students were more ready than their female
counterpart in mastery of most of the skills while the females were more
to be more ready than their counterpart students. Students from privately owned
schools recorded fewer mean (x) errors (46.42) than their counterparts from
public schools, who recorded 49.55 mean errors, the difference in mean errors
being 3.13. This suggests that students from private schools were more ready
for senior secondary school mathematics than their counterpart students from
public schools, since students from public schools committed more mean (x)
errors (49.55) than students from private schools, who committed mean (x)
errors of 46.42, a mean error difference of 3.13 (see Appendix: N). This
observed difference was found significant
126
(t = 1.83, p = 0.05 and 284 degrees of freedom; see Appendix: N). The
observed difference may be adducible to the fact that more qualified
mathematics teachers are more numerous in public schools, thereby making the
public schools have an edge over their private counterpart schools. There is
need to seek a possible solution to bridge the
among the privately owned secondary schools springing up here and there
3.13 in favour of the privately owned secondary schools need further enquiry
and specific learning abilities there are also differences in cognitive style
(Biehler and Snowman, 1999). Isinenyi (1990), Ubagu (1992) and Unodiaku
(computation) than girls as is the case in this study in which girls committed
127
more computational (process) errors than boys. In other words males
the entire students were not mathematically ready for JS3 mathematics
v, w, x and y than their rural counterparts (see Appendix: O). In other words,
the rural students are more ready than the urban students in terms of
(see appendix: O). It was observed that students in public schools located in
the rural areas committed more errors than students from public schools
located in the urban areas. These observed differences could be the reason
why subjects’ location and type of junior secondary school attended (in
Sawrey and Telford (1958) observed, quite rightly, that the intelligence
of children varies directly with the environment in which they are raised.
account for differences in readiness levels between urban and rural students.
Such apparent variation in mastery of the skills between urban and
rurally located schools is in line with different axioms. One of such axioms is
that children’s intelligence varies directly with the environment in which they
and experience, the latter being mainly culturally determined (Hunt, 1961,
cited in Unodiaku, 1998). Nworgu (1990) pointed out that the resources available,
the sub-cultural and the geo-physical conditions differ somehow from rural
beginning SS1 students are within expectation. The school location was
findings in this study are quite consistent with the findings of Isinenyi
(1990), Akukwe (1990), Ubagu (1992) and Odo (1990), all of whom reported that
students from rural localities committed more errors than their counterparts
from urban schools. In other words, students from the urban located schools
are more ready than their counterpart students attending schools in the
between urban and rural students with urban students being more
agricultural occupation with which the rural families and inhabitants are
rural inhabitants are peasant illiterate farmers and petty traders who cannot
offer any academic guidance to their children and wards attending schools.
Apart from the fact that lack of educational background hinders them from
the inhabitants are composed of literate wards and parents, who have
school, thereby enabling them to have an edge over their counterparts from
the rurally located schools in terms of readiness abilities for senior secondary
that extra-mural and evening classes are usually made compulsory in urban
urban localities pay part-time teachers for the purpose of coming to teach
their children and wards in the evenings. But in the rural areas, the illiterate
parents of the students never showed concern for their wards' or children's poor attendance
to the literate status of the parents and guardians in the urban areas. These
viewed from the fact that schools located in urban areas are usually better
provided with resources (qualified teachers, funds, among others) for improving
the quality and quantity of teaching and learning in urban schools. Worst of all,
almost all the
Conclusions.
percent (71.67%) were found ‘not ready’ for the senior secondary
ii. The males were more 'ready' than their female counterparts for
iii. Gender was a significant factor in the readiness of male and female SS1
p<0.05.
learning.
mathematics learning.
the readiness level of students advancing from one level to another, say
4. Male and female students used in the study showed convincingly unequal
readiness, with the males more ready than their female counterparts. The
implication is that the use of the MATHRET has gender bias. The implication
of this is that the issue of
5. Males performing better than females, earlier reported in the literature, was
noted to exist and found significant in the study. The implication is that
rural areas to get them ready for senior secondary school mathematics
learning.
different, with students from public schools being more ready than
used for assessing them, the students from public schools will appear to
introducing a new related topic. This study provides such materials for
mathematics.
4. The study has also vindicated that the use of the MATHRET enhances the
identification of those 'ready' and 'fairly ready' against those 'not ready'; therefore,
need to be entrenched in the Junior secondary school mathematics
5. Mathematics textbook authors and other test developers may use this
that the learning of one aspect becomes a prerequisite for the learning of the
next harder one. The curriculum planners recommended the use of entry
skills/experience/ knowledge that can enable him profit from the present
find out if the child is ‘ready’ for the new instruction considering whether he
level of the learner therefore becomes a factor that determines the teacher’s
the fact that secondary school teachers find it difficult to diagnose their
learning mathematics.
‘fairly ready’ or ‘not ready’ (in terms of scores on the MATHRET) for the
4. To what extent do male and female students vary in terms of their mean
readiness (in terms of their mean errors committed on the MATHRET) for
attended (in terms of whether public or private) are not significant factors
empirical. It was observed that readiness tests have been very efficacious in
various ways, but unfortunately none was developed for senior secondary
The instrument used was the MATHRET. Thirty essay items were developed
and given to validators. Based on the results of the validators, the instrument was
administered to the 300 sampled subjects. Kuder Richardson formula (KR20) was
researcher.
2. A reliability coefficient of 0.91 was obtained for the test through pilot testing.
validation.
4. Only 12.67 and 15.67 percent (that is, 38 and 47) of the 300 subjects used
for the study were found 'ready' and 'fairly ready' respectively for senior
5. Mean error difference of 6.53 was found to exist between male and female
(scores).
6. The urban students were found to be more ready for senior secondary
8. The mean error difference between male and female SS1 entrants as
9. The mean error difference between urban and rural beginning SS1
p<0.05).
REFERENCES
ACES Report (2006). ACES Validity Handbook: What is Test Validity?
http://www.collegeboard.com/highered/apr/aces/handbook/testvalid.htm
Ahman J. S. and Glock, M. D. (1971). Evaluating Pupils’ Growth: Principles of test and
Measurements. Boston: Allyn and Bacon inc.
Akukwe, A. C. (1990). School Location and School type as factors in Secondary School
Mathematics Achievement in Imo State. Unpublished M.ED Thesis, University of
Nigeria, Nsukka.
All Psych Online (2006). Research Methods: Validity and Reliability.
http://allpsych.com/researchmethods/validityreliability.html
Anastasi, A. (1968). Psychological Testing (3rd ed.). New York: Macmillan
Publishing Co., Inc.
Ann, A.C.; Bill, V.; Wilbur, D.M. and Dutton, L. (2006). Mathematics Children Use and
Understand. Napa, California: Rattle Ok Publishers.
Annie W. Ward and Mildred Murray-Ward (1999). Assessment in the Classroom. New
York: Wadsworth Publishing Company.
Barrack, S.W. (1980). Achievement in and Attitude Toward High School Mathematics
with Respect to Sex and socio-economic Status. Dissertation Abstract
International, 41 (5), 1812A.
Bordin, E.S. (2000). Psychological Counselling. New York: Appleton Century – Crofts.
Brodgen, H.E. (1996). Variation in Test Validity with Variation in the Distribution of
Item Difficulties, Number of Items and Degree of Their Inter Correlation.
Psychometrika. 11, 197-214.
Bruner, J.S. (1966). Toward a Theory of Instruction. Cambridge, MA: Belknap Press.
Bryan, M.M.; Burke, P.J.; and Stewart, W. (1999). Correlation for Guessing in Scoring of
Pre-tests: Effect Upon Item Difficulty and Item Validity Indices. Educational and
Psychological Measurement, 12,45-56.
Burks, H.F. (1968). Manual of Burks' Behaviour Rating Scales. Huntington Beach,
California: Arden Press.
Burns, P.C.; Roe, B.D.; and Ross, E.P. (1988). Teaching Reading in Today's
Elementary Schools (4th ed.). Boston: Houghton Mifflin Company.
Call, R. J. and Wiggin, W.A (1966). Reading and Mathematics. Mathematics Teacher,
54, 149-157.
Campbell, D.P. and Stanley, R.N. (1966). Revision of the Strong Vocational Interest
Blank. Personnel and Guidance Journal, 44, 744-749.
David, E. and Flavell, J.H. (1989). Studies in Cognitive Development: Essays in
Honour of Jean Piaget. New York: Oxford University Press.
Denney, H.R. and Remmers, A.H. (2000). Reliability of Multiple-choice Measuring
Instruments as a Function of the Spearman-Brown
Prophecy Formula 11. Journal of Educational Psychology, 31, 699-704.
Derogatis, L.R. (1992). SCL-90-R: Administration, Scoring and Procedures Manual II.
Baltimore, MD: Clinical Psychometric Research.
Dienes, Z.P. (2002). Some reflections on learning Mathematics. In W. E. Lamon (Ed).
Learning and nature of Mathematics (pp 51-67). Chicago: Science Research
Association Inc.
Dodwell, P.C. (1998). Relation Between the Understanding of the Logic of Class and of
Cardinal Number in Children. Canadian Journal of Psychology, 16, 152-160.
Downs, L. W. and Parking, D. (1958). The Teaching of Arithmetic in Tropical
Primary Schools. London: Oxford University Press.
Downie, N.M. and Heath, R.W. (1974). Basic Statistical Methods. 4th edition. New York:
Harper and Row publishers.
Duel, H.J. (1998). Effect of Periodic Self-Evaluation on Student Achievement Journal of
Educational Psychology, 49, 197-199.
Ebegbulem, C.E. (1982). Constructing Tests for Continuous Assessment. Lagos.
Macmillan Nigeria Publishers Ltd.
Ebel, R.L. (1961). Writing the Test item. Educational Measurement, Washington, D.C.
American Council on Education, 185-249.
Ebel, R.L. (1962). Content Standard Test Scores. Educational and Psychological
Measurement, 22,15-25.
Ebel, R.L. (1979). Essentials of Educational Measurement (3rd ed.). Englewood
Cliffs, N.J.: Prentice-Hall, Inc.
Ekele, J. (2002). Development and Standardization of Quadratic Aptitude Test for Upper
Primary School Pupils. Unpublished Ph.D Thesis, University of Nigeria, Nsukka.
Ely, J.H. (2001). Studies in Item Analysis 2: Effects of Various Methods Upon Test
Reliability. Journal of Applied Psychology, 35, 154-203.
Engelhart, M.D. (1972). Methods of Educational Research, Chicago: Rand Mc Nally and
company.
Gagne, R.M. (1962). The Acquisition of Knowledge. Psychological Review, 69 (4), 355-
365.
Gagne, R.M. (1968). Learning Hierarchies. Educational Psychology, 6 (1), 3-6.
Goldman, L. (1971). Using Tests in Counselling. (2nd ed.). Los Angeles: Good year
Publishing Company Inc.
Goldstein, G. (1992). Handbook of Psychological Assessment (2nd ed.). New York:
Pergamon Press.
Gray, E.M. and Tall, D.D. (1999). Duality, Ambiguity, and Flexibility: A "proceptual"
view of simple arithmetic. Journal for Research in Mathematics Education, 25,
116-146.
Greenspoon, J. and Gerten, C.D. (1986). A New Look at Psychological Testing:
Psychological Testing from the Standpoint of a Behaviourist. American
Psychologist, 22, 843-853.
Guilford, J. P. (1954). The Phi Coefficient and Chi Square as Indices of Item Validity.
Psychometrika, 6,11-19.
Guilford, J.P. (1965). Fundamental Statistics in Psychology and Education (4th ed.).
New York: McGraw-Hill Book Company.
Harison, C.W. (1944). Factors Associated with Successful Achievement in Problem
Solving in Sixth Grade Arithmetic. Journal of Educational Research. 38, 111-118.
Harbor-Peters, V.F. and Ugwu, P.N. (1995). Process Errors Committed by Students
in some Geometric Theories. Nigeria Research in Education, 7, 141-152.
Hawlett, K.D. (2001). A Study of the Relationship Between Piagetian Class Inclusion
Tasks and the Ability of First Grade Children to do Missing Added Computation
and Verbal Problems. Dissertation Abstracts International, 34,6259A – 6260A.
Hiebert, J. and Carpenter, T.P. (1982). Piagetian Tasks as Readiness Measures in
Mathematics instruction, a Critical Review. Educational Studies in Mathematics ,
13, 329-345.
Hilgard, E.R.; Atkinson, R.C.; and Atkinson, R.L. (2005). Introduction to Psychology
(6th ed.). New York: Harcourt Brace Jovanovich, Inc.
Hollins, E.R. (1996). Culture in School Learning: Revealing the Deep Meaning. New
Jersey: Lawrence Erlbaum Associates.
Johnson, A. P. (1976). Notes on a Suggested Index of Item Validity: the U-L Index.
Journal of Educational Psychology, 42, 499-504.
Johnson, O.C. (2004). Tests and Measurement in Child Development: Handbook II,
Vol. 1. San Francisco: Jossey-Bass, Inc.
Keislar, E.R. (1997). Shaping of a Learning Set in Reading. Paper Presented at the
Meeting of the American Educational Research Association Atlantic City.
Kerlinger, F.N. (1973). Foundations of Behavioural Research, (2nd ed). New York.
Holt, Rinehart and Winston.
Kersh, B.Y. (1967). Engineering Instructional Sequence for the Mathematics Classroom.
In J. M. Scandura (ed). Research in Mathematics Education. Washington, D.C.:
National Council of Teachers of Mathematics.
Kirk, S.A. (2003). Educating the Retarded child. Boston: Houghton Mifflin press.
Kooker, E.W. and Williams, C.S. (1959). College Students’ Ability to Evaluate their
Performance on Objective Tests. Journal of Educational Research, 53, 69-72.
Kostick, M.N. (1994). A Study of Transfer: Sex Differences in the Reasoning Process.
Journal of Educational Psychology. 45 (8), 449-458.
Leslie, W. and Barnette, Jr. (1968). Readings in Psychological Test and Measurements.
Ontario: The Dorsey Press.
Lord, F.M. (1998). The Relation of the Reliability of Multiple-Choice Tests to the
Distribution of Item Difficulties. Psychometrika, 7, 181-194.
Lyman, H.B. (1986). Test Scores and What They Mean (4th ed.). Englewood Cliffs,
N.J.: Prentice-Hall.
Mehrens, W.A. and Lehmann, D. (1993). Standardized Tests in Education. New York:
Holt, Rinehart and Winston.
Meisels, S.J. (2002). Assessing Readiness. In Pianta, R.C. and Cox, M.J. (eds.), The
Transition to Kindergarten: Research, Policy, Training, and Practice. Baltimore,
MD: The MD Press Limited.
Melnick, G.; Mischio, G.; and Lehrer, B. (2004). Arithmetic Concept Screening Test
Manual. Curriculum Research and Development Centre in Mental Retardation.
New York: Yeshiva University Press Limited.
Michael, E.R. (2000). Acquisition Order of Number Conservation and Arithmetic Logic
of Addition and Subtraction. Dissertation Abstract International. 37,4116B-4117B.
Michael, W.B.; Hertzka, A.F.; and Perry, N.C. (1953). Errors in Estimates of Item
Difficulty Obtained from Use of Extreme Groups on a Criterion Variable.
Educational and Psychological Measurement, 42, 499-504.
Nie, N.H.; Hull, C.H.; Jenkins, J.G.; Steinbrenner, K.; and Bent, D.H. (1975). Statistical
Package for the Social Sciences (2nd ed.). New York: McGraw-Hill.
Nunnally, J.C. (1981). Psychometric Theory. New Delhi, Tata: Mc Graw-Hill Publishing
Company, Ltd.
Nwabuise, E.M. (1986). Factors Affecting the Learning Process. In V.O.C. Nwachukwu
(ed.), Educational Psychology. Ibadan: Heinemann Educational Books (Nig.) Ltd.
Nwachukwu, V.C. (1997). Transfer of Learning. In V.O.C. Nwachukwu (ed.)
Educational Psychology. Ibadan: Heinemann Educational Books (Nig.) Ltd.
Nworgu, B.G. (1990). Evaluating the Effects of Resource Material Types Relative to
Students Cognitive Achievement, Relation and Interest in Integrated Science.
Unpublished Ph.D. Thesis, University of Nigeria, Nsukka.
Odili, J.N. (1991). Relationship Between Continuous Assessment Cumulative Scores and
Junior school Certificate Examination Results in Integrated Science in Selected
LGAS of Bendel State. Unpublished M.ED Project: University of Nigeria, Nsukka.
Ohuche R.O. and Akeju, S.A. (1988). Measurement and Evaluation in Education.
Onitsha: Africana –Feb Publishers Limited.
Okonkwo, S.C. (1998). Development and Validation of Mathematics Readiness test for
Junior Secondary School Students. Unpublished Ph.D Thesis, UNN.
Onugwu, J.I. (1991). Identification of Kinds of Errors Secondary School Students Make
in Solving Problems in Mathematics. Unpublished M.ED Thesis, University of
Nigeria, Nsukka.
Ozouche, C. N. (1993). Difficult areas of ordinary level Mathematics for the Secondary
Schools. Unpublished M.ED Thesis, University of Nigeria, Nsukka.
Payne, S.J. and Squibb, H.R. (1990). Algebra Mal-rules and Cognitive Accounts of
Errors. Cognitive Science, 14, 445-481.
Pennington, B.F., Wallach, L. and Wallach, M.A. (1980). Non-conservers' Use and
Understanding of Number and Arithmetic. Genetic Psychology Monographs, 101,
231-243.
Piaget, J. (1952), cited in Martin Hughes (1989). Children and Number: Difficulties in
Learning Mathematics. Britain: T.J. Press (Padstow) Limited.
Plowman, L. and Stroud, J.B. (1992). The Effect of Informing Pupils of The Correctness
of Their Responses to Objective Test Questions. Journal of Educational Research,
36, 16-20.
Plumlee, L.B. (2000). The Effect of Difficulty and Chance Success on Item-Test
Correlation and on Test Reliability. Psychometrika, 17, 69-86.
Ross, C.C. and Henry, L.K. (1939). The Relation Between Frequency of Testing and
Progress in Learning Psychology. Journal of Educational Psychology, 30, 604-
611.
Scott, Jr., N.C. (1990). ZIP Test: a Quick Locator Test for Migrant Children. Journal of
Educational Measurement, 7, 49-50.
Stanley, J.C. (2000). Test, Better Finder of Great Mathematics Talent Than Teachers
Are. American Psychologist, 31 (4), 313-314.
Stevin, S.K. (1969). Mathematics, The Man-made Universe. San Francisco: W.H.
Freeman and Company.
SPSS Inc.(2000). Statistical Package for the Social Sciences Personal Computer (spsspc)
version 3.0, Chicago, IL: SPSS Inc.
Traler, D., Jacobs, V.L. Selover, M.O., and Townsend, T.J. (1973). Cognition and
instruction. Hillsdale, NJ: Erlbaum.
Usman, K.O. and Harbor-Peters, V.F.A. (1998). Process Errors Committed by Senior
Secondary School Students in Mathematics. Journal of Science, Technology and
Mathematics Education.
Vande Linder, L. F. (1994). Does the Study of quantitative Vocabulary Improve Problem
Solving? Elementary School Journal, 65, 143-152.
West African Examinations Council (2000). Chief Examiner’s Report. Lagos: West
African Examinations council.
Wesman, A.G. (1949). Effect of Speed on Item-Test Correlation Coefficients. Educational
and Psychological Measurement, 9, 51-57.
WISC (2006). Summary of Test Statistics: What Do Those Numbers Mean?
http://testing.wisc.edu/WhatDoThoseNumbersMean.htm
5. Objective: Students will be able to use approximation in measurement.
Content: decimal places and significant figures; problems in mensuration involving
volume; area of land; distances; consumer arithmetic; games and athletics timing; etc.
Activities: interpretation of data such as population; rounding off in multiplication and
addition to a reasonable degree of accuracy; calculation using standard form, e.g.
(1.36 × 10⁻⁵) × (2.43 × 10⁰).
Remarks: this should be related to the pupils' work in Science and Geography; relate
calculation using standard form to the use of a calculating machine.
You are not allowed to write on the question paper. You shall return the
question paper along with your answer script. Write your name, sex (i.e male or
female), school and class very clearly on your answer script. Show clearly the
processes you use in solving the following questions. Answer all the questions. Do
not start answering the questions until you are told to start.
1. Write down the translation of the following statement into a numerical expression.
Subtract the sum of nineteen and three from the product of nineteen and
three and divide the result by seven.
2. Find one-sixth of the difference between the sum of sixteen and eleven
and the number three.
3. Write down the value of x, if 0.0000218 = 2.18 × 10ˣ
4. Write down the approximate value of 0.046487 to two significant figures.
5. Multiply 1.12 by 0.11 and leave your answer in standard form.
6. A quantity of food took 40 students 15 days to consume. How many days
will it take 25 students to consume the same quantity of food?
7. Write down the factors of x² − y².
8. Solve for x: 2/(x − 12) = 3/(x − 1)
12. Use the plans drawn in (a) and (b) below to sketch a cuboid and a cone
respectively.
__________________
(Complete the cuboid)
(a)
(Plan)
(b)
Drawn Drawn
(Side view) (Top view)
(Note: complete the plan of the cone and draw its side and top views in the
spaces provided).
13. Identify the figures that are similar, giving two reasons.
[Figures (1)–(5): triangles with marked angles of 80°, 60°, 40°, 50° and 30°.]
14. How many lines of symmetry does the figure drawn below have?
15. Using the diagram drawn below, find the side marked x.
[Diagram: right-angled triangle with a 30° angle and vertices labelled C and B.]
16. Using the diagram drawn below, find the side marked y.
[Diagram: right-angled triangle with hypotenuse 16 cm and an angle of 60°; y is the
side adjacent to the 60° angle.]
17. Calculate the angle marked θ in the diagram below.
[Diagram: right-angled triangle with sides 6 and 8.]
18. Write down four major steps in copying ∠ABC shown below.
The following are the examination marks scored by 15 students. Use the
information to answer questions 25 and 26.
5, 8, 0, 4, 5, 7, 8, 5, 11, 4, 7, 0, 4, 5, 7.
25. Calculate the modal mark.
26. Find the range of the distribution.
27. A bag contains 35 ripe fruits and 25 unripe ones. Calculate the probability of
selecting an unripe fruit.
28. In a class of 21 girls and 42 boys, what is the probability of selecting a
girl as prefect of the class?
29. The figure drawn below shows the family spending in a day.
[Pie chart with sectors: Rice, Potato, Meat, Beans, Yam.]
Which food item is the least expensive?
APPENDIX: D
INTERNAL CONSISTENCY RELIABILITY ESTIMATE OF THE MATHRET
KR-20 = (K/(K − 1))(1 − Σpq/S²) = (1.034)(1 − 0.1239) = (1.034)(0.8761) = 0.91
APPENDIX: E
INTERNAL CONSISTENCY RELIABILITY ESTIMATES OF THE MATHRET
SUBSCALE 1: NUMAP
K = 6; S² = 3.5; x = 91.5; Σpq = 0.8378
KR-20 = (6/5)(1 − 0.8378/3.5) = 1.2(1 − 0.2394) = 1.2(0.7606) = 0.91
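As a check on the arithmetic above, a minimal sketch of the KR-20 computation, fed with the NUMAP figures just given (k = 6, Σpq = 0.8378, s² = 3.5):

```python
def kr20(k, sum_pq, variance):
    # Kuder-Richardson formula 20: (k/(k-1)) * (1 - sum(pq)/s^2).
    return (k / (k - 1)) * (1 - sum_pq / variance)

print(round(kr20(6, 0.8378, 3.5), 2))  # -> 0.91, as computed above
```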
SUBSCALE 2: COMPUS
Item Pass Fail p q pq
1 73 37 .6636 .3364 .2232
2 80 30 .7273 .2727 .1983
3 79 31 .7182 .2818 .2024
4 75 35 .6818 .3182 .2169
5 82 28 .7455 .2545 .1897
6 74 36 .6727 .3273 .2202
7 78 32 .7091 .2909 .2063
8 81 29 .7364 .2636 .1941
9 77 33 .7 .3 .21
10 75 35 .6818 .3182 .2169
11 83 27 .7545 .2455 .1852
12 85 25 .7727 .2273 .1756
13 69 41 .6273 .3727 .2338
14 79 31 .7182 .2818 .2024
15 72 38 .6545 .3455 .2261
Σpq = 3.1011
SUBSCALE 3: MACOPS
KR-20 = (9/8)(1 − 0.1398) = 1.125(0.8602) = 0.97
APPENDIX: F
Outline of content/objective tested and item analysis data on the second
version of mathematics readiness test (MATHRET)
Item Content/objective tested D P Obj.
No
Number and Numeration
1 Interpretation of word problems into numerical expressions .74 .66 K
2 Writing down word problems into numerical expression .7 .72 Ucp
3 Approximation in measurement; calculation using standard .64 .71 K
form
4 Approximation in measurement and significant figures. .66 .68 K
5 Calculation using standard form. .17 .74 Ucp
6 Solving problems involving inverse proportion. .73 .67 Ucp
Algebraic processes
7 Factorization of algebraic expressions. .17 .7 K
8 Simple equations involving fractions .68 .74 Ucp
9 Using calculation method to solve problems on simultaneous .73 .7 Ucp
linear equations in two variables.
10 Knowledge of variation. .16 .68 K
11 Change of subject of a formula. .68 .75 Ucp
Geometry and mensuration
12 Drawing views and plans of common solids. .61 .77 Ucp
13 Identification of similar figures. .66 .63 K
14 Identification of similar shapes. .63 .72 Ucp
15 Calculation of lengths of triangles. .62 .65 Ucp
16 Length of right-angled triangles. .68 .84 Ucp
17 Angle associated with right-angled triangle .68 .82 Ucp
18 Knowledge of construction. .71 .85 K
19 Idea of construction. .73 .83 K
20 Comparison of estimates involving areas of similar figures. .62 .81 Ucp
21 Comparison of estimates involving volumes of similar figures. .68 .81 Dm
22 Calculation of volume of solid figures. .67 .71 Dm
Everyday Statistics
23 Concept of mean. .73 .73 Ucp
24 Concept of median. .62 .63 Ucp
25 Concept of mode. .66 .72 Ucp
26 Idea of range. .72 .7 Ucp
27 Concept of probability. .61 .72 Dm
28 Idea of probability. .66 .74 Dm
29 Knowledge of pie-chart. .18 .87 K
30 Concept of pie-chart. .69 .71 Ucp
P = (NR/NT)(100)
APPENDIX: H
Analysis of MATHRET Items. Class: SS1
Computation of the item Difficulty (ID) and Discriminating index (DI)
of the first version of the MATHRET.
S/N ID DI RMK
1 .68 .62 √
2 .56 .5 √
3 .59 .72 √
4 .55 .61 √
5 .64 .62 √
6 .71 .63 √
7 .61 .55 √
8 .59 .7 √
9 .65 .72 √
10 .71 .65 √
11 .58 .74 √
12 .51 .6 √
13 .6 .55 √
14 .62 .6 √
15 .71 .59 √
16 .52 .63 √
17 .47 .44 √
18 .59 .69 √
19 .54 .65 √
20 .64 .68 √
21 .57 .71 √
22 .55 .6 √
23 .6 .67 √
24 .7 .5 √
25 .58 .44 √
26 .49 .56 √
27 .6 .72 √
28 .52 .63 √
29 .47 .5 √
30 .51 .6 √
Item difficulty (ID) = (Ru + Rl)/(2N), and
Discriminating index (DI) = (Ru − Rl)/N, where
Ru = the number of upper achievers that scored the item correctly,
Rl = the number of lower achievers who scored the item correctly, and
N = the number in the group.
Key: √ = good item.
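The two formulas beside the table lend themselves to a short sketch. The upper- and lower-group counts below are hypothetical, since Appendix H reports only the resulting indices:

```python
def item_difficulty(ru, rl, n):
    # ID = (Ru + Rl) / (2N), per the formula above.
    return (ru + rl) / (2 * n)

def discrimination_index(ru, rl, n):
    # DI = (Ru - Rl) / N.
    return (ru - rl) / n

ru, rl, n = 26, 8, 30  # hypothetical counts for one item; n = group size
print(round(item_difficulty(ru, rl, n), 2),      # 0.57
      round(discrimination_index(ru, rl, n), 2))  # 0.6
# An item was retained when DI >= .20 and difficulty fell within .25-.75.
```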
APPENDIX: I
Names of schools sampled for the study.
1. St. Theresa's College, Nsukka, denoted by A
2. Boys' Secondary School, Orba, denoted by B
3. St. Cyprian's Girls' Secondary School, Nsukka, denoted by C
4. Girls' Secondary School, Obollo-Afor, denoted by D
5. Model Secondary School, Nsukka, denoted by E
6. Community Secondary School, Obimo, denoted by F
7. Community Secondary School, Obollo-Afor, denoted by G
8. Onward International School, Nsukka, denoted by H
9. Oxford Secondary School, Obollo-Afor, denoted by I
10. Boys' Secondary School, Aku, denoted by J
11. Boys' Secondary School, Ibagwa-Aka, denoted by K
12. Girls' Secondary School, Aku, denoted by L
13. Girls' Secondary School, Imiliki, denoted by M
14. Community Secondary School, Ohebe-Dim, denoted by N
15. Community Secondary School, Ede-Oballa, denoted by P
16. Community Secondary School, Umunko, denoted by O
17. Community Secondary School, Amala, denoted by Q
18. In-land Secondary School, Opi, denoted by R
19. Model Secondary School, Orba, denoted by S
APPENDIX: J
Mathematics Readiness Test (MATHRET) skills.
A. Comprehending Skills:-
a) Ability to understand the concept of inverse proportion.
b) Ability to identify similar figures
c) Ability to identify lines of symmetry
d) Ability to understand the procedure of construction/bisection of angle
e) Ability to understand when to use formula appropriately
f) Ability to identify the quantity of objects from figure drawn.
B. Process skills:
g) Ability to add numbers
h) Ability to multiply
i) Ability to subtract numbers
j) Ability to divide numbers
k) Ability to write numbers in standard form.
l) Ability to approximate numbers to a given number of significant figures.
m) Ability to factorize algebraic expression
n) Ability to multiply expression
o) Ability to add expressions.
p) Ability to sketch solid objects
q) Ability to find unknown side of a right-angled triangle.
r) Ability to find unknown angle of a right-angled triangle
s) Ability to compare areas/volumes of similar figures.
t) Ability to compare angles in a figure drawn.
u) Ability to balance equation.
C. Transformation skill:
(v) Ability to translate word problem into numeric
D. Carelessness skill:
(w) Ability to write down values or expressions which one has mastery always.
E. Encoding skill:
(x) Ability to draw accurately or write down the values in the diagrams
accordingly.
(y) Ability to write down the answers correctly (or with the appropriate signs,
where necessary).
The Mathematics Readiness Test process skills, the number of points
allocated to each level of skill, and the maximum frequency of the points
(errors) committed by individual students:
Comprehending skill --- 6 points
Transformation skill --- 1 point
Process skill --- 15 points
Carelessness skill --- 1 point
Encoding skill --- 2 points
Total --- 25 points
Maximum frequency of the 25 points (Errors) per script of a student
= 59 (see marking scheme: appendix: k)
Maximum frequency of the Errors for the 300 students
= 59 × 300 = 17700.
APPENDIX: K
Solution (MATHRET)
1. (19 × 3 − (19 + 3)) ÷ 7          M1 Transf (v)
   = (57 − 22) ÷ 7                  M1 process (h); M1 process (g)
   = 35 ÷ 7                         M1 process (j)
   = 5                              M1 process (j); A1 Encod (y)   6 marks
2. ((16 + 11) − 3) ÷ 6              M1 Transf. (v)
   = (27 − 3) ÷ 6                   M1 process (g)
   = 24 ÷ 6                         M1 process (j)
   = 4                              M1 process (j); A1 Encod (y)   5 marks
6. 40 students = 15 days; 25 students = ?
   (40 × 15) ÷ 25 = 24 days         M1 process (h); M1 Compr (a)
                                    A1 Encod. (y)   3 marks
7. (x + y)(x − y)                   M1 process (m)
                                    A1 Encod (y)   2 marks
8. 2(x − 1) = 3(x − 12)
   2x − 2 = 3x − 36                 M1 process (n)
   2x − 3x = 2 − 36                 M1 process (u)
   x = 34                           A1 Encod (y)   3 marks
9. x − 3y = 10 ………………… equ. (1)
   2x − y = 15 ………………… equ. (2)
   (1) × 1: x − 3y = 10 ………………… equ. (3)     M1 process (n)
   (2) × 3: 6x − 3y = 45 ………………… equ. (4)    M1 process (n)
   (4) − (3): 5x = 35               M1 process (i); M1 process (o)
   x = 35 ÷ 5 = 7                   M1 process (j)
   Substitute 7 for x in equ. (1) to get
   7 − 3y = 10
   −3y = 10 − 7                     M1 process (u)
   −3y = 3
   y = 3 ÷ (−3) = −1                M1 process (j)
                                    A1 Encod. (y)   8 marks
10. y ∝ 1/x, i.e. y = k/x, where k is a constant.
    8 = k/10
    k = 80                          M1 process (u)
    y = 80/x                        A1 Encod. (y)   2 marks
11. R² = 5t/3 (by squaring both sides)
    3R² = 5t                        M1 process (u)
    t = 3R²/5                       A1 Encod. (y)   2 marks
12. (a) Cuboid completed from its plan       M1 process (p); A1 Encod. (x)
    (b) Plan of the cone completed           M1 process (p); A1 Encod. (x)
        Side view drawn (a rectangle)        M1 process (p); A1 Encod. (x)
        Top view drawn (a circle without centre)  M1 process (p); A1 Encod. (x)
                                             8 marks
13. Figures (1) and (2) are similar triangles because their:
    (i) corresponding angles are equal;      M1 Compr (b)
    (ii) corresponding sides are in the same ratio.  A1 Encod (y)   2 marks
14. The figure has 5 lines of symmetry.      M1 Compr (c)   1 mark
15. sin 30° = 10/x                  M1 process (n)
    x = 10/sin 30°                  M1 process (q)
    = 10 ÷ ½                        M1 process (j)
    x = 20 cm                       A1 Encod (y)   4 marks
16. cos 60° = y/16
    y = 16 cos 60°                  M1 process (n)
    = 16 × ½                        M1 process (q); M1 process (h)
    = 8 cm                          A1 Encod (y)   4 marks
17. tan θ = 6/8
    = 0.75                          M1 process (j)
    θ = tan⁻¹ (0.75)                M1 process (r)
    θ = 37°                         A1 Encod (y)   3 marks
20. Ratio of areas = 3² : 7² = 9 : 49        M1 process (h)
    Area of the second rectangle = (49 × 270) ÷ 9   M1 process (j); M1 process (h)
    = 1470 cm²                      A1 Encod (y)   5 marks
21. The scale factor is 6 : 3 = 6/3 : 1      M1 process (j)
    = 2 : 1
    Vol. of the first cuboid = (2)³ × Vol. of the second cuboid
    = 8 × 1800                      M1 process (h); M1 process (s)
    = 14400 cm³                     A1 Encod (y)   4 marks
22. Volume of cone = (1/3)πr²h               M1 Compr (e)
    = (1/3) × (22/7) × (5/2)² × 4.2   (radius r = d/2 = 5/2)   M1 process (s)
    = (1/3) × (22/7) × (25/4) × 4.2          M1 process (h)
    = 27.5 cm³                      A1 Encod (y)   4 marks
23. The mean of the scores 9, 10, 7, n, 1, 3, 9 and 0 is 8:
    (22 + 7n)/7 = 8                 M1 process (g)
    22 + 7n = 8 × 7
    7n = 56 − 22                    M1 process (h)
    7n = 34                         M1 process (i)
    n = 34/7                        M1 process (j)
    = 4.86                          A1 Encod. (y)   5 marks
29. [Pie chart with sectors: Rice, Potato, Meat, Beans, Yam.]
    From the figure, meat is the least expensive.   M1 Compr (f); A1 Encod (y)   2 marks
30. [Pie chart with sectors: JS I, JS II (90°) and JS III (130°).]
    JS I = 360° − 130° − 90° = 140°, so the JS I section has the greatest number of students.
2 58 X 23 3354 784
3 58 X 56 X 3481 3136
4 29 58 X 841 3136
5 56 X 29 3136 841
6 56 X 59 X 2916 3136
7 56 X 23 3481 784
8 56 X 59 X 3481 3249
9 28 59 X 784 3481
10 58 X 28 3136 784
11 59 X 59 X 3481 3136
12 58 X 59 X 3364 3136
13 28 23 784 784
14 59 X 59 X 3481 3136
15 58 X 57 X 3364 3249
16 56 X 22 3136 784
17 59 X 57 X 3025 3249
18 23 56 X 784 3136
19 59 X 56 X 3481 3136
20 28 √ 23 784 784
21 57 x 59 X 3249 3481
22 28 57 X 784 2704
23 22 57 X 784 3136
24 58 X 58 X 3364 3364
25 28 59 X 784 3025
26 58 X 58 X 3364 3364
27 57 X 59 X 3249 3481
28 56 X 58 X 3136 3364
29 29 57 X 841 3249
30 58 X 57 X 3364 3249
31 56 X 59 X 2809 3481
32 23 58 X 676 3136
33 22 55 X 729 3025
34 59 X 59 X 3025 3481
35 56 X 58 X 3025 3364
36 28 56 X 784 3136
37 56 X 29 3136 841
38 29 58 X 841 3364
39 57 X 59 X 3249 3481
40 59 X 28 3481 784
41 57 X 23 3249 729
42 57 X 58 X 3249 3364
43 56 X 29 3236 841
44 58 X 59 X 3364 3481
45 56 X 58 X 3136 3364
46 58 X 59 X 3364 3025
47 59 X 58 X 3481 3364
48 58 X 22 3364 784
49 59 X 59 X 3481 3025
50 56 X 58 X 3136 3364
51 58 X 23 3364 729
52 52 X 59 X 2704 3481
53 23 56 X 784 2704
54 57 X 58 X 3136 3364
55 59 X 53 X 3025 2809
56 59 X 54 X 2809 2916
57 59 X 59 X 3481 3481
58 58 X 56 X 3249 2916
59 28 59 X 784 3481
60 56 X 57 X 3136 3249
61 28 58 X 784 3364
62 22 58 X 841 3025
63 58 X 22 3364 784
64 29 59 X 841 3136
65 58 X 56 X 3025 3136
66 23 57 X 676 3249
67 22 58 X 841 3364
68 57 X 23 3249 784
69 56 X 58 X 3136 3364
70 59 X 29 X 3025 841
71 23 59 X 729 3481
72 56 X 59 X 3136 3481
73 22 28 729 784
74 28 59 X 784 3481
75 58 X 57 X 3364 3249
76 59 X 27 2916 729
77 57 X 59 X 3249 3136
78 23 58 X 784 3025
79 28 58 X 784 3364
80 23 58 X 676 3136
81 58 X 58 X 2809 3364
82 29 59 X 841 3481
83 59 X 23 2704 625
84 56 X 23 2809 676
85 27 58 X 729 3364
86 34 X 59 X 900 3481
87 23 58 X 676 3364
88 27 57 X 729 3249
89 22 56 X 676 3136
90 55 X 59 X 3481 3481
91 58 X 28 3025 784
92 56 X 23 3136 729
93 59 X 59 X 2601 3481
94 59 X 29 3481 841
95 58 X 58 X 3364 3364
96 28 57 X 784 3249
97 22 59 X 729 3481
98 23 56 X 729 3136
99 59 X 59 X 3136 3481
100 28 58 X 784 3364
101 29 59 X 841 3481
102 23 56 X 784 3481
103 29 59 X 841 3481
104 28 58 X 784 3364
105 59 X 59 X 3481 3481
106 58 X 29 3025 841
107 59 X 59 X 3481 3481
108 28 59 X 784 3025
109 58 X 59 X 3364 3481
110 59 X 58 X 2916 3364
111 22 55 X 484 3025
Σy = 7840; Σy² = 418389; N = 152; ȳ₁ = 51.58
S.D. = √(Σy²/N − (Σy/N)²) = √(418389/152 − (7840/152)²)
     = √(2752.56 − 2660.50) = √92.06 = 9.59
S₂² = 91.97
APPENDIX: M
Standard deviation of raw scores (x and y) of errors committed by urban and
rural students respectively, as measured by MATHRET.

Urban students: N₂ = 144; Σx₂ = 6395; Σx₂² = 308672
S.D. = √(Σx₂²/N₂ − (Σx₂/N₂)²) = √(308672/144 − (6395/144)²)
     = √(2143.56 − 1972.22) = √171.35 = 13.09
S₂ = 13.09; S₂² = 171.35

Standard deviation (S.D.) of error scores (x₁) made by rural students:
N₁ = 156; Σx₁ = 8113; Σx₁² = 442496; x̄₁ = 52.01
S.D. = √(442496/156 − (8113/156)²) = √(2836.51 − 2704.67) = √131.84 = 11.48

N = 186; Σx = 8964; Σx² = 452640
S.D. = √(452640/186 − (8964/186)²) = √(2433.55 − 2322.62) = √110.93 = 10.53

N = 114; Σy = 5544; Σy² = 299594
S.D. = √(299594/114 − (5544/114)²) = √(2628.02 − 2365.03) = √262.99 = 16.22
S₂² = 262.99
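A minimal sketch of the working above, using the urban figures given in this appendix (N = 144, Σx = 6395, Σx² = 308672):

```python
import math

def sd_from_sums(n, sum_x, sum_x2):
    # S.D. = sqrt(sum(x^2)/N - (sum(x)/N)^2), matching the Appendix M working.
    return math.sqrt(sum_x2 / n - (sum_x / n) ** 2)

print(round(sd_from_sums(144, 6395, 308672), 2))  # -> 13.09, as above
```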
Everyday Statistics
26 Concept of mean. .73 .73 Ucp
27 Idea of mean. .21 .19 Ucp
28 Concept of median. .62 .63 Ucp
29 Concept of mode. .66 .72 Ucp
30 Idea of range. .72 .7 Ucp
31 Concept of probability. .61 .72 DM
32 Idea of probability. .66 .74 DM
33 Knowledge of pie-chart. .18 .67 K
34 Concept of pie-chart. .69 .71 Ucp
35 Idea of mode .2 .24 Ucp
36 Concept probability .18 .22 DM
P = (NR/NT)(100), where P = the percentage of pupils who answered the test item
correctly, NR = the number of pupils who answered the test item correctly, and
NT = the total number of pupils who attempted the test item.

Discriminating index (D) = (RU − RL)/N, where RU = the number of upper achievers
that scored the item correctly, RL = the number of lower achievers who scored the
item correctly, and N = the number in the group.
APPENDIX: S
Internal Consistency Reliability Estimate of the MATHRET on first version.
Item Pass Fail p q pq
1 70 10 .875 .125 .1093
2 68 12 .85 .15 .1275
3 71 9 .8875 .1125 .0998
4 65 15 .8125 .1875 .1523
5 60 20 .75 .25 .1875
6 72 8 .9 .1 .09
7 69 11 .8625 .1375 .1185
8 65 15 .8125 .1875 .1523
9 70 10 .875 .125 .1093
10 74 6 .925 .075 .0694
11 66 14 .825 .175 .1443
12 69 11 .8625 .1375 .1185
13 70 10 .875 .125 .1093
14 65 15 .8125 .1875 .1523
15 60 20 .75 .25 .1875
16 68 12 .85 .15 .1275
17 67 13 .8375 .1625 .1362
18 58 22 .8625 .1375 .1185
19 74 6 .925 .075 .0694
20 65 15 .8125 .1875 .1523
21 70 10 .875 .125 .1093
22 71 9 .8875 .1125 .0998
23 72 8 .9 .1 .09
24 64 16 .8 .2 .16
25 70 10 .875 .125 .1093
26 59 21 .7375 .2625 .1936
27 61 19 .7625 .2375 .1881
28 58 22 .8625 .1375 .1185
29 68 12 .85 .15 .1275
30 66 14 .825 .175 .1443
31 70 10 .875 .125 .1093
32 65 15 .8125 .1875 .1523
33 71 9 .8875 .1125 .0998
34 74 6 .925 .075 .0694
35 67 13 .8375 .1625 .1362
36 70 10 .875 .125 .1093
4.6251
K = 36, x = 67.14, S.D. = 4.64, S² = 21.55, Σpq = 4.6251
Subscale 1: NUMAP
Subscale 2: COMPUS
KR-20 = (17/16)(1 − 2.2554/12.90) = 1.0625(1 − 0.1748) = 1.0625(0.8252) = 0.88
Subscale 3: MACOPS
Item Pass Fail p q pq
1 70 10 0.875 .125 .1094
2 69 11 .8625 .1375 .1185
3 63 18 .7878 .2125 .1673
4 68 15 .85 .15 .1275
5 68 10 .85 .15 .1275
6 65 16 .8125 .1875 .1523
7 70 13 .875 .125 .1093
8 64 14 0.8 .2 .16
9 67 13 .8385 .1625 .1362
10 66 14 .825 .175 .1443
11 68 12 .85 .15 .1275
Time: 1½ hr.    Class: SS1
Instructions:
You are not allowed to write on the question paper. You shall return the
question paper along with your answer script. Write your name, sex (i.e male or
female), school and class very clearly on your answer script. Show clearly the
processes you use in solving the following questions. Answer all the questions. Do
not start answering the questions until you are told to start.
1. Write down the translation of the following statement into a numerical expression.
Subtract the sum of nineteen and three from the product of nineteen and
three and divide the result by seven.
2. Write down the product of three multiplied by one-fifth and two.
3. Find one-sixth of the difference between the sum of sixteen and eleven and
the number three.
4. Write down the value of x, if 2.1x = 5.8 x 10.
5. Write down the value of x, if 0.0000218 = 2.18 × 10ˣ
6. Write down the approximate value of 0.046487 to two significant figures.
7. Multiply 1.12 by 0.11 and leave your answer in standard form.
8. A quantity of food took 40 students 15 days to consume. How many days
will it take 25 students to consume the same quantity of food?
9. Write down the factors of x² − y².
10. Solve for x: 2/(x − 12) = 3/(x − 1)
11. Write down the factors of x² + y².
12. Use calculation method to solve the simultaneous linear equations.
x- 3y =10
2x –y = 15
215
(a) __________________
(Plan)  (Complete the cuboid)
(Note: complete the plan of the cone and draw its side and top views in the
spaces provided).
16. Identify figures that are similar, giving two reasons.
[Figures (1)–(5): triangles with marked angles of 80°, 60°, 40°, 50° and 30°.]
216
17. How many lines of symmetry does the figure drawn below have?
18. [Diagram: right-angled triangle with a 30° angle and vertices labelled C and B;
find the side marked x.]
19. Using the diagram drawn below, find the side marked y.
[Diagram: right-angled triangle with hypotenuse 16 cm and an angle of 60°.]
20. Calculate the angle marked θ in the diagram below.
[Diagram: right-angled triangle with sides 6 and 8.]
21. Write down four major steps in copying ∠ABC shown below.
[Diagram: angle ABC.]
23. Two similar rectangles have a pair of corresponding sides in the ratio 3 : 7.
If 270 cm2 is the area of the first rectangle, find the area of the second
rectangle.
24. Two similar cuboids have heights 6 cm and 3 cm. If the volume of the second
cuboid is 180 cm³, calculate the volume of the first cuboid.
25. Calculate the volume of a cone whose height is 4.2 cm and the diameter of
whose base is 5 cm (take π = 22/7).
26. Find the value of n if the mean of the scores 9, 10, 7, 1,3, 9, and 0 is 8.
27. Find n if the mean of the scores 7, 0, 3, n, 4, 10 and 5 is 6.
28. Calculate the median of the following set of scores.
2.5, 4.5, 3.1 2, 4.2, 2.5, 3.9, 4.3.
The following are the examination marks scored by 15 students. Use the
information to answer questions 29 and 30.
5, 8, 0, 4, 5, 7, 8, 5, 11, 4, 7, 0, 4, 5, 7.
29. Calculate the modal mark.
30. Find the range of the distribution.
31. A bag contains 35 ripe fruits and 25 unripe ones. Calculate the probability
of selecting an unripe fruit.
32. In a class of 21 girls and 42 boys, what is the probability of selecting a girl
as prefect of the class?
33. The figure drawn below shows the family spending in a day.
[Pie chart with sectors: Rice, Potato, Meat, Beans, Yam.]
Which food item is the least expensive?
Furthermore, teachers should build into the students those prerequisite skills that
could enable them to sketch solid objects, such as knowledge of terms like plan,
edges, top view, side view, front view, parallel projection, proportional drawing,
and orthogonal projection (see Appendix K, item 12). If teachers build these terms
into the students, future students who might otherwise commit the error of inability
to sketch solid objects will tend to avoid it, thereby enhancing the students'
readiness for senior secondary Mathematics learning.
7. Students have shown inability to draw accurately or write down the values in
the diagrams accordingly by recording a total of 593 frequencies of errors
and inability to find unknown side of a right- angled triangle with 579
frequencies of errors committed on this. In remediation of these errors, it is
suggested that teachers should teach the students that the similarity of triangles
rests on the equality of their corresponding (1) angles and (2) the ratios of their
corresponding sides. In the case of, say, a pie-chart, as in items 29 and 30 of
the MATHRET, the size of the space occupied by each item in a pie-chart
determines which item is larger (in size or quantity) than the other. Similarly,
the size of the angles in a pie-chart determines which sector is larger than the
other. All this information should be inculcated in the students; with it, students
who could otherwise fall victim to this error will avoid it through mastery of the
skill. Teachers should introduce the idea of SOHCAHTOA in teaching the
students how to find the unknown side of a
right-angled triangle with one unknown side, one side given and no included
angle, where
SOH: sin θ = Opposite/Hypotenuse;  CAH: cos θ = Adjacent/Hypotenuse;
TOA: tan θ = Opposite/Adjacent.
When students master the use of this principle, those who would otherwise have
failed problems on this aspect will be corrected. This step will raise the
mathematical readiness level of JS 3 students aspiring to begin Mathematics
learning at senior secondary one level.
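As a minimal worked illustration of the SOHCAHTOA principle, take item 19 of the test reproduced above (a right-angled triangle with a 60° angle and a 16 cm side); it is assumed here, since the diagram is not reproduced, that the 16 cm side is adjacent to the 60° angle and y is the side opposite it:

\[
\tan 60^{\circ} = \frac{y}{16} \;\Rightarrow\; y = 16\tan 60^{\circ} \approx 16 \times 1.732 \approx 27.7\ \text{cm}.
\]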
Students also committed errors on MATHRET item 17, in finding the unknown angle
of a right-angled triangle when two sides (the opposite and the adjacent) are
given. The ability to use logarithm tables and/or a calculator should be taught
as a prerequisite, so that students know that 37° = tan⁻¹(0.75) (see Appendix K,
item 17).
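A short worked check of this prerequisite, taking as a hypothetical instance (consistent with the ratio 0.75 quoted above) a triangle with opposite side 6 and adjacent side 8:

\[
\tan\theta = \frac{6}{8} = 0.75 \;\Rightarrow\; \theta = \tan^{-1}(0.75) \approx 36.87^{\circ} \approx 37^{\circ}.
\]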
12. Students demonstrated (a) inability to add expressions, with 503 frequencies
of errors, and (b) inability to compare angles in a figure. Such students should
be acquainted with rudiments/skills such as the rules of signs: subtracting a
positive from a negative number keeps the result negative (e.g. −4 − 3 = −7),
while subtracting a negative number becomes addition, so the result may be
negative or positive depending on the sizes involved (e.g. −4 − (−3) = −4 + 3 = −1,
but −5 − (−8) = −5 + 8 = 3). Based on this prerequisite knowledge, students can
then solve expressions such as item 9 in the MATHRET. In solving item 9,
students can easily see that
equation (4) minus equation (3) gives −3y − (−3y) = −3y + 3y = 0, 6x − x = 5x,
and 45 − 10 = 35 (see Appendix K, item 9). The full elimination is worked below.
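A sketch of the full elimination, assuming (consistent with the figures quoted above) that equation (3) is x − 3y = 10 and equation (4) is the second equation, 2x − y = 15, multiplied through by 3:

\[
\begin{aligned}
x - 3y &= 10 \qquad (3)\\
6x - 3y &= 45 \qquad (4)\quad \text{(i.e. } 3\times(2x - y = 15)\text{)}\\
(4) - (3):\quad 5x &= 35 \;\Rightarrow\; x = 7\\
7 - 3y &= 10 \;\Rightarrow\; y = -1
\end{aligned}
\]

Check: 2(7) − (−1) = 14 + 1 = 15, as required.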
Students should be informed that (a) the angle formed by two perpendicular lines
is 90°; (b) the sum of angles at a point is 360°; and (c) the sum of angles on a
straight line is 180°. Considering MATHRET item 30 (see Appendix K), students
ought to know that since JS II was indicated with 90° and JS III with 130°, then
JS I should be 360° − 130° − 90° = 140°. They should then compare the three
angles that make up the 360° at the point, i.e. 90°, 130° and 140°, and will
realize that 140° is the greatest angle and that it represents the JS I section.
The JS I section therefore has the greatest number of students (see Appendix K,
item 30).
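A brief supporting check, on the standard assumption that each sector angle of the chart is proportional to the number of students it represents:

\[
\frac{140^{\circ}}{360^{\circ}} = \frac{7}{18} \approx 38.9\%, \qquad \frac{130^{\circ}}{360^{\circ}} \approx 36.1\%, \qquad \frac{90^{\circ}}{360^{\circ}} = 25\%,
\]

so the JS I sector indeed holds the largest share.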
13. Inability to (a) understand the concept of inverse proportion and
(b) identify similar figures were among the areas in which students committed
errors less frequently, with 469 and 478 frequencies of errors respectively.
Although the students showed comparative mastery here, remediation is still
needed both for those who committed these errors and for those who may yet fall
victim to them. Teachers should introduce illustrative examples in explaining
the concept of inverse proportion. For instance, a teacher may introduce the
idea that if a plot of land takes 2 men 5 days to cultivate, the same plot will
take more men fewer days to cultivate; that is, instead of "more men, more
days", the inverse relation is "more men, fewer days" (a worked instance follows
below). Teachers should introduce the idea of lines of symmetry for different
solid objects before teaching students the similarity of solid figures, such as
the similar triangles found among the MATHRET items. For the similar triangles
in the MATHRET, the two major considerations for their similarity are that
(1) the corresponding angles must be equal and (2) the corresponding sides must
be in the same ratio (see Appendix K, item 13).
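A worked instance of inverse proportion, using the teacher's own figures above and, as a second illustration, item 8 of the test reproduced earlier (40 students finish a quantity of food in 15 days; how long would 25 students take?). Both rest on the assumption that the total amount of work (or food) is fixed:

\[
2\ \text{men} \times 5\ \text{days} = 10\ \text{man-days} \;\Rightarrow\; 5\ \text{men take } \tfrac{10}{5} = 2\ \text{days};
\]
\[
40 \times 15 = 600\ \text{student-days} \;\Rightarrow\; \tfrac{600}{25} = 24\ \text{days}.
\]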
14. The last two skills on which students showed evidence of mastery were
(1) the ability to identify lines of symmetry, with a record of 465 frequencies
of errors, and especially (2) the ability to write down values or expressions
correctly, with a record of 459 frequencies of errors. Although students
demonstrated evidence of mastery of these two skills in their scripts, there is
still a need for teachers to state generally that a line of symmetry divides a
figure into identical parts; for example, a square has four lines of symmetry
while an isosceles triangle has only one.