Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th, 2013
ISBN: 978-602-19590-4-6
Proceedings of
INTERNATIONAL CONFERENCE ON
MATHEMATICAL AND COMPUTER SCIENCES
PREFACE
This event is a forum for mathematicians and computer scientists to discuss and exchange
information and knowledge in their areas of interest. It aims to promote activities in research,
development, and application, not only in mathematics and computer science but also in all
areas related to those two fields.
These proceedings contain selected papers from the International Conference on Mathematical and
Computer Sciences (ICMCS) 2013. ICMCS 2013 is the inaugural international event organized
by the Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of
Padjadjaran, Indonesia.
In these proceedings, readers will find the accepted papers organized into three track sections
based on research interest: (1) Mathematics, (2) Applied Mathematics, and (3) Computer
Sciences and Informatics.
We would like to express our gratitude to all keynote and invited speakers:
Prof. Dr. M. Ansjar (Indonesia)
Assoc. Prof. Dr. Q. J. Khan (Oman)
Prof. Dr. Ismail Bin Mohd (Malaysia)
Prof. Dr. rer. nat. Dedi Rosadi (Indonesia)
Prof. Dr. T. Basarudin (Indonesia)
Assoc. Prof. Abdul Thalib Bin Bon (Malaysia)
Prof. Dr. Asep K. Supriatna (Indonesia)
We would also like to express our gratitude to all technical committee members, who have
given their efforts to support this conference.
Finally, we would like to thank all of the authors and participants of ICMCS 2013 for their
contributions. We look forward to your participation in the next ICMCS.
Editorial Team
PROCEEDINGS
EDITORS
PROCEEDINGS
REVIEWERS
PROCEEDINGS
SCIENTIFIC COMMITTEE
PROCEEDINGS
ORGANIZING COMMITTEE
TABLE OF CONTENTS
PREFACE ................................................................................................................................ iii
EDITORS .................................................................................................................................. iv
REVIEWERS ............................................................................................................................. v
A Noble Great Hope for Future Indonesian Mathematicians, Muhammad ANSJAR ............ 1
Teaching Quotient Group Using GAP, Ema CARNIA, Isah AISAH &
Sisilia SYLVIANI ..................................................................................................................... 60
Network Flows and Integer Programming Models for The Two Commodities
Problem, Lesmana E. .............................................................................................................. 77
A Property of the z⁻¹F_m[[z⁻¹]] Subspace, Isah AISAH, Sisilia SYLVIANI ............................. 119
Controlling Robotic Arm Using a Face, Asep SHOLAHUDDIN, Setiawan HADI ................ 202
Image Guided Biopsies For Prostate Cancer, Bambang Krismono TRIWIJOYO ............ 214
Mining Co-occurrence Crime Type Patterns for Spatial Crime Data, Arief F HUDA,
Ionia VERITAWATI ........................................................................................................... 267
KEYNOTE SPEAKER
A Noble Great Hope for Future Indonesian Mathematicians
Muhammad ANSJAR
Abstract: Mathematics has interacted strongly with human life since ancient times, all over the
world. It has also contributed to developing the human mind. Mathematics is never absent from
efforts to develop new science, technology, and engineering to improve the quality of human life.
Beyond the contribution of the body of mathematical knowledge to the development of science,
technology, and engineering, there are values in mathematics worth adopting for a pleasant and
peaceful life in a community or a country.
It is fair to imagine that one day Indonesia, with its great population, will contribute
a significant number of mathematicians who actively participate with other scientists and engineers
in developing modern science and technology in this country, acknowledged by other
modern countries.
This dream, better called A Noble Hope, will come true if Indonesian mathematicians
start to work today, perhaps beginning with small-scale, well-calculated actions and then
slowly but surely expanding. It may take decades, but it might also come sooner.
Two main programs could be followed.
Program one concerns the development of graduate research in mathematics. First, it is
necessary to strengthen and improve graduate and undergraduate mathematics programs in
all departments. These activities should run along the line of, or parallel to, existing government
programs, besides proposing new ones.
Program two concerns all levels of pre-university mathematics education.
Mathematicians should establish contacts with groups of mathematics teachers. Through these
contacts, mathematicians can help teachers master and correctly understand the concepts they
teach, accustom them to mathematical ways of thinking and reasoning, and help them adopt
the values in mathematics. This will enable teachers to make their students also understand
the concepts correctly, and to introduce mathematical thinking and reasoning gradually.
Teachers could also familiarize their students with the values in mathematics. However, this
should be done within whatever curriculum and teaching methods are in use.
The aim is purely to improve and correct the mathematical background of the teachers,
so that they can give their students a proper and correct understanding of the mathematical
concepts being learned, without interfering with teaching practice. The most hoped-for
result is a broad improvement of pre-university mathematics education, however slow it may be.
This would also guarantee the continuation of the first program.
Meanwhile, mathematicians should continually give input to the government and the
community about correct mastery of mathematics, and should welcome, and participate
actively in, any invitation to improve mathematics education.
A strong message behind these programs is the responsibility of every Indonesian
mathematician for improving the quality of pre-university mathematics education.
Keywords: mathematics and human life, mathematics and the human mind, mathematics and
science and technology, a noble hope, values in mathematics, improving the quality of
pre-university mathematics education.
1. Introduction.
The main desire of Indonesians, as of the people of all nations, is to live in prosperity and be
respected among the nations. Continuous effort by all citizens, generation after generation, is the main
key to achieving this goal. High-quality education at all levels is the only basic and effective prescription
to prepare each generation to carry this responsibility.
Mathematics has interacted with human life since its birth, a date unknown to this day, and will
continue to do so into the future. Mathematics has contributed to human life, culture, and
humanity. On the other hand, the demand to fulfill the needs of human life has triggered various
developments in mathematics.
The pyramids of ancient Egypt (around 2600 B.C.) show that the ancient Egyptians
were already using some mathematical concepts known today. Restoring land boundaries covered by
the mud brought by the annual flood of the Nile may have triggered the formulation of the basic
concepts of present-day geometry, whatever form that formulation took. However, this is far earlier
than the estimated dates of written ancient Egyptian mathematics, known from the Moscow
Mathematical Papyrus and the Rhind Papyrus. Both papyri, discovered in 1858, date from about
1700 B.C. and contain problems and their solutions. It is most likely that the problems relate to
work faced by the people as far back as 3500 B.C.; Egyptian mathematics must have existed since that time.
At the same time, mathematics also developed in Babylonia. Babylonian mathematics was related
not only to astronomy, as in Egypt, but also to various daily activities such as counting,
trading, and so on. The existence of dams, canals, irrigation works, and other engineering works in Babylonia also
suggests that they were using mathematics, although possibly in much simpler forms. It is
probably more precise to say that engineering work created mathematics in Babylonia.
In 332 B.C., Egyptian and Babylonian mathematics merged with Greek mathematics after
Alexander the Great conquered Egypt and Babylonia. From then on, mathematics belonged to the Greeks
and developed until about the year 600.
Meanwhile, mathematics also grew and developed independently in China from 1300 B.C. Its first
contact with the rest of the world is estimated at around 400 B.C. Mathematics in China developed
in response to the needs of trading, government administration, architecture, the calendar, and so on.
However, mathematics as an organized, independent, and reasoned discipline was first introduced
in the Classical Greek period, from 600 to 300 B.C. The Greeks made mathematics abstract, so that an
idea is processed only by the human mind. They insisted on deductive proof; this is one of the great
contributions of Greek mathematics. The Greeks of the Classical period, followed by those of the
Alexandrian period, also created many other foundations of current mathematics. Euclid created the
geometry known as Euclidean geometry in the Alexandrian period.
The interaction between mathematics and real phenomena, as introduced in Greek mathematics,
is another vital contribution of the Greeks. The Greeks identified mathematics as an abstraction of the
physical world and saw in it the ultimate truth about the structure and the design of the universe;
they considered mathematical concepts the abstract form of nature. The phrase
'mathematics is the essence of nature's reality' became a doctrine. Mathematical equations express
various real phenomena, in what we know today as mathematical models of those phenomena.
The Greeks also identified mathematics with music, and even considered mathematics an art. They
felt and saw beauty, harmony, simplicity, clarity, and orderliness in mathematics.
The vitality of Greek mathematical activity began to decline in the Christian era, especially
after the Alexandrian Greeks in North Africa were defeated by the Roman Empire. The situation worsened
when the Arab kingdom conquered Alexandria in 640. A century later, however, the Arab kingdom
opened its doors and invited Greek and Persian scientists to work in the kingdom. Greek mathematics
bloomed again as Arab mathematics. The Arab mathematical community translated and
completed the earlier work of Greek mathematicians, which later became the source for European
mathematics, replacing original manuscripts that had gone missing. Mathematics in the Arab world
was practiced through astronomy, to determine the prayer and fasting calendar, the direction of the
qibla for prayer, and so on.
Outside Greece and Italy, mathematics began to develop in Europe only around the 12th
century, after churches were established. In church circles, learning mathematics was considered
relatively important. With its emphasis on deductive reasoning, learning mathematics was regarded as an
exercise in debating and arguing, abilities a priest needs in theology and in spreading the religion.
Besides, arithmetic was considered a science of numbers applied to music, geometry a study of
stationary objects, and astronomy a study of moving objects.
In the period 1400–1600, the humanists studied abstract mathematics together with physics,
architecture, and other sciences to support the development of their works of art. They began to use
perspective in the fine arts, a direct involvement of mathematics. They wrote various books on
mathematics for the arts, some of which could be categorized as mathematics books.
By the end of the 16th century, the role of mathematics in the sciences, especially astronomy, was
increasing. Copernicus and Kepler strongly believed in laws of astronomy and mechanics obtained
through mathematics. The heliocentric concept replaced the geocentric concept in astronomy. At the beginning
of the 17th century, in 1610, Galileo wrote a famous statement:
"Philosophy [nature] is written in the great book which ever lies before our eyes – I mean
the universe – but we cannot understand it if we do not first learn the language and grasp
the symbols in which it is written. The book is written in the mathematical language, and the
symbols are triangles, circles and other geometrical figures, without whose help it is
impossible to comprehend a single word of it; without which one wanders in vain through
a dark labyrinth."
This statement agrees with current perceptions. A mathematical model is a mathematical representation
of nature or of other real phenomena. We can explore nature as well as other real phenomena and
solve their problems mathematically through the corresponding mathematical models. Descartes even said
that the essence of science is mathematics.
Newton and Leibniz created calculus independently in the 17th century; it is considered the greatest
creation in mathematics after Euclidean geometry. Work in calculus on physical problems
led to the creation of ordinary differential equations, later extended to partial differential equations.
These are among the most powerful mathematical models for solving real-world problems, beginning
with Newton's law of motion.
The interaction between mathematics and various activities for the benefit of human life has
increased in modern times through applications in science, technology, and engineering, as
well as in art and the social sciences. At one time, physics needed a particular function with properties
considered impossible at the time. The function, familiar today as the Dirac δ function, is
impossible as a conventional function: it is zero along the whole real axis except
at the origin, yet integrates to 1 over the whole axis. To support the existence of such a
function, the theory of generalized functions was created. This function opened a new era in quantum
mechanics in modern physics, which has provided many benefits for human life. The application of
mathematics to winning World War II was the starting point of operations research, which now
supports many activities related to better living. The genetic algorithm
is a useful method in engineering, based on an abstract mathematical expression of the theory of
genetics in biology. The finite element method is another powerful method in structural
engineering; its formulation is based on physical views of a structure. Through abstract
mathematical formulation, the method is also applicable as a powerful method in fluid dynamics.
Mathematics has also played a significant role in creating and developing new sciences and
technologies, such as nanoscience, nanotechnology, and information technology, which also provide
valuable contributions to human life. We must not neglect the progress of computer science, side by
side with numerical analysis. Mathematical computation provides strong support to science,
technology, and engineering, leading to prosperous living, as well as to mathematics itself.
When the Classical Greeks introduced mathematics as an abstract concept, they meant that ideas in
mathematics are the results of processes of the human mind. The Greek mathematicians insisted on
deductive proof. In the last century, Freudenthal stated that mathematics is not only a body of knowledge
but also a human activity; considering human activity also means considering the human mind. Therefore,
this statement agrees with the earlier statement of Richard Courant, who called mathematics an
expression of the human mind. Courant's full statement is as follows.
This statement points out that mathematical thinking is active and dynamic thinking. Strong reasons
always support every statement. Thinking in mathematics aims not just at a perfect outcome, but at an
outcome with aesthetic perfection. Strong intuition should accompany logical thinking. Besides paying
comprehensive attention to existing situations, mathematical thinking also looks forward to
building something new. Besides aiming at the general situation, mathematics never neglects
particular cases. It is common to say that mathematical thinking is logical, critical, and systematic,
as well as consistent, creative, and innovative. Clearly, this way of thinking is also of great worth
in real life.
This way of thinking has built the strong structure of mathematical theories, which keeps
mathematics stable through its development. Each mathematical theory consists of a chain of neatly
ordered truths known as axioms, theorems, lemmas, propositions, and so on. There are no contradictions
among the truths in mathematics, and they are obeyed consistently. A truth in mathematics is a relative
truth: a statement becomes a new truth if it is a logical consequence of previous truths, or at least does
not contradict them. An axiom, however, is an initial truth, one of the bases of other truths, accepted
without proof.
It has been shown that the general wish of every nation is to live in prosperity and be respected
among the world community. To achieve this, mastery of science, technology, and engineering is
imperative for every nation. Meanwhile, mathematics has played significant roles in the
development of science and technology, has interacted with real life since early history, and has
contributed to the development of the human mind throughout history.
Therefore, it is a great hope, or probably a noble great hope, that one day Indonesia, with
its enormous population, will contribute a significant number of mathematicians who participate directly
in the modern development of science, technology, and engineering in this country. One may say that
this is only a nice dream. However, it is a dream that may come true, although frankly it is not easy
and may take a very long time. It needs a very strong will, followed by continuous great effort
and tireless work over a long period. It may take decades, but it may take less. This is a
challenge for Indonesian mathematicians: a challenge to reach what one may call A Noble Great
Hope.
If there is a will, the effort has to start by improving and intensifying research activities,
improving undergraduate and graduate mathematics programs, and, most important,
improving pre-university mathematics teaching at all levels.
This may sound like an old song that so far nobody can sing. However, there are roles for
individual mathematicians: good mathematicians with graduate or undergraduate degrees. They could
work along the lines of government policy and programs and provide indirect but
valuable support. These roles should be played continuously and gradually more intensively, and
will benefit our education over time.
This program should start with standardizing the mathematical knowledge and abilities
of the mathematicians in each mathematics department in every university, as a prerequisite for
intensifying research programs.
Existing research activities should be pushed more intensively, and whenever possible new
well-planned proposals should be designed. Whenever possible, collaborative or joint research should be
started with colleagues in any branch of science or engineering. Mathematicians could contribute, for
example, through modeling, identifying mathematical aspects of a problem, or introducing mathematical
tools and computation to solve it.
To make this more feasible in the long run, the undergraduate and graduate mathematics
curriculum should allow, and even encourage, students to take one or two courses in other branches of
science or engineering. Mathematics departments should establish cooperation and good relations with
related departments.
If this program runs as expected, we hope that before very long a small group of mathematicians,
engineers, or other scientists will come out with respectable results. This group, as an embryo, should
develop in quality and quantity, and several other embryos should grow in the near future.
In this case, mathematicians work completely within the framework of government
programs, with some innovations.
4.2. A Noble Great Hope program 2: Improving mathematics in pre-university education
The weakness of mathematics in current pre-university education is a reality we have to admit. This
situation has weakened efforts to develop university mathematics education.
The main factor weakening pre-university mathematics education at all levels is the
teachers. There are very few teachers, and most are not properly equipped with the
mathematical knowledge they have to teach; many do not even correctly understand the
concepts they teach. This must not continue. They must get help; they must and need to
understand correctly the concepts they are teaching. With this help, they will be able to make their
students also understand correctly, free from misconceptions.
Besides, teachers must become accustomed to thinking and reasoning mathematically. They should also
understand some of the values in mathematics appropriate to real life, so that they can pass these on to
their students.
This could be done if an individual or a small group of mathematicians organized regular
meetings with small groups of teachers. The meetings would be devoted entirely to the correct
understanding of the concepts to be taught, to mathematical thinking and reasoning, and to
understanding the values in mathematics. They would thus have nothing to do with the curriculum
or the teaching methods being adopted.
The activities should extend to many more groups, while the old groups are closely
monitored. The emphasis on small groups is purely for effectiveness.
This is a contribution Indonesian mathematicians can make to improving pre-university mathematics
education without interfering with government programs.
Meanwhile, mathematicians should provide regular information on mathematics to the
government and society, for instance comments on school mathematics books, popular clarifications
about mathematics, and so on.
Improving pre-university mathematics education should not be left to the government alone.
Mathematicians have an obligation to play a role outside government programs.
5. Concluding remarks.
This paper should contain more analysis, especially of pre-university mathematics education;
likewise, Noble Great Hope programs 1 and 2 should be outlined in more detail. My illness,
however, restricts me from working much. Therefore, for program 2, everybody should arrange the
activities individually.
I thank and highly appreciate the understanding and patience of the committee.
Finally, I hope some small things can still be drawn from this short paper. I apologize for the
imperfections of this paper due to my illness.
INVITED SPEAKERS
Abstract
In this paper, we propose an idea and a method for managing the convergence of
Newton's method when its iteration process encounters a local extremum. The idea is to build the
osculating circle at a local extremum and use its radius, known as the radius of curvature, as an
offset from the local extremum: the sum of the local extremum and the radius of curvature at that
extremum is taken as the initial guess for finding a root close to that extremum. Several examples
demonstrating that our idea succeeds in fulfilling the aim of this paper are also given.
1. Introduction
One of the most frequently occurring problems in scientific work is to find the roots of equations of
the form
f(x) = 0. (1)
Iterative procedures for solving (1) are routinely employed. Starting with the classical Newton's
method, a number of root-finding methods have come to exist, each of which has its
own advantages and limitations.
Newton's method for root finding is based on the iterative formula
x_{k+1} = x_k − f(x_k)/f′(x_k). (2)
Newton's method displays fast quadratic convergence near the root, while it requires evaluation of
the function and its derivative at each step of the iteration.
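As a point of reference, iteration (2) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name, tolerance, and iteration cap are our own choices:

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # close enough to a root
            return x
        d = df(x)
        if d == 0:                 # derivative vanishes: the method stalls
            raise ZeroDivisionError("f'(x) = 0: Newton's method stalls")
        x = x - fx / d
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.5)
```

Starting close to the root, the iterates roughly double their correct digits per step, which is the quadratic convergence referred to above.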
However, when the derivative evaluates to zero, Newton's method stalls ([1]).
Newton's method also faces obstacles when the derivative takes low values: the iteration
shoots away from the current point and may converge to a root far from the intended domain.
For certain forms of equations, Newton's method diverges or oscillates and fails to converge
to the desired root. We observe these obstacles by considering the function
f(x) = x³ − x² − x + 3,
whose graph is given in Figure 1 ([2]). If we start the iteration at x₀, where x₀ is a fixed number in
the interval (1.5, 1.6), we obtain an infinite sequence
x₀, x₁, x₀, x₁, …
which does not converge to x*.
Figure 1: the oscillating sequence x₀, x₁, x₀, x₁, …   Figure 2: iteration starting at x₀ = 0.999…
If we start at x₀ = 0.999…, as shown in Figure 2, we may get an x₁ that grows without bound
(exceeding the largest representable computer number), so the algorithm cannot proceed.
In this paper, we would like to compute all the zeros of a function whose graph is much like the
one in Figure 3.
2. Curvature of a Function
Curvature measures how sharply a curve bends. We would expect the curvature to
be 0 for a straight line and large for curves that bend sharply. If we move along a curve, the
direction of the tangent line does not change as long as the curve is flat; its direction changes
where the curve bends, and the more the curve bends, the more the direction of the tangent line
changes. Since the search path of Newton's method depends on the tangent line at each iteration,
we are led to the following definition and theorems, which are taken from [2].
Definition 1
Let the curve C be given by the differentiable vector function f(t) = f₁(t)i + f₂(t)j, and let φ(t)
denote the direction of f′(t).
(i) The curvature of C, denoted κ(t), is the absolute value of the rate of change of direction with
respect to arc length s, that is,
κ(t) = |dφ/ds|; note that κ(t) ≥ 0.
(ii) The radius of curvature ρ(t) is defined by
ρ(t) = 1/κ(t), if κ(t) ≠ 0.
Theorem 2
If T(t) denotes the unit tangent vector to f, then κ(t) = |dT/ds|.
Theorem 3
If C is a curve with equation y = f(x), where f is twice differentiable, then
κ = |f″(x)| / (1 + f′(x)²)^(3/2).
According to [3], when the curvature κ(t) > 0, the center of curvature lies along the direction of
N(t) at distance 1/κ from the point f(t); when κ(t) < 0, it lies along the direction −N(t) at
distance 1/κ from f(t). In either case, the center of curvature is located at
f(t) + (1/κ(t)) N(t).
The osculating circle, when κ ≠ 0, is the circle centered at the center of curvature with radius 1/κ,
which is called the radius of curvature. The osculating circle approximates the curve locally up to
second order (the illustration is in Figure 4).
Figure 4: the osculating circle of a curve y = f(x), with normal direction N.
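The radius of curvature from Theorem 3 is straightforward to compute once f′ and f″ are available. A small sketch (the helper function is our own, not from the paper; the derivatives are supplied by hand):

```python
def radius_of_curvature(df, d2f, x):
    """rho = 1/kappa, with kappa = |f''(x)| / (1 + f'(x)^2)^(3/2) (Theorem 3)."""
    kappa = abs(d2f(x)) / (1.0 + df(x) ** 2) ** 1.5
    if kappa == 0:
        raise ValueError("curvature is zero: radius of curvature is undefined")
    return 1.0 / kappa

# For f(x) = x^2 at its extremum x = 0: f' = 0, f'' = 2, so kappa = 2 and rho = 1/2
rho = radius_of_curvature(lambda x: 2 * x, lambda x: 2.0, 0.0)
```

At a local extremum f′(x) = 0, so the formula reduces to κ = |f″(x)|, which is why the radius shrinks when the extremum is sharply curved.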
Figure 5: initial points x_k* − ρ and x_k* + ρ on either side of a local extremum x_k*.
Basically, in order to make Newton's method converge to x*, a zero of the function f, we need an
initial estimate close enough to x*. We therefore seek a number ρ such that x_k* ± ρ becomes the
best initial point for Newton's method. We take ρ to be the radius of curvature of f(x) at x_k*, so
that x_k* ± ρ is the best initial estimate when we use
Newton's method to find a root of f(x). We will prove that |x_k* ± ρ − x*| is the radius of the
largest interval around x* within which the application of Newton's method to any point of the
interval converges to x*.
Definition 4 ([4])
The function f : D ⊂ R → R is Lipschitz continuous with constant γ on an interval D, written
f ∈ Lip_γ(D), if for every x, y ∈ D,
|f(x) − f(y)| ≤ γ |x − y|.
For the convergence of Newton's method, we need to show that f′ ∈ Lip_γ(D). This condition has
been shown in [4] through the following lemma.
Lemma 5 ([4])
If (i) f : D ⊂ R → R for an open interval D, and (ii) f′ ∈ Lip_γ(D), then for any x, y ∈ D,
|f(y) − f(x) − f′(x)(y − x)| ≤ γ (y − x)² / 2.
For most problems, Newton's method converges q-quadratically to the root of one nonlinear
equation in one unknown. We shall now state the fundamental theorem of numerical mathematics.
Theorem 6 ([4])
If (i) f : D ⊂ R → R for an open interval D, (ii) f′ ∈ Lip_γ(D), (iii) for some ρ > 0,
|f′(x)| ≥ ρ for every x ∈ D, and (iv) f(x) = 0 has a solution x* ∈ D, then there is some η > 0 such
that whenever |x₀ − x*| < η, the sequence {x_n} generated by Newton's method is well defined,
converges to x*, and satisfies |x_{n+1} − x*| ≤ (γ/2ρ) |x_n − x*|².
Proof
Let ρ̂ be the radius of curvature of f(x) at xk*. Let η̂ be the radius of the largest interval around x* that is contained in D, and define
η = min{η̂, 2ρ/γ}.
We will show by induction that for n = 0, 1, 2, …, equation (4) holds, and
|xₙ₊₁ − x*| ≤ (γ/(2ρ))|xₙ − x*|² ≤ ½|xₙ − x*|.
Take η̂ = |xk* + ρ̂ − x*| as the radius of the largest interval around x* ∈ D, and let x₀ = xk* + ρ̂ be an initial point which is a lower or an upper endpoint of (x* − η̂, x* + η̂). The proof simply shows at each iteration that the new error |xₙ₊₁ − x*| is bounded by a constant times the error the affine model makes in approximating f at x*.
For n = 0, we have

x₁ − x* = x₀ − x* − f(x₀)/f′(x₀)
        = x₀ − x* − [f(x₀) − f(x*)]/f′(x₀)
        = (1/f′(x₀)) [f(x*) − f(x₀) − f′(x₀)(x* − x₀)].

Substituting x₀ = xk* + ρ̂ gives

x₁ − x* = (xk* + ρ̂) − x* − [f(xk* + ρ̂) − f(x*)]/f′(xk* + ρ̂)
        = (1/f′(xk* + ρ̂)) [f(x*) − f(xk* + ρ̂) − f′(xk* + ρ̂)(x* − xk* − ρ̂)].

By Lemma 5, the bracketed term is bounded in magnitude by (γ/2)(x* − x₀)², so |x₁ − x*| ≤ (γ/(2ρ))|x₀ − x*|².
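The one-step identity used above can be verified numerically. The quadratic below and the point x₀ = xk* + ρ̂ = 1.5 are illustrative values, not data from the paper.

```python
# Quadratic example: f has the root x* = 3, and we take x0 = xk* + rho_hat
# with xk* = 1 and rho_hat = 0.5 (illustrative values).
f = lambda x: x * x - 2 * x - 3
fp = lambda x: 2 * x - 2

x_star = 3.0
x0 = 1.5
x1 = x0 - f(x0) / fp(x0)  # one Newton step

lhs = x1 - x_star
rhs = (f(x_star) - f(x0) - fp(x0) * (x_star - x0)) / fp(x0)
# lhs and rhs agree exactly (up to rounding), as the derivation claims.
```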
4. Computation Examples
In this section, we try to obtain the root nearest to an extreme point using the initial guesses xk* + ρ and xk* − ρ, where xk* is an extreme point and ρ is the radius of curvature at this extreme point. We use the five examples (Exp.) given in Table 1, and try to obtain a root to the right and a root to the left of the extreme point of each function.
[Table 1. Test functions and search intervals: Exp. 2: f2(x) = x² − 2x − 3 on [−5, 3]; Exp. 3: f3(x) = x³ − x² on [−1.5, 1]; Exp. 4: f4(x) = sin(x) + sin(2x/3) on [3, 10]; Exp. 5: f5(x), a combination of cos x, cos 2x and sin x terms, on [9, 13].]
Table 2 shows that the use of the initial guesses xk* ± ρ, where xk* is a local extremum of a function and ρ is the radius of curvature at that local extremum, makes Newton's method (N) converge (C) to the root of the function closest to the local extremum. However, when the radius of curvature is too small, the Newton iteration fails (F) to reach the expected solution; this can be seen in Exp. 5, in the column colored gray. To overcome this obstacle, we have made a modification to the radius of curvature, which is discussed in the next section.
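The procedure that Table 2 summarizes can be sketched concretely. For the quadratic f(x) = x² − 2x − 3 (cf. f15 in Section 6) the extremum is xk* = 1, the radius of curvature there is ρ = (1 + f′(1)²)^{3/2}/|f″(1)| = 0.5, and Newton's method from xk* ± ρ reaches the roots 3 and −1. The helper names below are ours, not the paper's.

```python
def newton(f, fp, x0, tol=1e-10, max_iter=100):
    """Newton iteration; returns the limit point."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fp(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Quadratic with roots -1 and 3 and extremum at xk* = 1.
f = lambda x: x * x - 2 * x - 3
fp = lambda x: 2 * x - 2
fpp = lambda x: 2.0

xk = 1.0
rho = (1 + fp(xk) ** 2) ** 1.5 / abs(fpp(xk))  # radius of curvature = 0.5

right_root = newton(f, fp, xk + rho)  # converges to 3
left_root = newton(f, fp, xk - rho)   # converges to -1
```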
An unfortunate case arises when Newton's method encounters a trial guess near such a local extremum: the method then sends its iterate far away from the desired solution (see Figure 6). This situation happened in Exp. 5 of Table 2, where the radius of curvature added to the minimizer is not large enough to bring that point into the basin of the expected root.
For details, in Exp. 5, xk* = 10.9598 is a minimizer of f5(x) and ρ = 0.191837 is the radius of curvature at xk*; then xk* + ρ is the initial guess for finding the nearest root to the right of xk*, and xk* − ρ for the nearest root to the left. Table 2 marks Exp. 5 as failing to obtain the root on the left side of xk*. If we double the radius of curvature to 2ρ and use xk* − 2ρ as the new initial guess, we obtain x* = 9.93822, which is the root of f5 nearest to xk* on the left. We therefore attribute the failure in Exp. 5 to the small radius of curvature. To overcome this obstacle, we decide to restrict the radius of curvature so that ρ ≥ r for some r ∈ (0, 1).
The modification of the radius of curvature is described in Algorithm M.
Algorithm M
This simple algorithm computes ρ using the data (x₀, ε, r, m), where x₀ is a local extremum of a function, ε is a tolerance, r is a real number, and m is the maximum number of iterations.
1. ρⱼ := (1 + f′(x₀)²)^(3/2) / |f″(x₀)|
2. ρ := ρⱼ ; ρ₁ := ρⱼ
3. i := 1
4. while ρ < r and i ≤ m do
   4.1. ρᵢ₊₁ := ρᵢ + ρⱼ
   4.2. i := i + 1
   4.3. ρ := ρᵢ
5. return ρ.
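A sketch of Algorithm M in Python, under our reading that step 4.1 enlarges the radius by adding ρⱼ on each pass (the garbled source leaves the exact update ambiguous).

```python
def modified_radius(fp, fpp, x0, r, m=100):
    """Algorithm M (sketch): start from the radius of curvature at the
    extremum x0 and enlarge it in whole multiples of itself until it
    reaches r. The update in step 4.1 is our reading of the source."""
    rho_j = (1 + fp(x0) ** 2) ** 1.5 / abs(fpp(x0))
    rho, i = rho_j, 1
    while rho < r and i <= m:
        rho += rho_j
        i += 1
    return rho

# At an extremum f'(x0) = 0; with f''(x0) = 10 the base radius is 0.1,
# and r = 0.25 forces two enlargements, giving 0.3.
rho = modified_radius(lambda x: 0.0, lambda x: 10.0, 0.0, r=0.25)
```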
6. Numerical Results
In this section, we employ our method to solve some nonlinear equations. All experiments were performed on a personal computer with an AMD Dual-Core E-350 1.6 GHz processor and 2 GB of memory. The operating system was Windows 7 Starter (32-bit) and the implementations were done in Microsoft Visual C++ 6.0.
We used the following 20 test functions and display the approximate zeros x*.
f1(x) = sin(x) + sin(2x/3)
f2(x) = cos(x) + cos(2x) + sin(3x/5)
f3(x) = cos(2x/5) + sin(x/10) + cos(x)
f4(x) = sin(x) + sin(4x/9)
f5(x) = (1/2)cos(x) + (1/3)sin(2x)
f6(x) = sin(2x)sin(x) + (2/3)sin(x)
f7(x) = Σ_{i=1}^{5} i cos((i+1)x + i)
f8(x) = sin(x) + sin(10x/3) + ln(x) − 0.84x + 3
f9(x) = exp(0.1x) Σ_{i=1}^{5} sin((i+1)x + i)
f10(x) = cos(x) + cos(10x/3) − 0.84x
f11(x) = Σ_{i=1}^{5} i sin((i+1)x + i)
f12(x) = sin(x) + sin(10x/3) + ln(x) − 0.84x
f13(x) = Σ_{i=1}^{5} sin((i+1)x + i)
f14(x) = x² − 1
f15(x) = x² − 2x − 3
f16(x) = x³ − x²
f17(x) = 2x² − 1.05x⁴ + (1/6)x⁶ + x
f18(x) = x⁴ − 4x³ + 4x² − 0.5
f19(x) = 3x − x³
f20(x) = x⁶ − 22x⁴ + 9x² + 102
[Table 3. For each test function: the extreme points xk*, the (modified) radii of curvature ρ, the approximate zeros x* obtained from the initial guesses xk* ± ρ, and the residuals f(x*), all of magnitude below about 10⁻¹³.]
Table 3 shows that the use of the radius of curvature at the extreme point makes Newton's method always converge to the roots closest to this extreme point. Nonzero values of r indicate that the functions have a small radius of curvature at their extreme points.
7. Conclusion
In this paper, we have shown that the radius of curvature at a maximizer or minimizer can be used as an increment to that extremum point in the attempt to find the radius of convergence of Newton's method near the maximizer or minimizer of a function. Numerical results show that the method succeeds in finding the desired solutions.
References
[1] T. T. Ababu, "A Two-Point Newton Method Suitable for Nonconvergent Cases and with Super-Quadratic Convergence", Advances in Numerical Analysis, Hindawi Publishing Corporation, Article ID 687383, http://dx.doi.org/10.1155/2013/687382, 2013.
[2] I. B. Mohd, The Width Is Unreachable, The Travel Is At The Speed Of Light, Siri Syarahan Inaugural KUSTEM: 6 (2002), Inaugural Lecture of Prof. Dr. Ismail Bin Mohd, 14th September 2002.
[3] S. I. Grossman, Calculus, Third Edition, Academic Press, 1984.
[4] J. E. Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, 1983.
Abstract: In today's globalized world, transportation is the main means for people to go anywhere, especially for university students. Transportation basically means "any device used to move an item from one location to another". Looking at our surroundings, almost everyone in this world has their own transportation. In the transportation management of Universiti Tun Hussein Onn Malaysia (UTHM), transportation is very important for the students. For students who live at a residential college, the bus is their main transportation to and from class, as they do not have transportation of their own. This became an issue when students had to face transportation problems. The issue of bus services arose because the current requirements cannot be met. A comparison between the Northwest Corner Method (NWCM) and the Vogel Approximation Method (VAM) is used to solve these issues. Observation, interviews, and data collection were carried out on the bus service that sends students to class from the residential colleges involved, namely the Residential College of Perwira and the Residential College of Taman Universiti, to ensure that the research objectives are achieved.
1. Introduction
According to Tran and Kleiner (2005), public transportation is defined as a transportation service provided on an ongoing basis, generally and specifically to the public. It does not include school buses, charter buses, or bus services offering sightseeing. Examples of public transportation other than buses are trains and ferries.
The study was conducted at Universiti Tun Hussein Onn Malaysia (UTHM) and involves the daily bus transportation of students to and from class. The study involved six (6) faculties. Students of the Faculty of Mechanical and Manufacturing Engineering (FKMP), the Faculty of Civil and Environmental Engineering (FKAAS), and the Faculty of Electrical and Electronic Engineering (FKEE) are mostly placed in the Residential College of Perwira, while the rest are in the Residential College of Tun Dr. Ismail and the Residential College of Tun Fatimah. As for the Faculty of Computer Science and Information Technology (FSKTM), the Faculty of Technical and Vocational Education (FPTV) and the Faculty of Technology Management and Business (FPTP), most of their students stay at the Residential College of Perwira, and only some of them, namely third-year and fourth-year students, live in specific rented houses.
Problems can be identified when the increase in the number of students living in the Residential College of Perwira and the Residential College of Taman Universiti means that the buses provided cannot accommodate all the students. This leads to existing buses being badly damaged every two months. The average estimated monthly maintenance cost per bus is about RM 500, while the maintenance cost for a bus with less than five (5) years of service is at least RM 3000. Lastly, a bus with more than five (5) years of service takes about RM 6000 and above. When this happens, the parties involved have to spend a large amount of money repairing broken-down buses in order to recover them for immediate use. Bus travel times also play an important role, since they decide whether a student arrives at class sooner or later. Students often complain that the bus is always late in sending them to class, without knowing the exact causes of the delay.
This study covers the UTHM area, which involves two (2) daily bus transportation providers, Colourplus and Sikun Jaya. Colourplus handles the journey from the Perwira Residential College to UTHM, while Sikun Jaya handles the journey from the Residential College of Taman Universiti to UTHM.
The study helps to identify the problems that often occur in the daily bus transportation system and then to find the best solution to prevent them from recurring. It is therefore very important to examine the models inherent in the transportation system in order to propose the most appropriate model for UTHM in its current situation. In addition, the study is intended to facilitate the journeys of bus drivers by identifying suitable routes, which can save travel time and the daily operating cost of bus transportation. For students, the benefit is that a shorter travel time allows them to arrive at class early, while the university does not have to bear the large cost of adding buses to accommodate the number of students going to class, particularly during peak periods.
2. Literature Review
2.1 Introduction
Transportation means the different types of transport vehicles used, whether by air, land, or water, to move or carry goods and passengers from one place to another.
2.2 Public Transportation
Public transport is defined as a system of motorized transport, such as buses, taxis, and trains, used by people in a specific area with fixed fares (Kamus Dewan Bahasa dan Pustaka, 4th Edition).
According to Sudirga (2009), after the supply and demand data for a number of places have been received, they are compiled into a table. The researcher should then determine the initial basic feasible solution of the transportation problem.
According to Samuel and Venkatachalapathy (2011), VAM was improved by using the total opportunity cost (TOC) matrix and by considering replacement costs. The total opportunity cost matrix is obtained by adding the row opportunity cost matrix (for each row of the actual transportation cost matrix, the smallest cost in the row is subtracted from every element in that row) and the column opportunity cost matrix (for each column of the actual transportation cost matrix, the smallest cost in the column is subtracted from every element in that column).
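The TOC construction described above can be sketched as follows; the cost matrix is invented for illustration, not taken from the study.

```python
# Actual transportation cost matrix (illustrative figures).
cost = [
    [4, 20, 8],
    [14, 4, 20],
    [8, 12, 4],
]

# Row opportunity cost matrix: subtract each row's minimum from its elements.
row_opp = [[c - min(row) for c in row] for row in cost]

# Column opportunity cost matrix: subtract each column's minimum.
cols = list(zip(*cost))
col_opp_t = [[c - min(col) for c in col] for col in cols]
col_opp = [list(r) for r in zip(*col_opp_t)]

# Total opportunity cost matrix: the elementwise sum of the two.
toc = [
    [row_opp[i][j] + col_opp[i][j] for j in range(len(cost[0]))]
    for i in range(len(cost))
]
```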
According to Rahardiantoro (2006), the Minimal Spanning Tree technique is used to find a way to connect all the points in a network at the minimum total distance. The Minimal Spanning Tree problem is similar to the shortest path (Shortest Route) problem, but the purpose of this technique is to link all the nodes in a network so that the total path length is minimized. The resulting network connects all the nodes at the minimum total distance. The technical steps of the Minimal Spanning Tree are:
1. Select a node in the network in an arbitrary way.
2. Connect this node to the closest node so as to minimize the total distance.
3. Check all the nodes to determine whether any are still unconnected; find and connect the nodes that have not yet been connected.
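The three steps above amount to Prim's algorithm; a sketch with hypothetical stop-to-stop distances (not the study's data):

```python
import heapq

def prim_mst(n, edges):
    """Minimal spanning tree following the three steps above: start from
    an arbitrary node and repeatedly connect the closest unconnected node."""
    adj = {i: [] for i in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen = {0}                      # step 1: pick an arbitrary start node
    heap = list(adj[0])
    heapq.heapify(heap)
    total = 0.0
    while heap and len(seen) < n:   # step 3: repeat until all nodes connect
        w, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)                 # step 2: connect the closest node
        total += w
        for e in adj[v]:
            heapq.heappush(heap, e)
    return total

# Hypothetical distances (km) between four bus stops, numbered 0-3.
edges = [(0, 1, 0.3), (1, 2, 0.2), (0, 2, 0.9), (2, 3, 0.5), (1, 3, 0.6)]
total = prim_mst(4, edges)
```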
According to Reeb and Leavengood (2002), the transportation problem is known as one of the most important and successful applications of quantitative analysis to solving business problems involving the distribution of products. In essence, it aims to minimize the cost of shipping goods from one location to another so that the needs of each area are met and every vehicle that operates carries its capacity of goods to a predetermined location.
According to Iles (2005), in his book entitled "Public Transport in Developing Countries", public transportation is important to the broad majority who have no private transport. The need for personal mobility, in particular access to job opportunities, combined with low income levels, is a common problem, and the service provided is always in demand because it is never sufficient.
3. Research Methodology
3.1 Introduction
The research methodology is the most important aspect of chapter three (3) because it discusses the methods of the study and shows whether the study was conducted authentically or not. It is carefully constructed based on the guidance of the available reference resources to ensure that the data collected are systematic.
The research conducted is a case study done at Universiti Tun Hussein Onn Malaysia (UTHM) using qualitative methods. This approach was chosen to enable the researcher to understand deeply why it is necessary to build an effective transport system model to solve the existing transport problems.
Respondents were randomly selected for observation based on the two residential colleges studied. The observation data were collected on a daily basis from one month to the next. For the interviews, the respondents were selected from among the bus supervisors and bus drivers.
The selection was based on students who inhabit the residential colleges offering daily bus transport services, namely the Residential College of Perwira and the Residential College of Taman Universiti. The total population identified for the residential colleges involved is 3267 students.
The sample consisted of students, boys and girls, who use this daily bus service as their main transportation to get to class. The main focus is on first-year students because they are the biggest users of the bus services. The study sample was taken from the two residential colleges involved, while for the interviews, the sample selected was the group of people related to daily bus transportation, such as the supervisors and bus drivers.
The suitable instruments used by the researcher in this study are observation and interview methods. Through observation, passenger volume data can be taken at any time and recorded accurately and systematically. Through the interviews conducted with the respondents, particularly the supervisors and bus drivers, data are obtained to support the observations, and these data are recorded in detail.
Primary data were collected through two methods.
The observation was done daily, from one month to the next. The purpose of this method is to estimate the number of students who use the bus service on a daily basis, either at peak or at usual times. The data obtained reflect the fluctuations in the number of bus passengers at any time in a month.
The researcher also used the interview technique to support the data derived from observation. The interviews conducted were focused interviews, concentrating on the group of respondents involved with the daily bus transportation services. The researcher selected three respondents from the two daily bus transportation companies, the Colourplus-Tunas Tinggi Pt. Ltd. Company and the Sikun Jaya Pt. Ltd. Company, to be interviewed about the issue of bus transportation problems.
Secondary data are data obtained from studies that have already been done, divided into printed and non-printed sources. Printed resources are available through the magazines, articles and books in the library. For non-printed sources, the researcher acquired existing references from the internet; the desired journals can be obtained through sites such as Emerald, EBSCO HOST, ProQuest, Science Direct and IEEE.
Data analysis is based on the data obtained through the observations made from one month to the next and a structured interview conducted with the respondents. The analysis of the data is very important because it determines whether the study fulfills the research objectives or not. In this study, the researchers used the Production and Operation Management (POM) software to calculate the overall data. In addition, the researchers used the QSR NVivo 10 software to translate the results obtained in the form of words into statistics.
4. Data Analysis
4.1 Introduction
The data analysis discusses the calculation steps of the two models used for comparison in deciding which model is the best, how to calculate the frequency of respondents who use the daily bus transportation service, and how to identify the best route to save the cost and time of a bus journey.
The analysis of the data is made based on three (3) main objectives that need to be achieved:
a) To identify the best method, in terms of capacity, for solving the problems inherent in the existing transportation model system, using either the Northwest Corner Method (NWCM) or the Vogel Approximation Method (VAM).
b) To determine the frequency with which students use the daily bus service to go to class.
c) To identify the best route to save the time of a bus trip through the Minimum Spanning Tree technique.
A table is generated after the data are entered. It shows very clearly how the available transport supply meets the current student demand.
The optimal cost obtained from the analysis using the Northwest Corner Method (NWCM) is RM 392.00.
The marginal costs arise from the analysis in the Production and Operation Management (POM) software.
The following shows how the costs incurred by bus travel are calculated using the Northwest Corner Method.
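The Northwest Corner starting solution can be sketched as follows; the supply and demand figures below are illustrative, not the study's data.

```python
def northwest_corner(supply, demand):
    """Northwest Corner starting solution: fill cells from the top-left,
    exhausting each row's supply or column's demand in turn."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(alloc) and j < len(alloc[0]):
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1              # row exhausted: move down
        else:
            j += 1              # column exhausted: move right
    return alloc

# Illustrative supplies (origins) and demands (destinations), in students.
plan = northwest_corner([60, 54, 44], [54, 44, 60])
```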
4.3.5 Iteration

(Columns: Susur Gajah G3, Library, KK Tun Dr. Ismail/Tun Fatimah)

Iteration 2
KK Perwira:            16    44   (-90)
KK Taman Universiti:  (24)  (90)   54
Bus Stop ATM:          38   (44)    6

Iteration 3
KK Perwira:            10    44     6
KK Taman Universiti: (-66)   (0)   54
Bus Stop ATM:          44   (44)  (90)

Iteration 4
KK Perwira:           (66)   44    16
KK Taman Universiti:   10    (0)   44
Bus Stop ATM:          44  (-22)  (24)

Iteration 5
KK Perwira:           (66)  (22)   60
KK Taman Universiti:   54   (22)   (0)
Bus Stop ATM:          (0)   44   (24)
The Northwest Corner Method (NWCM), analysed with the Production and Operation Management (POM) software, has five (5) iterations in its calculation process, as shown in the table above.
The figure above shows the delivery schedule and costs for the students: from the Perwira Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College, 60 units at RM 0; from the Taman Universiti Residential College to "Susur Gajah G3", 54 units at a cost of RM 216; from the Taman Universiti Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College, 0 units at RM 0; from the ATM Bus Stop to "Susur Gajah G3", 0 units at RM 0; and finally from the ATM Bus Stop to the Library, 44 units at RM 176.
From this list, the researcher can identify the shipping cost per unit: from the Taman Universiti Residential College to "Susur Gajah G3" it is RM 4.00, from the Taman Universiti Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College it is RM 20.00, and from the ATM Bus Stop to the Library it is RM 4.00.
These data enable the researcher to compute the optimal total cost required for the movement of a bus trip around the university.
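Assuming only the two nonzero-cost shipments quoted above (54 units and 44 units, each at RM 4 per unit), the RM 392 optimal cost can be reproduced directly:

```python
# 54 units from Taman Universiti to "Susur Gajah G3" at RM 4 each, plus
# 44 units from the ATM Bus Stop to the Library at RM 4 each; all other
# allocations in the schedule carry zero cost.
total_cost = 54 * 4 + 44 * 4  # RM 392
```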
The marginal costs also result from the analysis in the Production and Operation Management (POM) software.
The following shows how the costs involved in bus travel are calculated.
4.4.5 Iteration
Table 4.4.5: Iteration

(Columns: Susur Gajah G3, Library, KK Tun Dr. Ismail/Tun Fatimah)

Iteration 1
KK Perwira:            54   (60)    6
KK Taman Universiti:  (-6)   44    10
Bus Stop ATM:         (60)  (44)   44

Iteration 2
KK Perwira:            44   (54)   16
KK Taman Universiti:   10    44    (6)
Bus Stop ATM:         (60)  (38)   44
The Vogel Approximation Method (VAM), analysed with the Production and Operation Management (POM) software, has two (2) iterations in its calculation process, as shown in the table above.
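For comparison, a compact sketch of the Vogel Approximation Method; the tie-breaking rules and the single-cost penalty convention are our assumptions, and the cost data are made up.

```python
def vogel(supply, demand, cost):
    """Vogel Approximation starting solution (sketch). Each pass picks the
    row or column with the largest penalty (gap between its two smallest
    remaining costs) and allocates as much as possible to its cheapest cell."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    rows, cols = set(range(len(supply))), set(range(len(demand)))

    def penalty(costs):
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        best = None
        for i in rows:
            p = penalty([cost[i][j] for j in cols])
            j = min(cols, key=lambda c: cost[i][c])
            if best is None or p > best[0]:
                best = (p, i, j)
        for j in cols:
            p = penalty([cost[i][j] for i in rows])
            i = min(rows, key=lambda r: cost[r][j])
            if p > best[0]:
                best = (p, i, j)
        _, i, j = best
        q = min(supply[i], demand[j])
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            rows.discard(i)
        if demand[j] == 0:
            cols.discard(j)
    return alloc

# Illustrative 2x2 problem: supplies, demands, and unit costs are made up.
plan = vogel([20, 30], [10, 40], [[2, 3], [4, 1]])
```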
The figure above shows the students' delivery schedule and costs: from the Perwira Residential College to "Susur Gajah G3", 44 units at RM 0; from the Taman Universiti Residential College to "Susur Gajah G3", 10 units at a cost of RM 140; from the Residential College of Taman Universiti to the Library, 44 units at RM 0, and to the Residential College of Tun Dr. Ismail/Tun Fatimah, 16 units at RM 320; and finally from the ATM Bus Stop to the Residential College of Tun Dr. Ismail/Tun Fatimah, 44 units at RM 0.
From this list, the researcher can identify the shipping cost per unit: from the Perwira Residential College to the Tun Dr. Ismail/Tun Fatimah Residential College it is RM 20.00, while from the Taman Universiti Residential College to "Susur Gajah G3" it is RM 14.00.
The researcher decided to propose that the university use the Northwest Corner Method model, because by using this model the university can save cost through paying the minimum optimal cost of RM 392 to the daily bus transportation contract companies for one day of bus operation. This is because, using the provided transportation routes, shipping costs are incurred only from the Taman Universiti Residential College to "Susur Gajah G3" and from the ATM Bus Stop to the Library.
[Figure: number of students (persons, vertical axis up to 600) against the mid-point of each bus operating hour (horizontal axis).]
The graph above shows that the highest frequency, 512 students, occurs during the first hour of bus operation, from 7.00 am to 8.00 am. It also shows a low frequency in the last hour of bus operation, for example from 10.00 pm to 11.00 pm.
[Figure: number of students (persons, vertical axis up to 200) against the mid-point of each bus operating hour (horizontal axis).]
The figure above shows that the highest daily frequency is recorded in the first hour of bus service operation: within the peak range of 7.00 am to 8.00 am, 260 students used the bus service to go to class. The lowest frequency, 49 students, occurred in the sixth hour of operation, identified as covering 12.10 pm to 1.00 pm.
Total Distance:
= 0.3 + 0.2 + 0.5 + 0.2 + 0.6 + 0.9 + 0.2 + 0.2 + 0.05 + 0.2 + 0.5 + 0.9 + 0.3
= 5.05 km
5.1 Introduction
In this chapter, the researcher describes and discusses the findings obtained from the data analysis. The findings come from the comparison of the models involved, namely the Northwest Corner Method model and the Vogel Approximation Method model. The researcher also studied the frequency of students who use the bus services daily, and subsequently identified the best route to shorten the travel distance and save cost. The researcher has also been given the opportunity to highlight some recommendations relevant to the research topic.
5.2 Recommendations
The researcher has identified a number of recommendations that need attention, and these recommendations require action by the specific parties to ensure the smooth operation of daily bus transportation for students.
The researcher suggests that each student should start their journey to class as early as possible in the morning, as they all already know that 7.30 am until 8.10 am is the peak time. Students may wait for the bus as early as 7.10 am to avoid the congestion.
The researcher also suggests that students must change their attitudes to be more disciplined and not take some matters too lightly. It is further suggested that students need not return to the residential college or home if they only have a short time off.
Looking at the existing schedule, at certain times it is very suitable for the drivers, and it acts as a guide that drivers should try to follow as closely as possible. With a given schedule, the congestion caused by the many students waiting for the arrival of a bus can be reduced. If the drivers comply well with the given times, the problem of students complaining about late buses will not arise.
The researcher also emphasizes the elements of respect and tolerance. Some students complained that when they asked some of the bus drivers questions, they were railed at by the drivers. In addition, most of the bus drivers work based on their mood or feelings. This situation can be changed if a tolerant attitude towards each other is nurtured.
For the area in front of the F2 examination hall, the researcher suggests that a security officer is required to keep traffic running smoothly, so that the movement of vehicles goes well, especially in the morning when the staff and students want to enter the university and in the evening when they want to go home.
The researcher also suggests that the management of UTHM play an important role in preventing these matters from happening again. An in-depth briefing on transportation should be given as early as possible during the "Minggu Haluan Siswa" (MHS).
5.7 Conclusion
Overall, this study has achieved the three (3) objectives set by the researcher at the beginning of her research. The researcher hopes that the Northwest Corner Method (NWCM) model that has been suggested could help UTHM solve its cost problem, and thus save budget and allow UTHM to provide better transport facilities.
References
Baxter, P. and Jack, S. (2008). Qualitative Case Study Methodology: Study Design and Implementation for Novice Researchers. The Qualitative Report, Vol 13(4), 544-559.
Maksud Pengangkutan. Retrieved May 2, 2012, from http://www.scribd.com/doc/46305862/Maksud-Pengangkutan.
Minimal Spanning Tree. Retrieved December 11, 2012, from http://dickyrahardi.blogspot.com/2008/05/minimal-spanning-tree.html.
Iles, R. (2005). Public Transport in Developing Countries. 1st ed. United Kingdom: Elsevier.
Reeb, J. and Leavengood, S. (2002). Transportation Problem: A Special Case for Linear Programming Problems. Operational Research, Vol 1, 1-36.
Samuel, A. E. and Venkatachalapathy, M. (2011). Modified Vogel's Approximation Method for Fuzzy Transportation Problems. Applied Mathematical Sciences, Vol 5(28), 1367-1372.
Sudirga, R. S. (2009). Perbandingan Pemecahan Masalah Transportasi Antara Metode Northwest Corner Rule Dan Stepping-Stone Method Dengan Assignment Method. Business & Management, Vol 4(1), 29-50.
Tran, T. and Kleiner, B. H. (2005). Managing for Excellence in Public Transportation. Importance of Public Transportation, Volume 28, 154-163.
Nuruljannah Samsudin (2009). Mengkaji Kualiti Perkhidmatan Pengangkutan Bas Dan Van Dari Perspektif Pelanggan: Sebuah Kajian Kes Di Universiti Tun Hussein Onn Malaysia. Bachelor's thesis, Universiti Tun Hussein Onn Malaysia.
PRESENTERS
1. Introduction
Future events are uncertain, and uncertainty carries risk. When a risk is high or difficult to control, most people and companies prefer to shift it to an insurance company, which takes over or bears part of the risk. In return, the policyholder must pay insurance premiums, and if the insured risk occurs, the insurance company must pay the claim to the insured.
In practice, the premium income is sometimes not balanced by the number of claims filed by the insureds. If too many claims are filed, the stability of the insurance company is threatened, so insurance companies need a way to anticipate this. One approach is to determine the outstanding claims reserves.
Several methods can be used to determine the amount of the outstanding claims reserves. In this paper, the method used is the Bornhuetter-Ferguson method.
2. Methodology
Without loss of generality, assume that the data consist of incremental claims in the form of a run-off triangle. The incremental claims can be written as $\{S_{i,k} : i = 1, \dots, n;\ k = 1, \dots, n+1-i\}$, where $i$ is the year in which the incident occurred, called the accident year, and $k$ is the number of periods until payment completion, called the development year.
From the run-off triangle, the incremental claims of each accident year are summed over the development periods; mathematically,
$$C_{i,k} = \sum_{j=1}^{k} S_{i,j}. \qquad (2.1)$$
$C_{i,k}$ denotes the cumulative claims of accident year $i$ after $k$ development years, $1 \le i, k \le n$.
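As a quick sketch (with toy numbers, not the paper's data), the cumulative triangle of (2.1) can be computed from an incremental triangle as follows:

```python
import numpy as np

def cumulative_triangle(S):
    """C[i][k] = sum_{j<=k} S[i][j], as in (2.1): cumulative claims of
    accident year i after k development years."""
    return [[int(x) for x in np.cumsum(row)] for row in S]

# Toy 3-year incremental run-off triangle (illustrative numbers only)
S = [[100, 60, 20],
     [120, 70],
     [150]]
C = cumulative_triangle(S)
# C[0] == [100, 160, 180]
```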
The Bornhuetter-Ferguson method avoids dependence on the current amount of claims; its claims reserve is
$$\hat{R}_i^{BF} = \hat{U}_i \left(1 - \hat{z}_{n+1-i}\right) \qquad (2.2)$$
where
$$\hat{U}_i = v_i \hat{q}_i \qquad (2.3)$$
and $\hat{z}_k \in [0,1]$.
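A minimal sketch of (2.2)-(2.3) in code (the premium, prior claims ratio, and development percentage below are illustrative assumptions, not the paper's data):

```python
def bf_reserve(v_i, q_hat, z_hat):
    """Bornhuetter-Ferguson reserve (2.2)-(2.3):
    U_hat = v_i * q_hat is the prior ultimate; the reserve
    R = U_hat * (1 - z_hat) is the share not yet developed."""
    U_hat = v_i * q_hat
    return U_hat * (1.0 - z_hat)

# Premium 2000, prior ultimate claims ratio 0.5, 60% developed:
# bf_reserve(2000.0, 0.5, 0.6) == 400.0
```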
The Bornhuetter-Ferguson method aims at constructing an estimate for $q_i$ that does not depend directly on the cumulative claims amount $C_{i,n+1-i}$. The first step is to consider the average incremental claims ratio
$$\hat{m}_k = \frac{\sum_{i=1}^{n+1-k} S_{i,k}}{\sum_{i=1}^{n+1-k} v_i} \qquad (2.4)$$
of the development years $k$ observed to date. Then a weighted average $r_i$ of the ratios $S_{i,k}/v_i$ and $\hat{m}_k$ is used, that is,
$$r_i = \sum_{k=1}^{n+1-i} \frac{\hat{m}_k}{\sum_{j=1}^{n+1-i}\hat{m}_j}\cdot\frac{S_{i,k}/v_i}{\hat{m}_k} = \sum_{k=1}^{n+1-i}\frac{S_{i,k}}{v_i\sum_{j=1}^{n+1-i}\hat{m}_j} = \frac{\sum_{k=1}^{n+1-i} S_{i,k}}{v_i\sum_{j=1}^{n+1-i}\hat{m}_j} = \frac{C_{i,n+1-i}}{v_i\sum_{j=1}^{n+1-i}\hat{m}_j}. \qquad (2.5)$$
Because the development year is denoted by $k$, the sum $\sum_{j=1}^{n+1-i}\hat{m}_j$ in $r_i$ is rewritten with index $k$, giving
$$r_i = \frac{C_{i,n+1-i}}{v_i\sum_{k=1}^{n+1-i}\hat{m}_k}. \qquad (2.6)$$
Thus $r_i$ is the individual claims ratio $C_{i,n+1-i}/v_i$ scaled by the development pattern observed so far. The paid and incurred versions are combined via the geometric mean
$$\bar{r}_i = \sqrt{r_i^{\,paid}\cdot r_i^{\,incurred}}. \qquad (2.7)$$
Summing the adjusted ratios $\hat{m}_k^{*}$ over all development years including the tail gives
$$\hat{m}^{*} = \hat{m}_1^{*} + \dots + \hat{m}_n^{*} + \hat{m}_{n+1}^{*}. \qquad (2.9)$$
This eventually results in the prior estimates
$$\hat{q}_i = r_i^{*}\,\hat{m}^{*} \qquad (2.10)$$
for the ultimate claims ratio of accident year $i$, and the corresponding ultimate claims amount
$$\hat{U}_i = v_i\hat{q}_i = v_i r_i^{*}\hat{m}^{*}. \qquad (2.11)$$
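The chain (2.4)-(2.6) can be sketched as follows (indices shifted to 0-based; the triangle and premiums below are toy assumptions, not the paper's data):

```python
def bf_prior_ratios(S, v):
    """m_hat[k]: average incremental claims ratio, eq. (2.4);
    r[i]: individual claims ratio of accident year i, eq. (2.6)."""
    n = len(S)
    # (2.4): column sums of S divided by the matching premium sums
    m_hat = [sum(S[i][k] for i in range(n - k)) / sum(v[: n - k])
             for k in range(n)]
    # (2.6): C_{i,n+1-i} / (v_i * sum of observed m_hat)
    r = [sum(S[i]) / (v[i] * sum(m_hat[: n - i])) for i in range(n)]
    return m_hat, r

S = [[100, 60, 20], [120, 70], [150]]   # incremental triangle
v = [200.0, 250.0, 300.0]               # premiums per accident year
m_hat, r = bf_prior_ratios(S, v)
# m_hat[2] == 20/200 == 0.1
```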
Because $\hat{\zeta}_k^{*} \approx \hat{\zeta}_k \approx \dfrac{\sum_{j=1}^{n+1-k} S_{j,k}}{\sum_{j=1}^{n+1-k} \hat{U}_j}$, it can be assumed that
$$Var(\hat{\zeta}_k^{*}) \approx Var\!\left(\frac{\sum_{j=1}^{n+1-k} S_{j,k}}{\sum_{j=1}^{n+1-k} \hat{U}_j}\right) = \frac{s_k^2}{\sum_{j=1}^{n+1-k} \hat{U}_j}, \qquad (2.35)$$
with $1 \le k \le n$. Therefore $Var(\hat{\zeta}_k^{*})$ is estimated by
$$\left(s.e.(\hat{\zeta}_k^{*})\right)^2 = \frac{\hat{s}_k^{2*}}{\sum_{j=1}^{n+1-k} \hat{U}_j}, \qquad (2.36)$$
with $1 \le k \le n$.
Altogether, an estimate $\left(s.e.(\hat{z}_k^{*})\right)^2$ for $Var(\hat{z}_k^{*})$ is
$$\left(s.e.(\hat{z}_k^{*})\right)^2 = \min\!\Big(\big(s.e.(\hat{\zeta}_1^{*})\big)^2 + \dots + \big(s.e.(\hat{\zeta}_k^{*})\big)^2,\ \big(s.e.(\hat{\zeta}_{k+1}^{*})\big)^2 + \dots + \big(s.e.(\hat{\zeta}_{n+1}^{*})\big)^2\Big). \qquad (2.37)$$
So finally the estimator for the mean square error of prediction is
$$msep(\hat{R}_i^{BF}) = \hat{U}_i\big(\hat{s}_{n+2-i}^{2*} + \dots + \hat{s}_{n+1}^{2*}\big) + \Big(\hat{U}_i^2 + \big(s.e.(\hat{U}_i)\big)^2\Big)\big(s.e.(\hat{z}_{n+1-i}^{*})\big)^2 + \big(s.e.(\hat{U}_i)\big)^2\big(1-\hat{z}_{n+1-i}^{*}\big)^2. \qquad (2.38)$$
The prediction error is
$$PE(\hat{R}_i^{BF}) = \sqrt{msep(\hat{R}_i^{BF})}, \qquad (2.39)$$
and the prediction error in percent is
$$\%PE(\hat{R}_i^{BF}) = \frac{PE(\hat{R}_i^{BF})}{\hat{R}_i^{BF}} \times 100\%. \qquad (2.40)$$
To check the significance of the difference between reserve estimates, or alternatively to build a confidence interval for $E(U_i)$, only the pure estimation error is needed:
$$\left(s.e.(\hat{R}_i^{BF})\right)^2 = \Big(\hat{U}_i^2 + \big(s.e.(\hat{U}_i)\big)^2\Big)\big(s.e.(\hat{z}_{n+1-i}^{*})\big)^2 + \big(s.e.(\hat{U}_i)\big)^2\big(1-\hat{z}_{n+1-i}^{*}\big)^2. \qquad (2.41)$$
For the total reserve $R = R_1 + \dots + R_n$, an unbiased estimate of the total reserve is $\hat{R}^{BF} = \hat{R}_1^{BF} + \dots + \hat{R}_n^{BF}$. The mean square error of prediction of the total reserve is
$$msep(\hat{R}^{BF}) = Var(\hat{R}^{BF}) + Var(R), \qquad (2.42)$$
with process variance estimated by
$$\widehat{Var}(R) = \sum_{i=1}^{n}\hat{U}_i\big(\hat{s}_{n+2-i}^{2*} + \dots + \hat{s}_{n+1}^{2*}\big). \qquad (2.43)$$
The estimation error $Var(\hat{R}^{BF})$ requires more care because $\hat{R}_1^{BF}, \dots, \hat{R}_n^{BF}$ are positively correlated through
with
$$\widehat{Cov}(\hat{R}_i^{BF}, \hat{R}_j^{BF}) = \hat{\rho}_{i,j}^{U}\, s.e.(\hat{U}_i)\, s.e.(\hat{U}_j)\big(1-\hat{z}_{n+1-i}^{*}\big)\big(1-\hat{z}_{n+1-j}^{*}\big) + \hat{\rho}_{i,j}^{z}\, s.e.(\hat{z}_{n+1-i}^{*})\, s.e.(\hat{z}_{n+1-j}^{*})\,\hat{U}_i\hat{U}_j. \qquad (2.50)$$
So finally the mean square error for the prediction of the total claims reserve is
$$msep(\hat{R}^{BF}) = Var(\hat{R}^{BF}) + Var(R) = \sum_{i=1}^{n}\hat{U}_i\big(\hat{s}_{n+2-i}^{2*} + \dots + \hat{s}_{n+1}^{2*}\big) + \sum_{i=1}^{n}\big(s.e.(\hat{R}_i^{BF})\big)^2 + 2\sum_{i<j}\widehat{Cov}(\hat{R}_i^{BF}, \hat{R}_j^{BF}),$$
the total prediction error is
$$PE(\hat{R}^{BF}) = \sqrt{msep(\hat{R}^{BF})}, \qquad (2.51)$$
and the total prediction error in percent is
$$\%PE(\hat{R}^{BF}) = \frac{PE(\hat{R}^{BF})}{\hat{R}^{BF}} \times 100\%. \qquad (2.52)$$
The data used in this paper are taken from Mack (2006). The data are incremental claims over the 13 years from 1992 to 2004, with 13 development years.
Table 3.3 Incremental Claims Ratios and Estimated Average Development Pattern
for the Incurred Incremental Claims Data

k | m̂_k | m̃_k | m̂*_k | Σ m̂*_k | ζ̂_k | ẑ_k | ζ̂*_k | ẑ*_k
1 | 0.0593 | 0.0692 | 0.0692 | 0.0692 | 0.0502 | 0.0502 | 0.0502 | 0.0502
2 | 0.1844 | 0.1998 | 0.1998 | 0.2689 | 0.1449 | 0.1950 | 0.1449 | 0.1950
3 | 0.2804 | 0.2752 | 0.2752 | 0.5441 | 0.1996 | 0.3946 | 0.1996 | 0.3946
4 | 0.3225 | 0.3006 | 0.3006 | 0.8448 | 0.2180 | 0.6126 | 0.2180 | 0.6126
5 | 0.2243 | 0.2039 | 0.2039 | 1.0487 | 0.1479 | 0.7604 | 0.1479 | 0.7604
6 | 0.1157 | 0.1166 | 0.1166 | 1.1653 | 0.0845 | 0.8450 | 0.0845 | 0.8450
7 | 0.0962 | 0.1258 | 0.1258 | 1.2911 | 0.0912 | 0.9362 | 0.0912 | 0.9362
8 | 0.0143 | 0.0230 | 0.05 | 1.3411 | 0.0167 | 0.9528 | 0.0363 | 0.9724
9 | 0.0206 | 0.0381 | 0.02 | 1.3611 | 0.0276 | 0.9805 | 0.0145 | 0.9869
10 | -0.0034 | -0.0069 | 0.01 | 1.3711 | -0.0050 | 0.9755 | 0.0073 | 0.9942
11 | 0.0020 | 0.0039 | 0.005 | 1.3761 | 0.0028 | 0.9783 | 0.0036 | 0.9978
12 | 0.0127 | 0.0238 | 0.002 | 1.3781 | 0.0173 | 0.9956 | 0.0015 | 0.9993
13 | -0.0028 | -0.0049 | 0.001 | 1.3791 | -0.0036 | 0.9921 | 0.0007 | 1
Tail | | | 0 | 1.3791 | | | 0 | 1
Table 3.4 Incremental Claims Ratios and Estimated Average Development Pattern
for the Paid Incremental Claims Data

k | m̂_k | m̃_k | m̂*_k | Σ m̂*_k | ζ̂_k | ẑ_k | ζ̂*_k | ẑ*_k
1 | 0.0074 | 0.0086 | 0.0086 | 0.0086 | 0.0063 | 0.0063 | 0.0063 | 0.0063
2 | 0.0564 | 0.0611 | 0.0611 | 0.0697 | 0.0443 | 0.0505 | 0.0443 | 0.0505
3 | 0.1793 | 0.1760 | 0.1760 | 0.2457 | 0.1276 | 0.1782 | 0.1276 | 0.1782
4 | 0.2812 | 0.2621 | 0.2621 | 0.5078 | 0.1901 | 0.3682 | 0.1901 | 0.3682
5 | 0.2270 | 0.2064 | 0.2064 | 0.7143 | 0.1497 | 0.5179 | 0.1497 | 0.5179
6 | 0.1450 | 0.1462 | 0.1462 | 0.8604 | 0.1060 | 0.6239 | 0.1060 | 0.6239
7 | 0.1307 | 0.1709 | 0.1709 | 1.0313 | 0.1239 | 0.7478 | 0.1239 | 0.7478
8 | 0.0562 | 0.0903 | 0.11 | 1.1413 | 0.0655 | 0.8133 | 0.0798 | 0.8276
9 | 0.0294 | 0.0545 | 0.07 | 1.2113 | 0.0395 | 0.8529 | 0.0508 | 0.8784
10 | 0.0079 | 0.0159 | 0.05 | 1.2613 | 0.0115 | 0.8644 | 0.0363 | 0.9146
11 | 0.0105 | 0.0205 | 0.03 | 1.2913 | 0.0149 | 0.8793 | 0.0218 | 0.9364
12 | 0.0137 | 0.0257 | 0.02 | 1.3113 | 0.0186 | 0.8979 | 0.0145 | 0.9509
13 | -0.000031 | -0.000043 | 0.02 | 1.3313 | 0.0000 | 0.8979 | 0.0145 | 0.9654
Tail | | | 0.0478 | 1.3791 | | | 0.0346 | 1
Table 3.5 Individual Claims Ratios, Prior Ultimate Claims Ratios, Ultimate Claims,
and Estimated Outstanding Claims Reserves

i | r_i (Incurred) | r_i (Paid) | r̄_i | r*_i | q̂_i | Û_i | R̂_i^BF (Incurred) | R̂_i^BF (Paid)
1 | 0.5319 | 0.6129 | 0.5710 | 0.5710 | 0.7874 | 32299.9191 | 0 | 1118.5948
2 | 0.4737 | 0.5438 | 0.5075 | 0.5075 | 0.6999 | 40279.1124 | 29.2069 | 1979.0648
3 | 0.4581 | 0.5104 | 0.4835 | 0.4835 | 0.6668 | 40634.6177 | 88.3941 | 2585.8263
4 | 0.4375 | 0.4744 | 0.4556 | 0.4556 | 0.6283 | 39604.2625 | 229.7407 | 3381.7862
5 | 0.6536 | 0.7323 | 0.6918 | 0.6918 | 0.9540 | 58440.5745 | 762.7689 | 7109.0110
6 | 0.9787 | 1.0853 | 1.0307 | 1.0307 | 1.4214 | 81346.9434 | 2241.4592 | 14024.4629
7 | 1.3554 | 1.2448 | 1.2989 | 1.2989 | 1.7914 | 163258.6555 | 10417.5331 | 41168.2102
8 | 2.0094 | 2.0028 | 2.0061 | 2.0061 | 2.7666 | 268150.6343 | 41569.6714 | 100850.1239
9 | 1.4647 | 1.4174 | 1.4409 | 1.4409 | 1.9871 | 331893.0957 | 79509.4046 | 160000.5501
10 | 1.0245 | 0.8716 | 0.9450 | 0.9450 | 1.3032 | 193519.8073 | 74977.1006 | 122259.4617
11 | 0.7494 | 0.7373 | 0.7433 | 0.7433 | 1.0251 | 169559.7233 | 102656.5288 | 139350.9305
12 | 0.4395 | 0.2777 | 0.3494 | 0.5 | 0.6895 | 157381.5643 | 126690.1157 | 149426.7861
13 | 0.5819 | 1.4354 | 0.9139 | 0.5 | 0.6895 | 156150.7225 | 148318.1288 | 155171.5582
Total | | | | | | | 587490.0529 | 898426.3667
Table 3.6 Variability Constants, Standard Error of ζ̂*_k, and Standard Error of ẑ*_k
for the Incurred Incremental Claims Data

k | s̃²_k | ŝ²*_k | (s.e.(ζ̂*_k))² | s.e.(ζ̂*_k) | (s.e.(ẑ*_k))² | s.e.(ẑ*_k)
1 | 157.9638 | 157.9638 | 0.000091 | 0.0095 | 0.000091 | 0.0095
2 | 258.7319 | 258.7319 | 0.000164 | 0.0128 | 0.000255 | 0.0160
3 | 241.0382 | 241.0382 | 0.000170 | 0.0130 | 0.000425 | 0.0206
4 | 193.4240 | 193.4240 | 0.000155 | 0.0124 | 0.000580 | 0.0241
5 | 793.3281 | 793.3281 | 0.000751 | 0.0274 | 0.001331 | 0.0365
6 | 677.8112 | 444.7212 | 0.000614 | 0.0248 | 0.001946 | 0.0441
7 | 359.4531 | 570.0033 | 0.001250 | 0.0354 | 0.001655 | 0.0407
8 | 74.5968 | 73.8335 | 0.000252 | 0.0159 | 0.001402 | 0.0374
9 | 80.2661 | 32.8787 | 0.000156 | 0.0125 | 0.001247 | 0.0353
10 | 12.3862 | 25.1074 | 0.000164 | 0.0128 | 0.001082 | 0.0329
11 | 10.7283 | 21.9404 | 0.000194 | 0.0139 | 0.000889 | 0.0298
12 | 35.3697 | 20.2354 | 0.000279 | 0.0167 | 0.000610 | 0.0247
13 | | 19.6970 | 0.000610 | 0.0247 | 0 | 0
Tail | | 19.1729 | 0 | 0 | 0 | 0
Table 3.7 Variability Constants, Standard Error of ζ̂*_k, and Standard Error of ẑ*_k
for the Paid Incremental Claims Data

k | s̃²_k | ŝ²*_k | (s.e.(ζ̂*_k))² | s.e.(ζ̂*_k) | (s.e.(ẑ*_k))² | s.e.(ẑ*_k)
1 | 12.5625 | 12.5625 | 0.000007 | 0.0027 | 0.000007 | 0.0027
2 | 97.2829 | 97.2829 | 0.000062 | 0.0079 | 0.000069 | 0.0083
3 | 80.2233 | 80.2233 | 0.000057 | 0.0075 | 0.000125 | 0.0112
4 | 359.6533 | 359.6533 | 0.000288 | 0.0170 | 0.000413 | 0.0203
5 | 204.5555 | 281.9872 | 0.000267 | 0.0163 | 0.000680 | 0.0261
6 | 111.6348 | 121.1375 | 0.000167 | 0.0129 | 0.000848 | 0.0291
7 | 283.9844 | 171.3740 | 0.000376 | 0.0194 | 0.001224 | 0.0350
8 | 69.3003 | 72.9480 | 0.000249 | 0.0158 | 0.001473 | 0.0384
9 | 36.7752 | 41.6315 | 0.000197 | 0.0140 | 0.001639 | 0.0405
10 | 37.5430 | 31.4504 | 0.000206 | 0.0143 | 0.001434 | 0.0379
11 | 22.3647 | 23.7591 | 0.000210 | 0.0145 | 0.001224 | 0.0350
12 | 19.7605 | 20.6506 | 0.000285 | 0.0169 | 0.000939 | 0.0306
13 | | 20.6506 | 0.000639 | 0.0253 | 0.000300 | 0.0173
Tail | | 30.4780 | 0.0002998 | 0.0173 | 0 | 0
4. Conclusion
The following can be inferred from the discussion. The outstanding claims reserve estimated with the Bornhuetter-Ferguson method is 587,490.0529 for the incurred incremental claims data and 898,426.3667 for the paid incremental claims data; these are the amounts of outstanding claims reserves that the insurance company must provide. The prediction error of the Bornhuetter-Ferguson reserve estimate is 8.51% for the incurred incremental claims data and 4.01% for the paid incremental claims data.
5. References
Mack, Thomas. 2006. Parameter Estimation for Bornhuetter/Ferguson. Casualty Actuarial Society Forum Fall 2006, 141-157.
Mack, Thomas. 2008. The Prediction Error of Bornhuetter/Ferguson. Astin Bulletin, 38, 87-103.
Verrall, R. J. 2004. A Bayesian Generalized Linear Model for The Bornhuetter-Ferguson Method of
Claims Reserving. North American Actuarial Journal, 8, 67-89.
Abstract: Most corporations taking on debt liabilities issue risky coupon bonds with a finite maturity that typically matches the expected life of the assets being financed. For valuing these coupon bonds, we can consider the common stock and coupon bonds as a compound option. A further issue is that bond indenture provisions often include safety covenants that give bond investors the right to reorganize a firm if its value falls below a given barrier. This paper shows how to value coupon bonds based on the first passage time approach. We construct a formula for the probability of default at the maturity date by computing the historical low of firm values. Using Indonesian corporate coupon bond data, we predict the bankruptcy of a firm.
1. Introduction
Credit risk management is one of the most important recent developments in the finance industry. It has been the subject of considerable research interest in the banking and finance communities, and has recently drawn the attention of statistical researchers. Credit risk is the risk induced by credit events such as credit rating changes, restructuring, failure to pay, and bankruptcy. More formally, credit risk is the distribution of financial losses due to unexpected changes in the credit quality of a counterparty in a financial agreement (Giesecke, 2004). Central to credit risk is the default event, which occurs if the debtor is unable to meet its legal obligation according to the debt contract.
Merton (1974) first built a model based on the capital structure of the firm, which became the basis of the structural approach. He assumes that the firm is financed by equity and a zero coupon bond with face value K and maturity date T. In this approach, the company defaults at the bond maturity time T if its asset value falls below the face value of the bond at time T.
Black and Cox (1976) extended the definition of the default event and generalized Merton's method into the first passage approach. In this approach, the firm defaults when the historical low of the firm's asset value falls below some barrier D, so the default event can take place before the maturity date T. This theory still assumes that the corporation issues only one zero coupon bond. Reisz and Perlich (2004) point out that if the barrier is below the bond's face value, the default time definition of the Black and Cox theory no longer reflects economic reality. In their paper, they modified the classic first passage time approach and redefined the default time.
Nowadays, most corporations tend to issue risky coupon bonds. At every coupon date until the final payment, the firm has to pay the coupon, and at the maturity date the bondholder receives the face value of the bond. The firm goes bankrupt when it fails to pay the coupon at a coupon payment date and/or the face value of the bond at the maturity date. Geske (1977) derived formulas for valuing coupon bonds; in a later paper, Geske (1979) suggested that when a company has coupon bonds outstanding, the common stock and coupon bonds can be viewed as a compound option.
In this paper we propose a method unifying the theories above. We want to produce a new credit risk theory that fulfills assumptions found in the real finance industry. We will derive a probability of default
formula for a risky coupon bond with the modified first passage time approach. We construct the formula by computing the historical low of firm values.
2. Theoretical Framework
Consider a firm with market value V_t at time t, financed by equity and a zero coupon bond with face value K and maturity date T. The firm's contractual obligation is to repay the amount K to the bondholder at time T. Merton and Black & Scholes (1973) indicated that most corporate liabilities may be viewed as an option. They derived a formula for valuing a call option and discussed the pricing of a firm's common stock and bonds when the stock is viewed as an option on the value of the firm. Thus, valuing the equity of the firm is identical to valuing a European call option.
The firm is assumed to default at the bond maturity date T if the total asset value of the firm is not sufficient to pay its obligation to the bondholder. Thus the default time τ is a discrete random variable given by
$$\tau = \begin{cases} T & \text{if } V_T < K \\ \infty & \text{if } V_T \ge K \end{cases} \qquad (1)$$
To calculate the probability of default, we assume that the standard model for the evolution of asset prices over time is geometric Brownian motion:
$$dV_t = \mu V_t\,dt + \sigma V_t\,dW_t, \qquad V_0 > 0, \qquad (2)$$
where μ is a drift parameter, σ > 0 is a volatility parameter, and W is a standard Brownian motion. Setting $m = \mu - \tfrac12\sigma^2$, Itô's lemma implies that
$$V_t = V_0 \exp(mt + \sigma W_t). \qquad (3)$$
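A path of (2)-(3) can be sampled by accumulating Brownian increments; the parameter values below are arbitrary illustrations:

```python
import math
import random

def simulate_gbm_path(V0, mu, sigma, T, n_steps, seed=0):
    """Sample V_t = V0 * exp(m*t + sigma*W_t) on a time grid,
    with m = mu - sigma^2/2, as in eq. (3)."""
    rng = random.Random(seed)
    dt = T / n_steps
    m = mu - 0.5 * sigma ** 2
    W, path = 0.0, [V0]
    for step in range(1, n_steps + 1):
        W += rng.gauss(0.0, math.sqrt(dt))   # Brownian increment over dt
        path.append(V0 * math.exp(m * step * dt + sigma * W))
    return path

path = simulate_gbm_path(V0=100.0, mu=0.05, sigma=0.2, T=1.0, n_steps=252)
```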
Since $W_T$ is normally distributed with mean zero and variance $T$, the probability of default is given by
$$P(\tau = T) = P[V_T < K] = P[\sigma W_T < \log L - mT] = \Phi\!\left(\frac{\log L - mT}{\sigma\sqrt{T}}\right), \qquad (4)$$
where $L = K/V_0$ and $\Phi$ is the cumulative standard normal distribution function.
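Equation (4) in code (a sketch; the parameter values in the comment are arbitrary):

```python
from math import log, sqrt
from statistics import NormalDist

def merton_pd(V0, K, mu, sigma, T):
    """Merton probability of default at maturity, eq. (4):
    P(V_T < K) = Phi((log(K/V0) - m*T) / (sigma*sqrt(T))),
    with m = mu - sigma^2/2."""
    m = mu - 0.5 * sigma ** 2
    L = K / V0                       # quasi leverage ratio
    return NormalDist().cdf((log(L) - m * T) / (sigma * sqrt(T)))

# With mu = sigma^2/2 the drift m vanishes, so P(V_T < V0) = 0.5:
# merton_pd(V0=100, K=100, mu=0.02, sigma=0.2, T=1.0) is about 0.5
```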
In Merton's model, the firm can only default at the maturity date T. As noted by Black & Cox (1976), bond indenture provisions often include safety covenants that give the bondholder the right to reorganize a firm if its value falls below a given barrier.
We still use geometric Brownian motion to model the total assets V_t of the firm. Suppose the default barrier B is a constant valued in (0, V_0); then the default time τ is modified to
$$\tau = \inf\{t > 0 : V_t < B\}. \qquad (5)$$
This definition says a default takes place when the assets of the firm fall to some positive level B for the first time. The firm is assumed not to be in default at time t = 0.
So, the probability of default is calculated as
$$P(\tau \le T) = P[M_T < B] = P\!\left[\min_{s\le T}(ms + \sigma W_s) < \log\!\left(\frac{B}{V_0}\right)\right],$$
where $M$ is the historical low of firm values,
$$M_t = \min_{s\le t} V_s.$$
Since the distribution of the historical low of an arithmetic Brownian motion is inverse Gaussian, the probability of default can be calculated explicitly as
$$P(\tau \le T) = \Phi\!\left(\frac{\ln(B/V_0) - mT}{\sigma\sqrt{T}}\right) + \left(\frac{B}{V_0}\right)^{2m/\sigma^2}\Phi\!\left(\frac{\ln(B/V_0) + mT}{\sigma\sqrt{T}}\right). \qquad (6)$$
Figure 2 shows the default event graphically for Black & Cox’s model.
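Equation (6) translates directly into code (a sketch; parameter values in the test are arbitrary):

```python
from math import log, sqrt
from statistics import NormalDist

def first_passage_pd(V0, B, mu, sigma, T):
    """Black & Cox first-passage default probability, eq. (6):
    P(min_{s<=T} V_s < B) for geometric Brownian motion V."""
    m = mu - 0.5 * sigma ** 2
    h = log(B / V0)            # log-distance to the barrier (negative for B < V0)
    st = sigma * sqrt(T)
    Phi = NormalDist().cdf
    return Phi((h - m * T) / st) + (B / V0) ** (2 * m / sigma ** 2) * Phi((h + m * T) / st)
```

For m = 0 this reduces to the reflection-principle result 2Φ(ln(B/V₀)/(σ√T)), which is a quick sanity check on the implementation.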
In practice, the most common form of debt instrument is a coupon bond. In the U.S. and in many other countries, coupon bonds pay coupons every six months and the face value at maturity. Suppose the firm has only common stock and a coupon bond outstanding, where the coupon bond has n interest payments of c dollars each. The firm is assumed to default at a coupon date if its total asset value is not sufficient to pay the coupon to the bondholder, and at the maturity date the firm defaults if total assets are below the face value of the bond. In this case, if the firm defaults on a coupon payment, then all subsequent coupon payments (and the payment of face value) are also defaulted on.
Geske (1979) proposed a theory for valuing risky coupon bonds. When the corporation has coupon bonds outstanding, the common stock can be considered as a compound option (Geske, 1977). A compound option is an option on an option; in other words, the underlying asset is another option (Wee, 2010). For a coupon bond, valuing the equity is identical to valuing a European call option on a call option.
At every coupon date until the final payment, the firm has the option of paying the coupon or forfeiting the firm to the bondholders. The final firm option is to repurchase the claims on the firm from the
bondholders by paying off the principal at maturity. The financing arrangements for making or missing
the interest payments are specified in the indenture conditions of the bond. In Figure 3 we illustrate the
default event of Geske’s model.
3.1 Modified First Passage Time Approach (Reisz & Perlich's Model)
In their paper, Reisz & Perlich (2004) point out that if the barrier is below the face value of the bond, then our earlier definition (5) no longer reflects economic reality: it does not capture the situation where the firm is in default because V_T < K although M_T > B.
They therefore redefine default as the firm value falling below the barrier B < K at any time before maturity, or the firm value falling below the face value K at maturity. Formally, the default time is now given by
$$\tau = \min(\tau_1, \tau_2), \qquad (7)$$
where
τ1 = the maturity time T if assets V_T < K at T,
τ2 = the first passage time of assets to the barrier B.
In other words, the default time is defined as the minimum of the first passage default time (5) and Merton's default time (1). This definition of default is consistent with the payoffs to equity and bonds: even if the firm value never falls below the barrier, the firm defaults if assets are below the bond's face value at maturity. The default event for Reisz & Perlich's model is shown in Figure 4.
Assuming that the firm can neither repurchase shares nor issue new senior debt, the payoffs to the firm's liabilities at debt maturity T are summarized in Table 1 and Table 2.

Table 1. Payoffs at Maturity in the Modified First Passage Time Approach for B ≥ K
State of the firm | Assets | Bond | Equity
No default | M_T > B | K | V_T − K
Default (B > K) | M_T ≤ B | B | 0
Default (B = K) | M_T ≤ B | K | 0
Table 2. Payoffs at Maturity in the Modified First Passage Time Approach for B < K
State of the firm | Assets | Bond | Equity
No default | M_T > B, V_T ≥ K | K | V_T − K
Default | M_T > B, V_T < K | V_T | 0
Default | M_T ≤ B | B | 0
3.2 Valuation of a Coupon Bond with the Modified First Passage Time Approach
In this section we assume the firm has asset value V_t, financed by equity and a single coupon bond with face value K and only one coupon payment, at t_c, during the bond's life.
Suppose the default barrier B is a constant valued in (0, V_0) with c < B < K; then the default time τ is given by
$$\tau = \min(\tau_1, \tau_2, \tau_3, \tau_4, \tau_5), \qquad (8)$$
where
τ1 = the maturity time T if assets V_T < K at T,
τ2 = the first passage time of assets to the barrier B during (t_c, T) = inf{t_c < t < T : V_t < B},
τ3 = the coupon payment date t_c if assets V_{t_c} < B or V_{t_c} < c at time t_c,
τ4 = the first passage time of assets to the barrier B during (0, t_c) = inf{0 < t ≤ t_c : V_t < B},
τ5 = ∞ otherwise.
With the definition above, we can summarize the default time as
$$\tau = \min(\tau_1, \tau_2^{*}), \qquad (9)$$
where
τ2* = the first passage time of assets to the barrier B during (0, T) = inf{0 < t < T : V_t < B}.
The default event is shown in Figure 5.
Figure 5. Default Event for a Coupon Bond with the Modified First Passage Time Approach
We have to check whether this default definition is consistent with the payoffs to investors. We need to consider two scenarios.
1. B ≥ c + K
a. If the firm value never falls below the barrier B over the term of the bond (M_T > B), then at the coupon date the bond investor receives the coupon c, and at the maturity date receives the face value and coupon c + K (with K < V_0). The equity holders receive the remaining V_T − (c + K) at the maturity date.
b. If the firm value falls below the barrier at some point during the bond's term (M_T ≤ B), then the firm defaults. In this case, the firm stops operating, bond investors take over its assets B, and equity investors receive nothing. The bond investor is fully protected: they receive at least the face value and coupon c + K upon default, and the bond is not subject to default risk anymore.
c. If the asset value V_T is less than c + K, the ownership of the firm is transferred to the bondholders, who lose the amount (c + K) − V_T. Equity is worthless because of limited liability.
2. B < c + K
The anomaly above does not occur if we assume B < c + K, so that the bondholder is both exposed to some default risk and compensated for bearing that risk.
a. If the firm value never falls below the barrier B over the term of the bond (M_T > B) and V_T ≥ c + K, then at the coupon date the bond investor receives the coupon c, and at the maturity date receives the face value and coupon c + K (with K < V_0). The equity holders receive the remaining V_T − (c + K) at the maturity date.
b. If M_T > B but V_T < c + K, then the firm defaults, since the remaining assets are not sufficient to pay off the debt in full. Bondholders collect the remaining assets V_T and equity becomes worthless.
c. If M_T ≤ B, then the firm defaults as well. Bond investors receive B < K at default and equity becomes worthless.
To calculate the probability of default for this case, first we define M as the historical low of firm values, that is,
$$M_t = \min_{s\le t} V_s.$$
The probability of default is then
$$P(\tau \le T) = \Phi\!\left(\frac{\ln(K/V_0) - mT}{\sigma\sqrt{T}}\right) + \left(\frac{B}{V_0}\right)^{2m/\sigma^2}\Phi\!\left(\frac{\ln\!\big(B^2/(KV_0)\big) + mT}{\sigma\sqrt{T}}\right). \qquad (10)$$
The probability of default for a coupon bond with the modified first passage time approach is higher than the corresponding probability of the classical approach, equation (6).
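Equation (10) in code, assuming the Reisz & Perlich form of the modified first-passage probability (the parameter values below are arbitrary illustrations):

```python
from math import log, sqrt
from statistics import NormalDist

def modified_fpt_pd(V0, K, B, mu, sigma, T):
    """Modified first-passage default probability, eq. (10), for B < K:
    default if V ever hits the barrier B before T, or if V_T < K at T."""
    m = mu - 0.5 * sigma ** 2
    st = sigma * sqrt(T)
    Phi = NormalDist().cdf
    return (Phi((log(K / V0) - m * T) / st)
            + (B / V0) ** (2 * m / sigma ** 2)
            * Phi((log(B * B / (K * V0)) + m * T) / st))

# The modified default event contains Merton's, so the PD dominates
# the plain maturity-default probability Phi((log(K/V0) - m*T)/st).
pd = modified_fpt_pd(V0=100.0, K=80.0, B=60.0, mu=0.02, sigma=0.2, T=1.0)
```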
In this case study we use data from the Indonesian Bond Market Directory 2011, published by the Indonesia Stock Exchange (IDX) and the Indonesia Bond Pricing Agency (IBPA). We use a bond issued by PT Bank Lampung (BPD Lampung), namely Obligasi II Bank Lampung Tahun 2007, with code number BLAM02 IDA000035208. The profile structure of this bond is given in Table 2, and the total assets of the firm, published by Bank Indonesia, are given in Table 3.
To derive the probability of default of the bond, we construct the formula by computing the historical low of firm values. All computations were done in R. In this study, we use a fixed barrier level of 2,000,000,000,000.
Using formula (10), the probability of default for Obligasi II Bank Lampung Tahun 2007 is 0.00003627191. This probability of default is very small because the outstanding amount of the bond is very low relative to the total asset value: as can be seen from Table 2 and Table 3, the face value of the bond is 300,000,000,000 while the total asset value at the end of 2012 is 4,221,274,000,000. In a normal situation, the total asset value is more than sufficient to pay the principal of the bond.
Acknowledgements
We would like to thank the Hibah Disertasi Doktor research grant from DIKTI in 2013.
References
Black, F. & Cox, J., 1976, Valuing Corporate Securities: Some Effects of Bond Indenture Provisions,
Journal of Finance, 31 (2), 351-367.
Black, F. & Scholes, M., 1973, The Pricing of Options and Corporate Liabilities, Journal of Political Economy, 81, 637-654.
Geske, R., 1977, The Valuation of Corporate Liabilities as Compound Options, Journal of Financial
and Quantitative Analysis, 12, 541-552.
Geske, R., 1979, The Valuation of Compound Options, Journal of Financial Economics, 7, 63-81.
Giesecke, K., 2004, Credit Risk Modeling and Valuation: An Introduction, Credit Risk: Models and
Management, Vol.2, D. Shimko (Ed.), Wiley. New York.
Merton, R., 1974, On The Pricing of Corporate Debt: The Risk Structure of Interest Rates, Journal of Finance, 29 (2), 449-470.
Reisz, A. & Perlich, C., 2004, A Market-Based Framework for Bankruptcy Prediction, Working Paper, Baruch College and New York University.
Wee, L.T., 2010, Compound Option, Teaching Note.
Website of Bank Indonesia (BI), 2013, Data Total Aset Bank. www.bi.go.id. [May 20, 2013]
Website of Bursa Efek Indonesia (BEI), 2012, Indonesian Bond Market Directory 2011, www.idx.co.id
[May 20, 2013]
Website of Indonesia Bond Pricing Agency (IBPA), 2012, Data Obligasi. www.ibpa.co.id. [May 20, 2013]
Abstract: In this paper, we discuss when the subdirect sum of two nonsingular 𝑍-matrices is a nonsingular 𝑀-matrix, and how to obtain its inverse. By checking that all entries of the inverse are nonnegative, it is shown that the matrix is a nonsingular 𝑀-matrix. In particular, for two blocks of lower and upper triangular nonsingular 𝑀-matrices, the 𝑘-subdirect sum of the two matrices is a nonsingular 𝑀-matrix. Properties and theorems covering all of the above cases are given in this paper.
1. Introduction
Let $A$ and $B$ be two square matrices of order $n_1$ and $n_2$, respectively, and let $k$ be an integer such that $1 \le k \le \min(n_1, n_2)$. Let $A$ and $B$ be partitioned into $2\times 2$ blocks as follows:
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \quad B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}, \qquad (1.1)$$
where $A_{22}$ and $B_{11}$ are square matrices of order $k$. The $k$-subdirect sum of $A$ and $B$, denoted $C = A \oplus_k B$, is
$$C = A \oplus_k B = \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix}. \qquad (1.2)$$
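A sketch of constructing (1.2) numerically (the 2×2 matrices below are toy values chosen for illustration):

```python
import numpy as np

def subdirect_sum(A, B, k):
    """k-subdirect sum C = A (+)_k B of (1.2): the trailing k x k block
    of A overlaps (is added to) the leading k x k block of B."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    n1, n2 = A.shape[0], B.shape[0]
    n = n1 + n2 - k
    C = np.zeros((n, n))
    C[:n1, :n1] += A                 # A occupies the top-left corner
    C[n1 - k:, n1 - k:] += B         # B occupies the bottom-right corner
    return C

A = [[2, 1], [2, 2]]
B = [[1, 1], [2, 3]]
# subdirect_sum(A, B, 1) -> [[2, 1, 0], [2, 3, 1], [0, 2, 3]]
```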
In the following result we show that nonsingularity of the matrix $\hat{A}_{22} + \hat{B}_{11}$ is a necessary and sufficient condition for the $k$-subdirect sum $C$ to be nonsingular. The proof is based on using the relation $n = n_1 + n_2 - k$ to properly partition the indicated matrices.
Some definitions, properties and theorems are given in this paper. A nonsingular $Z$-matrix is a nonsingular matrix with positive diagonal entries and nonpositive off-diagonal entries. A nonsingular $M$-matrix is a nonsingular $Z$-matrix whose inverse has nonnegative entries. In this paper, we want to determine, using these definitions, properties and theorems, whether the $k$-subdirect sum of two nonsingular $Z$-matrices is a nonsingular $M$-matrix or not. Furthermore, we give some examples which help illustrate the theoretical results.
Theorem 2.1. Let $A$ and $B$ be nonsingular matrices of order $n_1$ and $n_2$, respectively, and let $k$ be an integer such that $1 \le k \le \min(n_1, n_2)$. Let $A$ and $B$ be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let $C = A \oplus_k B$. Then $C$ is nonsingular if and only if $\hat{H} = \hat{A}_{22} + \hat{B}_{11}$ is nonsingular.
Proof. Let $I_m$ be the identity matrix of order $m$. The theorem follows from the relation
$$\begin{bmatrix} A^{-1} & 0 \\ 0 & I_{n-n_1} \end{bmatrix} C \begin{bmatrix} I_{n-n_2} & 0 \\ 0 & B^{-1} \end{bmatrix} = \begin{bmatrix} \hat{A}_{11} & \hat{A}_{12} & 0 \\ \hat{A}_{21} & \hat{A}_{22} & 0 \\ 0 & 0 & I_{n-n_1} \end{bmatrix}\begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix}\begin{bmatrix} I_{n-n_2} & 0 & 0 \\ 0 & \hat{B}_{11} & \hat{B}_{12} \\ 0 & \hat{B}_{21} & \hat{B}_{22} \end{bmatrix} = \begin{bmatrix} I_{n-n_2} & \hat{A}_{12} & 0 \\ 0 & \hat{H} & \hat{B}_{12} \\ 0 & 0 & I_{n-n_1} \end{bmatrix}. \qquad (1.4)$$
We first consider the $k$-subdirect sum of nonsingular $Z$-matrices. From (1.4) we can explicitly write
$$C^{-1} = \begin{bmatrix} I_{n-n_2} & 0 \\ 0 & B^{-1} \end{bmatrix}\begin{bmatrix} I_{n-n_2} & -\hat{A}_{12}\hat{H}^{-1} & \hat{A}_{12}\hat{H}^{-1}\hat{B}_{12} \\ 0 & \hat{H}^{-1} & -\hat{H}^{-1}\hat{B}_{12} \\ 0 & 0 & I_{n-n_1} \end{bmatrix}\begin{bmatrix} A^{-1} & 0 \\ 0 & I_{n-n_1} \end{bmatrix},$$
from which we can obtain
$$C^{-1} = \begin{bmatrix} \hat{A}_{11}-\hat{A}_{12}\hat{H}^{-1}\hat{A}_{21} & \hat{A}_{12}-\hat{A}_{12}\hat{H}^{-1}\hat{A}_{22} & \hat{A}_{12}\hat{H}^{-1}\hat{B}_{12} \\ \hat{B}_{11}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{11}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{11}\hat{H}^{-1}\hat{B}_{12}+\hat{B}_{12} \\ \hat{B}_{21}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{21}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{21}\hat{H}^{-1}\hat{B}_{12}+\hat{B}_{22} \end{bmatrix}. \qquad (2.1)$$
Example 2.2. Let
$$A = \begin{bmatrix} 2 & 1 \\ 2 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 1 \\ 2 & 3 \end{bmatrix} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}, \quad B^{-1} = \frac{1}{3-2}\begin{bmatrix} 3 & -1 \\ -2 & 1 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ -2 & 1 \end{bmatrix},$$
where $\hat{B}_{11} = 3$, $\hat{B}_{12} = -1$, $\hat{B}_{21} = -2$, $\hat{B}_{22} = 1$, and, since $A^{-1} = \begin{bmatrix} 1 & -\tfrac12 \\ -1 & 1 \end{bmatrix}$ gives $\hat{A}_{22} = 1$, $\hat{H} = \hat{A}_{22} + \hat{B}_{11} = 1 + 3 = 4$ is nonsingular.
$$C = A \oplus_1 B = \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 2 & 2+1 & 1 \\ 0 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 2 & 3 & 1 \\ 0 & 2 & 3 \end{bmatrix},$$
and from here we have
$$\begin{bmatrix} A^{-1} & 0 \\ 0 & I_{n-n_1} \end{bmatrix} C \begin{bmatrix} I_{n-n_2} & 0 \\ 0 & B^{-1} \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac12 & 0 \\ -1 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 2 & 1 & 0 \\ 2 & 3 & 1 \\ 0 & 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac12 & -\tfrac12 \\ 0 & 2 & 1 \\ 0 & 2 & 3 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & -1 \\ 0 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & -\tfrac12 & 0 \\ 0 & 4 & -1 \\ 0 & 0 & 1 \end{bmatrix}.$$
Thus, $C$ is nonsingular.
Theorem 2.3. Let $A$ and $B$ be nonsingular $Z$-matrices of order $n_1$ and $n_2$, respectively, and let $k$ be an integer such that $1 \le k \le \min(n_1, n_2)$. Let $A$ and $B$ be partitioned as in (1.1) and their inverses be partitioned as in (1.3). Let $C = A \oplus_k B$, and let $\hat{H} = \hat{A}_{22} + \hat{B}_{11}$ be nonsingular. Then $C$ is a nonsingular $M$-matrix if and only if every entry of $C^{-1}$ is nonnegative.
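The Theorem 2.3 criterion can be checked numerically (a sketch; the 3×3 matrix used below is the C of Example 2.4):

```python
import numpy as np

def is_nonsingular_m_matrix(C, tol=1e-12):
    """Theorem 2.3 criterion: a nonsingular Z-matrix is a nonsingular
    M-matrix iff every entry of its inverse is nonnegative."""
    C = np.asarray(C, dtype=float)
    off_diag = C - np.diag(np.diag(C))
    if not (np.all(np.diag(C) > 0) and np.all(off_diag <= tol)):
        return False                 # not a Z-matrix
    try:
        C_inv = np.linalg.inv(C)
    except np.linalg.LinAlgError:
        return False                 # singular
    return bool(np.all(C_inv >= -tol))

# C of Example 2.4: a Z-matrix whose inverse has negative entries
print(is_nonsingular_m_matrix([[2, -4, 0], [-1, 5, -3], [0, -2, 1]]))  # False
```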
Example 2.4.
$A = \begin{bmatrix} 2 & -4 \\ -1 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 4 & -3 \\ -2 & 1 \end{bmatrix}$ are nonsingular $Z$-matrices. We have
$$
A^{-1} = -\frac{1}{2}\begin{bmatrix} 1 & 4 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2} & -2 \\ -\tfrac{1}{2} & -1 \end{bmatrix},
\quad\text{where } \hat{A}_{11} = -\tfrac{1}{2},\ \hat{A}_{12} = -2,\ \hat{A}_{21} = -\tfrac{1}{2},\ \hat{A}_{22} = -1,
$$
$$
B^{-1} = -\frac{1}{2}\begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} -\tfrac{1}{2} & -\tfrac{3}{2} \\ -1 & -2 \end{bmatrix},
\quad\text{where } \hat{B}_{11} = -\tfrac{1}{2},\ \hat{B}_{12} = -\tfrac{3}{2},\ \hat{B}_{21} = -1,\ \hat{B}_{22} = -2,
$$
and $\hat{H} = \hat{A}_{22} + \hat{B}_{11} = -1 - \tfrac{1}{2} = -\tfrac{3}{2}$ is nonsingular, so $\hat{H}^{-1} = -\tfrac{2}{3}$.
The $k$-subdirect sum of $A$ and $B$ for $k = 1$ is
$$
C = A \oplus_1 B = \begin{bmatrix} A_{11} & A_{12} & 0 \\ A_{21} & A_{22}+B_{11} & B_{12} \\ 0 & B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} 2 & -4 & 0 \\ -1 & 5 & -3 \\ 0 & -2 & 1 \end{bmatrix},
$$
and from (2.1) we have
$$
C^{-1} = \begin{bmatrix}
\hat{A}_{11} - \hat{A}_{12}\hat{H}^{-1}\hat{A}_{21} & \hat{A}_{12} - \hat{A}_{12}\hat{H}^{-1}\hat{A}_{22} & \hat{A}_{12}\hat{H}^{-1}\hat{B}_{12} \\
\hat{B}_{11}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{11}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{11}\hat{H}^{-1}\hat{B}_{12} + \hat{B}_{12} \\
\hat{B}_{21}\hat{H}^{-1}\hat{A}_{21} & \hat{B}_{21}\hat{H}^{-1}\hat{A}_{22} & -\hat{B}_{21}\hat{H}^{-1}\hat{B}_{12} + \hat{B}_{22}
\end{bmatrix}
= \begin{bmatrix} \tfrac{1}{6} & -\tfrac{2}{3} & -2 \\ -\tfrac{1}{6} & -\tfrac{1}{3} & -1 \\ -\tfrac{1}{3} & -\tfrac{2}{3} & -1 \end{bmatrix}.
$$
We can see from here that not all of the entries of $C^{-1}$ are nonnegative. Thus, $C$ is not a nonsingular $M$-matrix.
Example 2.5.
$A = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 3 & -1 \\ -2 & -1 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 4 & -2 \\ -1 & -1 & 1 \end{bmatrix}$ are nonsingular $Z$-matrices. We have
$$
A^{-1} = \frac{1}{2}\begin{bmatrix} 2 & 0 & 0 \\ 3 & 1 & 1 \\ 7 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ \tfrac{3}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{7}{2} & \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix},
\quad\text{where } \hat{A}_{11} = \begin{bmatrix} 1 \end{bmatrix},\ \hat{A}_{12} = \begin{bmatrix} 0 & 0 \end{bmatrix},\ \hat{A}_{21} = \begin{bmatrix} \tfrac{3}{2} \\ \tfrac{7}{2} \end{bmatrix},\ \hat{A}_{22} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix},
$$
$$
B^{-1} = \frac{1}{2}\begin{bmatrix} 2 & 0 & 0 \\ 4 & 1 & 2 \\ 6 & 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & \tfrac{1}{2} & 1 \\ 3 & \tfrac{1}{2} & 2 \end{bmatrix},
\quad\text{where } \hat{B}_{11} = \begin{bmatrix} 1 & 0 \\ 2 & \tfrac{1}{2} \end{bmatrix},\ \hat{B}_{12} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},\ \hat{B}_{21} = \begin{bmatrix} 3 & \tfrac{1}{2} \end{bmatrix},\ \hat{B}_{22} = \begin{bmatrix} 2 \end{bmatrix}.
$$
Since the entries of $A^{-1}$ and $B^{-1}$ are nonnegative, the matrices $A$ and $B$ are nonsingular $M$-matrices.
$$
\hat{H} = \hat{A}_{22} + \hat{B}_{11} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{3}{2} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 2 & \tfrac{1}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{3}{2} & \tfrac{1}{2} \\ \tfrac{5}{2} & 2 \end{bmatrix},
\qquad
\hat{H}^{-1} = \frac{4}{7}\begin{bmatrix} 2 & -\tfrac{1}{2} \\ -\tfrac{5}{2} & \tfrac{3}{2} \end{bmatrix} = \begin{bmatrix} \tfrac{8}{7} & -\tfrac{2}{7} \\ -\tfrac{10}{7} & \tfrac{6}{7} \end{bmatrix}.
$$
The $k$-subdirect sum of $A$ and $B$ for $k = 2$ is
$$
C = A \oplus_2 B = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 3+1 & -1+0 & 0 \\ -2 & -1-2 & 1+4 & -2 \\ 0 & -1 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 \\ -2 & -3 & 5 & -2 \\ 0 & -1 & -1 & 1 \end{bmatrix},
$$
and from (2.1) we obtain
$$
C^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ \tfrac{5}{7} & \tfrac{3}{7} & \tfrac{1}{7} & \tfrac{2}{7} \\ \tfrac{13}{7} & \tfrac{5}{7} & \tfrac{4}{7} & \tfrac{8}{7} \\ \tfrac{18}{7} & \tfrac{8}{7} & \tfrac{5}{7} & \tfrac{17}{7} \end{bmatrix}.
$$
Since the entries of $C^{-1}$ are nonnegative, $C$ is a nonsingular $M$-matrix.
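The criterion of Theorem 2.3 can be checked mechanically. A small sketch with NumPy, using the matrix $C$ of this example (the helper function name is ours):

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-9):
    """Criterion of Theorem 2.3 for a nonsingular Z-matrix:
    M is a nonsingular M-matrix iff every entry of M^{-1} is nonnegative."""
    return bool(np.all(np.linalg.inv(M) >= -tol))

C = np.array([[ 1.0,  0.0,  0.0,  0.0],
              [-1.0,  4.0, -1.0,  0.0],
              [-2.0, -3.0,  5.0, -2.0],
              [ 0.0, -1.0, -1.0,  1.0]])
print(is_nonsingular_m_matrix(C))   # True: every entry of C^{-1} is >= 0
```

The same helper returns False for the matrix $C$ of Example 2.6 below, where some entries of $C^{-1}$ are negative.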
Example 2.6.
$A = \begin{bmatrix} 2 & 0 & -1 \\ -2 & 1 & -3 \\ 0 & -1 & 1 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ -2 & -4 & 3 \end{bmatrix}$ are nonsingular $Z$-matrices but not nonsingular $M$-matrices.
$$
A^{-1} = -\frac{1}{6}\begin{bmatrix} -2 & 1 & 1 \\ 2 & 2 & 8 \\ 2 & 2 & 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & -\tfrac{1}{6} & -\tfrac{1}{6} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{4}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} \end{bmatrix},
$$
where $\hat{A}_{11} = \begin{bmatrix} \tfrac{1}{3} \end{bmatrix}$, $\hat{A}_{12} = \begin{bmatrix} -\tfrac{1}{6} & -\tfrac{1}{6} \end{bmatrix}$, $\hat{A}_{21} = \begin{bmatrix} -\tfrac{1}{3} \\ -\tfrac{1}{3} \end{bmatrix}$, $\hat{A}_{22} = \begin{bmatrix} -\tfrac{1}{3} & -\tfrac{4}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} \end{bmatrix}$;
$$
B^{-1} = -\frac{1}{3}\begin{bmatrix} 2 & 3 & 1 \\ 5 & 3 & 1 \\ 8 & 6 & 1 \end{bmatrix} = \begin{bmatrix} -\tfrac{2}{3} & -1 & -\tfrac{1}{3} \\ -\tfrac{5}{3} & -1 & -\tfrac{1}{3} \\ -\tfrac{8}{3} & -2 & -\tfrac{1}{3} \end{bmatrix},
$$
where $\hat{B}_{11} = \begin{bmatrix} -\tfrac{2}{3} & -1 \\ -\tfrac{5}{3} & -1 \end{bmatrix}$, $\hat{B}_{12} = \begin{bmatrix} -\tfrac{1}{3} \\ -\tfrac{1}{3} \end{bmatrix}$, $\hat{B}_{21} = \begin{bmatrix} -\tfrac{8}{3} & -2 \end{bmatrix}$, $\hat{B}_{22} = \begin{bmatrix} -\tfrac{1}{3} \end{bmatrix}$.
$$
\hat{H} = \hat{A}_{22} + \hat{B}_{11} = \begin{bmatrix} -\tfrac{1}{3} & -\tfrac{4}{3} \\ -\tfrac{1}{3} & -\tfrac{1}{3} \end{bmatrix} + \begin{bmatrix} -\tfrac{2}{3} & -1 \\ -\tfrac{5}{3} & -1 \end{bmatrix} = \begin{bmatrix} -1 & -\tfrac{7}{3} \\ -2 & -\tfrac{4}{3} \end{bmatrix}
$$
and
$$
\hat{H}^{-1} = -\frac{3}{10}\begin{bmatrix} -\tfrac{4}{3} & \tfrac{7}{3} \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} \tfrac{2}{5} & -\tfrac{7}{10} \\ -\tfrac{3}{5} & \tfrac{3}{10} \end{bmatrix}.
$$
The $k$-subdirect sum of $A$ and $B$ for $k = 2$ is
$$
C = A \oplus_2 B = \begin{bmatrix} 2 & 0 & -1 & 0 \\ -2 & 1+1 & -3-1 & 0 \\ 0 & -1-1 & 1+2 & -1 \\ 0 & -2 & -4 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 0 & -1 & 0 \\ -2 & 2 & -4 & 0 \\ 0 & -2 & 3 & -1 \\ 0 & -2 & -4 & 3 \end{bmatrix},
$$
and from (2.1) we obtain
$$
C^{-1} = \begin{bmatrix}
\tfrac{11}{30} & -\tfrac{2}{15} & -\tfrac{1}{10} & -\tfrac{1}{30} \\
-\tfrac{1}{6} & -\tfrac{1}{6} & -\tfrac{1}{2} & -\tfrac{1}{6} \\
-\tfrac{4}{15} & -\tfrac{4}{15} & -\tfrac{1}{5} & -\tfrac{1}{15} \\
-\tfrac{7}{15} & -\tfrac{7}{15} & -\tfrac{3}{5} & \tfrac{2}{15}
\end{bmatrix}.
$$
Since the entries of $C^{-1}$ are not all nonnegative, $C$ is not a nonsingular $M$-matrix.
In the special case of $A$ and $B$ block lower and upper triangular nonsingular $M$-matrices, respectively, the result of Theorem 2.2 is easy to establish. Let
$$
A = \begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} B_{11} & B_{12} \\ 0 & B_{22} \end{bmatrix} \qquad (2.2)
$$
with $A_{22}$ and $B_{11}$ square matrices of order $k$.

Theorem 2.7. Let $A$ and $B$ be lower and upper block triangular nonsingular $M$-matrices, respectively, partitioned as in (2.2). Then $C = A \oplus_k B$ is a nonsingular $M$-matrix.

In this particular case of block triangular matrices we have $\hat{A}_{12} = 0$, $\hat{B}_{21} = 0$, $\hat{A}_{22} = A_{22}^{-1}$, $\hat{B}_{11} = B_{11}^{-1}$, and $\hat{H} = A_{22}^{-1} + B_{11}^{-1}$, with
$$
A^{-1} = \begin{bmatrix} \hat{A}_{11} & 0 \\ \hat{A}_{21} & \hat{A}_{22} \end{bmatrix}, \qquad B^{-1} = \begin{bmatrix} \hat{B}_{11} & \hat{B}_{12} \\ 0 & \hat{B}_{22} \end{bmatrix}.
$$
For $k = 1$, writing $C = A \oplus_1 B$ with scalar blocks, the cofactors of $C$ are
$$
C_{11} = (-1)^2 (A_{22}+B_{11})B_{22}, \quad C_{21} = (-1)^3 \cdot 0 = 0, \quad C_{31} = (-1)^4 \cdot 0 = 0,
$$
$$
C_{12} = (-1)^3 A_{21}B_{22}, \quad C_{22} = (-1)^4 A_{11}B_{22}, \quad C_{32} = (-1)^5 A_{11}B_{12},
$$
$$
C_{13} = (-1)^4 \cdot 0 = 0, \quad C_{23} = (-1)^5 \cdot 0 = 0, \quad C_{33} = (-1)^6 A_{11}(A_{22}+B_{11}).
$$
When, in addition, $A_{22} = B_{11}$, so that $\hat{H} = 2A_{22}^{-1}$, the inverse takes the form
$$
C^{-1} = \begin{bmatrix}
A_{11}^{-1} & 0 & 0 \\
-\tfrac{1}{2}A_{22}^{-1}A_{21}A_{11}^{-1} & \tfrac{1}{2}A_{22}^{-1} & -\tfrac{1}{2}A_{22}^{-1}B_{12}B_{22}^{-1} \\
0 & 0 & B_{22}^{-1}
\end{bmatrix}. \qquad (2.3)
$$
Example 2.8.
$A = \begin{bmatrix} 2 & 0 \\ -1 & 3 \end{bmatrix}$ and $B = \begin{bmatrix} 3 & -3 \\ 0 & 2 \end{bmatrix}$ are lower and upper block triangular nonsingular $M$-matrices. Then
$$
A^{-1} = \frac{1}{6}\begin{bmatrix} 3 & 0 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & 0 \\ \tfrac{1}{6} & \tfrac{1}{3} \end{bmatrix},
\qquad
B^{-1} = \frac{1}{6}\begin{bmatrix} 2 & 3 \\ 0 & 3 \end{bmatrix} = \begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{2} \\ 0 & \tfrac{1}{2} \end{bmatrix},
$$
where $\hat{A}_{11} = \tfrac{1}{2}$, $\hat{A}_{12} = 0$, $\hat{A}_{21} = \tfrac{1}{6}$, $\hat{A}_{22} = \tfrac{1}{3}$, $\hat{B}_{11} = \tfrac{1}{3}$, $\hat{B}_{12} = \tfrac{1}{2}$, $\hat{B}_{21} = 0$, $\hat{B}_{22} = \tfrac{1}{2}$, and $A_{22} = B_{11}$.
The $k$-subdirect sum of $A$ and $B$ for $k = 1$ is
$$
C = A \oplus_1 B = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 3+3 & -3 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 0 & 0 \\ -1 & 6 & -3 \\ 0 & 0 & 2 \end{bmatrix},
$$
then from (2.3) we obtain
$$
C^{-1} = \begin{bmatrix}
A_{11}^{-1} & 0 & 0 \\
-\tfrac{1}{2}A_{22}^{-1}A_{21}A_{11}^{-1} & \tfrac{1}{2}A_{22}^{-1} & -\tfrac{1}{2}A_{22}^{-1}B_{12}B_{22}^{-1} \\
0 & 0 & B_{22}^{-1}
\end{bmatrix}
= \begin{bmatrix}
\tfrac{1}{2} & 0 & 0 \\
-\tfrac{1}{2}\cdot\tfrac{1}{3}\cdot(-1)\cdot\tfrac{1}{2} & \tfrac{1}{2}\cdot\tfrac{1}{3} & -\tfrac{1}{2}\cdot\tfrac{1}{3}\cdot(-3)\cdot\tfrac{1}{2} \\
0 & 0 & \tfrac{1}{2}
\end{bmatrix}
= \begin{bmatrix} \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{12} & \tfrac{1}{6} & \tfrac{1}{4} \\ 0 & 0 & \tfrac{1}{2} \end{bmatrix},
$$
and therefore $C$ is a nonsingular $M$-matrix, as expected.
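The closed-form inverse and the $M$-matrix conclusion of this example can be verified numerically. A minimal sketch in Python with NumPy (scalar blocks, $k = 1$; the variable names are ours):

```python
import numpy as np

# Blocks of this example: A lower and B upper triangular, with A22 = B11 = 3.
A11, A21, A22 = 2.0, -1.0, 3.0
B11, B12, B22 = 3.0, -3.0, 2.0

C = np.array([[A11, 0.0,       0.0],
              [A21, A22 + B11, B12],
              [0.0, 0.0,       B22]])

# C^{-1} according to formula (2.3), which applies here because A22 = B11.
C_inv = np.array([[1 / A11,                  0.0,       0.0],
                  [-0.5 * A21 / (A22 * A11), 0.5 / A22, -0.5 * B12 / (A22 * B22)],
                  [0.0,                      0.0,       1 / B22]])

print(np.allclose(np.linalg.inv(C), C_inv))   # True: (2.3) matches the direct inverse
print(bool(np.all(np.linalg.inv(C) >= 0)))    # True: C is a nonsingular M-matrix
```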
3. Conclusion
For $A$ and $B$ lower and upper block triangular nonsingular $M$-matrices partitioned as in (2.2), we have $\hat{B}_{11} = B_{11}^{-1}$, $\hat{B}_{12} = -B_{11}^{-1}B_{12}B_{22}^{-1}$, $\hat{B}_{21} = 0$, $\hat{B}_{22} = B_{22}^{-1}$, and, when $A_{22} = B_{11}$, $C^{-1}$ is obtained from
$$
C^{-1} = \begin{bmatrix}
A_{11}^{-1} & 0 & 0 \\
-\tfrac{1}{2}A_{22}^{-1}A_{21}A_{11}^{-1} & \tfrac{1}{2}A_{22}^{-1} & -\tfrac{1}{2}A_{22}^{-1}B_{12}B_{22}^{-1} \\
0 & 0 & B_{22}^{-1}
\end{bmatrix}.
$$
Acknowledgment
We would like to thank all the people who helped this paper and Department of Mathematics who made
this seminar.
References
Ayres, Frank, Jr. (1974). Matriks. Translated by I Nyoman Susila (1994). Jakarta: Erlangga.
Bru, Rafael, Pedroche, Francisco, and Szyld, Daniel B. (2005). Subdirect sums of nonsingular M-matrices and of their inverses. Electronic Journal of Linear Algebra, 13, 162-174. ISSN 1081-3810. Retrieved June 1, 2013, from http://hermite.cii.fc.ul.pt/iic/ela/ela-articles/13.html.
Fallat, S. M. and Johnson, C. R. (1999). Sub-direct sums and positivity classes of matrices. Linear Algebra and its Applications, 288:149-173.
Abstract: The quotient group, as one of the subjects in abstract algebra, is often perceived as difficult by most undergraduate students. Therefore, we need a method that helps students understand the concept of the quotient group. One alternative is to use the GAP (Groups, Algorithms, and Programming) software as an instrument in learning the quotient group. GAP can make the presentation of the quotient group concept more attractive. The use of GAP is expected to help students gain a better understanding of the quotient group concept.
1. Introduction
The quotient group is one of the materials studied in the abstract algebra course at the undergraduate level. Teachers often encounter students' difficulty in accepting and understanding the material. Most students have difficulty understanding the concept of sets whose members are themselves sets. Orit Hazzan, in her paper (1999), also mentioned that many abstract algebra teachers reported students' difficulties in understanding the material. In 1994, Dubinsky, Dautermann, Leron, and Zazkis pointed out that the major difficulties in understanding group theory appear to begin with the concepts leading up to Lagrange's Theorem and quotient groups: cosets, coset multiplication, and normality. Therefore, this paper emphasizes the teaching of the quotient group.
Various attempts have been made to assist students in understanding the material, for example, conducting tutorials and explaining quotient group materials in detail, accompanied by more concrete examples. Many researchers have conducted studies to develop materials for teaching abstract algebra. Brown (1990), Kiltinen and Mansfield (1990), Czerwinski (1994), and Leganza (1995) all provided examples of specific abstract algebra tasks for students and then examined the responses given by the students. Dubinsky, Dautermann, Leron, and Zazkis, in 1994, conducted a study on the development of learning some topics in abstract algebra, including cosets, normality, and quotient groups. In 1997, Asiala, Dubinsky, Mathews, and Morics conducted research concentrating on developing students' understanding of coset, normality, and quotient group materials.
Some researchers have used programming languages for teaching abstract algebra materials. For example, in 1976 Gallian used a computer program written in the Fortran programming language to investigate finite groups (Gallian, 1976). Other researchers used "Exploring Small Groups" (Geissinger, 1989) and "Cayley" (O'Bryan & Sherman, 1992). Some of them used software packages that do not specialize in computation in discrete abstract algebra, such as Matlab (Mackiw, 1996). In contrast, to help teach abstract algebra, Dubinsky and Leron (1994) used a software package that does specialize in computation in discrete abstract algebra, namely ISETL. However, over time, the use of ISETL has come to feel less effective, because the program has many limitations in terms of its functions and library. In addition, this program was designed specifically for teaching, so it cannot be used for research purposes. When we teach abstract algebra we should bear in mind that we educate undergraduate students who may become graduate students in the future, so we had better provide them with tools they can use to do research later on.
In this paper, to address the same problem, the GAP software is used as a tool in teaching the concept of the quotient group to undergraduate students, and the role of GAP in deepening students' understanding of the material is reviewed. GAP is a software package used for computation in discrete abstract algebra (The GAP Group, 2013). Compared with ISETL, GAP has many advantages. Besides being usable as a tool in teaching abstract algebra materials, it can also
be used for research purposes. Another advantage of GAP is that this software is still being developed today.
2. About GAP
GAP stands for Groups, Algorithms, and Programming. GAP is a free, open, and extensible software
package which is used for computation in abstract algebra, with particular emphasis on Computational
Group Theory. This software is used in research and teaching for studying groups and their
representations, rings, vector spaces, algebras, combinatorial structures, and more (The GAP Group,
2013).
GAP was first developed in 1985 at Lehrstuhl D für Mathematik, RWTH Aachen, Germany, by Joachim Neubüser, Johannes Meier, Alice Niemeyer, Werner Nickel, and Martin Schönert. The first version of GAP released to the public, in 1988, was version 2.4. In 1997 the coordination of GAP development moved to St Andrews, Scotland. GAP version 4.1, released in July 1999, was a complete internal redesign and an almost complete rewrite of the system. In 2008, GAP received the ACM SIGSAM Richard Dimick Jenks Memorial Prize for excellence in software engineering applied to computer algebra. GAP is still being developed at the University of St Andrews, Scotland (The GAP Group, 2013). The current version of GAP is 4.6.5, released on 20 July 2013. Figure 1 shows the user interface of GAP version 4.6.5.
Alexander Hulpke developed a GAP installer for version 4.4.12. The installer installs GAP and GGAP, a graphical user interface for the system. Figure 2 shows the GGAP user interface. Although GGAP looks better in terms of appearance, it still uses GAP version 4.4, and consequently has some drawbacks. There are many changes between GAP versions 4.4 and 4.6.5: GAP 4.6.5 has more packages and improved functionality, and some bugs found in 4.4 that could lead to incorrect results have been fixed in 4.6.5. Therefore, although we can still use GGAP to execute some specific commands, the use of GAP version 4.6.5 is recommended.
GAP has more than 100 packages that provide algorithms, methods, and libraries. From a programming standpoint, the software has many functions and operations: GAP currently has more than 800 built-in functions for studying topics in algebra, so it can be used to produce many examples, from the simple to the complex, in a relatively short time compared to a manual search. There are at least five ways GAP can be a useful educational tool: GAP can be used as a fancy calculator, as a way to provide large or complicated examples, as a way for students to write simple computer algorithms, as a way of producing large amounts of data from which students can formulate a conjecture, and as a means for students to work in collaboration (Gallian, J.A., 2010).
GAP is an interactive system based on a "read-eval-print" loop: the system takes in user input given in text form, evaluates this input (which typically performs the calculations), and then prints the result of the calculation (Hulpke, A., 2011). The interactive nature of GAP allows the user to write an expression or command and see its value immediately. The user can define a function and apply it to arguments to see how the function works (The GAP Group, 2006).
When students have sufficient knowledge of groups and subgroups, they can be given an explanation of the right and left relations in group theory. The rules given for these relations lead to a necessary and sufficient condition for a subset to be a subgroup. Both the right and the left relation are equivalence relations. Under these relations, the group is partitioned into equivalence classes called cosets. In particular, the left relation results in the formation of the left cosets, and the right relation generates the right cosets.
Figure 3 gives an overview of the working steps that students can follow to find cosets. First, in the GAP worksheet, the students are asked to define a group and its subgroup. After that, using GAP, the students are assigned to find all the right and left cosets based on the understanding they have gained. Students can also view and compare the definitions of right and left cosets.
The teacher can give some examples to help the students understand how to use GAP to find cosets. For example, first define G as the group generated by the permutation (1 2 3 4) and use a GAP command to print all of its elements. Then, in a similar way, define H as the subgroup of G generated by the permutation (1 3)(2 4) and print all of its elements. Now, find all left and right cosets of H in G one at a time. After that, the teacher can give the students another coset-finding example.
By using GAP, students can be trained to find cosets in a more pleasant way, because they can directly see the explicit form of all the cosets they are looking for. This makes it easier for them to understand the concept of a coset as previously presented.
Once they are able to find the cosets one by one, the teacher can raise the question of what to do if we want to produce all of these cosets in just one GAP command line. This question is raised with the intention that the students not only understand the concept of cosets theoretically, but can also translate the concepts they have acquired into a computer algorithm. This exercises their creativity in learning mathematics.
The next thing to do is to ask them to observe all the cosets they obtained. The question that can be raised is: are all the cosets they obtained different from each other, or do some cosets have something in common? There are several ways to answer these questions. One of them is to compare the cosets manually, that is, to check whether each right coset is the same as a left coset. In this approach, the students first compare the cosets they obtained and then write down their conclusions. From this, the teacher can raise the question of whether this is still an effective method for cosets with many elements. The answer to that question can suggest another way to check whether the right cosets coincide with the left cosets. This, too, exercises their creativity, because to answer the question the students must translate the formal definition they have learned into a computer algorithm. Figure 4 shows another way of checking whether every right coset is a left coset.
Once the students understand how to find the right and left cosets and to check that the right and left cosets coincide, the teacher can introduce normal subgroups. A normal subgroup is a subgroup whose right and left cosets coincide. The students can then understand the concept of a normal subgroup more easily, because they have indirectly applied its definition in GAP.
From the exercises above, the students already know what is meant by a coset, left and right cosets, and a normal subgroup. Thus, they have enough background to face the quotient group material.
To introduce the concept of the quotient group G/H, which is the set of cosets Hg (or gH), the teacher shows the necessary and sufficient condition on the subgroup H for the operation on G/H to be well defined. The necessary and sufficient condition is that H is a normal subgroup of G (its right cosets coincide with its left cosets).
After introducing the concept of the quotient group, the students are shown some examples of quotient groups. However, sometimes students still cannot understand very well how to find a quotient group. Therefore, after working through such examples manually, the students are assigned to work in the GAP worksheet. By using GAP, students can see and understand the concept of the quotient group more easily. Figure 5 shows how to obtain a quotient group and its elements using GAP.
For example, define the symmetric group S4. Then, using GAP commands, find all of its elements. After that, define N, the subgroup of S4 generated by (1 2)(3 4) and (1 3)(2 4), and find all elements of N using GAP commands. Check whether N is a normal subgroup of S4. If N is a normal subgroup of S4, then print the quotient group S4/N; otherwise, find another subgroup of S4 that is normal in S4.
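As a cross-check of what GAP returns for this exercise, the same quotient can be built in a few lines of plain Python (our own sketch, with permutations written 0-based):

```python
from itertools import permutations

def compose(p, q):
    """(p*q)(i) = p(q(i)); permutations as tuples on {0,1,2,3}."""
    return tuple(p[q[i]] for i in range(4))

S4 = set(permutations(range(4)))   # the symmetric group on 4 points, order 24
# N = <(1 2)(3 4), (1 3)(2 4)>: the Klein four-group, written 0-based
N = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}

# N is normal in S4: gN == Ng for every g in S4
is_normal = all({compose(g, n) for n in N} == {compose(n, g) for n in N} for g in S4)

# The quotient group S4/N is the set of cosets; |S4/N| = 24/4 = 6
S4_mod_N = {frozenset(compose(g, n) for n in N) for g in S4}
print(is_normal, len(S4_mod_N))    # True 6
```

Seeing the six cosets listed explicitly, here or in GAP, is exactly what helps students grasp that the elements of S4/N are sets of permutations.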
Based on the explanation above, it can be concluded that by using GAP the teacher can introduce the concept of the quotient group to the students in an enjoyable way. By using GAP, the students are also trained to improve their creativity, particularly in mathematics, because they are trained to express the algebraic concepts they have learned in the form of programming algorithms. They can also absorb a new concept more easily, because they can directly see its concrete form. The use of GAP in learning abstract algebra is expected to provide motivation to learn abstract algebra in a way that is not monotonous, which in turn can increase students' understanding of the course.
4. Conclusion
Abstract algebra is one of the subjects that most students find difficult. Dubinsky et al. found that the great difficulties encountered by students start with the concepts leading up to the quotient group. Therefore, we need an innovative method for learning the quotient group concept. One option is to use GAP as an instrument in learning quotient groups. The use of GAP in learning the concept of the quotient group is expected to reduce the abstractness of the concept, thus helping students to understand it.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Asiala, M., Dubinsky, E., Mathews, D. M., Morics, S. and Oktaç, A. (1997). Development of Students’
Understanding of Cosets, Normality, and Quotient groups. Journal of Mathematical Behavior, 16(3), 241–
309.
Dubinsky, Ed & Leron, Uri. (1994). Learning Abstract Algebra with ISETL. New York: Springer-Verlag.
Gallian, J.A. (1976). Computers in Group Theory. Mathematics Magazines, 49, 69-73.
Gallian, J. A. (2010), Abstract Algebra with GAP for Contemporary Abstract Algebra 7th edition. Brooks/Cole,
Cengage Learning. Boston.
Geissinger, Ladnor. (1989). Exploring Small Groups (Ver1.2B). San Diego: Harcourt Brace Jovanovich.
Hazzan, O. (1999). Reducing Abstraction Level When Learning Abstract Algebra Concepts. Educational Studies in Mathematics, 40(1), 71-90.
Hulpke, A. (2011). Abstract Algebra in GAP. The Creative Commons Attribution-Noncommercial-Share Alike
3.0. United States, California.
Mackiw, George. (1996). Computing in abstract algebra. The College Mathematics Journal, 27, 136-142.
O’Bryan, John & Sherman, Gary. (1992). Undergraduates, CAYLEY, and mathematics. PRIMUS, 2, 289-308.
Rainbolt, J.G. (2002). Teaching Abstract Algebra with GAP. Saint Louis.
The GAP Group. (2013). GAP – Groups, Algorithms, and Programming, Version 4.6.5. http://www.gap-
system.org.
Abstract: Inflation is an economic ill that cannot be ignored, because it can bring economic instability, slow economic growth, and increased unemployment. Inflation is usually a target of government policy. Failures or shocks cause price fluctuations in the domestic market and end with inflation in the economy (Baasir, 2003:265). Several factors cause inflation, such as the money supply, the fuel price, the exchange rate, and the BI rate. Because inflation fluctuates, controlling it in order to maintain economic stability is hard; identifying the most influential factor of inflation is one method of controlling it. The theory of inflation and Keynesian theory are used to analyze the inflation factors. The error correction model method is used to find the most influential factor of inflation; that factor is the fuel price.
1. Introduction
Inflation is a general and continuous increase in prices associated with the market mechanism, and it can be caused by various factors. The process of a continuous decrease in the value of the currency is also called inflation. Inflation is an economic disease that cannot be ignored, because it can cause economic instability, slow economic growth, and ever-rising unemployment.
Not infrequently, inflation becomes a target of government policy. Failures or shocks in the country lead to price fluctuations in the domestic market and end up with inflation in the economy (Baasir, 2003:265).
The fluctuation of the inflation rate in Indonesia, together with the variety of factors that affect it, makes inflation more difficult to control, so to control it the government must know the factors that form inflation. Inflation in Indonesia is not only a short-term phenomenon, as in the quantity theory and the Keynesian theory of inflation, but also a long-term phenomenon (Baasir, 2003:267).
Because inflation fluctuates, controlling it to maintain stability in the economy is very difficult. Therefore, an attempt to keep it stable is important. One way to control inflation is to look at the factors with the most influence on it. That requires further analysis of the causes of inflation, by studying which factors affect inflation in Indonesia and the influence of those factors in the long term.
This paper examines the relationship between inflation in the long term and the factors that cause it: the money supply, national income, the exchange rate, and the interest rate. In this way, the most influential factor of inflation can be identified.
2. Methodology
This paper uses monthly time series data from 2005 to 2012. The data used are secondary data obtained from institutions and agencies, among others Bank Indonesia (BI) and the Central Statistics Agency (BPS). The data used are:
1. Inflation data for Indonesia, 2005-2012.
2. Money supply (M2) data for Indonesia, 2005-2012.
The ECM econometric method is used in the analysis of time series data. ECM involves the use of an econometric concept called cointegration. The ECM method is used to look at the short-term dynamic movement, so that the short-term equilibrium can be seen, while cointegration is used to look at the long-term equilibrium. Before discussing the ECM, we first discuss the concept of stationarity. For the stationarity test on the data, unit root tests are performed to see whether the time series data used are stationary or not. The stationarity test in this study uses the Augmented Dickey-Fuller (ADF) test, which compares the ADF test value with the MacKinnon critical values at 1%, 5%, and 10%, based on the following equation (Gujarati, 2003:817):
$$ \Delta Z_t = \alpha_0 + \alpha_1 T + \delta Z_{t-1} + \sum_{i=1}^{m} \beta_i \, \Delta Z_{t-i} + \varepsilon_t \qquad (1) $$
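The regression behind the ADF statistic can be sketched as follows. This is an illustrative Python/NumPy implementation on synthetic series, with a constant and no trend term; a real application would compare the statistic against the MacKinnon critical values, as EViews does:

```python
import numpy as np

def adf_t_stat(z, m=1):
    """t-statistic of delta in dz_t = a0 + delta*z_{t-1} + sum_i b_i*dz_{t-i} + e_t."""
    dz = np.diff(z)
    y = dz[m:]
    cols = [np.ones_like(y), z[m:-1]]                # constant, lagged level
    cols += [dz[m - i:-i] for i in range(1, m + 1)]  # lagged differences
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se        # compare against MacKinnon critical values

rng = np.random.default_rng(1)
e = rng.normal(size=500)
random_walk = np.cumsum(e)     # has a unit root (delta = 0)
stationary = np.zeros(500)     # AR(1) with coefficient 0.5: no unit root
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + e[t]

# The stationary series yields a strongly negative statistic; the random walk does not.
print(adf_t_stat(stationary) < adf_t_stat(random_walk))   # True
```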
The cointegration test was popularized by Engle and Granger (1987) (Gujarati, 2009). The cointegration approach is closely related to testing the long-term equilibrium relationship between economic variables as required by economic theory. The cointegration approach can also be seen as a test of the theory, and it is an important part of the formulation and estimation of a dynamic model (Engle and Granger, 1987).
In the concept of cointegration, two or more non-stationary time series variables are cointegrated if a linear combination of them moves in line over time, even though each variable itself may be non-stationary. When the time series variables are cointegrated, there is a stable relationship in the long run. If two non-stationary series Xt and Zt are cointegrated, then there is a special representation as follows:
Zt = β0 + β1 Xt + εt (7)
The hypotheses used to test cointegration based on equation (7) are as follows:
H0: δ = 0, meaning that the residual time series contains a unit root (the variables are not cointegrated)
H1: δ ≠ 0, meaning that the residual time series does not contain a unit root (the variables are cointegrated)
H0 is rejected when the unit root test on the residuals indicates stationarity, so the variables are cointegrated; H0 is accepted when the residual series contains a unit root.
In the long term, a time series model can be shown to be in cointegrating regression or equilibrium (stable), but in the short term the model may experience disequilibrium caused by the error term (εt). Adjustment for the short-term deviation is done by inserting an error correction term derived from the residual of the long-term equation. Correcting the short-term imbalance toward the long-term equilibrium is called the Error Correction Mechanism.
The ECM model of the relationship between the independent variable (X) and the dependent variable (Y) has the form:
$$ \Delta Y_t = a_0 + a_1 \, \Delta X_t + a_2 \, \varepsilon_{t-1} + e_t \qquad (10) $$
Here εt-1 is the cointegration error at lag 1. When εt-1 is not zero, the model is out of equilibrium. If εt-1 is positive, a2 εt-1 is negative, causing ΔYt to be negative so that Yt falls back to correct the equilibrium error; whereas if εt-1 is negative, a2 εt-1 is positive, causing ΔYt to be positive so that Yt rises in period t to correct the equilibrium error. The absolute value of a2 describes how quickly the equilibrium value is reached again.
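The two-step procedure can be sketched on synthetic data. The following is an illustrative Python/NumPy example, not the paper's inflation data; the variable names and parameter values (2.0, 0.8) are made up for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic cointegrated pair: X is a random walk (I(1)); Z follows it,
# so Z - b0 - b1*X is stationary even though Z and X individually are not.
X = np.cumsum(rng.normal(size=n))
Z = 2.0 + 0.8 * X + rng.normal(scale=0.5, size=n)

# Step 1 (Engle-Granger): long-run regression Z_t = b0 + b1*X_t + eps_t
b0, b1 = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Z, rcond=None)[0]
eps = Z - (b0 + b1 * X)

# Step 2: short-run ECM dZ_t = a0 + a1*dX_t + a2*eps_{t-1} + e_t;
# a2 should be negative: deviations from equilibrium are corrected.
dZ, dX, lag_eps = np.diff(Z), np.diff(X), eps[:-1]
a0, a1, a2 = np.linalg.lstsq(np.column_stack([np.ones(n - 1), dX, lag_eps]),
                             dZ, rcond=None)[0]
print(b1, a2 < 0)   # b1 close to the true 0.8; a2 negative
```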
One important step in estimating a model is to test whether the estimated model deserves to be used or not. This feasibility testing consists of diagnostic tests, which test for serial correlation between the residuals at some lags. In this study, the diagnostic test used is the Portmanteau test, with the following test statistic (Anastia, 2012):
$$ Q = n(n+2) \sum_{k=1}^{m} \frac{\hat{r}_k^2}{n-k} \qquad (11) $$
Stationarity testing of the time series data for the variables inflation, money supply, crude fuel price, U.S. dollar exchange rate, and interest rate in this study uses graphs and the Augmented Dickey-Fuller (ADF) test in equation (1), with the help of the software EViews 6. The hypotheses used are:
H0: ADFtest > MacKinnon critical value (there is a unit root at level)
H1: ADFtest < MacKinnon critical value (there is no unit root at level)
Based on Table 1, ADFtest > MacKinnon critical value only for the money supply variable, so the money supply variable is not stationary at level at the 1%, 5%, and 10% significance levels. Since ADFtest < MacKinnon critical value for the inflation, fuel price, U.S. dollar exchange rate, and interest rate variables, those variables are already stationary at level at the 1%, 5%, and 10% significance levels.
Because the money supply variable is not stationary at level, the data are differenced and re-tested using the Augmented Dickey-Fuller (ADF) test in equation (1), with the help of EViews 6, at first difference. The hypotheses used are:
H0: ADFtest > MacKinnon critical value (there is a unit root at first difference)
H1: ADFtest < MacKinnon critical value (there is no unit root at first difference)
Based on Table 2, ADFtest < MacKinnon critical value for the money supply variable, so the money supply variable is stationary at first difference.
When variables are not stationary at level but stationary at first difference, cointegration is likely to occur, which means there is a long-term relationship between the variables. To find out whether the variables are indeed cointegrated, they are tested with the Augmented Engle-Granger test using equation (7), with the help of EViews 6, which yields the long-term cointegration model.
Table 3 shows that, under the null hypothesis, ADFtest < MacKinnon critical value at the 1%, 5%, and 10% significance levels. It can be concluded that the inflation, money supply, fuel price, U.S. dollar exchange rate, and interest rate variables are cointegrated.
The error correction model proposed by Engle and Granger requires two stages, and is therefore called EG two-step. The first stage calculates the residual values from the cointegrating regression results in Table 3. The second stage is a regression analysis that includes the residuals from the first step. The result of the first stage is the residual series from the cointegration.
In Table 4 it can be seen that some of the variables are not significant. Therefore, the non-significant variables are removed and the model is re-estimated, starting from the least significant variable (the largest prob value greater than the significance level α = 5%), which is the constant, followed by the variables d(b3), d(b1), and d(b4). The results of re-estimation after eliminating the constant, with the help of EViews 6, are given in Table 5.
In Table 5 the variables d(b1), d(b3), and d(b4) are not significant, so the model is re-estimated by eliminating d(b3), the least significant variable (the largest prob value). The result of re-estimation after eliminating d(b3), with the help of EViews 6, is given in Table 6.
In Table 6 the variables d(b1) and d(b4) are not significant, so the model is re-estimated by eliminating d(b4), the least significant variable (the largest prob value). The result of re-estimation after eliminating d(b4), with the help of EViews 6, is given in Table 7.
In Table 7 the variable d(b1) is not significant, so the model is re-estimated after eliminating d(b1). The result of re-estimation without d(b1), obtained with EViews 6, is given in Table 8.
The hypotheses of the heteroscedasticity test are:
H0: the residuals are homoscedastic
H1: the residuals are heteroscedastic
If Q > χ²_p then reject H0; if Q ≤ χ²_p then accept H0.
The results of the diagnostic test with the help of software EViews 6 are given in Table 9.
The test equation regresses RESID^2 by least squares over the sample 2005M08-2011M12 (77 included observations).
Since the Obs*R-squared value of 0.112236 is smaller than 9.48773, the critical value of the chi-square distribution (χ²) at α = 5%, it can be concluded that the estimated model is homoscedastic. Another way is to look at the chi-square probability value: the probability of 0.9903 means that heteroscedasticity does not occur at the 1%, 5%, or 10% significance level. The larger the probability value, the stronger the evidence that heteroscedasticity is absent.
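The comparison above can be sketched generically: compute Obs*R-squared from an auxiliary regression of the squared residuals on the regressors and compare it with the chi-square critical value (9.48773 is the χ² critical value with 4 degrees of freedom at α = 5%). The data below are illustrative, not the paper's:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, k = 77, 4                                   # 77 observations, 4 regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
e2 = rng.normal(size=n) ** 2                   # squared residuals (homoscedastic here)

# Auxiliary regression: RESID^2 on the regressors, then R^2 of the fit.
b, *_ = np.linalg.lstsq(X, e2, rcond=None)
fitted = X @ b
r2 = 1 - np.sum((e2 - fitted) ** 2) / np.sum((e2 - e2.mean()) ** 2)

obs_r2 = n * r2                                # Obs*R-squared statistic
crit = chi2.ppf(0.95, df=k)                    # critical value, about 9.48773
p_value = chi2.sf(obs_r2, df=k)
# Fail to reject H0 (homoscedasticity) when obs_r2 < crit.
```

A large p-value, as in the paper's 0.9903, indicates no evidence of heteroscedasticity.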
The model above explains that if the inflation variable d(Y) increases by 1%, the crude fuel price variable d(X2) will increase by 0.799148%, and the error-correction residual at lag 1 has coefficient 0.770966, which means that short-term equilibrium will be reached.
4. Conclusion
The conclusions of this study are as follows. For the variable d(X2): if inflation increases by 1%, the raw fuel price will increase by 0.799148% in both the short term and the long term. For the variables d(X1), d(X3), and d(X4): if inflation increases by 1%, the money supply, the U.S. dollar exchange rate, and the interest rate have an effect in the long term but no effect in the short term. The error-correction residual at lag 1 has coefficient 0.77097. Because the variables d(b1), d(b3), and d(b4) were removed, the money supply, the U.S. dollar exchange rate, and the interest rate are not significant in the model. Only the raw fuel price is significant in the model, so the raw fuel price has the greatest influence on inflation.
5. References
Agustina, R. 2012. Analisis Hubungan Kausalitas dan Keseimbangan Jangka Panjang Pertumbuhan Penduduk dan Pertumbuhan Ekonomi Jawa Barat Menggunakan Pendekatan Model Vector Autoregressive (VAR). Skripsi Tidak Dipublikasikan. Bandung: Jurusan Matematika, Fakultas Matematika dan Ilmu Pengetahuan Alam, Universitas Padjadjaran.
Anastia, J. N. 2012. Perbandingan Tiga Uji Statistik Dalam Verifikasi Model Runtun Waktu. Skripsi Tidak Diterbitkan. Bandung: Jurusan Matematika, Universitas Pendidikan Indonesia.
Cryer, J.D. 1986. Time Series Analysis. Boston: PWS-KENT Publishing Company.
Gujarati, D. 2003. Basic Econometrics. Second Edition. New York: McGraw-Hill.
Rosadi, D. 2012. Ekonometrika & Analisis Runtun Waktu Terapan dengan EViews. Yogyakarta: Andi.
Wei, W.W.S. 2006. Time Series Analysis: Univariate and Multivariate Methods. Second Edition. USA: Addison-Wesley Publishing Company.
Abstract: Network flow models generally describe problems under the assumption that a single commodity is sent through a network. Sometimes, however, a network can carry different types of commodities. The multi-commodity problem aims to minimize the total cost when different types of goods are shipped through the same network; the commodities can be distinguished by their physical characteristics or only by certain attributes.
This paper focuses on network flow and integer programming models for two commodities.
Keywords: network, integer programming, commodities
1. Introduction
Network problems often arise in transportation, electricity, telephone, and communication systems. Networks can also be used in production, distribution, project planning, layout planning, resource management, and financial planning. One network optimization model is the minimum cost flow problem, which concerns flow through a network whose arc capacities are limited, like the shortest path problem, which takes into account the cost or shortest distance along each arc.
a. Networks
A network is defined as a collection of points (nodes) and a collection of lines (arcs) joining these points. There is normally some flow along these lines, going from one point (node) to another.
(Figure: an arc between nodes i and j carrying a flow of xij = 100 passengers.)
If the flow through an arc is allowed in only one direction, the arc is said to be a directed arc. Directed arcs are graphically represented by arrows pointing in the direction of the flow.
Figure 1.3 Directed flow
When the flow on an arc (between two nodes) can move in either direction, it is called an undirected arc. Undirected arcs are graphically represented by a single line (without arrows) connecting the two nodes.
d. Arc capacity
Arc capacity is the maximum amount of flow on an arc. Examples include restrictions on the number of flights between two cities.
e. Supply Nodes
Supply nodes are nodes for which the amount of flow leaving them is greater than the amount of flow coming to them, i.e. nodes with positive net flow.
f. Demand Nodes
Demand nodes are nodes with negative net flow, i.e. inflow greater than outflow.
g. Transshipment Nodes
Transshipment nodes are nodes with the same amount of flow arriving and leaving or nodes with
zero net flow.
h. Path
A path is a sequence of distinct arcs that connects two nodes.
Figure 1.7 A network with 3 paths from A to G
i. Cycle
A cycle is a sequence of directed arcs that begins and ends at the same node.
j. Connected Network
A connected network is a network in which every two nodes are linked by at least one path.
Figure 1.9 Connected network
2. Model Formulation
In this section, we first explain the basic assumptions of network problems, then list the input parameters and decision variables of the minimum cost flow problem, and finally build the mathematical formulation in stages.
The minimum cost flow problem on a network attempts to minimize the total cost of shipping the available supplies through the network to meet the demand. It often occurs in transportation, transshipment, and shortest path problems. The problem assumes that the cost per unit of flow and the capacity associated with each arc are known. In general, the minimum cost flow problem can be described as follows:
With the aim of minimizing the total cost of delivery through the network while meeting the demand, the mathematical model is as follows:
Minimize Z = Σ_i Σ_j c_ij x_ij (2-1)
s.t. Σ_j x_ij − Σ_j x_ji = b_i for each node i.
The first summation in the node constraint is the total flow out of node i, while the second is the total flow into node i, so their difference is the net flow generated at this node. In practice b_i and u_ij are integers, so all basic variables in every basic feasible solution, including an optimal solution, are integers. Hence the optimal solution of this problem is typically found using integer programming.
Example:
Consider the network presented in Figure 2.1 (adapted from Anderson et al. 2003). An airline is tasked with transporting goods from nodes 1 and 2 to nodes 5, 6, and 7. The airline does not have direct flights from the source nodes to the destination nodes; instead, they are connected through its hubs at nodes 3 and 4. The numbers next to the nodes represent the supply or demand in tons, and the numbers on the arcs represent the unit cost of transporting the goods; the goal is to route the goods from sources to destinations so that the total cost is minimized. The aircraft flying to and from node 4 can carry a maximum of 50 tons of cargo.
Figure 2.1 Network presentation for minimum cost flow
We need to write one constraint for each node. For example, for node 1 we have :
x1,3 + x1,4 ≤ 75
for node 2 we have :
x2,3 + x2,4 ≤ 75
Similarly, we write constraints for the other nodes. Note that the net flow for nodes 3 and 4 should be
zero as these are transshipment nodes.
All the flights to and from node 4 can carry a maximum of 50 tons. Therefore, all the flow to and from this node must be limited to 50, as follows:
x1,4 ≤ 50
x2,4 ≤ 50
x4,5 ≤ 50
x4,6 ≤ 50
x4,7 ≤ 50
The non-negativity constraint is xij ≥ 0, with xij integer.
Solving this problem using the software POM-QM for Windows generates a total minimum cost of $1,250. The solution is presented in Figure 2.2.
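The example above can also be sketched as a linear program, e.g. with SciPy. The network structure (supplies 75/75 at nodes 1 and 2, demands 50/60/40 at nodes 5-7, capacity 50 on every arc touching hub 4) follows the text, but the unit costs below are illustrative placeholders, since the cost labels of Figure 2.1 are not recoverable here; the resulting objective value will therefore not necessarily equal the $1,250 reported above.

```python
import numpy as np
from scipy.optimize import linprog

# Arcs of the example network: sources 1,2 -> hubs 3,4 -> destinations 5,6,7.
arcs = [(1, 3), (1, 4), (2, 3), (2, 4),
        (3, 5), (3, 6), (3, 7), (4, 5), (4, 6), (4, 7)]
cost = [5, 5, 7, 4, 1, 8, 3, 5, 8, 4]       # illustrative unit costs
b = {1: 75, 2: 75, 3: 0, 4: 0, 5: -50, 6: -60, 7: -40}  # supply(+)/demand(-)

# Node-balance rows: flow out minus flow in equals net supply b_i.
nodes = sorted(b)
A_eq = np.zeros((len(nodes), len(arcs)))
for col, (i, j) in enumerate(arcs):
    A_eq[nodes.index(i), col] += 1.0        # out of i
    A_eq[nodes.index(j), col] -= 1.0        # into j
b_eq = [b[n] for n in nodes]

# Capacity 50 on every arc touching hub node 4, unbounded otherwise.
bounds = [(0, 50) if 4 in arc else (0, None) for arc in arcs]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
flows = dict(zip(arcs, np.round(res.x)))    # integral at an LP vertex
```

Because node-balance constraint matrices are totally unimodular, the LP optimum here is integral, which matches the integer-programming remark in the text.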
Figure 2.2 Solution to minimum cost flow
The general model is mathematically expressed as follows (Bazaraa et al 1990) :
Sets
M = set of nodes
Index
i,j,k = index for nodes
Parameters
ci,j = unit cost of flow from node i to node j
bi = amount of supply/demand for node i
Li,j = lower bound on flow through arc (i,j)
Ui,j = upper bound on flow through arc (i,j)
Decision variable
xi,j = amount of flow from node i to node j
Objective function
Minimize Z = Σ_{i∈M} Σ_{j∈M} c_{i,j} x_{i,j} (2.2)
Subject to
Σ_{j∈M} x_{i,j} − Σ_{k∈M} x_{k,i} = b_i , i = 1, 2, 3, …, M (2.3)
L_{i,j} ≤ x_{i,j} ≤ U_{i,j} for each arc (i,j).
The objective function (2.2) minimizes the total cost over the network. The constraints (2.3) satisfy the requirements of each node by balancing the inflow and outflow of that node, and the bound constraints impose the lower and upper limits on flow along the arcs.
In general, network models describe problems under the assumption that a single type of commodity or entity is sent through the network. Sometimes the network can carry different types of commodities. The minimum cost flow problem for two commodities tries to minimize the total cost when different types of goods are shipped through the same network. The two commodities can be distinguished by their physical characteristics or only by certain attributes. Two-commodity problems are widely used in the transportation industry; in the airline industry, two-commodity models are adopted to formulate crew pairing and fleet assignment models.
Example
We modify the example presented earlier for the minimum cost flow problem to explain the two-commodity model formulation. Figure 3.1 presents the modified example.
Figure 3.1 Network presentation for the two-commodity problem
As we see in this figure, the scenario is very similar to the earlier case. The only difference is that instead of having only one type of cargo, we now have two types (two commodities). The numbers next to each node represent the supply/demand for each cargo at that node; for example, node 1 supplies 40 and 35 tons of cargo 1 and 2 respectively. The transportation costs per ton are also similar. We want to determine how much of each cargo should be routed on each arc so that the total transportation cost is minimized.
Subject to :
x1,3,1 + x1,4,1 ≤ 40
x1,3,2 + x1,4,2 ≤ 35
x2,3,1 + x2,4,1 ≤ 50
x2,3,2 + x2,4,2 ≤ 25
x3,5,1 + x4,5,1 ≤ 30
x3,5,2 + x4,5,2 ≤ 20
x3,6,1 + x4,6,1 ≤ 30
x3,6,2 + x4,6,2 ≤ 30
x3,7,1 + x4,7,1 ≤ 30
x3,7,2 + x4,7,2 ≤ 10
Recall that all the flights to and from node 4 can carry a maximum of 50 tons. Therefore:
x1,4,1 + x1,4,2 ≤ 50
x2,4,1 + x2,4,2 ≤ 50
x4,5,1 + x4,5,2 ≤ 50
x4,6,1 + x4,6,2 ≤ 50
x4,7,1 + x4,7,2 ≤ 50
Solving this problem using the software POM-QM (Production and Operations Management, Quantitative Methods) version 3.0 generates a total minimum cost of $1,150. The solution is presented in Figure 3.2.
Figure 3.2 Solution of minimum cost flow for the two-commodity problem
The general model is mathematically expressed as follows (Ahuja et al. 1993) :
Sets
M = set of nodes
K = set of commodities
Indices
i,j = index for nodes
k = index for commodities
Parameters
ci,j,k = unit cost of flow from node i to j for
commodity k
bi,k = amount of supply / demand at node i
for commodity k
ui,j = flow capacity on arc (i,j)
Decision variable
xi,j,k = amount of flow from node i to node j
for commodity k
Objective function
Minimize Z = Σ_{k∈K} Σ_{i∈M} Σ_{j∈M} c_{i,j,k} x_{i,j,k} (3.1)
Subject to:
Σ_{t∈M} x_{i,t,k} − Σ_{t∈M} x_{t,i,k} = b_{i,k} (3.2)
Σ_{k∈K} x_{i,j,k} ≤ u_{i,j} (3.3)
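Model (3.1)-(3.3) can be sketched in the same way by stacking one flow variable per commodity. The supplies and demands follow the text of the example, while the unit costs c_{i,j,k} are illustrative placeholders (the cost labels of the figure are not recoverable here), so the objective value will not necessarily equal the $1,150 reported above.

```python
import numpy as np
from scipy.optimize import linprog

# Two-commodity hub network: variables x[i,j,k] stacked commodity-by-commodity.
arcs = [(1, 3), (1, 4), (2, 3), (2, 4),
        (3, 5), (3, 6), (3, 7), (4, 5), (4, 6), (4, 7)]
K = 2
nA = len(arcs)
cost = [5, 5, 7, 4, 1, 8, 3, 5, 8, 4] * K   # illustrative c_{i,j,k}
# b_{i,k}: per-commodity supplies at nodes 1,2 and demands at 5,6,7.
b = {1: (40, 35), 2: (50, 25), 3: (0, 0), 4: (0, 0),
     5: (-30, -20), 6: (-30, -30), 7: (-30, -10)}
nodes = sorted(b)

# Constraint (3.2): node balance for each commodity k separately.
A_eq = np.zeros((len(nodes) * K, nA * K))
b_eq = []
for k in range(K):
    for r, n in enumerate(nodes):
        for col, (i, j) in enumerate(arcs):
            if i == n:
                A_eq[k * len(nodes) + r, k * nA + col] += 1.0
            if j == n:
                A_eq[k * len(nodes) + r, k * nA + col] -= 1.0
        b_eq.append(b[n][k])

# Constraint (3.3): joint capacity u_{i,j} = 50 on arcs touching hub 4.
A_ub, b_ub = [], []
for col, arc in enumerate(arcs):
    if 4 in arc:
        row = np.zeros(nA * K)
        row[[k * nA + col for k in range(K)]] = 1.0   # sum over commodities
        A_ub.append(row)
        b_ub.append(50.0)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, None)] * (nA * K), method="highs")
```

The key structural difference from the single-commodity model is constraint (3.3): the capacity couples the commodities, which is why two-commodity problems need the summation over k.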
References
Ahuja, R., Magnanti, T., and Orlin, J. 1993. Network Flows: Theory, Algorithms and Applications. Prentice Hall.
Anderson, D., Sweeney, D., and Williams, T. 2003. Quantitative Methods for Business. 9th Edition. South-Western.
Bazaraa, M., Jarvis, J., and Sherali, H. 1990. Linear Programming and Network Flows. John Wiley.
Bazargan, M. 2010. Airline Operations and Scheduling. 2nd Edition. MPG Books Group.
Hillier, F. and Lieberman, G. 2001. Introduction to Operations Research. 7th Edition. McGraw-Hill.
1. Introduction
Just as the continuity of functions matters for the convergence of sequences of functions [4], for number sequences we also need to study the conditions that must be satisfied for a sequence to be convergent. To support research related to the convergence of sequences, projections, continuity of functions, existence of convergence points, and fractional derivatives, this paper discusses the convergence of number sequences through a geometric approach.
As is well known, the Fibonacci sequence is a sequence (xn) with recursion equation xn = xn-1 + xn-2 and initial conditions x0 = 1 and x1 = 1. Among the features of the Fibonacci sequence: if the greatest common divisor of the numbers m and n is k, then the greatest common divisor of the m-th term xm and the n-th term xn is the k-th term xk; similarly, xk is always a divisor of xnk for all natural numbers n [5]. Another feature is that any four consecutive Fibonacci numbers w, x, y, z always form a Pythagorean triple, namely wz, 2xy, and (yz - xw) [6]. Besides these three facts, it is also known that although the Fibonacci sequence itself is not convergent, the sequence of Fibonacci ratios converges to a number called the Golden Ratio [5].
Furthermore, the generalized Fibonacci sequence is (yn) with rule
yn = α yn-1 + β yn-2 (1)
with non-zero real constants α and β and initial conditions y0 and y1 [2].
In [8], J. M. Tuwankotta has shown that the sequence (yn) with β = 1 - α and 0 < α < 2 is a contractive sequence and therefore converges in R (the set of real numbers) [2], with convergence point L(α) = y0 + (y1 - y0)/(2 - α).
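Tuwankotta's result can be checked numerically: with β = 1 - α and 0 < α < 2, the iterates of (1) approach L(α) = y0 + (y1 - y0)/(2 - α). A minimal sketch:

```python
def generalized_fibonacci_limit(alpha, y0, y1, n_iter=200):
    """Iterate y_n = alpha*y_{n-1} + (1 - alpha)*y_{n-2} and return the last term."""
    a, b = y0, y1
    for _ in range(n_iter):
        a, b = b, alpha * b + (1 - alpha) * a
    return b

alpha, y0, y1 = 0.5, 1.0, 2.0
L = y0 + (y1 - y0) / (2 - alpha)          # predicted limit L(alpha) = 5/3
approx = generalized_fibonacci_limit(alpha, y0, y1)
```

The second characteristic root is α - 1, with |α - 1| < 1 on 0 < α < 2, which is why the iteration settles on L(α).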
In this paper, the author discusses (rn), the sequence of ratios of two successive terms of the generalized Fibonacci sequence (1):
rn = yn / yn-1 . (2)
The first problem studied in this paper concerns the conditions on the constants α and β under which the sequence (rn) is well defined, and the relationship between α and β under which (rn) converges. Besides the proof of convergence, the convergence point of the sequence will also be determined.
Suppose a ratio sequence as in (3) is given. The sequence (rn) is well defined if rn ≠ 0 for all n. Thus we must choose the initial condition r1 such that r2 ≠ 0, r3 ≠ 0, r4 ≠ 0, and so on.
From (3), rn can also be expressed as a continued fraction with n - 1 divisions:
rn = α + β/(α + β/(α + ⋯ + β/(α + β/r1))).
Hence:
1. r1 = -β/α results in r2 = 0;
2. r1 = -β/(α + β/α) results in r3 = 0;
3. r1 = -β/(α + β/(α + β/α)) results in r4 = 0;
and so on.
Thus (rn) is well defined if the initial condition r1 ∉ CF(α, β), where CF(α, β) is the set of continued fractions
{ -β/α , -β/(α + β/α) , -β/(α + β/(α + β/α)) , … }.
In particular, if the sequence CF(α, β) = (fn) converges to f, then
lim_{n→∞} fn = f = -β/(α - lim_{n→∞} fn) = -β/(α - f),
hence
f = (α ± √(α² + 4β)) / 2 .
In the case α > 0 and β > 0, fn < 0 for all n, so the value that satisfies this is
f = (α - √(α² + 4β)) / 2 < 0.
3. Necessary Condition for Convergence
The necessary condition for convergence of the sequence (rn) is determined by the relationship between α and β. If the sequence (rn) is assumed to converge to a number r, then from equation (3) we obtain:
lim_{n→∞} rn = lim_{n→∞} (α + β/rn-1),
resulting in the equation
r = α + β/r , or r² - αr - β = 0 . (4)
In this case, r has a real value if α² + 4β ≥ 0. Hence, the necessary condition for convergence of (rn) is α² + 4β ≥ 0.
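The fixed-point relation (4) can be illustrated numerically: iterating rn = α + β/rn-1 in the classical Fibonacci case α = β = 1 approaches the golden ratio (1 + √5)/2, the positive root of r² - αr - β = 0.

```python
import math

def ratio_limit(alpha, beta, r1, n_iter=100):
    """Iterate r_n = alpha + beta / r_{n-1} starting from r1."""
    r = r1
    for _ in range(n_iter):
        r = alpha + beta / r
    return r

alpha = beta = 1.0
r = ratio_limit(alpha, beta, r1=1.0)
r_star = (alpha + math.sqrt(alpha**2 + 4 * beta)) / 2   # golden ratio
```

Here α² + 4β = 5 > 0, so the necessary condition is satisfied and the iterates settle on r*.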
4. Proof of Convergence
To prove that (rn) is a convergent sequence, it will be shown that (rn) is a contractive sequence, i.e. there is a constant C with 0 < C < 1 such that for every n:
|rn - rn-1| < C |rn-1 - rn-2| .
Using equation (3) we obtain
|rn - rn-1| = |(α + β/rn-1) - (α + β/rn-2)|
= |β/rn-1 - β/rn-2|
= (|β| / (|rn-1| |rn-2|)) |rn-1 - rn-2| , (5)
where
|rn-1| |rn-2| = |yn-1/yn-2| · |yn-2/yn-3| = |yn-1/yn-3| = |(α yn-2 + β yn-3)/yn-3| = |α rn-2 + β| .
Thus, whether or not the sequence (rn) is contractive depends on α and β. The author divides this into the following cases:
Case-1: α > 0, β > 0, and rn > 0 for all n.
Case-2: α > 0, β > 0, and rn < 0 for all n.
Case-3: α > 0, β < 0, and rn > 0 for all n.
Case-4: α > 0, β < 0, and rn < 0 for all n.
Case-5: α < 0, β > 0, and rn > 0 for all n.
Case-6: α < 0, β > 0, and rn < 0 for all n.
Case-7: α < 0, β < 0, and rn > 0 for all n.
Case-8: α < 0, β < 0, and rn < 0 for all n.
For Case-1 and Case-6 above, we obtain α rn-2 + β > β > 0, so the factor |β| / |α rn-2 + β| in (5) is bounded by a constant C < 1.
This proves that (rn) is contractive. By the standard theorem, a contractive sequence is Cauchy, and a Cauchy sequence in R is convergent [2]; this proves that (rn) is convergent.
For Case-4, the inequality α rn-2 + β > β would require rn-2 < 0 for all n, but this is not possible: if ri < 0 for some i, then ri+1 = α + β/ri > 0, so (rn) is not contractive.
Similarly for Case-7, the inequality α rn-2 + β > β would require rn-2 > 0 for all n, but this is not possible: if ri > 0 for some i, then ri+1 = α + β/ri < 0, so (rn) is not contractive.
For Cases 2, 3, 5, and 8, α rn-2 + β < β holds, so |β| / |α rn-2 + β| = C > 1. Thus (rn) is not contractive, and its convergence is not guaranteed.
5. Convergence point
Next, the convergence point of the sequence (rn) is determined. If (rn) converges to r, then from equation (4) we obtain
r = (α ± √(α² + 4β)) / 2 ,
so there are two possible values of r, namely
r* = (α + √(α² + 4β)) / 2 or r** = (α - √(α² + 4β)) / 2 . (7)
In Case-1 above, where α > 0 and β > 0, we have r* > 0 and r** < 0. But since rn > 0 for all n, lim_{n→∞} rn > 0 [1], which means that (rn) converges to r*.
Similarly in Case-6, where α < 0 and β > 0, we have r* > 0 and r** < 0. But because rn < 0 for all n, lim_{n→∞} rn < 0 [1], which means that (rn) converges to r**.
For the other cases, the convergence of (rn) still needs further investigation, including by the geometric approach.
6. Geometric Approach
As noted earlier, the convergence point of (rn), i.e. r* or r**, depends on the values of α and β, so in this geometric approach the eight cases above can be simplified into four:
case-1: α > 0, β > 0; case-2: α < 0, β > 0;
case-3: α > 0, β < 0; and case-4: α < 0, β < 0.
A geometric approach can be used to see the convergence of (rn) by comparing the recurrence relation in (3) with the hyperbolic function
y = α + β/x, or y = (αx + β)/x . (8)
If we set r1 = x, then substituting r1 into (3), or x into (8), gives r2 = y. Next, by projecting the value y = r2 onto the x-axis through the line y = x, we set r2 = x; substituting r2 into (3), or x into (8), then gives r3 = y. The process continues, so r1, r2, r3, … all lie on the x-axis and move toward the abscissa of the convergence point on the curve (8).
Because there are two points, r* and r**, either of which may be the convergence point, the graph shows the values rn moving toward one of these two points; which one depends on α and β.
In case-1, α > 0 and β > 0; from (7), r* is positive and r** is negative.
From (3), rn = α + β/rn-1 for n ≥ 2. So if r1 > 0 then rn > 0 for all n, and if r1 < 0 then there exists a natural number k such that rn > 0 for all n > k. Hence for case-1, (rn) converges to r* > 0.
In Figure-1, the hyperbola of equation (8) has horizontal asymptote y = α > 0 and intersects the x-axis at x = -β/α < 0; r1, r2, r3, … move toward r*, which means that (rn) converges to r*.
Figure-1 Figure-2
In case-2, α < 0 and β > 0; from (7), r* is positive and r** is negative, so (rn) converges to r** < 0. The hyperbola has horizontal asymptote y = α < 0 and intersects the x-axis at x = -β/α > 0. The graph is as in Figure-2 above.
In case-3, α > 0 and β < 0; we obtain r* > r**, both positive, so (rn) converges to r* > 0. The hyperbola has horizontal asymptote y = α > 0 and intersects the x-axis at x = -β/α > 0. The graph is as in Figure-3 below.
Figure-3 Figure-4
In case-4, α < 0 and β < 0; we obtain r* > r**, both negative, so (rn) converges to r** < 0. The hyperbola has horizontal asymptote y = α < 0 and intersects the x-axis at x = -β/α < 0. The graph is as in Figure-4 above.
7. Conclusion
Based on the above discussion, a necessary condition for convergence of the ratio sequence of the generalized Fibonacci sequence (rn) = (yn / yn-1) is α² + 4β ≥ 0, and the convergence points are
r* = (α + √(α² + 4β)) / 2 for α > 0, β > 0 or α > 0, β < 0, and
r** = (α - √(α² + 4β)) / 2 for α < 0, β > 0 or α < 0, β < 0.
One interesting thing is that the sequence of numbers in CF(α, β), the numbers that are "banned" as the initial condition r1, itself converges to a number f which is one of the convergence points of (rn). Indeed, for the case α < 0 and β > 0, the sequences (rn) and CF(α, β) have the same convergence point, i.e. r** = f.
From the above discussion it is also seen that the convergence point does not depend on rn, but only on α and β. Similarly, the graphs show that the convergence point is the point on the curve where the slope is gentler, meaning that
if f(r*) < f(r**) then (rn) converges to r*, and
if f(r*) > f(r**) then (rn) converges to r**.
8. Acknowledgement
This work was fully supported by Universitas Padjadjaran under the Penelitian Unggulan Perguruan Tinggi program, Hibah Desentralisasi No. 2002/UN6.RKT/KU/2013.
9. References
[1] Apostol, T., Introduction to Mathematical Analysis, Addison-Wesley, 1974.
[2] Bartle, R.G. & Sherbert, D., Introduction to Real Analysis, 2nd ed., John Wiley & Sons, Inc., 1992.
[3] Vella, Dominic & Vella, Alfred, When is a Member of a Pythagorean Triple?, phitagoras@fellas.com, 2002.
[4] Endang Rusyaman, Kankan Parmikanti, Eddy Djauhari, and Ema Carnia, Syarat Kekontinuan Fungsi Konvergensi Pada Barisan Fungsi Turunan Berorde Fraksional, Seminar Nasional Sains dan Teknologi Nuklir, Bandung, 2013.
[5] Kalman, D. & Mena, R., The Fibonacci Numbers Exposed, Mathematics Magazine, 2003 (3: 167-181).
[6] Parmikanti, K., Pendekatan Geometri Untuk Masalah Konvergensi Barisan, Seminar Nasional Matematika, Unpad, 2006.
[7] Rusyaman, E., Konvergensi Barisan-Barisan Fibonacci yang Diperumum, Seminar Nasional Matematika, Unpad, 2006.
[8] Tuwankotta, J.M., Contractive Sequence, ITB, 2005.
Abstract: In Islamic stock investment, investors are also faced with risk, because daily Islamic stock prices fluctuate. To minimize the level of risk, investors usually form an investment portfolio. A portfolio composed of several Islamic stocks is intended to obtain the optimal composition of the investment portfolio. This paper discusses Mean-Variance optimization of an investment portfolio of Islamic stocks using non-constant mean and volatility approaches. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, while the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models. The optimization process is performed using the Lagrangian multiplier technique. As a numerical illustration, the method is used to analyze several Islamic stocks in Indonesia. The expected result is the proportion of investment in each Islamic stock analyzed.
1. Introduction
Investment is basically placing some capital into some form of instrument (asset), either fixed assets or financial assets. Investing in financial assets can generally be done by buying shares in the stock market. In stock investing, investors are exposed to risk whose magnitude grows along with the magnitude of the expected return (Kheirollah & Bjarnbo, 2007): the greater the expected return, generally the greater the risk to be faced. Investment risk describes the rise and fall of stock price changes over time and can be measured by the variance (Sukono et al., 2011).
A strategy often used by investors to face investment risk is to form an investment portfolio. Forming an investment portfolio essentially allocates capital across a few selected stocks, often referred to as diversifying investments (Panjer et al., 1998). The purpose of forming an investment portfolio is to obtain a certain return with a minimum level of risk, or to obtain a maximum return with limited risk. To achieve these objectives, the investor needs to conduct an optimal portfolio selection analysis, which can be done with portfolio optimization techniques (Shi-Jie Deng, 2004).
Therefore, this paper studies the Mean-Variance portfolio optimization model in which the mean and the volatility (variance) are assumed to be non-constant and are analyzed using a time series model approach. The non-constant mean is analyzed using Autoregressive Moving Average (ARMA) models, whereas the non-constant volatility is analyzed using Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models (Shi-Jie Deng, 2004). These methods are then used to analyze Islamic stocks in Indonesia. The purpose of the analysis is to obtain the proportions of investment capital allocated to the Islamic stocks analyzed that provide a maximum return with a certain level of risk.
2. Methodology
This section discusses the stages of the analysis, which include the calculation of stock returns, mean modeling, volatility modeling, and portfolio optimization.
2.1 Stock Returns
Suppose Pit is the price of Islamic stock i at time t, and rit is the return of Islamic stock i at time t. The value of rit can be calculated using the following equation:
rit = ln(Pit / Pi,t-1) , (1)
where i = 1, …, N with N the number of stocks analyzed, and t = 1, …, T with T the number of observed stock prices (Tsay, 2005; Sukono et al., 2011).
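Equation (1) can be sketched directly (the prices below are illustrative):

```python
import math

def log_returns(prices):
    """r_t = ln(P_t / P_{t-1}) for a list of positive prices."""
    return [math.log(p / q) for q, p in zip(prices, prices[1:])]

prices = [100.0, 110.0, 104.5]        # illustrative daily prices
rets = log_returns(prices)            # rets[0] = ln(110/100) = ln(1.1)
```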
2.2 Mean Models
Suppose the residual (white noise) series has mean 0 and variance σi². The sequence {rit} is an ARMA(p, q) model with mean μit if {rit - μit} is an ARMA(p, q) process (Gujarati, 2004; Shewhart et al., 2004).
The stages of mean modeling include: (i) model identification, (ii) parameter estimation, (iii) diagnostic testing, and (iv) prediction (Tsay, 2005).
2.3 Volatility Models
Volatility in time series data can generally be analyzed using GARCH models. Suppose {rit}, the return of Islamic stock i at time t, is stationary, and let the residual of the mean model for Islamic stock i at time t be ait = rit - μit. The residual sequence {ait} follows a GARCH(g, s) model when it satisfies:
ait = σit εit , σit² = αi0 + Σ_{k=1}^{g} αik a²_{i,t-k} + Σ_{j=1}^{s} βij σ²_{i,t-j} , (3)
where {εit} is the residual sequence of the volatility model, namely a sequence of independent and identically distributed (IID) random variables with mean 0 and variance 1. The parameter coefficients satisfy αi0 > 0, αik ≥ 0, βij ≥ 0, and Σ_{k=1}^{max(g,s)} (αik + βik) < 1 (Shi-Jie Deng, 2004; Tsay, 2005).
The volatility modeling steps include: (i) estimating the mean model, (ii) testing for ARCH effects, (iii) model identification, (iv) estimating the volatility model, (v) diagnostic testing, and (vi) prediction (Tsay, 2005).
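The GARCH recursion in (3) can be illustrated for the common case g = s = 1. The parameter values below are illustrative and satisfy the stated conditions α0 > 0, α1, β1 ≥ 0, and α1 + β1 < 1:

```python
import numpy as np

def simulate_garch11(a0, a1, b1, n, seed=0):
    """Simulate a_t = sigma_t*eps_t with sigma_t^2 = a0 + a1*a_{t-1}^2 + b1*sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n)                 # IID(0, 1) innovations
    sigma2 = np.empty(n)
    a = np.empty(n)
    sigma2[0] = a0 / (1 - a1 - b1)           # start at the unconditional variance
    a[0] = np.sqrt(sigma2[0]) * eps[0]
    for t in range(1, n):
        sigma2[t] = a0 + a1 * a[t - 1] ** 2 + b1 * sigma2[t - 1]
        a[t] = np.sqrt(sigma2[t]) * eps[t]
    return a, sigma2

a, sigma2 = simulate_garch11(a0=1e-5, a1=0.05, b1=0.90, n=2000)
```

The condition α1 + β1 < 1 guarantees a finite unconditional variance α0 / (1 - α1 - β1), used here as the starting value.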
2.4 Prediction of l Steps Ahead
The mean and volatility models are used to calculate the predicted mean μ̂it = r̂ih(l) and the predicted volatility σ̂it² = σ̂ih²(l) for l periods ahead of the forecast origin h (Tsay, 2005; Febrian &
Herwany, 2009). The prediction results of mean ˆt rˆih (l ) dan volatility ˆ it2 ˆ ih
2 (l ) , will then be
Portfolio return can be expressed as r p w' r with w' e 1 ( Zhang, 2006; Panjer et al., 1998). Suppose
μ' ( 1t ,..., Nt ) , expectations of portfolio p can be expressed as:
p E[rp ] w' μ . (4)
Suppose given covariance matrix Σ ( ij )i, j 1,..., N , where ij Cov(rit , r jt ) . Variance of the
portfolio return can be expressed as follows:
2p w' Σw . (5)
Definition 1 (Panjer et al., 1998). A portfolio p* is called (mean-variance) efficient if there is no portfolio p with μp ≥ μp* and σp² < σp²*.
To obtain an efficient portfolio, one typically maximizes the objective function
2τ μp - σp² , τ ≥ 0,
where τ is the investor's risk-tolerance parameter. That is, an investor with risk tolerance τ (≥ 0) needs to solve the portfolio problem:
Maximize 2τ w′μ - w′Σw (6)
subject to w′e = 1.
Note that the solutions of (6) for all τ ∈ [0, ∞) form a complete set of efficient portfolios. The set of all points in the (σp, μp) diagram corresponding to efficient portfolios is called the efficient frontier.
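The Lagrangian solution of (6) has a closed form: setting the gradient of 2τ w′μ - w′Σw + λ(w′e - 1) to zero gives w = Σ⁻¹(τμ + (λ/2)e), with λ/2 chosen so that w′e = 1. A numerical sketch with illustrative μ and Σ (not the paper's estimates):

```python
import numpy as np

def mean_variance_weights(mu, Sigma, tau):
    """Solve max 2*tau*w'mu - w'Sigma w  s.t.  w'e = 1 via the Lagrangian."""
    e = np.ones(len(mu))
    Si_mu = np.linalg.solve(Sigma, mu)       # Sigma^{-1} mu
    Si_e = np.linalg.solve(Sigma, e)         # Sigma^{-1} e
    half_lambda = (1 - tau * e @ Si_mu) / (e @ Si_e)
    return tau * Si_mu + half_lambda * Si_e  # w = Sigma^{-1}(tau*mu + (lambda/2) e)

mu = np.array([0.001, 0.0008, 0.0012])       # illustrative predicted means
Sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.6],
                  [0.5, 0.6, 5.0]]) * 1e-4   # illustrative covariance matrix
w = mean_variance_weights(mu, Sigma, tau=0.5)
```

Sweeping τ over [0, ∞) traces out the efficient frontier described above.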
3. Illustrations
This section discusses the application of the method and the analysis results, covering the Islamic stock data, the calculation of Islamic stock returns, mean modeling of the Islamic stocks, volatility modeling, prediction of the mean and variance values, and the optimization process.
[Figure 1. Time-series plots of the daily returns of the five Islamic stocks analyzed.]
The charts in Figure 1 suggest that the five Islamic stock return series analyzed are stationary. Formal testing of stationarity is done using the ADF test; the resulting test statistic values are, respectively: −34.24848, −33.79008, −30.20451, −40.04979, and −28.36974. Further, at the specified significance level α = 5%, the critical value is −2.863461. It is clear that the ADF test statistic values for all the Islamic stocks analyzed fall in the rejection region, so all series are stationary.
Based on the correlograms of the squared residuals a²_t, i.e. the ACF and PACF graphs of each series, tentative volatility models were selected. Volatility model estimation for each Islamic stock return was performed simultaneously with the mean model. After significance tests for the parameters and for the models, all of the equations written below were found significant. As a result, the best models obtained are, respectively:
The Islamic stock AKRA follows the model ARMA(1,0)-GARCH(1,1) with equations:
r_t = 0.073891 r_{t−1} + a_t and σ²_t = 0.000014 + 0.040015 a²_{t−1} + 0.940431 σ²_{t−1}

The Islamic stock CPIN follows the model ARMA(1,0)-GARCH(1,1) with equations:
r_t = 0.089639 r_{t−1} + a_t and σ²_t = 0.000052 + 0.134049 a²_{t−1} + 0.820716 σ²_{t−1}

The Islamic stock ITMG follows the model ARMA(1,0)-GARCH(1,1) with equations:
r_t = 0.193825 r_{t−1} + a_t and σ²_t = 0.000012 + 0.066024 a²_{t−1} + 0.923108 σ²_{t−1}

The Islamic stock MYOR follows the model ARMA(7,0)-GARCH(1,1) with equations:
r_t = 0.102007 r_{t−7} + a_t and σ²_t = 0.000009 + 0.044332 a²_{t−1} + 0.945801 σ²_{t−1}

The Islamic stock TLKM follows the model ARMA(2,0)-GARCH(1,1) with equations:
r_t = 0.084289 r_{t−2} + a_t and σ²_t = 0.000019 + 0.139166 a²_{t−1} + 0.824540 σ²_{t−1}
Based on the ARCH-LM test statistics, the residuals of the models for the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM contain no ARCH effects and are white noise. The mean and volatility models are then used to calculate the values μ̂_t = r̂_t(l) and σ̂²_t = σ̂²_t(l) recursively.
    0.001200 0.000136 0.000251 0.000113 0.000401
    0.000136 0.001840 0.000092 0.000315 0.000225
Σ = 0.000251 0.000092 0.001078 0.000512 0.000133
    0.000113 0.000315 0.000512 0.000956 0.000075
    0.000401 0.000225 0.000133 0.000075 0.001399

and

            0.9613 0.0345 0.2020 0.0257 0.2522
            0.0345 0.5904 0.0738 0.2237 0.0801
Σ⁻¹ = 10³ × 0.2020 0.0738 1.3037 0.6955 0.0406
            0.0257 0.2237 0.6955 1.4880 0.0150
            0.2522 0.0801 0.0406 0.0150 0.8030
Optimization is carried out to determine the composition of the portfolio weights; the portfolio weight vector is determined using equation (8). In the weight vector calculation process, the values of the risk tolerance τ are determined by simulation, beginning at τ = 0.000 with increments of 0.001. Under the assumption that short sales are not allowed, the simulation is stopped at τ = 0.036, because at that value at least one portfolio weight becomes negative. The portfolio weight calculation results are given in Table 2.
Table 2. Portfolio weight calculation results for each risk tolerance τ
τ | w1 (AKRA) | w2 (CPIN) | w3 (ITMG) | w4 (MYOR) | w5 (TLKM) | w'e (Sum) | μ̂_p (Mean) | σ̂²_p (Variance) | μ̂_p − σ̂²_p (Maximum) | μ̂_p/σ̂²_p (Ratio)
0.000 0.2150 0.1406 0.1895 0.2629 0.1920 1 0.0059 0.00043136 0.00546864 13.7161
0.001 0.2093 0.1411 0.2025 0.2555 0.1916 1 0.0061 0.00043151 0.00566849 14.0507
0.002 0.2036 0.1417 0.2155 0.2480 0.1913 1 0.0062 0.00043195 0.00576805 14.6893
0.003 0.1978 0.1422 0.2285 0.2406 0.1909 1 0.0064 0.00043268 0.00596732 14.6893
0.004 0.1921 0.1428 0.2414 0.2331 0.1905 1 0.0065 0.00043370 0.00606630 14.9921
0.005 0.1864 0.1433 0.2544 0.2257 0.1902 1 0.0066 0.00043502 0.00616498 15.2832
0.006 0.1807 0.1439 0.2674 0.2182 0.1898 1 0.0068 0.00043663 0.00636337 15.5621
0.007 0.1750 0.1444 0.2803 0.2108 0.1894 1 0.0069 0.00043853 0.00646147 15.8284
0.008 0.1693 0.1450 0.2933 0.2033 0.1891 1 0.0071 0.00044073 0.00665927 16.0817
0.009 0.1636 0.1455 0.3063 0.1959 0.1887 1 0.0072 0.00044322 0.00675678 16.3217
0.010 0.1579 0.1461 0.3193 0.1884 0.1883 1 0.0074 0.00044600 0.00695400 16.5481
0.011 0.1522 0.1466 0.3322 0.1810 0.1880 1 0.0075 0.00044907 0.00705093 16.7608
0.012 0.1465 0.1472 0.3452 0.1736 0.1876 1 0.0077 0.00045344 0.00724656 16.9597
0.013 0.1407 0.1477 0.3582 0.1661 0.1872 1 0.0078 0.00045610 0.00734390 17.1445
0.014 0.1350 0.1483 0.3711 0.1587 0.1869 1 0.0080 0.00046005 0.00753995 17.3154
0.015 0.1293 0.1488 0.3841 0.1512 0.1865 1 0.0081 0.00046430 0.00763570 17.4724
0.016 0.1236 0.1494 0.3971 0.1438 0.1861 1 0.0083 0.00046884 0.00783116 17.6155
0.017 0.1179 0.1499 0.4101 0.1363 0.1858 1 0.0084 0.00047367 0.00792633 17.7449
0.018 0.1122 0.1505 0.4230 0.1289 0.1854 1 0.0086 0.00047879 0.00812121 17.8608
0.019 0.1065 0.1510 0.4360 0.1214 0.1850 1 0.0087 0.00048421 0.00821579 17.9633
0.020 0.1008 0.1516 0.4490 0.1140 0.1847 1 0.0088 0.00048992 0.00831008 18.0528
0.021 0.0951 0.1521 0.4619 0.1065 0.1843 1 0.0090 0.00049592 0.00850408 18.1295
0.022 0.0893 0.1527 0.4749 0.0991 0.1840 1 0.0091 0.00050221 0.00859779 18.1937
0.023 0.0836 0.1533 0.4879 0.0916 0.1836 1 0.0093 0.00050880 0.00879120 18.2459
0.024 0.0779 0.1538 0.5009 0.0842 0.1832 1 0.0094 0.00051568 0.00888432 18.2863
0.025 0.0722 0.1544 0.5138 0.0767 0.1829 1 0.0096 0.00052285 0.00907715 18.3154
0.026 0.0665 0.1549 0.5268 0.0693 0.1825 1 0.0097 0.00053032 0.00916968 18.3336
0.027 0.0608 0.1555 0.5398 0.0619 0.1821 1 0.0099 0.00053808 0.00936192 18.3413
0.028 0.0551 0.1560 0.5527 0.0544 0.1818 1 0.0100 0.00054613 0.00945387 18.3390
0.029 0.0494 0.1566 0.5657 0.0470 0.1814 1 0.0102 0.00055447 0.00964553 18.3270
0.030 0.0437 0.1571 0.5787 0.0395 0.1810 1 0.0103 0.00056311 0.00973689 18.3059
0.031 0.0380 0.1577 0.5917 0.0321 0.1807 1 0.0105 0.00057204 0.00992796 18.2760
0.032 0.0322 0.1582 0.6046 0.0246 0.1803 1 0.0106 0.00058126 0.01001874 18.2379
0.033 0.0265 0.1588 0.6176 0.0172 0.1799 1 0.0107 0.00059078 0.01010922 18.1919
0.034 0.0208 0.1593 0.6306 0.0097 0.1796 1 0.0109 0.00060059 0.01029941 18.1386
0.035 0.0151 0.1599 0.6435 0.0023 0.1792 1 0.0110 0.00061069 0.01038931 18.0783
0.036 0.0094 0.1604 0.6565 -0.0052 0.1788 1 0.0112 0.00062108 0.01057892 18.0115
Based on the results of the optimization process given in Table 2, the pairs of points (μ̂_p, σ̂²_p) of the efficient portfolios can be formed into the so-called efficient frontier, as given in Figure 2.a. This graph shows the efficient frontier, the feasible region within which investors with different levels of risk tolerance can invest. Also, using the optimization results in Table 2, the ratio of μ̂_p to σ̂²_p can be calculated for each level of risk tolerance. The ratio calculation results are shown in Figure 2.b. This ratio shows the relationship between the expected optimum portfolio return and the variance as a measure of risk.
[Figure 2. (a) The efficient frontier (mean versus variance); (b) the ratio μ̂_p/σ̂²_p for each level of risk tolerance.]
Based on the portfolio optimization calculations, the optimum value is achieved at a risk tolerance of τ = 0.027. This portfolio produces a mean value of μ̂_p = 0.0099 with a risk, measured as the variance, of σ̂²_p = 0.00053808.
The weight composition of the maximum portfolio is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This provides a reference for investors investing in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM: to achieve the maximum portfolio value, the portfolio weights should be composed as above.
4. Conclusions
In this paper we analyzed Mean-Variance portfolio optimization using non-constant mean and volatility model approaches for several Islamic stocks traded in the Islamic capital market in Indonesia. The analysis showed that all of the Islamic stocks analyzed follow ARMA(p, q)-GARCH(g, s) models. Furthermore, based on the portfolio optimization calculations, the optimum is achieved when the composition of the portfolio investment weights in the Islamic stocks AKRA, CPIN, ITMG, MYOR, and TLKM is, respectively: 0.0608, 0.1555, 0.5398, 0.0619, and 0.1821. This weight composition produces a portfolio with a mean value of 0.0099 and a risk, measured as the variance, of 0.00053808.
References
Febrian, E. & Herwany, A. (2009). Volatility Forecasting Models and Market Co-Integration: A Study on South-East Asian Markets. Working Paper in Economics and Development Studies. Department of Economics, Padjadjaran University.
Goto, S. & Yan Xu. (2012). On Mean Variance Portfolio Optimization: Improving Performance Through Better
Use of Hedging Relations. Working Paper. Moore School of Business, University of South Carolina. email:
shingo.goto@moore.sc.edu.
Gujarati, D.N. (2004). Basic Econometrics. Fourth Edition. The McGraw-Hill Companies, Arizona.
Kheirollah, A. & Bjarnbo, O. (2007). A Quantitative Risk Optimization of Markowitz Model: An Empirical Investigation on Swedish Large Cap List. Master Thesis in Mathematics/Applied Mathematics, Department of Mathematics and Physics, www.mdh.se/polopoly_fs/1.16205!MasterTheses.pdf.
Panjer, H.H. (Ed.), et al. (1998). Financial Economics: With Applications to Investments, Insurance, and Pensions. Schaumburg, Ill.: The Actuarial Foundation.
Rifqiawan, R.A. (2008). Analisis Perbedaan Volume Perdagangan Saham-Saham yang Optimal Pada Jakarta Islamic Index (JII) di Bursa Efek Indonesia (BEI). Tesis Program Magister, Program Studi Magister Sains Akuntansi, Program Pascasarjana, Universitas Diponegoro, Semarang, 2008.
Enders, W. (2004). Applied Econometric Time Series. John Wiley & Sons, Inc., United States of America.
Shi-Jie Deng. (2004). Heavy-Tailed GARCH models: Pricing and Risk Management Applications in Power Market. IMA Control & Pricing in Communication & Power Networks, 7-17 March. http://www.ima.umn.edu/talks/.../deng/power_workshop_ima032004-deng.pdf.
Sukono, Subanar & Dedi Rosadi. (2011). Pengukuran VaR Dengan Volatilitas Tak Konstan dan Efek Long Memory. Disertasi, Program Studi S3 Statistika, Jurusan Matematika, Fakultas Matematika dan Ilmu Pengetahuan Alam, Universitas Gadjah Mada, Yogyakarta, 2011.
Tsay, R.S. (2005). Analysis of Financial Time Series, Second Edition. USA: John Wiley & Sons, Inc.
Yoshimoto, A. (1996). The Mean-Variance Approach To Portfolio Optimization Subject To Transaction Costs.
Journal of the Operations Research Society of Japan, Vol. 39, No. 1, March 1996
Zhang, D. (2006). Portfolio Optimization with Liquidity Impact. Working Paper, Center for Computational Finance and Economic Agents, University of Essex, www.orfe.princeton.edu/oxfordprinceton5/slides/yu.pdf.
1. Introduction
The objective of portfolio selection is to find the right asset mix that provides the appropriate combination of return and risk and allows investors to achieve their financial goals. Portfolio selection problems were first formulated by Markowitz in 1952. In the proposed model, the return is measured by the expected value of the random portfolio return, while the risk is quantified by the variance of the portfolio return (the mean-variance portfolio).
The mean-variance portfolio (MVP) requires only estimates of the mean 𝝁 and covariance matrix 𝚺 of asset returns. Traditionally, the sample mean and covariance matrix have been used for this purpose. However, because of estimation error, policies constructed using these estimators are extremely unstable, so the resulting portfolio weights fluctuate substantially over time; see Chopra and Ziemba (1993), Broadie (1993), Bengtsson (2004), and Ceria and Stubbs (2006).
The instability of mean-variance portfolios can be explained by the fact that the sample mean and covariance matrix are maximum likelihood estimators under normality. These estimators possess desirable statistical properties under the true model. However, their asymptotic breakdown point is equal to zero (Maronna et al., 2006), i.e. they are badly affected by atypical observations.
Several techniques have been suggested to reduce the sensitivity of the mean-variance portfolio. One of them is represented by robust statistics. The theory of robust statistics is concerned with the construction of statistical procedures that are stable even when the empirical (sample) distribution deviates from the assumed (normal) distribution (see Huber 1981, Staudte and Sheather 1990, Maronna et al. 2006). Other researchers have proposed portfolio policies based on robust estimation techniques; see Lauprete (2002), Vaz-de Melo and Camara (2003), Perret-Gentil and Victoria-Feser (2004), Welsch and Zhou (2007), DeMiguel and Nogales (2009), and Hu (2012).
Based on the previous analysis, this paper examines portfolio policies using robust estimators. These policies should be less sensitive than the traditional policies to deviations of the empirical distribution of returns from normality. We focus on the robust estimators known as the Minimum Volume Ellipsoid (MVE) and the Fast Minimum Covariance Determinant (Fast-MCD), which have a high breakdown point (see Rousseeuw and Van Driessen, 1999).
2. Robust Statistics
Robust statistics is an extension of classical statistics that takes into account the possibility of model misspecification (including outliers). In this case, the parametric model is the multivariate normal model with parameters 𝝁 and 𝚺. Robust estimators of location and scale for multivariate data were first proposed by Gnanadesikan and Kettenring (1972). One desirable property is affine equivariance, which is fulfilled by estimators 𝝁̂(𝒓) of location and 𝚺̂(𝒓) of scale that satisfy (see Maronna et al., 2006):

𝝁̂(𝑨𝒓 + 𝒃) = 𝑨𝝁̂(𝒓) + 𝒃 (1)
𝚺̂(𝑨𝒓 + 𝒃) = 𝑨𝚺̂(𝒓)𝑨′ (2)

The most widely used estimators of this type are the minimum volume ellipsoid (MVE) estimator of Rousseeuw (1985) and the Fast Minimum Covariance Determinant estimator constructed by Rousseeuw and Van Driessen (1999).
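Properties (1) and (2) can be checked numerically for the sample mean and covariance, which are themselves affine equivariant; a small sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=(500, 3))          # n x p sample of returns
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.5, 3.0]])        # nonsingular affine transformation
b = np.array([0.1, -0.2, 0.3])

r_t = r @ A.T + b                      # transformed sample: A r_i + b for each row

mu_hat, Sig_hat = r.mean(axis=0), np.cov(r, rowvar=False)
mu_hat_t, Sig_hat_t = r_t.mean(axis=0), np.cov(r_t, rowvar=False)

prop1 = np.allclose(mu_hat_t, A @ mu_hat + b)      # property (1)
prop2 = np.allclose(Sig_hat_t, A @ Sig_hat @ A.T)  # property (2)
```

Both checks hold exactly (up to floating-point error) because the sample mean and covariance are linear and bilinear in the data, respectively.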
The constant c is chosen as χ²_{p,0.5} and #(·) denotes cardinality.
The computation of the MCD estimator is far from trivial. A naive algorithm would proceed by exhaustively investigating all subsets of size h out of n to find the subset with the smallest determinant of its covariance matrix, but this is feasible only for very small data sets. In 1999, Rousseeuw and Van Driessen constructed a very fast algorithm to calculate the MCD estimator. The new algorithm is called FAST-MCD and is based on the C-step.
where
u_i = 1, if d_(T_MCD, C_MCD)(i) ≤ √(χ²_{p,0.975})
u_i = 0, otherwise
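The reweighting step above can be sketched as follows. This is a simplified illustration: the raw estimates T_MCD and C_MCD would come from the FAST-MCD fit itself, which is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def reweight(r, T_mcd, C_mcd):
    """One reweighting step applied after the raw (FAST-)MCD fit: observations
    whose squared Mahalanobis distance to (T_mcd, C_mcd) exceeds chi2_{p,0.975}
    get weight u_i = 0, the rest get u_i = 1."""
    p = r.shape[1]
    diff = r - T_mcd
    # squared Mahalanobis distances d^2_i = (r_i - T)' C^-1 (r_i - T)
    d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(C_mcd), diff)
    u = (d2 <= chi2.ppf(0.975, p)).astype(float)
    # reweighted location and scatter use only the retained observations
    T_rw = (u[:, None] * r).sum(axis=0) / u.sum()
    C_rw = np.cov(r[u == 1], rowvar=False)
    return u, T_rw, C_rw
```

A gross outlier receives weight zero and therefore does not influence the reweighted location and scatter estimates.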
3. Optimal Portfolio
Let the random vector 𝒓 = (r_1, r_2, …, r_N)′ denote the random returns of the N risky assets, with mean vector 𝝁 and covariance matrix 𝚺, and let 𝒘 = (w_1, w_2, …, w_N)′ denote the proportions of the portfolio invested in the N risky assets. The target of the investor is then to choose an optimal portfolio 𝒘 that lies on the mean-risk efficient frontier. In the Markowitz model, the "mean" of a portfolio is defined as the expected value of the portfolio return, 𝒘′𝒓, and the "risk" is defined as the variance of the portfolio return, 𝒘′𝚺𝒘.
Mathematically, minimizing the variance subject to target and budget constraints leads to a formulation like:

min 𝒘′𝚺𝒘 (6)
subject to: 𝒘′𝝁 ≥ μ0 (7)
𝒆′𝒘 = 1 (8)
𝒘 > 0 (9)
where μ0 is the minimum expected return, 𝒆′𝒘 = 1 is the budget constraint, and 𝒘 > 0 stands for no short-selling.
If the parameters in the above formulation are known, the optimization problem (6)-(9) can be solved numerically. However, the parameters are never known in practice and must be estimated from limited data drawn from an unknown distribution. Traditionally, the Maximum Likelihood Estimators (MLE), i.e. the sample mean and covariance matrix, have been used.
If the data follow a multivariate normal distribution, then 𝚺̂_MLE and 𝝁̂_MLE are the optimal estimators for solving problem (6)-(9). But in actual financial markets the Gaussian model may be unsatisfactory, since the empirical distribution of asset returns may in fact be asymmetric, skewed, and heavy-tailed.
Robust statistics can deal with data that are not fully compatible with the distribution implied by the assumed model, i.e. when model misspecification exists, and in particular in the presence of outlying observations. The optimal portfolio weights based on robust estimators can then be obtained by solving:

min 𝒘′𝚺̂_rob 𝒘 (10)
subject to: 𝒘′𝝁̂_rob ≥ μ0 (11)
𝒆′𝒘 = 1 (12)
𝒘 > 0 (13)
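With plug-in estimates (classical or robust), problem (10)-(13) is a small quadratic program and can be handled by a general-purpose solver; a sketch, where the estimates passed in below are toy values rather than the paper's data:

```python
import numpy as np
from scipy.optimize import minimize

def robust_min_variance(mu_hat, Sigma_hat, mu0):
    """Numerically solve (10)-(13): min w'Sigma w  s.t.  w'mu >= mu0,
    e'w = 1, w >= 0.  Any plug-in pair works here: the classical
    sample estimates or robust (MVE / Fast-MCD) estimates."""
    n = len(mu_hat)
    cons = [{'type': 'eq',   'fun': lambda w: w.sum() - 1.0},       # budget
            {'type': 'ineq', 'fun': lambda w: w @ mu_hat - mu0}]    # target return
    res = minimize(lambda w: w @ Sigma_hat @ w,
                   x0=np.full(n, 1.0 / n),          # start from equal weights
                   bounds=[(0.0, 1.0)] * n,         # no short-selling
                   constraints=cons, method='SLSQP')
    return res.x

# toy estimates (assumed for illustration only)
mu_hat = np.array([0.0015, 0.0020, 0.0010, 0.0018])
Sigma_hat = np.diag([0.00010, 0.00020, 0.00005, 0.00015])
w_opt = robust_min_variance(mu_hat, Sigma_hat, mu0=0.0014)
```

Swapping in 𝝁̂_rob and 𝚺̂_rob from an MVE or Fast-MCD fit, instead of the sample estimates, gives the robust portfolios compared in this paper.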
4. Research Methodology
The research utilizes historical daily rates of return for 8 companies from the Jakarta Islamic Index (JII). These are Alam Sutera Realty Tbk (ASRI), Indofood Sukses Makmur Tbk (INDF), Jasa Marga Tbk (JSMR), Telekomunikasi Indonesia Tbk (TLKM), Timah Tbk (TINS), AKR Corporindo Tbk (AKRA), Charoen Pokphand Indonesia Tbk (CPIN), and XL Axiata Tbk (EXCL). The data are taken from January 2012 to December 2012 (see www.finance.yahoo.com).
The classical portfolio construction using the sample mean and covariance matrix will be compared with the following robust methods: Minimum Volume Ellipsoid (MVE) and Fast Minimum Covariance Determinant (FMCD). For both robust estimators, the fraction of rejected observations is set at 10%.
In this study, the Sharpe ratio is employed to evaluate the performance of the three portfolios. This ratio measures the additional return (risk premium) per unit of dispersion of the investment asset or trading strategy, with dispersion taken as the measure of risk. The Sharpe ratio of a portfolio is defined as:

SR = (E(R_p) − R_f) / σ_p

where R_p is the portfolio return, R_f is the risk-free return, and σ_p is the standard deviation of the excess portfolio return. In practice, the higher the Sharpe ratio, the better the portfolio performance.
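A direct sample version of this definition:

```python
import numpy as np

def sharpe_ratio(portfolio_returns, risk_free=0.0):
    """SR = (E[R_p] - R_f) / sigma_p, computed from a sample of returns,
    with sigma_p the sample standard deviation of the excess returns."""
    excess = np.asarray(portfolio_returns) - risk_free
    return excess.mean() / excess.std(ddof=1)

sr = sharpe_ratio([0.01, 0.02, 0.03])  # mean 0.02, sample sd 0.01
```

Applied to the realized returns of the MV, MVE and Fast-MCD portfolios, this gives the values compared in Table 6.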
5. Result
The data consist of 259 historical daily arithmetic returns (January 3, 2012 – December 31, 2012) of the eight stocks chosen from the Jakarta Islamic Index. These data are used as the scenario to compare the performance of the three portfolios (MV, MVE and Fast-MCD). Table 1 presents the mean of each of the eight stocks, together with its standard deviation.
As expected, from Table 3 it can be observed that the eight return series are not normally distributed. This is indicated by Sig. < 0.05, meaning that there is not sufficient evidence to accept H0 (that the data are normally distributed).
In this section, the portfolio compositions of the classical portfolio and the robust portfolios are compared. Optimal portfolios are established for various values of μ0, namely 0.0013 – 0.0021. Tables 3, 4 and 5 show the composition of each portfolio.
It can be noticed that increasing μ0 causes an increase in the weights of the assets INDF and CPIN. Meanwhile, increasing μ0 causes a decrease in the weights of ASRI, JSMR, TLKM, TINS, AKRA and EXCL.
Meanwhile, the MVE portfolio obtains the following results:
It can be seen that in the formation of the optimal MVE portfolio at μ0 = 0.001, the weight of each asset is as follows: ASRI 4.63%, INDF 16.6%, JSMR 27.47%, TLKM 18.1%, AKRA 4.36%, CPIN 6.37%, and EXCL 13.53%. Interestingly, increasing μ0 causes the weights of CPIN and EXCL to rise, while the weights of the other assets decrease.
The establishment of optimal portfolios through Fast-MCD is presented in the table below:
Table 5 presents the optimal Fast-MCD portfolio for various expected returns. It can be observed that ASRI is never involved in the formation of the portfolio, as indicated by its weight of 0%. The same happens to TLKM, TINS and AKRA. Table 5 also shows that the CPIN, EXCL and INDF stocks contribute the dominant share compared to the other stocks.
Based on the analysis of the portfolio weights, we can conclude that the three approaches yield different portfolio weights. At various levels of expected return, the diversified portfolio of the classical model contrasts with those of the robust models. However, the difference between the classical and robust portfolio compositions diminishes as the given return increases.
In this section, the risk and Sharpe ratio performance of the classical and robust models are compared. The results are presented in the following table:
Table 6. Standard Deviation and Sharpe Ratio for Given Expected Return
µ0 MV MVE FastMCD
Stdev Sharpe Stdev Sharpe Stdev Sharpe
0.0013 0.009274 0.010783 0.009165 0.010911 0.006633 0.015080
0.0014 0.009381 0.021320 0.009487 0.021081 0.007483 0.026730
0.0015 0.009695 0.030944 0.009798 0.030618 0.008367 0.035860
0.0016 0.010392 0.038491 0.010250 0.039024 0.009487 0.042160
0.0017 0.011225 0.044543 0.010954 0.045645 0.010583 0.047250
0.0018 0.012247 0.048992 0.011662 0.051449 0.011747 0.051077
0.0019 0.013191 0.053066 0.012490 0.056045 0.012961 0.054008
0.0020 0.014142 0.056569 0.013416 0.059630 0.014142 0.056569
0.0021 0.016125 0.058140 0.015492 0.058090 0.015492 0.058090
Table 6 presents the risk and Sharpe ratio at different expected returns. It shows that the Fast-MCD portfolio gives the smallest risk of the three. Similarly, the Sharpe ratio performance of Fast-MCD is the highest. The greater the value of the Sharpe ratio, the better the portfolio, since the Sharpe ratio measures the expected return per unit of risk. Therefore, in the context of risk and Sharpe ratio, we can conclude that the Fast-MCD portfolio is superior to the classical and MVE portfolios.
Another way to look at the performance of the portfolios is to construct the efficient frontier. An efficient frontier is the curve that shows all efficient portfolios in a risk-return framework. An efficient portfolio is defined as the portfolio that maximizes the expected return for a given level of risk (standard deviation), or the portfolio that minimizes the risk subject to a given expected return. The following figure shows the behaviour of the efficient frontier for each portfolio.
[Figure 1. Efficient frontiers of the MV, MVE and Fast-MCD portfolios (expected return 0.0011–0.0022 versus standard deviation 0.006–0.017).]
Based on Figure 1, it can be observed that the Fast-MCD efficient frontier is superior to the MVE and MV efficient frontiers.
6. Conclusion
This study mainly compares the performance of three different portfolios, i.e. the Mean-Variance portfolio, the MVE portfolio and the Fast-MCD portfolio. The empirical results show that, for a set of given returns, the compositions of the three portfolios are different. Meanwhile, there is no significant difference between the MV portfolio and the Fast-MCD portfolio when the expected return grows.
Through the comparison of the risk (standard deviation), the Sharpe ratio and the efficient frontier, it is clear that the Fast-MCD portfolio performs better than the Mean-Variance portfolio and the MVE portfolio.
References
Bengtsson, C. (2004). The Impact of Estimation Error on Portfolio Selection for Investors with Constant Relative Risk Aversion.
Best, M. J., and Grauer, R. R. (1991). On the sensitivity of mean-variance efﬁcient portfolios to changes
in asset means: some analytical and computational results. Review of Financial Studies, 4(2),
315-342.
Broadie, M. (1993). Computing efﬁcient frontiers using estimated parameters. Annals of Operations
Research, 45, 21-58.
Ceria, S., and Stubbs, R. A. (2006). Incorporating estimation errors into portfolio selection: robust
portfolio construction. Journal of Asset Management, 7(2), 109-127.
Chopra, V. K. and Ziemba,W. T. (1993). The effects of errors in means, variances, and covariances on
optimal portfolio choice. Journal of Portfolio Management, 19(2), 6-11.
DeMiguel, V. and Nogales, F. J. (2008). Portfolio selection with robust estimation. Technical Report,
London Business School.
Perret-Gentil, C. and Victoria-Feser, M.-P. (2004). Robust Mean-Variance Portfolio Selection. Working Paper 173, National Centre of Competence in Research NCCR FINRISK.
Hu, J. (2012). An Empirical Comparison of Different Approaches in Portfolio Selection. U.U.D.M. Project Report 2012:7.
Huber, P. J. (1981). Robust Statistics. New York: Wiley.
Lauprete, G.J. (2001). Portfolio risk minimization under departures from normality. PhD thesis, Sloan
school of Management, Massachusetts Institute of Technology, Cambridge,MA.
Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7: 77-91.
Rousseeuw, P.J. and K. Van Driessen (1999). A Fast Algorithm for the Minimum Covariance
Determinant Estimator. Technometrics, 41, 212–223.
Staudte, R.G. and Sheather, S.J. (1990). Robust Estimation and Testing. John Wiley and Sons Inc.
Vaz-de Melo, B. and Camara, R. P. (2003). Robust modeling of multivariate financial data. Coppead Working Paper Series 355, Federal University at Rio de Janeiro, Rio de Janeiro, Brazil.
Welsch, R.E. and Zhou, X. (2007). Application of Robust Statistics to Asset Allocation Models. REVSTAT Statistical Journal, 5(1): 97-114.
ABSTRACT: This research discusses a multivariate model for predicting the efficiency of the financial performance of insurance companies. The multivariate models used are the discriminant model and the logistic regression model. The change in net profit is used as the basis for grouping the data into two categories, because profit is often used as an indicator for measuring company performance. The predictor variables are represented by 7 financial ratios. A multivariate model is obtained by comparing the results of discriminant analysis and logistic regression analysis. Five of the seven financial ratios significantly influence the prediction of the efficiency of the financial performance of insurance companies.
1. Introduction
Profit is one indicator of the performance of a company. Earnings growth that increases constantly from year to year can give a positive signal about the prospects of the company's future performance (Margaretta, 2010). Financial ratio analysis can be used as a tool for predicting the financial performance of a company. The financial performance of a company is pictured in its financial statements, since the financial statements contain items such as the assets, liabilities, capital and profits of the company. One use of financial statements is to picture the company's growth or decline from one period to the next, and to allow it to be compared with other companies in similar industries.
Beaver (1966) used financial ratios as predictors of failure and stated that the usefulness of ratios can only be tested in relation to some specific purpose. Ratios are now widely used as predictors of failure. BarNiv and Hershbarger (1990) presented a model that incorporates variables designed to identify the financial solvency of life insurers. Three multivariate analyses (multidiscriminant, nonparametric, and logit) have been used to examine the implementation and efficiency of alternative multivariate models for life insurance solvency (Mahmoud, 2008).
In this study the authors present a multivariate model to predict the efficiency of financial performance, based on the net profit, of insurance companies listed on the Stock Exchange, using financial ratios. The multivariate models used are discriminant models and logistic regression models.
2. Literature Review
2.1 Earnings and Earnings Growth
Profit or gain in accounting is defined as the difference between selling price and cost of production (Wikipedia, 2011). Corporate profit growth is the profit for year t minus the profit for year t−1, divided by the profit for year t−1 (Zainuddin and Jogiyanto, 1992). Earnings growth forecasts
are often used by investors, creditors, companies and governments to advance their business. The earnings growth formula is:

Y = (X_t − X_{t−1}) / X_{t−1} (1)

where X_t is the profit in year t, X_{t−1} is the profit in year t−1, and Y is the earnings growth.
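Equation (1) as a one-line function:

```python
def earnings_growth(profit_t, profit_prev):
    """Y = (X_t - X_{t-1}) / X_{t-1}, equation (1)."""
    return (profit_t - profit_prev) / profit_prev
```

For example, a profit rising from 100 to 110 gives an earnings growth of 0.10, while a fall in profit gives a negative growth value.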
The squared distances d²_1, d²_2, ..., d²_n are random variables that are approximately Chi-Square distributed (Johnson and Wichern, 1982). Although the distances are not independent and not exactly Chi-Square distributed, it is still very useful to plot them. The result of this plot is known as a Chi-Square plot. The algorithm for the construction of a Chi-Square plot is as follows:
1. Sort the squared distance values from equation (2) from the smallest to the largest, giving d²_(1) ≤ d²_(2) ≤ ... ≤ d²_(n).
2. Plot the pairs (d²_(j), q_p((j − 1/2)/n)), where q_p((j − 1/2)/n) is the 100(j − 1/2)/n-th percentile of the Chi-Square distribution with p degrees of freedom.
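The two steps above can be sketched as follows, returning the plotting coordinates rather than drawing the figure:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_plot_points(d2, p):
    """Coordinates for a Chi-Square plot: sort the squared Mahalanobis
    distances and pair them with chi2_p quantiles at probabilities
    (j - 1/2)/n, j = 1, ..., n."""
    d2_sorted = np.sort(np.asarray(d2))
    n = len(d2_sorted)
    probs = (np.arange(1, n + 1) - 0.5) / n
    quantiles = chi2.ppf(probs, df=p)
    return d2_sorted, quantiles  # plot d2_sorted against quantiles
```

If the data are approximately multivariate normal, the plotted points should lie close to a straight line through the origin with slope 1.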
Equation (3) will follow the T²_{p, n1+n2−2} distribution when H0 is true, and

((n1 + n2 − p − 1) / ((n1 + n2 − 2) p)) T² ~ F_{p, n1+n2−p−1} (4)

where p, the dimension of the T² statistic, becomes the first degrees-of-freedom parameter of the F statistic.
Fisher classifies an observation based on the score calculated from the linear function Y = ℓ′X, where ℓ′ is the vector containing the explanatory-variable coefficients that form the linear equation of the response variable:

ℓ′ = [ℓ_1, ℓ_2, ..., ℓ_p]

X = [X_1]
    [X_2]

X_k denotes the data matrix of the k-th group:

      [ x_11k  x_12k  ...  x_1pk ]
X_k = [ x_21k  x_22k  ...  x_2pk ]
      [   ⋮      ⋮     ⋱     ⋮   ]
      [ x_n1k  x_n2k  ...  x_npk ]

with i = 1, 2, ..., n; j = 1, 2, ..., p; k = 1 and 2; and x_ijk denotes the i-th observation of the j-th variable in the k-th group.
According to Fisher, the best linear combination maximizes the ratio between the squared distance of the means of Y obtained from X in groups 1 and 2 and the variance of Y, formulated as follows:

(μ_1Y − μ_2Y)² / σ²_Y = (ℓ′(μ_1 − μ_2)(μ_1 − μ_2)′ℓ) / (ℓ′Σℓ) (5)

If (μ_1 − μ_2) = δ, then equation (5) becomes (ℓ′δ)² / (ℓ′Σℓ). Because Σ is a positive definite matrix, by the Cauchy-Schwarz inequality this ratio is maximized when

ℓ = cΣ⁻¹δ = cΣ⁻¹(μ_1 − μ_2)

Choosing c = 1 produces the so-called Fisher linear combination:

Y = ℓ′X = (μ_1 − μ_2)′Σ⁻¹X (6)
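Equation (6) and the resulting allocation rule can be sketched as follows; the midpoint cutoff used here assumes equal priors and equal misclassification costs:

```python
import numpy as np

def fisher_direction(mu1, mu2, Sigma):
    """Fisher linear combination coefficients l = Sigma^-1 (mu1 - mu2), eq. (6)."""
    return np.linalg.solve(Sigma, mu1 - mu2)

def classify(x, mu1, mu2, Sigma):
    """Allocate x to group 1 if its score Y = l'x is at least the midpoint
    of the two group mean scores, (l'mu1 + l'mu2)/2; otherwise to group 2."""
    l = fisher_direction(mu1, mu2, Sigma)
    midpoint = 0.5 * (l @ mu1 + l @ mu2)
    return 1 if l @ x >= midpoint else 2
```

In practice μ_1, μ_2 and Σ are replaced by the group sample means and the pooled sample covariance matrix.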
In the logistic regression model, the probability is

π(X) = e^(β_0 + β_1X_1 + β_2X_2 + ... + β_kX_k) / (1 + e^(β_0 + β_1X_1 + β_2X_2 + ... + β_kX_k)) (10)

The principle of the maximum likelihood method is to determine the parameter values that maximize the likelihood function. The estimated parameter values can be obtained using the IBM SPSS 19 statistical software.
b. Wald Test
The Wald test is used to test partially (individually) which independent variables are significant and which are not in the multiple logistic regression model. The Wald test uses the Z statistic, which follows the standard normal distribution. The Z statistic used is:

Z = β̂_i / SE(β̂_i) (12)

where β̂_i is the estimator of the parameter β_i, and SE(β̂_i) is the estimator of the standard error of the coefficient β_i. If |Z| > Z_{1−α/2}, then H0 is rejected and H1 is accepted.
d. R-Square
The value of R² in logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of R² is determined using the formula:
R² = 1 − exp(−L²/n) (13)

where L = the log-likelihood value of the model, and n = the number of data points.
From Table 1 above, it can be concluded that in the group coded 0 only the independent variables ROI (x2), PER (x5), and LR (x7) were normally distributed, and in the group coded 1 only the independent variables ROI (x2), ROE (x3), and ER (x6) were normally distributed. Thus, it can be said that most of the independent variables are not normally distributed.
In this section we checked whether or not there is a multicollinearity relationship among the independent variables. The multicollinearity check was performed using IBM SPSS 19 statistical software. The results of the multicollinearity check are given in Table 2.
From Table 2 it can be seen that there is some multicollinearity between the independent variables, as follows:
- The correlation between ROI (x2) and ROE (x3) is 0.870 > 0.5 (strong correlation)
- The correlation between SR (x4) and ER (x6) is 0.532 > 0.5 (strong correlation)
- The correlation between SR (x4) and LR (x7) is 0.568 > 0.5 (strong correlation)
- The correlation between ER (x6) and LR (x7) is 0.652 > 0.5 (strong correlation)
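The same screening rule, flagging any pair with absolute correlation above 0.5, can be sketched in code; the function below is illustrative, since the actual ratio data are not reproduced here.

```python
import numpy as np

def strong_pairs(X, names, threshold=0.5):
    """Return variable pairs whose absolute Pearson correlation exceeds threshold."""
    r = np.corrcoef(X, rowvar=False)  # columns of X are variables
    return [(names[i], names[j], round(float(r[i, j]), 3))
            for i in range(len(names)) for j in range(i + 1, len(names))
            if abs(r[i, j]) > threshold]
```

Applied to the seven financial ratios, such a routine would reproduce the four flagged pairs above.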
A test of mean vectors was conducted to determine whether or not there is a difference between the groups, using IBM SPSS 19 statistical software. The results of the mean-vector test are given in Table 3.
From Table 3 it can be seen that, of the seven independent variables, only two differ significantly between the two groups discriminating the efficiency of the insurance companies' financial performance, namely ROI (x2) and ROE (x3).
The equality of the variance-covariance matrices was tested with Box's M test using IBM SPSS 19 statistical software. The result of the variance-covariance equality test is given in Table 4 (df1 = 28, df2 = 2070.719, Sig. = 0.000).
Because Sig. = 0.000 < 0.05, the hypothesis H0 is rejected, meaning that the group covariance matrices are significantly different. Based on the results of the normality tests of the independent variables, the multicollinearity check, and the test of equality of the variance-covariance matrices, the assumptions are evidently violated. In the case of only two groups/categories, if the group covariance matrices differ significantly, the subsequent steps of discriminant analysis should not be carried out (Santoso, 2005). This is also supported by an earlier study by Cooper and Emory (1995): when the distributions of most of the independent variables are not normal, testing with parametric analyses such as the t-test, Z-test, ANOVA, and discriminant analysis is not appropriate (Almilia, 2004).
Data that do not meet the assumption of multivariate normality can cause problems in the estimation of the discriminant function; therefore, where possible, logistic regression analysis can be used as an alternative (Gessner et al., 1998; Huberty, 1984; Johnson and Wichern, 1982).
From Table 5 it can be seen that the value of the log likelihood is −16.402.
From Table 6, the provisional multiple logistic regression model obtained is:

π̂(x) = e^g(x) / (1 + e^g(x)),
g(x) = 1.702 + 0.022x1 + 27.891x2 + 16.267x3 + 0.011x4 − 0.034x5 + 2.431x6 − 2.286x7
b. Wald Test
Based on the Wald test results in Table 6, it can be concluded that the independent variables Current Ratio (x1) and Solvency Ratio (x4) do not partially (individually) have a significant effect on the insurance companies' financial performance efficiency. The independent variables with a significant partial effect on the efficiency of the financial performance of the insurance companies are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7). Because Current Ratio (x1) and Solvency Ratio (x4) are not significant, they are removed from the model, and the parameter estimation is then repeated with the five significant independent variables.
From Table 7 it can be seen that the value of the log likelihood is −16.4255.
From Table 8, the multiple logistic regression model obtained is:

π̂(x) = e^g(x) / (1 + e^g(x)),
g(x) = 1.686 + 25.704x2 + 16.927x3 − 0.034x5 + 2.306x6 − 2.230x7
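Using the Table 8 coefficients, the fitted probability for a single company can be computed as below; the input ratio values are hypothetical, chosen only to illustrate the calculation.

```python
import math

# Coefficients of the reduced model (Table 8): intercept, ROI, ROE, PER, ER, LR
beta = {"const": 1.686, "x2": 25.704, "x3": 16.927,
        "x5": -0.034, "x6": 2.306, "x7": -2.230}

def predict(x):
    """pi_hat(x) = e^g(x) / (1 + e^g(x)), where x maps variable names to values."""
    g = beta["const"] + sum(beta[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-g))

# Hypothetical ratio values for one company
p = predict({"x2": 0.05, "x3": 0.10, "x5": 4.0, "x6": 0.30, "x7": 0.60})
```

A probability above 0.5 would classify the company into the group coded 1.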
G = 2{ Σi=1..n [ yi ln π̂i + (1 − yi) ln(1 − π̂i) ] − [ n1 ln n1 + n0 ln n0 − n ln n ] }
  = 2[ −16.4255 − 27 ln 27 − 13 ln 13 + 40 ln 40 ]
  = 2[ −16.4255 − 88.9876 − 33.3443 + 147.5552 ]
  = 17.5956
Since G = 17.5956 > χ²(0.05; 5) = 11.0705, H0 is rejected.
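The G computation above can be checked in a few lines:

```python
import math

L = -16.4255          # log likelihood of the reduced model
n1, n0, n = 27, 13, 40

G = 2 * (L - (n1 * math.log(n1) + n0 * math.log(n0) - n * math.log(n)))
# 27 ln 27 ≈ 88.9876, 13 ln 13 ≈ 33.3443, 40 ln 40 ≈ 147.5552, so G ≈ 17.5955
```

The tiny discrepancy with the printed 17.5956 comes from the rounded intermediate values used in the paper.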
b. Wald Test
Based on the results in Table 8, after re-estimating the parameters and partially testing all the independent variables with the Wald test, the five independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) each have a significant partial effect on the efficiency of the financial performance of the insurance companies.
d. R-Square
The value of R² in the logistic regression analysis shows the strength of the relationship between the independent variables and the dependent variable. The value of R² is determined as follows:
R² = 1 − exp(−L²/n)
   = 1 − exp(−(−16.4255)²/40)
   = 1 − exp(−6.7449)
   = 0.9988
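As an arithmetic check, the computation of R² can be reproduced directly:

```python
import math

L = -16.4255   # log likelihood of the final model
n = 40         # number of observations

r2 = 1 - math.exp(-(L ** 2) / n)  # L^2 / n ≈ 6.7449
```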
Because R² = 0.9988, or 99.88%, the independent variables ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7) have a strong relationship with the efficiency of the financial performance of the insurance companies.
From Table 9 above, the classification accuracy of the logistic regression model in predicting the financial performance efficiency of the insurance companies is 77.5%.
5. Conclusions
In this case, the assumptions of multivariate normality, absence of multicollinearity, and equality of the variance-covariance matrices required for discriminant analysis were violated. An appropriate model to predict the efficiency of the financial performance of insurance companies listed on the Indonesia Stock Exchange is the logistic regression model, with a model accuracy of 77.5%; the rest is influenced by other factors. The variables that affect the efficiency of financial performance, based on changes in income of the insurance companies listed on the Indonesia Stock Exchange, are ROI (x2), ROE (x3), PER (x5), Expenses Ratio (x6), and Loss Ratio (x7).
6. References
Agresti, A. 1996. An Introduction to Categorical Data Analysis. New York: John Wiley & Sons, Inc.
Almilia, L.S. 2004. Analisis Faktor-Faktor yang Mempengaruhi Kondisi Financial Distress Suatu Perusahaan yang Terdaftar di Bursa Efek Jakarta. Jurnal Riset Akuntansi Indonesia, Vol. 7, No. 1, January, pp. 1-22.
Hajarisman, N. 2008. Seri Buku Ajar Statistika Multivariat. Bandung: Program Studi Statistika Universitas Islam Bandung.
Johnson, R.A., and Wichern, D.W. 1982. Applied Multivariate Statistical Analysis. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.
Mahmoud, O.H. 2008. A Multivariate Model for Predicting the Efficiency of Financial Performance for Property and Liability Egyptian Insurance Companies. Casualty Actuarial Society.
Margaretta, Y. 2010. Analisis Rasio Keuangan, Kebijakan Deviden dengan Ukuran Perusahaan sebagai Variabel Kontrol dalam Memprediksi Pertumbuhan Laba pada Perusahaan Manufaktur yang Terdaftar di Bursa Efek Indonesia. Surabaya: Skripsi Program S1 Akuntansi STIE PERBANAS.
Santoso, S. 2005. Menggunakan SPSS untuk Statistika Multivariat. Jakarta: Elex Media Komputindo.
Santoso, S. 2010. Statistika Multivariat Konsep dan Aplikasinya. Jakarta: Elex Media Komputindo.
Zainudin and Jogiyanto, H. 1999. Manfaat Rasio Keuangan dalam Memprediksi Pertumbuhan Laba (Studi Empiris pada Perusahaan Perbankan yang Terdaftar di Bursa Efek Jakarta). Jurnal Riset Ekonomi dan Akuntansi Indonesia, Vol. 2, No. 1, January, pp. 66-90.
Tobing, M. 2011. http://www.martintobing.com/view/214 (accessed 31 January 2012).
_____________. http://cafe-ekonomi.blogspot.com/2009/09/artikel-tentang-laba.html (accessed 27 March 2012).
Abstract: The set 𝑧⁻¹𝐹^𝑚[[𝑧⁻¹]] is a vector space consisting of power series in 𝑧⁻¹ with coefficients in 𝐹^𝑚. The subspaces of 𝑧⁻¹𝐹^𝑚[[𝑧⁻¹]] have many properties that are often used in behavior theory. In this paper we discuss one of these properties: the necessary and sufficient condition under which the annihilator of the preannihilator of a subspace of 𝑧⁻¹𝐹^𝑚[[𝑧⁻¹]] equals the subspace itself.
1. Introduction
Behavior theory is an interesting subject to study from an algebraic point of view. Willems [1] defined a behavior as the set of all trajectories of a dynamical system. From the algebraic point of view, we can see a behavior as a linear, shift-invariant, and complete subspace of z⁻¹F^m[[z⁻¹]]. In this paper we do not focus on behavior theory itself; rather, we discuss a property of subspaces of z⁻¹F^m[[z⁻¹]] that is useful for studying behavior theory from the algebraic point of view.
2. Preliminaries
Let F be an arbitrary field and F^m be the space of all m-vectors with coordinates in F. The set F((z⁻¹)) is defined as follows:

F((z⁻¹)) = { Σ_{i=−∞}^{n_f} f_i z^i | f_i ∈ F, n_f ∈ ℤ }. (1)

If f(z) = Σ_{i=−∞}^{n_f} f_i z^i and g(z) = Σ_{i=−∞}^{n_g} g_i z^i are both elements of F((z⁻¹)), then the operations of addition and multiplication are defined by

(f + g)(z) = Σ_{i=−∞}^{max(n_f, n_g)} (f_i + g_i) z^i,

and

(fg)(z) = Σ_{k=−∞}^{n_f + n_g} h_k z^k,  where h_k = Σ_i f_i g_{k−i} (P.A. Fuhrmann, 2010).
The set F[z], the polynomial ring over the field F, is defined as the set of all polynomials of the form Σ_{i=0}^{n_f} f_i z^i, where f_i ∈ F and n_f is a nonnegative integer. It can also be expressed as a subset of F((z⁻¹)) as follows:
F[z] = { Σ_{i=0}^{n_f} f_i z^i | f_i ∈ F, n_f ∈ ℕ ∪ {0} }.

The set F[z] is the polynomial ring over the field F [2].
The set of all formal power series in z⁻¹ with coefficients in the field F, denoted by F[[z⁻¹]], can be expressed as follows:

F[[z⁻¹]] = { Σ_{i=0}^{∞} f_i z^{−i} | f_i ∈ F }.

The set z⁻¹F[[z⁻¹]] is defined by

z⁻¹F[[z⁻¹]] = { Σ_{i=1}^{∞} f_i z^{−i} | f_i ∈ F }.
We defined the sets F((z⁻¹)), F[z], and F[[z⁻¹]] at the beginning of this section. Now we discuss the sets F^m((z⁻¹)), F^m[z], and F^m[[z⁻¹]], each of which is a vector space over the field F. The set F^m((z⁻¹)) is defined as follows:

F^m((z⁻¹)) = { Σ_{i=−∞}^{n_f} f_i z^i | f_i ∈ F^m, n_f ∈ ℤ }.

Elements of F[z]^m can be identified with elements of the set F^m[z], that is,

F^m[z] = { Σ_{i=0}^{n_f} f_i z^i | f_i ∈ F^m, n_f ∈ ℕ ∪ {0} }.

The set F^m[z] is a module over the ring F[z]. The set F^m[[z⁻¹]] is defined as

F^m[[z⁻¹]] = { Σ_{i=0}^{∞} f_i z^{−i} | f_i ∈ F^m },

and the set z⁻¹F^m[[z⁻¹]] is defined by

z⁻¹F^m[[z⁻¹]] = { Σ_{i=1}^{∞} f_i z^{−i} | f_i ∈ F^m }.

The set z⁻¹F^m[[z⁻¹]] is a vector space over the field F.
The truncation map P_n on z⁻¹F^m[[z⁻¹]] keeps the first n coefficients:

P_n( Σ_{i=1}^{∞} h_i z^{−i} ) = Σ_{i=1}^{n} h_i z^{−i}. (2)

The subset B ⊆ z⁻¹F^m[[z⁻¹]] is complete if, for any w = Σ_{i=1}^{∞} w_i z^{−i} ∈ z⁻¹F^m[[z⁻¹]] such that every truncation P_n(w) coincides with the truncation of some element of B, we have w ∈ B.
A function (·,·) : E × F → F satisfying

(x1 + x2, y) = (x1, y) + (x2, y),  x1, x2 ∈ E, y ∈ F,

and

(x, y1 + y2) = (x, y1) + (x, y2),  x ∈ E, y1, y2 ∈ F,

is called a bilinear function on E × F (Greub, G.H., 1967).
The bilinear form on F^m((z⁻¹)) is defined by

[·,·] : F^m((z⁻¹)) × F^m((z⁻¹)) → F, (3)

with the following rule: for all f, g ∈ F^m((z⁻¹)), where f(z) = Σ_{j=−∞}^{n_f} f_j z^j and g(z) = Σ_{j=−∞}^{n_g} g_j z^j with f_j, g_j ∈ F^m,

[f, g] = Σ_{j=−∞}^{∞} g_jᵀ f_{−j−1} = g_{n_g}ᵀ f_{−n_g−1} + ⋯ + g_0ᵀ f_{−1} + g_{−1}ᵀ f_0 + ⋯ + g_{−n_f−1}ᵀ f_{n_f}. (4)

Since only finitely many terms in this sum are nonzero, [·,·] is well defined. We can also show that [·,·] is a bilinear form on F^m((z⁻¹)) × F^m((z⁻¹)). Next, consider the following theorem.
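For series with finitely many nonzero coefficients, the pairing in equation (4) can be sketched numerically. The dict-of-coefficients representation and the function name below are illustrative choices, not from the paper.

```python
import numpy as np

def pairing(f, g):
    """[f, g] = sum_j g_j^T f_{-j-1}, for Laurent series with finitely many
    nonzero coefficients, stored as dicts {exponent j: coefficient vector}."""
    total = 0.0
    for j, gj in g.items():
        fj = f.get(-j - 1)
        if fj is not None:
            total += float(np.dot(gj, fj))
    return total

# g has only g_{-1} = (2, 5); f has only f_0 = (1, 0), so [f, g] = g_{-1}^T f_0 = 2
f = {0: np.array([1.0, 0.0])}
g = {-1: np.array([2.0, 5.0])}
```

Only exponent pairs (j, −j−1) contribute, which is why the sum is finite for finitely supported series.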
Definition 2 (Fuhrmann P.A., 2002). Let M ⊆ F^m[z] be a subspace. We define the annihilator of M by

M⊥ = { f ∈ z⁻¹F^m[[z⁻¹]] | [g, f] = 0 for all g ∈ M },

which is also a subspace of z⁻¹F^m[[z⁻¹]]. Let V ⊆ z⁻¹F^m[[z⁻¹]] be a subspace; the preannihilator ⊥V is defined by

⊥V = { k ∈ F^m[z] | [h, k] = 0 for all h ∈ V },

which is also a subspace of F^m[z].
Lemma 3 (Fuhrmann P.A., 2002). Let (F^m[z])* = z⁻¹F^m[[z⁻¹]], let A = P_{n0}(z⁻¹F^m[[z⁻¹]]), and let A* denote its dual. Every functional on A* can be identified with an element of F^m[z].
Proposition 4 (Fuhrmann P.A., 2002). Let V ⊆ z⁻¹F^m[[z⁻¹]] be a subspace. Then (⊥V)⊥ = V if and only if for every h ∈ z⁻¹F^m[[z⁻¹]] \ V there exists k ∈ ⊥V such that [k, h] ≠ 0.
Proof. [⇒] Assume (⊥V)⊥ = V holds. We will show that for any h ∈ z⁻¹F^m[[z⁻¹]] \ V there exists k ∈ ⊥V such that [k, h] ≠ 0. Let h be an arbitrary element of z⁻¹F^m[[z⁻¹]] \ V. From the assumption above, we have h ∉ (⊥V)⊥. This implies that there exists k ∈ ⊥V such that [h, k] ≠ 0.
[⇐] We prove this implication by contradiction. Assume (⊥V)⊥ ≠ V. It is obvious that V ⊆ (⊥V)⊥. That means there exists h ∈ (⊥V)⊥ \ V such that for all k ∈ ⊥V we have [h, k] = 0. This contradicts the assumption that for all h ∈ z⁻¹F^m[[z⁻¹]] \ V there exists k ∈ ⊥V such that [k, h] ≠ 0. Thus we have (⊥V)⊥ = V. ∎
is a finite-dimensional vector space. Let y = P_{n0}(h) and V_{n0} = P_{n0}(V). Based on the assumption, if B is a basis for V_{n0}, then B ∪ {y} is a basis for span{V_{n0}, y}. Hence there exists a functional

φ : span{V_{n0}, y} → F,  with φ(b_i) = 0 and φ(y) = 1.

In other words, there exists φ such that φ(V_{n0}) = 0 and φ(y) ≠ 0. Extending φ to φ : X_{n0}* → F implies φ ∈ (X_{n0}*)* = (P_{n0}(X*))*. From Lemma 3, we can identify φ with an element of F^m[z]. In other words, there exists f ∈ ⊥V such that [f, h] ≠ 0. Thus, based on Proposition 4, we have proved that V is complete if and only if (⊥V)⊥ = V. ∎
4. Conclusion
The set z⁻¹F^m[[z⁻¹]] is a vector space consisting of power series in z⁻¹ with coefficients in F^m. The subspaces of z⁻¹F^m[[z⁻¹]] have many properties that are often used in behavior theory. One of them is described in Proposition 5: a subspace V ⊆ z⁻¹F^m[[z⁻¹]] is complete if and only if the annihilator of its preannihilator equals the subspace itself.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Fuhrmann, P.A. (2002). A Study of Behaviors. Linear Algebra and Its Applications, vol. 351-352, pp. 303-380.
Fuhrmann, P.A. (2010). A Polynomial Approach to Linear Algebra. Springer.
Greub, G.H. (1967). Linear Algebra. Springer-Verlag.
Willems, J.C. (1986). From Time Series to Linear Systems. Part I: Finite-Dimensional Linear Time Invariant Systems. Automatica 22.
Abstract: In this paper we investigate a graph-coloring result of Michael Larsen, James Propp and Daniel Ullman from 1995, namely "The Fractional Chromatic Number of Mycielski's Graphs": the fractional clique number of a graph G is bounded below by the integer clique number, is equal to the fractional chromatic number, and is bounded above by the integer chromatic number. In other words,

𝜔(𝐺) ≤ 𝜔𝐹(𝐺) = 𝜒𝐹(𝐺) ≤ 𝜒(𝐺).

Given this relationship, a question arises: can the differences 𝜔𝐹(𝐺) − 𝜔(𝐺) and 𝜒(𝐺) − 𝜒𝐹(𝐺) be made arbitrarily large? The question is answered in the affirmative, using a sequence of graphs to show that the differences grow without bound. To determine fractional colorings and the fractional chromatic number, we proceed in two different ways: first intuitively, in a combinatorial manner characterized in terms of graph homomorphisms, and then in terms of independent sets, with calculations using linear programming. In this second context, fractional cliques are defined, and we see how they relate to fractional colorings. The relationship between fractional colorings and fractional cliques is the key to the proof of Larsen, Propp, and Ullman.
Keywords: fractional clique number, fractional chromatic number, Mycielski graphs, linear programming.
1. Introduction
In this paper, we discuss a result about graph colorings from 1995. The paper we will be investigating
is "The Fractional Chromatic Number of Mycielski's Graphs," by Michael Larsen, James Propp and
Daniel Ullman [3].
We will begin with some preliminary definitions, examples, and results about graph colorings.
Then we will define fractional colorings and the fractional chromatic number, which are the focus of
Larsen, Propp and Ullman's paper. We will define fractional colorings in two different ways: first in a
fairly intuitive, combinatorial manner that is characterized in terms of graph homomorphisms, and then
in terms of independent sets, which as we shall see, lends itself to calculation by means of linear
programming. In this second context, we shall also define fractional cliques, and see how they relate to
fractional colorings. This connection between fractional colorings and fractional cliques is the key to
Larsen, Propp and Ullman's proof.
A graph is defined as a set of vertices and a set of edges joining pairs of vertices. The precise definition
of a graph varies from author to author; in this paper, we will consider only finite, simple graphs, and
shall tailor our definition accordingly.
A graph G is an ordered pair (V (G), E(G)), consisting of a vertex set, V (G), and an edge set,
E(G). The vertex set can be any finite set, as we are considering only finite graphs. Since we are only
considering simple graphs, and excluding loops and multiple edges, we can define E(G) as a subset of
the set of all unordered pairs of distinct elements of V (G).
If u and v are elements of V (G), and {u, v} ∈ E(G), then we say that u and v are adjacent, denoted u ~
v. Adjacency is a symmetric relation, and in the case of simple graphs, anti-reflexive. A set of pairwise
adjacent vertices in a graph is called a clique and a set of pairwise non-adjacent vertices is called an
independent set.
For any graph G, we define two parameters: 𝛼(G), the independence number, and 𝜔(G), the
clique number. The independence number is the size of the largest independent set in V (G), and the
clique number is the size of the largest clique.
2.1.2 Examples
As examples, we define two families of graphs, the cycles and the complete graphs.
The cycle on n vertices (n > 1), denoted Cn, is a graph with V (Cn) = {1, . . . , n} and x ~ y in Cn if and only if 𝑥 − 𝑦 ≡ ±1 (mod n). We often depict Cn as a regular n-gon. The independence and clique numbers are easy to calculate: we have 𝛼(Cn) = ⌊n/2⌋ and 𝜔(Cn) = 2 (except for C3, which has a clique number of 3).
The complete graph on n vertices, Kn, is a graph with V (Kn) = {1, . . . , n} and x ~ y in V (Kn) for all x ≠ y. It is immediate that 𝛼(Kn) = 1 and 𝜔(Kn) = n. The graphs C5 and K5 are shown in Figure 1.
Figure 1. The graphs C5 and K5.
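For small graphs like these, 𝛼 and 𝜔 can be verified by brute force over all vertex subsets; the helper names in the sketch below are our own.

```python
from itertools import combinations

def cycle_edges(n):
    # C_n: x ~ y iff x - y ≡ ±1 (mod n), vertices 1..n
    return {frozenset((i, i % n + 1)) for i in range(1, n + 1)}

def complete_edges(n):
    return {frozenset(p) for p in combinations(range(1, n + 1), 2)}

def independence_number(n, edges):
    # alpha(G): largest set of pairwise non-adjacent vertices
    return max(len(S) for k in range(n + 1)
               for S in combinations(range(1, n + 1), k)
               if all(frozenset(p) not in edges for p in combinations(S, 2)))

def clique_number(n, edges):
    # omega(G): largest set of pairwise adjacent vertices
    return max(len(S) for k in range(n + 1)
               for S in combinations(range(1, n + 1), k)
               if all(frozenset(p) in edges for p in combinations(S, 2)))
```

This exhaustive search is exponential in the number of vertices, which is fine for C5 and K5 but not in general.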
A proper n-coloring (or simply a proper coloring) of a graph G can be thought of as a way of assigning,
from a set of n "colors", one color to each vertex, in such a way that no adjacent vertices have the same
color. A more formal definition of a proper coloring relies on the idea of graph homomorphisms.
If G and H are graphs, a graph homomorphism from G to H is a mapping 𝜙 : V (G) → V (H) such
that u ~ v in G implies 𝜙(u) ~ 𝜙(v) in H. A bijective graph homomorphism whose inverse is also a graph
homomorphism is called a graph isomorphism.
Now we may define a proper n-coloring of a graph G as a graph homomorphism from G to Kn.
This is equivalent to our previous, informal definition, which can be seen as follows. Given a "color"
for each vertex in G, with adjacent vertices always having different colors, we may define a
homomorphism that sends all the vertices of the same color to the same vertex in Kn. Since adjacent
vertices have different colors assigned to them, they will be mapped to different vertices in Kn, which
are adjacent. Conversely, any homomorphism from G to Kn assigns to each vertex of G an element of
{1, 2, . . . , n}, which may be viewed as colors. Since no vertex in Kn is adjacent to itself, no adjacent
vertices in G will be assigned the same color.
In a proper coloring, if we consider the inverse image of a single vertex in Kn, i.e., the set of all
vertices in G with a certain color, it will always be an independent set. This independent set is called a
color class associated with the proper coloring. Thus, a proper n-coloring of a graph G can be thought
of as a covering of the vertex set of G with independent sets.
We define a graph parameter 𝜒(G), the chromatic number of G, as the smallest positive integer n
such that there exists a proper n-coloring of G. Equivalently, the chromatic number is the smallest
number of independent sets required to cover V (G). Any finite graph with k vertices can certainly be
colored with k colors, so we see that 𝜒(G) is well-defined for a finite graph G, and bounded from above
by |𝑉 (𝐺)|. It is also clear that, if we have a proper n-coloring of G, then 𝜒(G) ≤ n.
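The definition suggests a brute-force computation of 𝜒(G): try 1, 2, … colors and stop at the first n admitting a proper coloring (equivalently, a homomorphism to Kn). The sketch below is only practical for small graphs.

```python
from itertools import product

def is_proper(coloring, edges):
    # proper: adjacent vertices get different colors, i.e. the coloring is a
    # graph homomorphism into the complete graph on the colors used
    return all(coloring[u] != coloring[v] for (u, v) in edges)

def chromatic_number(vertices, edges):
    # try 1, 2, ... colors until some assignment is a proper coloring
    for n in range(1, len(vertices) + 1):
        for assignment in product(range(n), repeat=len(vertices)):
            coloring = dict(zip(vertices, assignment))
            if is_proper(coloring, edges):
                return n
```

Since any graph with k vertices is k-colorable, the loop always terminates.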
We can establish some inequalities relating the chromatic number to the other parameters we have defined. First, 𝜔(G) ≤ 𝜒(G), since all the vertices in a clique must receive different colors. Also, since each color class is an independent set, we have |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒(𝐺), where equality is attained if and only if each color class in an optimal coloring is the size of the largest independent set.
We can calculate the chromatic number for our examples. For the complete graphs, we have
𝜒(Kn) = n, and for the cycles we have 𝜒(Cn) = 2 for n even and 3 for n odd. In Figure 2, we see C5 and
K5 colored with three and five colors, respectively.
Figure 2. The graphs C5 and K5, properly colored with three and five colors, respectively.
We now generalize the idea of a proper coloring to that of a fractional coloring (or a set coloring),
which allows us to define a graph's fractional chromatic number, denoted 𝜒𝐹 (G), which can assume
non-integer values.
Given a graph, integers 0 < b ≤ a, and a set of a colors, a proper a/b-coloring is a function that
assigns to each vertex a set of b distinct colors, in such a way that adjacent vertices are assigned disjoint
sets. Thus, a proper n-coloring is equivalent to a proper n/1-coloring.
The definition of a fractional coloring can also be formalized by using graph homomorphisms.
To this end, we define another family of graphs, the Kneser graphs. For each ordered pair of positive
integers (a, b) with a ≥ b, we define a graph Ka:b. As the vertex set of Ka:b, we take the set of all b-
element subsets of the set {1, . . . , a}. Two such subsets are adjacent in Ka:b if and only if they are
disjoint. Note that Ka:b is an empty graph (i.e., its edge set is empty) unless a ≥ 2b.
Just as a proper n-coloring of a graph G can be seen as a graph homomorphism from G to the
graph Kn, so a proper a/b-coloring of G can be seen as a graph homomorphism from G to Ka:b.
The fractional chromatic number of a graph, 𝜒𝐹 (G), is the infimum of all rational numbers a/b
such that there exists a proper a/b-coloring of G. From this definition, it is not immediately clear that
𝜒𝐹 (G) must be a rational number for an arbitrary graph. In order to show that it is, we will use a different
definition of fractional coloring, but first, we establish some bounds for 𝜒𝐹 (G) based on our current
definition.
We can get an upper bound on the fractional chromatic number using the chromatic number. If we have a proper n-coloring of G, we can obtain a proper nb/b-coloring for any positive integer b by replacing each individual color with b different colors. Thus, we have 𝜒𝐹(G) ≤ 𝜒(G); or, in terms of homomorphisms, we can simply note the existence of a homomorphism from Kn to Knb:b (namely, map i to the set of j ≡ i (mod n)).
To obtain one lower bound on the fractional chromatic number, we note that a graph containing an n-clique has a fractional coloring with b colors on each vertex only if we have at least n · b colors to choose from; in other words, 𝜔(G) ≤ 𝜒𝐹(G).
Just as with proper colorings, we can obtain another lower bound from the independence number. Since each color in a fractional coloring is assigned to an independent set of vertices (the fractional color class), we have |𝑉(𝐺)| · b ≤ 𝛼(G) · a, or |𝑉(𝐺)|/𝛼(𝐺) ≤ 𝜒𝐹(𝐺).
Another inequality, which will come in handy later, regards fractional colorings of subgraphs. A
graph H is said to be a subgraph of G if V (H) ⊆ V (G) and E(H) ⊆ E(G). Notice that if H is a subgraph
of G, then any proper a/b-coloring of G, restricted to V (H), is a proper a/b-coloring of H. This tells us
that 𝜒𝐹 (H) ≤ 𝜒𝐹 (G).
Figure 3. A proper 5/2-coloring of C5, with the color sets {1, 2}, {3, 4}, {5, 1}, {2, 3}, {4, 5} on consecutive vertices.
We now give a second definition of fractional colorings in terms of independent sets. Let I(G) denote the set of all independent sets in G, and let I(G, u) denote the set of independent sets containing the vertex u. A fractional coloring of G is a function f : I(G) → [0, 1] such that for each vertex u, ∑𝐽∈𝐼(𝐺,𝑢) 𝑓(𝐽) ≥ 1; its weight is the sum of its values. Given a proper n-coloring of G, we may assign the value 1 to each color class and 0 to every other independent set; then

∑𝐽∈𝐼(𝐺,𝑢) 𝑓(𝐽) = 1

for each vertex u. The weight of this fractional coloring is simply the number of colors.
Next, suppose we have a graph G with a proper a/b-coloring as defined above, with a b-element set of colors associated with each vertex. Again, each color determines a color class, which is an independent set. If we define a function that sends each color class to 1/b and every other independent set to 0, then again we have, for each vertex u, ∑𝐽∈𝐼(𝐺,𝑢) 𝑓(𝐽) = 1, so we have a fractional coloring by our new definition, with weight a/b.
Finally, let us consider translating from the new definition to the old one. Suppose we have a
graph G and a function f mapping from I(G) to [0, 1] ⋂ Q. (We will see below why we are justified in
restricting our attention to rational valued functions.) Since the graph G is finite, the set I(G) is finite,
and the image of the function f is a finite set of rational numbers. This set of numbers has a lowest
common denominator, b. Now suppose we have an independent set I which is sent to the number m/b.
Thus, we can choose m different colors, and let the set I be the color class for each of them. Proceeding
in this manner, we will assign at least b different colors to each vertex, because of our condition that for
all u ∑𝐽∈𝐼(𝐺,𝑢) 𝑓( 𝐽 ) ≥ 1. If some vertices are assigned more than b colors, we can ignore all but b of
them, and we have a fractional coloring according to our old definition. If the weight of f is a/b and we do not ignore any colors completely, then we will have obtained a proper a/b-coloring. If some colors are ignored, then we actually have a proper d/b fractional coloring, for some d < a.
The usefulness of this new definition of fractional coloring and fractional chromatic number in terms
of independent sets is that it leads us to a method of calculation using the tools of linear programming.
To this end, we will construct a matrix representation of a fractional coloring.
For a graph G, define a matrix A(G), with columns indexed by V (G) and rows indexed by I(G).
Each row is essentially the characteristic function of the corresponding independent set, with entries
equal to 1 on columns corresponding to vertices in the independent set, and 0 otherwise.
Now let f be a fractional coloring of G and let y(G) be the vector indexed by I(G) with entries given by f. With this notation, and letting 1 denote the all-1s vector, the inequality y(G)ᵀA(G) ≥ 1ᵀ expresses the condition that
∑ 𝑓( 𝐽 ) ≥ 1
𝐽∈𝐼(𝐺,𝑢)
for all u ∈V (G).
In this algebraic representation of a fractional coloring, the determination of fractional chromatic
number becomes a linear programming problem. The entries of the vector y(G) are a set of variables,
one for each independent set in V (G), and our task is to minimize the sum of the variables (the weight
of the fractional coloring), subject to the set of constraints that each entry in the vector y(G)ᵀA(G) be at least 1, and that each variable be in the interval [0, 1]. This amounts to minimizing a linear
function within a convex polyhedral region in n-dimensional space defined by a finite number of linear
inequalities, where n = |𝐼(𝐺)|. This minimum must occur at a vertex of the region. Since each
hyperplane forming a face of the region is determined by a linear equation with integer coefficients,
then each vertex has rational coordinates, so our optimal fractional coloring will indeed take on rational
values, as promised.
The regular, integer chromatic number can be calculated with the same linear program by
restricting the values in the vector y(G) to 0 and 1. This is equivalent to covering the vertex set by
independent sets that may only have weights of 1 or 0. Although polynomial time algorithms exist for
calculating optimal solutions to linear programs, this is not the case for integer programs or 0-1
programs. In fact, many such problems have been shown to be NP-hard. In this respect, fractional
chromatic numbers are easier to calculate than integer chromatic numbers.
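As a concrete illustration, not taken from the paper, the linear program for 𝜒𝐹(C5) can be set up and solved with SciPy's `linprog`; the known value is 5/2.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

# C5: x ~ y iff x - y ≡ ±1 (mod 5), vertices 0..4
n = 5
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}

# All nonempty independent sets I(G)
ind_sets = [S for k in range(1, n + 1) for S in combinations(range(n), k)
            if all(frozenset(p) not in edges for p in combinations(S, 2))]

# Minimize sum(y) subject to: for each vertex u, sum over sets containing u >= 1
A_ub = np.array([[-1.0 if v in S else 0.0 for S in ind_sets] for v in range(n)])
res = linprog(np.ones(len(ind_sets)), A_ub=A_ub, b_ub=-np.ones(n), bounds=(0, 1))
print(res.fun)  # fractional chromatic number of C5 = 5/2
```

The optimum puts weight 1/2 on each of the five maximum independent sets, matching the 5/2-coloring pictured earlier.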
The linear program that calculates a graph's fractional chromatic number is the dual of another
linear program, in which we attempt to maximize the sum of elements in a vector x(G), subject to the
constraint A(G)x(G) ≤ 1. We can pose this maximization problem as follows: we want to define a
function h : V (G) → [0, 1], with the condition that, for each independent set in I(G), the sum of function
values on the vertices in that set is no greater than 1. Such a function is called a fractional clique, the
dual concept of a fractional coloring. As with fractional colorings, we define the weight of a fractional
clique to be the sum of its values over its domain. The supremum of weights of fractional cliques
defined for a graph is a parameter, 𝜔𝐹 (G), the fractional clique number.
Just as we saw a fractional coloring as a relaxation of the idea of an integer coloring, we would
like to understand a fractional clique as a relaxation of the concept of an integer clique to the rationals
(or reals). It is fairly straightforward to understand an ordinary clique as a fractional clique: we begin
by considering a graph G, and a clique, C ⊆ V (G). We can define a function h : V (G) → [0, 1] that
takes on the value 1 for each vertex in C and 0 elsewhere. This function satisfies the condition that its
values sum to no more than 1 over each independent set, for no independent set may intersect the clique
C in more than one vertex. Thus the function is a fractional clique, whose weight is the number of
vertices in the clique.
Since an ordinary n-clique can be interpreted as a fractional clique of weight n, we can say that
for any graph G, 𝜔(G) ≤ 𝜔𝐹 (G).
The most important identity we will use to establish our main result is the equality of the
fractional chromatic number and the fractional clique number. Since the linear programs which
calculate these two parameters are dual to each other, we apply the Strong Duality Theorem of Linear
Programming. We state the theorem in full; the reader is referred to [4] for more information about
linear programming. Consider a primal linear program of the form:
Maximize cᵀx
subject to Ax ≤ b
and x ≥ 0
with its dual, of the form:
Minimize yᵀb
subject to yᵀA ≥ cᵀ
and y ≥ 0
If both LPs are feasible, i.e., have non-empty feasible regions, then both can be optimized, and the two
objective functions have the same optimal value.
In the case of the fractional chromatic number and the fractional clique number, our primal LP is the
one that calculates the fractional clique number of a graph G. The vector c determining the objective function is
the all 1s vector, of dimension |𝑉 (𝐺)|, and the constraint vector b is the all 1s vector, of dimension
|𝐼(𝐺)|. The matrix A is the matrix described above, whose rows are the characteristic vectors of the
independent sets in I(G), defined over V (G). The vector x for which we seek to maximize the objective
function cᵀx has as its entries the values of a fractional clique at each vertex. The vector y for which we
seek to minimize the objective function yᵀb has as its entries the values of a fractional coloring on each
independent set.
In order to apply the Strong Duality Theorem, we need only establish that both LPs are feasible.
Fortunately, this is easy: the zero vector is in the feasible region for the primal LP, and any proper
coloring is in the feasible region for the dual. Thus, we may conclude that both objective functions have
the same optimal value; i.e., that for a graph G, we have 𝜔𝐹 (G) = 𝜒𝐹 (G).
This equality gives us a means of calculating these parameters. Suppose that, for a graph G, we
find a fractional clique with weight equal to r. Since the fractional clique number is the supremum of
weights of fractional cliques, we can say that r ≤ 𝜔𝐹 (G). Now suppose we also find a fractional coloring
of weight r. Then, since the fractional chromatic number is the infimum of weights of fractional
colorings, we obtain 𝜒𝐹 (G) ≤ r. Combining these with the equality we obtained from duality, we get
that 𝜔𝐹 (G) = r = 𝜒𝐹 (G). This is the method we use to prove our result.
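To make this sandwich method concrete, here is a small self-contained sketch (our own illustration, not from the paper) for the 5-cycle C5: in exact arithmetic it checks that the constant function h ≡ 1/2 is a fractional clique of weight 5/2, and that assigning weight 1/2 to each of the five 2-element independent sets gives a fractional coloring of the same weight, so that 𝜔𝐹(C5) = 𝜒𝐹(C5) = 5/2 by exactly the argument above.

```python
from fractions import Fraction
from itertools import combinations

# Vertices and edges of the 5-cycle C5 (a standard example; not from the paper).
n = 5
vertices = range(n)
edges = {frozenset((i, (i + 1) % n)) for i in vertices}

def is_independent(S):
    return all(frozenset(p) not in edges for p in combinations(S, 2))

# All independent sets of C5.
indep_sets = [set(S) for r in range(n + 1)
              for S in combinations(vertices, r) if is_independent(S)]

# Candidate fractional clique: h(v) = 1/2 on every vertex (weight 5/2).
# Its values must sum to at most 1 on each independent set.
h = {v: Fraction(1, 2) for v in vertices}
assert all(sum(h[v] for v in S) <= 1 for S in indep_sets)

# Candidate fractional coloring: weight 1/2 on each of the five maximal
# independent sets {i, i+2}; every vertex must be covered with total weight >= 1.
color_classes = [{i, (i + 2) % n} for i in vertices]
w = Fraction(1, 2)
assert all(is_independent(S) for S in color_classes)
assert all(sum(w for S in color_classes if v in S) >= 1 for v in vertices)

clique_weight = sum(h.values())            # 5/2, a lower bound on omega_F(C5)
coloring_weight = w * len(color_classes)   # 5/2, an upper bound on chi_F(C5)
assert clique_weight == coloring_weight == Fraction(5, 2)
print("omega_F(C5) = chi_F(C5) =", clique_weight)
```

These two certificates are exactly what the duality argument requires: a fractional clique bounding the clique number from below and a fractional coloring bounding the chromatic number from above, with equal weights.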
We have noted that the fractional clique number of a graph G is bounded from below by the integer
clique number, and that it is equal to the fractional chromatic number, which is bounded from above by
the integer chromatic number. In other words, 𝜔(G) ≤ 𝜔𝐹 (G) = 𝜒𝐹 (G) ≤ 𝜒(G).
Given these relations, one natural question to ask is whether the differences 𝜔𝐹 (G) − 𝜔(G) and
𝜒(G) − 𝜒𝐹 (G) can be made arbitrarily large. We shall answer this question in the affirmative, by
displaying a sequence of graphs for which both differences increase without bound.
The sequence of graphs we will consider is obtained by starting with a single edge K2, and
repeatedly applying a graph transformation, which we now define. Suppose we have a graph G, with V
(G) = {v1, v2, . . . , vn}. The Mycielski transformation of G, denoted 𝜇(G), has for its vertex set the set
{x1, x2, . . . , xn, y1, y2, . . . , yn, z} for a total of 2n + 1 vertices. As for adjacency, we put
xi ~ xj in 𝜇(G) if and only if vi ~ vj in G,
xi ~ yj in 𝜇(G) if and only if vi ~ vj in G,
and yi ~ z in 𝜇(G) for all i ∈ {1, 2, . . . , n}. See Figure 4 below.
[Figure 4: the Mycielski transformation applied to K2 (vertices v1, v2) and to C5 (vertices v1, . . . , v5), showing the vertices xi, yi, and the vertex z of 𝜇(G).]
The theorem that we shall prove states that this transformation, applied to a graph G with at least
one edge, results in a graph 𝜇(G) with
(a) 𝜔(𝜇(G)) = 𝜔(G),
(b) 𝜒(𝜇(G)) = 𝜒(G) + 1, and
(c) 𝜒𝐹(𝜇(G)) = 𝜒𝐹(G) + 1/𝜒𝐹(G).
First we note that the vertices x1, x2, . . . , xn induce a subgraph of 𝜇(G) which is isomorphic to G.
Thus, any clique in G also appears as a clique in 𝜇(G), so we have that 𝜔(𝜇(G)) ≥ 𝜔(G).
To obtain the opposite inequality, consider cliques in 𝜇(G). First, any clique containing the vertex
z can contain only one other vertex, since z is only adjacent to the y vertices, none of which are adjacent
to each other. Now consider a clique {xi(1), . . . ,xi(r), yj(1), . . . , yj(s)}. From the definition of the Mycielski
transformation, we can see that the sets {i(1), . . . , i(r)} and {j(1), . . . , j(s)} are disjoint, and that the
set {vi(1), . . . ,vi(r), vj(1), . . . , vj(s)} is a clique in G. Thus, having considered cliques with and without
vertex z, we see that for every clique in 𝜇(G), there is a clique of equal size in G, or in other words,
𝜔(𝜇(G)) ≤ 𝜔(G). Combining these inequalities, we have 𝜔(𝜇(G)) = 𝜔(G), as desired.
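As a computational check on this part of the theorem, the following brute-force sketch (our own code; the function names are ours) builds 𝜇(G) exactly as defined above, starting from K2, and verifies that the clique number stays at 2 while the chromatic number grows by one with each application: 𝜇(K2) is the 5-cycle, and 𝜇(C5) is the Grötzsch graph.

```python
from itertools import combinations

def mycielski(n, edges):
    """mu(G): vertices 0..n-1 are x_i, n..2n-1 are y_i, vertex 2n is z."""
    new = set()
    for (i, j) in edges:
        new.add((i, j))          # x_i ~ x_j  iff  v_i ~ v_j
        new.add((i, j + n))      # x_i ~ y_j  iff  v_i ~ v_j
        new.add((j, i + n))      # x_j ~ y_i  iff  v_i ~ v_j
    for i in range(n):
        new.add((i + n, 2 * n))  # y_i ~ z for all i
    return 2 * n + 1, new

def clique_number(n, edges):
    E = {frozenset(e) for e in edges}
    for r in range(n, 0, -1):  # largest subset that is pairwise adjacent
        for S in combinations(range(n), r):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                return r
    return 0

def chromatic_number(n, edges):
    E = {frozenset(e) for e in edges}
    def colorable(k, col):  # backtracking: color vertices 0..n-1 in order
        v = len(col)
        if v == n:
            return True
        for c in range(k):
            if all(not (frozenset((v, u)) in E and col[u] == c) for u in range(v)):
                if colorable(k, col + [c]):
                    return True
        return False
    k = 1
    while not colorable(k, []):
        k += 1
    return k

# Start from a single edge K2 and apply the transformation twice.
n, E = 2, {(0, 1)}
for expected_chi in (3, 4):
    n, E = mycielski(n, E)
    assert clique_number(n, E) == 2                 # omega stays 2
    assert chromatic_number(n, E) == expected_chi   # chi grows by 1
print("omega = 2 while chi reaches", expected_chi)
```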
Suppose f is a fractional clique on G with weight 𝜔𝐹(G). We define a function 𝑔 on V(𝜇(G)) by
𝑔(𝑥𝑖) = (1 − 1/𝜔𝐹(G)) 𝑓(𝑣𝑖)
𝑔(𝑦𝑖) = 𝑓(𝑣𝑖)/𝜔𝐹(G)
𝑔(𝑧) = 1/𝜔𝐹(G)
We must show that this is a fractional clique. In other words, we must establish that it maps its
domain into [0, 1], and that its values sum to at most 1 on each independent set in 𝜇(G). The codomain
is easy to establish: the range of f lies between 0 and 1, and since 𝜔𝐹(G) ≥ 𝜔(G) ≥ 2 > 1, we have
0 < 1/𝜔𝐹(G) < 1. Thus each expression in the definition of 𝑔 yields a number between 0 and 1. It remains to
show that the values of 𝑔 are sufficiently bounded on independent sets.
We introduce a notation: for M ⊆ V (G), we let 𝑥(𝑀) = {𝑥𝑖 |𝑣𝑖 ∈ 𝑀} and 𝑦(𝑀) = {𝑦𝑖 |𝑣𝑖 ∈ 𝑀}.
Now we will consider two types of independent sets in 𝜇(G): those containing z and those not containing
z.
Any independent set S ⊆ V (𝜇(G)) that contains z cannot contain any of the yi vertices, so it must
be of the form S = {z} ∪ x(M) for some independent set M in V (G). Summing the values of 𝑔 over all
vertices in the independent set, we obtain:
∑_{v∈S} 𝑔(v) = 1/𝜔𝐹(G) + (1 − 1/𝜔𝐹(G)) ∑_{v∈M} 𝑓(v)
≤ 1/𝜔𝐹(G) + (1 − 1/𝜔𝐹(G)) = 1
since f is a fractional clique and M is independent in G.
Now consider an independent set S ⊆ V(𝜇(G)) with z ∉ S. We can therefore say S = x(M) ⋃ y(N)
for some subsets of V (G), M and N, and we know that M is an independent set. Since S is independent,
then no vertex in y(N) is adjacent to any vertex in x(M), so we can express N as the union of two sets A
and B, with A ⊆ M and with none of the vertices in B adjacent to any vertex in M. Now we can sum the
values of g over the vertices in S = x(M) ⋃ y(N) = x(M) ⋃ y(A) ⋃ y(B):
∑_{v∈S} 𝑔(v) = (1 − 1/𝜔𝐹(G)) ∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈N} 𝑓(v)
= (1 − 1/𝜔𝐹(G)) ∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈A} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈B} 𝑓(v)
≤ (1 − 1/𝜔𝐹(G)) ∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈B} 𝑓(v)
= ∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈B} 𝑓(v)
The first two equalities above are simply partitions of the sum into sub-sums corresponding to
subsets. The inequality holds because A ⊆ M, and the final equality is just a simplification. It will now
suffice to show that the final expression obtained above is less than or equal to 1.
Let us consider H, the subgraph of G induced by B. The graph H has some fractional chromatic
number, say r/s. Suppose we have a proper r/s-coloring of H. Recall that the color classes of a fractional
coloring are independent sets, so we have r independent sets of vertices in V (H) = B; let us call them
C1, . . . , Cr. Not only is each of the sets Ci independent in H, but it is also independent in G; moreover,
Ci ∪ M is independent in G as well, because Ci ⊆ B and no vertex of B is adjacent to any vertex of M.
For each i, we note that f is a fractional clique on G, and sum over the independent set Ci ∪ M to
obtain:
∑_{v∈M} 𝑓(v) + ∑_{v∈Ci} 𝑓(v) ≤ 1
Summing this inequality over i = 1, . . . , r, we obtain:
r ∑_{v∈M} 𝑓(v) + s ∑_{v∈B} 𝑓(v) ≤ r
The second term on the left side of the inequality results because each vertex in B belongs to s
different color classes in our proper r/s-coloring. Now we divide by r to obtain:
∑_{v∈M} 𝑓(v) + (s/r) ∑_{v∈B} 𝑓(v) ≤ 1
Since r/s is the fractional chromatic number of H, and H is a subgraph of G, we can say that
r/s ≤ 𝜒𝐹(G) = 𝜔𝐹(G), or equivalently, 1/𝜔𝐹(G) ≤ s/r. Thus:
∑_{v∈M} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈B} 𝑓(v) ≤ ∑_{v∈M} 𝑓(v) + (s/r) ∑_{v∈B} 𝑓(v) ≤ 1
as required. We have shown that the mapping g that we defined is indeed a fractional clique on 𝜇(G).
We now check its weight.
∑_{v∈V(𝜇(G))} 𝑔(v) = (1 − 1/𝜔𝐹(G)) ∑_{v∈V(G)} 𝑓(v) + (1/𝜔𝐹(G)) ∑_{v∈V(G)} 𝑓(v) + 1/𝜔𝐹(G)
= ∑_{v∈V(G)} 𝑓(v) + 1/𝜔𝐹(G)
= 𝜔𝐹(G) + 1/𝜔𝐹(G) = 𝜒𝐹(G) + 1/𝜒𝐹(G)
This is the required weight, so we have constructed a fractional coloring and a fractional clique
on 𝜇(G), both with weight 𝜒𝐹(G) + 1/𝜒𝐹(G). We can now write the inequality
𝜒𝐹(𝜇(G)) ≤ 𝜒𝐹(G) + 1/𝜒𝐹(G) ≤ 𝜔𝐹(𝜇(G))
and invoke strong duality to declare the terms at either end equal to each other, and thus to the middle
term. ∎
Now that we have a theorem telling us how the Mycielski transformation affects the three parameters
of clique number, chromatic number, and fractional chromatic number, let us apply this result in a
concrete case, and iterate the Mycielski transformation to obtain a sequence of graphs {Gn}, with Gn+1
= 𝜇(Gn) for n ≥ 2. For our starting graph G2 we take a single edge, K2, for which 𝜔(G2) = 𝜒𝐹(G2) = 𝜒(G2)
= 2.
Applying our theorem, first to clique numbers, we see that 𝜔(Gn) = 2 for all n. Considering
chromatic numbers, we have 𝜒(G2) = 2 and 𝜒(Gn+1) = 𝜒(Gn) + 1; thus 𝜒(Gn) = n for all n. Finally, the
fractional chromatic number of Gn is determined by a sequence {𝑎𝑛 }𝑛∈{2,3,… } given by the recurrence:
a2 = 2 and an+1 = an + 1/an.
This sequence has been studied (see [5] or [1]), and it is known that for all n:
√(2n) ≤ an ≤ √(2n + ln n)
Clearly, an grows without bound, but more slowly than any sequence of the form n^r for r > 1/2. Thus, the
difference between the fractional clique number and the clique number grows without limit, as does the
difference between the chromatic number and the fractional chromatic number.
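The recurrence and its growth rate are easy to check numerically. The sketch below (ours, not from the paper) iterates a_{n+1} = a_n + 1/a_n from a_2 = 2 and verifies the bounds √(2n) ≤ a_n ≤ √(2n + ln n) up to n = 100,000; the exact form of the upper bound here is our reading of the garbled original, so treat it as an assumption.

```python
import math

# Iterate a_{n+1} = a_n + 1/a_n from a_2 = 2 and check, for each n, that
# sqrt(2n) <= a_n <= sqrt(2n + ln n); both bounds follow by induction.
a = 2.0
n = 2
while n <= 100000:
    assert math.sqrt(2 * n) <= a + 1e-12
    assert a <= math.sqrt(2 * n + math.log(n)) + 1e-12
    a += 1.0 / a
    n += 1

# a_n ~ sqrt(2n): chi_F(G_n) grows without bound, but more slowly than n**r
# for any r > 1/2, while omega(G_n) stays at 2.
print(round(a, 3), "vs sqrt(2n) =", round(math.sqrt(2 * n), 3))
```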
References
Pramono SIDIa*, Ismail BIN MOHDb, Wan Muhamad AMIR WAN AHMADc,
Sudradjat SUPIANd, Sukonoe, Lusianaf
a Department of Mathematics FMIPA Universitas Terbuka, Indonesia
b,c Department of Mathematics FST Universiti Malaysia Terengganu, Malaysia
d,e,f Department of Mathematics FMIPA Universitas Padjadjaran, Indonesia
a* Email: pram@ut.ac.id
Abstract: Natural disasters such as floods are one cause of property damage to residential
buildings that cannot be avoided, because it cannot be known when they will occur. This leads to
financing risk that should be optimized. In this paper, optimization is performed on a
combination of three methods of financing (insurance, credit, and savings). The optimization is
designed to ensure comprehensive coverage of losses from property damage to residential
buildings. A simulation is carried out to observe the effect of changes in the factors that
influence the optimization: the value of the property, the amount of debt, and the annuity future
value. As a result, the loss ratio and the cost ratio are shown numerically in tables, and in
graphical form, for three different situations.
1. Introduction
Natural disasters are categorized into four types. Meteorological (or hydro-meteorological) disasters
have climate-related causes, such as floods (events that occur when excess water flow inundates the
land). Geological disasters occur on the surface of the Earth, such as earthquakes (vibrations or shocks
in the earth's surface caused by a sudden release of energy that creates seismic waves), volcanic
eruptions (events in which magma deposited in the earth's crust is pushed out by high-pressure gas),
and tsunamis (displacement of a body of water caused by a sudden vertical change of the sea surface).
Disasters from space are the arrival of various celestial bodies, such as asteroids, or disturbances such
as solar storms; asteroids can be a threat to countries with large populations such as China, India, the
United States, Japan, and Southeast Asia. An outbreak or epidemic is an infectious disease that spreads
through the human population over a large area, e.g., across a country or around the world.
Natural disasters give rise to risks. Risks cannot be avoided, eliminated, or transferred away
entirely, but they can be minimized. One of the risks caused by natural disasters is the financing risk
arising from property damage. There are various methods of financing (insurance, reserves, loans, etc.),
and financial risk management can be done with one method or a combination of several. Research on
this problem was carried out by Hanak (2010): a sensitivity analysis, published in a journal, on the
optimization of funding for repairing property damage in residential buildings using a combination of
two methods of financing. Building on that work, the present research focuses on a simulation analysis
of the optimization of funding for property damage repairs in residential buildings with a combination
of three methods of financing. Hanak's research goal was to ensure that losses are covered
comprehensively by utilizing the advantages of the insurance and credit financing methods. The results
obtained by Hanak (2010) are valid for the parameters used in the case study.
This paper reviews and extends the simulation model for optimizing the provision of funding for
repairing property damage to residential buildings developed by Hanak
(2009; 2010). The purpose of this study is to use a simulation model to determine the factors that
influence the optimization of the property damage repair fund: the value of the property, the annuity
future value, and the amount of debt. These factors are examined for three situations characterized by
different input parameters and described by two indicator ratios (the loss ratio and the cost ratio).
2. Mathematical Model
In this section we discuss ex ante risk financing, ex post risk financing, the factors that
affect the provision of funds, and the optimization model for the supply of funds. This discussion is
used in the simulation study of optimizing the provision of funds to repair damage to residential
buildings caused by floods.
AFV = As · ((1 + i)^n − 1) / i    (2)
where AFV is the annuity future value, As the annuity amount (the amount of each payment), i the
interest rate, and n the period (in years).
D = AC · ((1 + r)^p − 1) / ((1 + r)^p · r)    (3)
or (Frensidy, 2005:62):
D = AC · (1 − (1 + r)^−p) / r    (4)
where AC is the annuity credit (the installment amount per period), D the debt amount, r the
annual credit interest rate (interest rate per period), and p the term to expiration (number of periods).
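The two time-value formulas above can be sketched as follows; the parameter values are illustrative only (not taken from the paper), the function names are ours, and the standard future-value and annuity formulas are assumed. The last lines check that the credit annuity is consistent with its present-value form: discounting the AC payments recovers the debt D.

```python
# Future value of an annuity, as in (2): AFV = As * ((1 + i)**n - 1) / i
def annuity_future_value(As, i, n):
    return As * ((1 + i) ** n - 1) / i

# Credit annuity: AC = D * r * (1 + r)**p / ((1 + r)**p - 1),
# equivalently AC = D * r / (1 - (1 + r)**-p)
def annuity_credit(D, r, p):
    return D * r * (1 + r) ** p / ((1 + r) ** p - 1)

AFV = annuity_future_value(As=1_000_000, i=0.05, n=10)  # ten annual savings deposits
AC = annuity_credit(D=8_000_000, r=0.01, p=60)          # 60 monthly installments

# Consistency check: the present value of the AC payments equals the debt D.
D_back = AC * (1 - (1 + 0.01) ** -60) / 0.01
assert abs(D_back - 8_000_000) < 1e-6
print(round(AFV, 2), round(AC, 2))
```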
Optimization Model
This paper examines two studies by Hanak (2009; 2010) related to flood disasters, covering ex ante
risk financing methods (life insurance and savings) and an ex post risk financing method (credit). We
then study an optimization model for providing funds to repair damage to residential-building property.
Simulations are performed on the study data to determine, for each case, two indicators used for
comparison: the loss ratio and the cost ratio. The simulations use the data as presented by Hanak (2010),
because complete global data relating to the provision of funding for repairing property damage to
residential buildings could not be obtained.
The loss ratio and cost ratio values are shown in graphs for three different situations. The three
situations are characterized by different parameters of the factors that influence the optimization of the
funding model for property damage repair (Hanak, 2010), as follows:
Situation 1: The value of the property (not fixed), the amount of debt (fixed), and the annuity future
value (fixed). That is, the variable VP is treated as a variable (not fixed), while the variables D and
AFV are treated as constants.
Situation 2: The value of the property (not fixed), the amount of debt (not fixed), and the annuity future
value (fixed). That is, the variable VP is treated as a variable (not fixed); the variable D is included in
the model and optimized over a varying interval, and the variable AFV is treated as a constant.
Situation 3: The value of the property (not fixed), the amount of debt (not fixed), and the annuity future
value (not fixed). That is, the variable VP is treated as a variable (not fixed); the variables D and AFV
are both included in the model and optimized over varying intervals.
The mathematical model for optimizing the provision of property repair funding consists of an
objective function: the total cost of repairing the damaged property. To model the financing of repairs
of property damage in residential buildings, the following variables are defined:
N = years, the length of premium payment or the length of time the property is insured.
VP = value of property, the value of the insured property.
CA = capital assured, the total insured, i.e., the total value of the property to be multiplied by the
insurance rate.
From the variables in the model, the total cost of creating the financial backing is obtained as follows:
Total cost of insurance premiums:
TP = N · CA · IR · ((100 − DIS)/100) · ((100 + ADD)/100)    (5)
with
CA = VP · [PL(1 + j/m) − AC · ((1 + r)^p − 1)/((1 + r)^p · r)] / [1 − PL(1 + j/m)]    (6)
so that, substituting (6) into (5):
TP = N · VP · BIR · FIRB · k · [PL(1 + j/m) − AC · ((1 + r)^p − 1)/((1 + r)^p · r)]
/ [1 − PL(1 + j/m)] · ((100 − DIS)/100) · ((100 + ADD)/100)    (7)
Total payments beyond the limits of the insurance coverage (insurance benefit payment limit).
The calculation of insurance benefit payments over the upper limit (PIBL) is influenced by the particular
loss (PL) and the upper insurance benefit limit (UIBL):
If PL > UIBL then PIBL = PL − UIBL    (8)
If PL ≤ UIBL then PIBL = 0    (9)
Total own-risk payment (deductible). The deductible calculation for the flood risk case is influenced
only by the magnitude of the particular loss (PL), i.e., the claim filed by the insured:
DP = DR · PL(1 + j/m)    (10)
Total installment credit (annuity credit):
AC = D · (1 + r)^p · r / ((1 + r)^p − 1)    (11)
So, from the five mathematical models for the components of the total repair cost, the mathematical
model for the total cost of funding repairs to the damaged residential-building property is:
TC = TP + PIBL + DP + AC + AS    (13)
The expanded form of TC is obtained by substituting equations (2) and (5)–(11) into (13).
3. Illustrations
3.1 Data Simulation
The data used for illustration here are simulated, with the parameters given in Table-1.
LR = PL(1 + j/m) / VP    (15)
CR = TC / VP    (16)
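A minimal sketch of the two indicators, assuming LR = PL(1 + j/m)/VP and CR = TC/VP; all parameter values below are invented for illustration and are not the paper's Table-1 data.

```python
# Loss ratio (15): inflation-adjusted particular loss relative to property value.
def loss_ratio(PL, j, m, VP):
    return PL * (1 + j / m) / VP

# Cost ratio (16): total cost of the financing combination relative to property value.
def cost_ratio(TC, VP):
    return TC / VP

VP = 100_000_000   # property value (assumed)
PL = 15_000_000    # particular loss, i.e. the claim (assumed)
TC = 12_000_000    # total cost of the financing combination (assumed)

LR = loss_ratio(PL, j=0.04, m=12, VP=VP)
CR = cost_ratio(TC, VP)

# The financing combination is favorable when the covered loss exceeds its cost.
print(f"LR = {LR:.2%}, CR = {CR:.2%}, favorable: {LR > CR}")
```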
The simulation-based optimization analysis is applied to a case study of the risk of property damage
due to flooding in residential buildings. The sensitivity analysis focuses on only three factors: the value
of the property, the annuity future value, and the amount of debt. All of these factors have an impact
on the model, and the three variables have discrete probability distributions. The LR and CR values are
shown on charts for three different situations based on changes in the values of the selected factors
(value of property, annuity future value, and amount of debt).
Situation 1: The value of the property (not fixed), the amount of debt (fixed), and the annuity future
value (fixed). The variable VP is included as input on the interval (80,000,000; 1,000,000,000); the
variable D is constant at (0; 0); and the variable AFV is constant at (0; 0). The simulation results are
shown in graphical form in Figure-1.
Situation 2: The value of the property (not fixed), the amount of debt (not fixed), and the annuity future
value (fixed). The variable VP is included as input on the interval (80,000,000; 10,000,000); the
variable D is optimized by the model on the interval (0; 8,000,000); and the variable AFV is constant
at (0; 0). The simulation results are shown in graphical form in Figure-2.
Figure-1 Graph of functions LR and CR1 – Situation 1
Figure-2 Graph of functions LR and CR2 – Situation 2
Based on Figure-1, the conditions are favorable if LR > 16%, and become ineffective when the
CR curve is above the LR curve (LR < 16%); a new financing method, credit, is therefore added.
Meanwhile, based on Figure-2, the conditions are favorable if LR > 11%, and become ineffective when
the CR curve is above the LR curve (LR < 11%); a new financing method, savings, is therefore added.
Situation 3: The value of the property (not fixed), the amount of debt (not fixed), and the annuity
future value (not fixed). The variable VP is included as input on the interval (1,000,000; 10,000,000);
the variable D is optimized by the model on the interval (0; 8,000,000); and the variable AFV is
optimized by the model on the interval (0; 5,000,000). The results are shown in graphical form in Figure-3.
Figure-3 Graph of functions LR and CR3 – Situation 3
Figure-4 Graph of functions LR, CR1, CR2, and CR3
Figure-3 shows a change in the slope of the CR curve, which reduces the inefficiency of insurance
and credit in the region where the LR curve lies under the CR curve (LR < CR). Figure-4 illustrates the
effect of combining the three types of financial reserve, namely insurance, credit, and savings, as
captured by the mathematical model: by combining the three types of reserve financing, the financing
method can make the funding more efficient and effective.
4. Conclusions
This paper has studied, through simulation, the factors that influence the optimization of the provision
of funding for property damage repairs to a residential building damaged by floods. Two classes of
financing methods are studied: ex ante risk financing methods (life insurance and savings) and an ex
post risk financing method (credit). Using these three instruments, namely insurance, savings, and
credit, simulations were carried out to determine the characteristics of the factors that influence the optimization of
the provision of repair funding. The optimization analysis is done by comparing two indicator ratios:
the loss ratio and the cost ratio. The results of the simulation studies on the combination of the three
reserve financing factors, namely insurance, credit, and savings, modeled mathematically, show that
combining these three types of reserve financing can make the funding more efficient and effective,
i.e., closer to optimal.
References
Alcrudo, F. (2003). Mathematical Modelling Techniques for Flood Propagation in Urban Areas. Working
Paper. Universidad de Zaragoza, Spain.
Firdaus, Fahmi. (2011). 1598 Bencana Alam Terjadi Di Indonesia. Online article.
http://news.okezone.com/read/2011/12/30/337/549497/bnpb-1-598-bencana-alam-terjadi-ditahun-2011
(accessed 16 November 2012).
Frensidy, Budi. (2005). Matematika Keuangan. Jakarta: Salemba Empat.
Friedman, D.G. (2005). Insurance and Natural Hazard. Working Paper. The Travelers Insurance
Company, Hartford, Connecticut, USA.
Hanak, Tomas. (2009). Sensitivity Analysis of Selected Factors Affecting the Optimization of the Funds
Financing Recovery from Property Damage on Residential Building. Nehnuteľnosti a bývanie.
ISSN: 1336-944X.
Hanak, Tomas. (2010). How to Ensure Sufficiency of Financial Backing to Cover Future Losses on
Residential Buildings in Efficient Way?. Nehnuteľnosti a bývanie, Vol. 4, pp. 42-51. ISSN: 1336-944X.
Irawan, D. & Riman. (2012). Apresiasi Kontraktor Dalam Penggunaan Asuransi Pada Pembangunan
Konstruksi di Malang. Jurnal Widya Teknika, Vol. 20, No. 1, March 2012.
Jongman, B., Kreibich, H., Apel, H., Barredo, J.I., Bates, P.D., Feyen, L., Gericke, A., Neal, J., Aerts,
J.C.J.H., & Ward, P.J. (2012). Comparative Flood Damage Model Assessment: Towards a European
Approach. Natural Hazards and Earth System Sciences, 12, 3733-3752.
Marhusor, Hilda. (2004). Studi Perhitungan Premi Pada Asuransi Konstruksi Untuk Risiko Pada Banjir.
Undergraduate thesis, Jurusan Matematika, FMIPA, ITB.
Merz, B., Kreibich, H., Schwarze, R., & Thieken, A. (2010). Assessment of Economic Flood Damage
(Review Article). Natural Hazards and Earth System Sciences, 10, 1697-1724.
Sanders, R., Shaw, F., MacKay, H., Galy, H., & Foote, M. (2005). National Flood Modelling for
Insurance Purposes: Using IFSAR for Flood Risk Estimation in Europe. Hydrology & Earth System
Sciences, 9(4), 446-456.
Shrubsole, D., Brooks, G., Halliday, R., Haque, E., Kumar, A., Lacroix, J., Rasid, H., Rousselle, J., &
Simonovic, S.P. (2003). An Assessment of Flood Risk Management in Canada. Working Paper
No. 28. Institute for Catastrophic Loss Reduction.
Abstract: The stock market is one of the economic drivers of a country, because it provides capital
facilities and accumulates long-term funds directed at increasing community participation in mobilizing
funds to support the financing of the national economy. Almost all industries in the country are
represented in the stock market. The fluctuation of recorded stock prices is therefore reflected through
the movement of an index better known as the Indeks Harga Saham Gabungan (IHSG). The IHSG is
affected by several factors, both internal and external. Factors from overseas include foreign stock
exchange indices, crude oil prices, and overseas market sentiment, while domestic factors usually come
from the foreign exchange rate. Through this study, we examine the influence of these factors, both
internal and external, on the Indeks Harga Saham Gabungan (IHSG), especially as it occurs on the
Bursa Efek Indonesia (BEI).
The global economic crisis had a significant impact on the development of the capital market in
Indonesia. The impact of the world financial crisis, better known as the global economic crisis, which
originated in the United States, was strongly felt in Indonesia, because a majority of Indonesian exports
go to the U.S., which greatly affects the Indonesian economy. Among the most visible impacts of the
American economic crisis were the weakening of the rupiah against the dollar, an increasingly
unhealthy Indeks Harga Saham Gabungan (IHSG), and, of course, hampered exports due to reduced
demand from the U.S. market itself. In addition, the closing for a few days and the suspension of stock
trading on the Bursa Efek Indonesia (BEI), the first time in its history, is one of the real impacts, which
reflects how big the impact of this global problem was.
The capital market is one of the economic movers of a country, because the stock market is a
tool for capital formation and the accumulation of long-term funds, directed at increasing public
participation in the mobilization of funds to support the financing of national development. In addition,
the stock market is also representative for assessing the condition of the companies in a country,
because almost all industry sectors in the country are represented in the capital market. Whether capital
markets are rising (bullish) or falling (bearish) is seen from the rise and fall of listed stock prices,
reflected through the movement of an index better known as the Indeks Harga Saham Gabungan
(IHSG). The IHSG is the value used to measure the combined performance of all stocks (companies /
issuers) listed on the Bursa Efek Indonesia (BEI).
The IHSG summarizes the simultaneous and complex effects of various influencing factors,
above all economic phenomena. Indeed, today the IHSG is used as a barometer of the economic health
of a country and as a foundation for statistical analysis of current market conditions (Widoatmojo, S.
1996:189). Meanwhile, according to Ang (1997:14.6), the Indeks Harga Saham Gabungan (IHSG/Stock
Price Index) is a value that is used to measure the performance of stocks listed in the stock exchanges. The IHSG is issued by the
respective stock exchanges and is also officially issued by private institutions such as the financial
media, financial institutions, and others.
This research demonstrates the variables that affect the movement of the Indeks Harga Saham
Gabungan (IHSG), namely foreign stock indices, oil prices, exchange rates, interest rates, and inflation,
on the Bursa Efek Indonesia (BEI), so that we can know which variables affect the movement of the
Indeks Harga Saham Gabungan (IHSG) the most.
Based on the background described above, the issues to be discussed can be identified as follows:
1. What is the influence of foreign stock indices, oil prices, and the exchange rate on the
Indeks Harga Saham Gabungan (IHSG)?
2. Which variables are the most dominant in the movement of the Indeks Harga Saham Gabungan
(IHSG)?
The IHSG is the earliest published index, and it reflects the development of prices on the stock
exchange in general. The IHSG captures changes in stock prices, for both common and preferred stock,
relative to prices at the time of the base calculation. The unit of change of a stock price index is the
point. If today's IHSG on the BEJ is 1800 points, while the previous day's was 1810 points, the index
is said to have fallen 10 points.
IHSG calculations are done by weighting according to the weighted average of market value (a
market-value-weighted average index). First, each share is weighted by market value: the number of
shares multiplied by the share price. This value is then compared to the overall market value to obtain
the weight. The method of calculating the index on the BEI is as follows:
IHSG = (Market Value / Base Value) × 100
where the market value is the sum of price times number of listed shares over all stocks, and the base
value is the corresponding total on the base date.
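The market-value weighting just described can be sketched as follows; the prices, share counts, and base value here are invented for illustration and are not BEI data.

```python
# A market-value-weighted index: each stock contributes price * shares, and the
# index compares the total market value to the base-day value, scaled to 100.
def weighted_index(prices, shares, base_value):
    market_value = sum(p * q for p, q in zip(prices, shares))
    return market_value / base_value * 100

shares = [1000, 500, 2000]                                     # listed shares per issuer
base = sum(p * q for p, q in zip([10.0, 40.0, 5.0], shares))   # base-day market value

today = weighted_index([11.0, 42.0, 5.5], shares, base)
yesterday = weighted_index([10.5, 41.0, 5.2], shares, base)
print(f"index today {today:.2f}, change {today - yesterday:+.2f} points")
```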
Linear regression analysis is a study involving a functional relationship between variables in the data,
expressed as a mathematical equation (Sudjana, 2005:310). In practice, simple linear regression is often
insufficient to represent more complex problems, because it involves only one independent variable;
simple linear regression analysis is therefore developed into multiple linear regression analysis.
The multiple linear regression model can generally be written in matrix notation as follows:
Y = Xβ + ε    (1)
In the regression model of equation (1), the elements of the parameter vector are unknown;
therefore we need to estimate them. A common method used to estimate the parameters is least squares
regression, whose main principle is to estimate the parameters by minimizing the residual sum of
squares.
Equation (1) is the population regression model; the sample regression model is then expressed as follows:
Y = Xβ̂ + e
which, written out term by term, is
yᵢ = β̂₀ + β̂₁xᵢ₁ + β̂₂xᵢ₂ + ⋯ + β̂ₖxᵢₖ + eᵢ,  i = 1, 2, …, n.
In a simple way, the parameter estimates β̂₀, β̂₁, β̂₂, …, β̂ₖ are obtained with the least squares method (LSM) as
β̂ = (XᵀX)⁻¹XᵀY
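A minimal sketch of this least-squares computation with NumPy, on synthetic data rather than the paper's data:

```python
# Least-squares estimate beta_hat = (X^T X)^{-1} X^T Y, computed with
# NumPy on a small synthetic data set (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])   # design matrix with intercept
beta_true = np.array([2.0, 1.5, -0.7])
Y = X @ beta_true + rng.normal(scale=0.1, size=n)

# solve(X^T X, X^T Y) is numerically preferable to forming the inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
residuals = Y - X @ beta_hat
ssr = float(residuals @ residuals)          # residual sum of squares
print(beta_hat.round(2), round(ssr, 3))
```

The recovered coefficients should sit close to the true values (2.0, 1.5, -0.7) since the noise is small.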
The sum of squares (SS) shows the amount of variation around the mean value. It consists of two sources: the regression sum of squares, which states the influence of the regression, and the residual sum of squares, which is the remainder that cannot be explained by the regression (Sembiring, 2003: 45).
The residual sum of squares (SSR) can be written as follows:
SSR = ∑ᵢ₌₁ⁿ eᵢ²
Multiple Regression Model t Test
The t test is used to show how far an individual independent variable explains the variation in the dependent variable. Here are the steps to test the hypothesis with the t distribution:
1) Formulate hypotheses
H0 : βi = 0, meaning the independent variable has no relationship with the dependent variable
H1 : βi ≠ 0 (i = 1, 2, …, k), meaning the independent variable has a relationship with the dependent variable
2) Determine the significance level or degree of certainty
The significance level used is α = 1%, 5%, or 10%, with df = n − k
where: df = degrees of freedom
n = number of samples
k = number of regression coefficients
3) Determine the decision region, i.e., the region where the null hypothesis H0 is accepted or rejected, with the following criteria:
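A sketch of this decision rule in Python, assuming the usual two-sided criterion (reject H0 when |t_count| exceeds t_table at level α with df = n − k); the sample sizes here are illustrative:

```python
# Two-sided t-test decision rule: reject H0 when |t_count| > t_table.
from scipy import stats

n, k = 120, 5          # illustrative sample size and coefficient count
alpha = 0.05
df = n - k

t_table = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value

def reject_h0(t_count):
    """True means H0 is rejected: the variable is significant."""
    return abs(t_count) > t_table

print(round(t_table, 3), reject_h0(2.5), reject_h0(1.0))
```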
The F-test is conducted to determine the joint effect of the independent variables on the dependent variable. Here are the steps to test the hypothesis:
1) Formulate hypotheses
H0 : β1 = β2 = β3 = ⋯ = βk = 0, meaning there is no relationship between the dependent and independent variables.
H1 : at least one βi ≠ 0, meaning there is a relationship between the dependent and independent variables.
2) Determine the significance level
The significance level or degree of certainty used is α = 1%, 5%, or 10%.
3) Determine the decision region, i.e., the region where the null hypothesis H0 is accepted or rejected.
H0 is accepted if Fcount ≤ Ftable, meaning that all the independent variables together have no relationship with the dependent variable.
H0 is rejected if Fcount > Ftable, meaning that all the independent variables together have a relationship with the dependent variable.
5) Draw conclusions
The decision can be either acceptance or rejection of H0. The calculated F value obtained in the previous step is compared with the F value obtained from the table. If the calculated F is greater than the table F,
then H0 is rejected, and it can be concluded that there is a relationship between the independent variables taken together and the dependent variable. But if the calculated F is less than or equal to the table F, then H0 is accepted, and it can be concluded that there is no joint relationship between the independent variables and the dependent variable.
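The F-test decision rule can be sketched the same way, assuming the conventional degrees of freedom (k − 1, n − k) for a regression with intercept; the values below are illustrative:

```python
# F-test decision rule: reject H0 when F_count > F_table, where F_table
# is the upper-alpha quantile of the F distribution.
from scipy import stats

n, k = 120, 5
alpha = 0.05
f_table = stats.f.ppf(1 - alpha, k - 1, n - k)

def joint_relationship(f_count):
    """True means H0 is rejected: the regressors jointly matter."""
    return f_count > f_table

print(round(f_table, 3), joint_relationship(10.0), joint_relationship(1.2))
```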
Stepwise Method
This method is a combination of two methods: forward selection and backward elimination. In the stepwise method, the two methods are applied alternately. The stepwise method reaches a similar conclusion but from a different direction, namely by entering variables into the regression equation one at a time. The order in which variables are entered is determined by using the partial correlation coefficient as a measure of the importance of variables not yet in the equation.
Stepwise regression analysis for selecting a good equation is similar to forward selection, except that at each stage the hypothesis H0 : βi = 0 is tested for all variables in the equation, and variables whose |tᵢ|-statistic is less than the critical value are eliminated from the equation. The next variable is then added to the equation based on the partial correlation value, as in the forward method. Stepwise selection continues until an equation is reached in which no variable has a |tᵢ|-statistic less than the corresponding critical t value and no variables are left to put into the equation.
Performing stepwise regression analysis with the SPSS application is very easy, since tools for processing data with the stepwise method are already provided. Here, however, I will explain the procedure for building the stepwise model without using the tools available in SPSS.
The first step of the stepwise method is to find the value of the correlation coefficient of each variable used. The variable whose value approaches |R| → 1 is then entered into the model. Once one variable is in, estimate the regression model. Note the value of t: if |t count | > t table(1−α;db) or p-value < α, then the first variable entered into the model is significant.
The next step is to calculate the partial correlation of the remaining independent variables with the dependent variable, with the variable entered into the model in the first step as the control variable. After the results of the calculation are obtained, the variable whose correlation coefficient approaches |R| → 1 is entered into the model. Proceed by estimating the model using the first and second variables. If |t count | > t table(1−α;db) or p-value < α, continue by calculating the partial correlation of the remaining independent variables with the dependent variable, now with the variables obtained from the previous estimate as control variables. If |t count | > t table(1−α;db) is not met by the last estimated variable, that variable is eliminated and not used in the model.
Next, repeat the above steps until no independent variables remain. When no variables are left, the next step is to build a model from the variables obtained in the previous process. The model is formed only from the variables that were not eliminated, i.e., those satisfying |t count | > t table(1−α;db) or p-value < α. The model so established is the best model of the stepwise method.
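The procedure described above can be sketched from scratch in code. This is an illustration of the idea on synthetic data, not the SPSS implementation; the residualization trick used for partial correlations and all thresholds are the author of this sketch's assumptions.

```python
# Rough sketch of stepwise selection: at each step, enter the remaining
# variable with the largest (partial) correlation, keep it only if its
# t-statistic is significant, and drop previously entered variables whose
# t-statistics fall below the critical value.
import numpy as np
from scipy import stats

def t_stats(X, y):
    """OLS t-statistics for each column of X (intercept added inside)."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    beta = np.linalg.solve(Xd.T @ Xd, Xd.T @ y)
    resid = y - Xd @ beta
    s2 = resid @ resid / (n - Xd.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    return beta[1:] / se[1:]                  # skip the intercept

def stepwise(X, y, alpha=0.01):
    n, p = X.shape
    selected, remaining = [], list(range(p))
    while remaining:
        # residualize y and the candidates on the selected variables so
        # that plain correlations below act as partial correlations
        if selected:
            Z = np.column_stack([np.ones(n), X[:, selected]])
            proj = Z @ np.linalg.solve(Z.T @ Z, Z.T)
            ry = y - proj @ y
            rX = X[:, remaining] - proj @ X[:, remaining]
        else:
            ry, rX = y, X[:, remaining]
        corrs = [abs(np.corrcoef(rX[:, j], ry)[0, 1]) for j in range(len(remaining))]
        best = remaining[int(np.argmax(corrs))]
        trial = selected + [best]
        tcrit = stats.t.ppf(1 - alpha / 2, n - len(trial) - 1)
        ts = t_stats(X[:, trial], y)
        if abs(ts[-1]) <= tcrit:
            break                             # new variable not significant
        # backward step (simplified): keep only variables that stay significant
        selected = [v for v, t in zip(trial, ts) if abs(t) > tcrit]
        remaining = [v for v in remaining if v not in selected]
    return selected

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 4))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.5, size=n)
cols = stepwise(X, y)
print(sorted(cols))   # the informative columns are 0 and 2
```

On this synthetic data the two columns that actually drive y should be selected, while the noise columns are rejected at the entry test.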
3. Data Processing
This study uses monthly data in the form of year 2003 to 2012. The data used in this paper is the
simulation data and secondary data. Simulation data is used to see a pattern that can provide conclusions
regarding the methods used. Secondary data obtained from http://finance.yahoo.com/,
http://www.bi.go.id/, and http://www.esdm.go.id/
The data used are:
1) Indeks Harga Saham Gabungan (IHSG) data for Indonesia, 2003-2012.
2) Dow Jones Industrial Average data, 2003-2012.
3) Financial Times Stock Exchange index data, 2003-2012.
4) U.S. Dollar exchange rate data, 2003-2012.
5) British Pound exchange rate data, 2003-2012.
6) Crude oil price data, 2003-2012.
This section works with the data displayed in the attachment in the form of secondary data. The secondary data obtained are processed using SPSS 17. The Dow Jones Industrial Average data, the United States stock exchange index, is hereinafter referred to as DJIA. The Financial Times Stock Exchange index, Great Britain's stock exchange index, is hereinafter referred to as FTSE. The world crude oil data is hereinafter referred to as MINYAK. The Rupiah's exchange rate to the United States Dollar is hereinafter referred to as USD. The Rupiah's exchange rate to the Great Britain Pound sterling is hereinafter referred to as GBP.
The coefficient of determination is calculated to determine which variable will be entered first into the model. The result of the calculation is a value |R| between 0 and 1.
The oil price was the first variable entered into the model. The next step is to estimate the model with this first variable. The model in this step is estimated using the Enter method in the SPSS tools.
Table 2 First Stage Estimation Results
Coefficients (B, Std. Error: unstandardized; Beta: standardized)
Model          B          Std. Error   Beta    t         Sig.
1 (Constant)   -440.386   29.351               -15.004   .000
  MINYAK        36.484      .390       .889     93.564   .000
a. Dependent Variable: JKSE
Based on Table 2 it can be seen that |t count | > t table(1−α;db) or p-value < α, so the oil price is retained in the model.
The next step is the partial correlation calculation. The partial correlation is calculated using as control variable the variable already in the model, namely the oil price.
Table 3 shows the results of the calculation of the partial correlation with the oil price as control variable. From the table it can be seen that the value of |r xi,MINYAK | closest to 1 is the correlation of GBP, the pound sterling exchange rate.
GBP, the pound sterling exchange rate, is entered as the second variable based on the results in Table 3. We then estimate the model including two variables: the oil price and the pound sterling exchange rate.
Table 4 Estimation Results of Phase Two
Coefficients (B, Std. Error: unstandardized; Beta: standardized)
Model          B          Std. Error   Beta    t         Sig.
1 (Constant)   2318.355   98.454                23.548   .000
  MINYAK        35.524      .336       .866    105.759   .000
  GBP            -.167      .006      -.237    -28.983   .000
a. Dependent Variable: JKSE
Based on Table 4 it can be seen that |t count | > t table(1−α;db) or p-value < α, so the oil price and the pound exchange rate are retained in the model.
At this stage we recalculate the partial correlation of the remaining variables. The calculation at this stage uses two control variables, obtained from the previously estimated model. Because the oil price and the pound sterling exchange rate were not eliminated from the model, these two variables become the control variables for the partial correlation calculation at this stage.
DJIA, the U.S. stock price index, is the next variable entered into the model. Based on the partial correlation calculation in Table 5, the DJIA goes into the estimation, giving a model with the DJIA as a third variable.
Table 6 Third Stage Estimation Results
Coefficients (B, Std. Error: unstandardized; Beta: standardized)
Model          B          Std. Error   Beta    t         Sig.
1 (Constant)   1647.564   98.850                16.667   .000
  MINYAK        29.830      .440       .727     67.848   .000
  GBP            -.207      .006      -.293    -35.659   .000
  DJIA            .155      .008       .202     18.482   .000
a. Dependent Variable: JKSE
Based on Table 6, it can be seen that |t count | > t table(1−α;db) or p-value < α, so the oil price, the pound sterling exchange rate, and the U.S. stock index are maintained in the model.
The next stage is recalculating the partial correlation, this time using three control variables, since the DJIA is retained in the model.
Table 7 shows the results of the calculation of the partial correlation with the oil price, the pound sterling exchange rate, and the U.S. stock price index as control variables. From the table it can be seen that the value of |r xi,MINYAK;GBP;DJIA | closest to 1 belongs to USD, the U.S. dollar exchange rate. This variable is subsequently incorporated into the model to be estimated.
USD, the U.S. dollar exchange rate, is thus the fourth variable to enter. The model is then estimated jointly with the three variables entered previously.
Coefficients (B, Std. Error: unstandardized; Beta: standardized)
Model          B           Std. Error   Beta    t         Sig.
1 (Constant)   -3381.638   142.699              -23.698   .000
  MINYAK          21.988      .383      .536     57.336   .000
  GBP              -.360      .006     -.511    -62.643   .000
  DJIA              .375      .008      .489     45.300   .000
  USD               .602      .015      .342     41.422   .000
a. Dependent Variable: JKSE
Based on Table 8, it can be seen that |t count | > t table(1−α;db) or p-value < α, so no variable is eliminated. Thus, the oil price, the pound sterling exchange rate, the U.S. stock price index, and the U.S. dollar exchange rate all enter the model.
At this stage the model is estimated directly by entering the last variable, the FTSE UK stock price index. The partial correlation is no longer calculated because this is the last variable not yet in the model. Therefore at this stage the model is estimated with all variables.
4. CONCLUSION
Based on the model, it can be seen that the coefficient of MINYAK (the oil price) has the greatest value, meaning that the oil price affects the movement of the index most compared with the other variables. The variable that affects the IHSG (JCI) the least is the DJIA, the U.S. stock price index, since the value of its coefficient is small.
References
Abstract: Proofs in geometry are considered difficult and boring in school mathematics, so many students, and teachers as well, try to avoid them. Indeed, solving proofs requires a comprehensive basic knowledge of geometry. This is a preliminary study of how to learn geometry through proofs. The instruction consists of several stages. In three classes totalling 60 graduate preservice students, over one semester the students enjoyed and had fun learning proofs, deepened their knowledge of geometry, and enhanced their curiosity to learn geometry further.
1. Introduction
Conjecturing and demonstrating the logical validity of conjectures are the essence of
the creative act of doing mathematics (NCTM Standards, 2000).
It has long been a common story that geometry is a difficult and boring subject in learning mathematics. As we know, geometry is a subject avoided by most high school teachers (in Indonesia), since they perceive geometry as a difficult subject that needs a lot of spatial thinking (imagination) and deductive reasoning. As a matter of fact, geometry is supposed to be more interesting and easier to teach than algebra: look anywhere in or outside the class and you will see plenty of geometric shapes! Furthermore, the mathematics curriculum shows that algebra has far more material than geometry, and consequently teachers and students spend much more time studying algebra than geometry.
Proof is one important subject in geometry, yet only a little of it is taught in high school or at the university (Burger & Shaughnessy, 1986; Usiskin, 1982). Solving proofs requires comprehensive and continuous concepts from start to end. Numerous attempts have been made to improve students' proof skills by teaching formal proof, albeit largely unsuccessful ones (Harbeck, 1973; Ireland, 1974; Martin, 1971; Summa, 1982; Van Akin, 1972). Moreover, Senk (1985) found that, of over 1,500 students enrolled in full-year geometry courses, only about 30 percent achieved a 75 percent mastery level in proof writing.
In class, geometry is generally taught in a teacher-centered approach, where the teacher is the center of attention and students are considered as a whole group. In a typical session, problems are assigned, corrected, and handed back with little feedback. This approach might work for some students, but the problem is how to make all of them better life-long problem solvers. The purpose of this preliminary research is to describe an approach that is suitable for teaching geometry through proofs.
2. Theoretical Background
In the development of geometrical thinking specifically, where proofs are the core of geometry, students must pass through several stages: the initial stage is to identify the problem, that is, what is given, its picture, and what is asked; the middle stage is to retrieve all the knowledge they have around what is given, such as definitions and properties, and to approach the solution using their intuitive perceptions; and the final stage is to write down the proof rigorously. This means that, for discovery or deduction, students make conjectures based on the picture or pattern. Proofs might be considered as open-ended problems, in that there are many ways to construct a proof, so students can draw not only on their mathematical knowledge but also on their imagination and creativity. As Silver et al. (2005) stated, “You can learn more from solving one problem in many different ways than you can from solving many different problems”.
Moore (1994) stated that there are seven major sources of students' difficulties in solving proofs, including: inability to state the definition, inadequate concept image, inability to use the definition to structure a proof, inability or unwillingness to generate examples, and difficulties with mathematical language and notation. Accordingly, in most geometry classes, teachers seldom discuss alternative solutions.
Solving proofs may also be considered a problem-solving procedure, but many mathematics teachers often teach by having students copy standard solution methods, and not surprisingly students find it difficult when facing new problems (Harskamp and Suhre, 2006).
More importantly, proofs are supposed to be a meaningful tool for learning mathematics, not a formal and boring exercise for students and teachers.
Hanna (1995) argues that “the most important challenge to mathematics educators in the context of
proof is to enhance its role in the classroom by finding more effective ways of using it as a vehicle to
promote mathematical understanding.”
3. Methodology
Three classes of preservice teachers in a master's program in mathematics education, comprising a total of 60 graduate students, participated in the Geometry course. The topic was 'how to prove Euclidean geometry problems', ranging from easy to hard. At the first and last sessions of the semester, students filled in a questionnaire about their perception of geometry.
The instructional design is as follows:
The teacher reviewed some basic properties of several 2-dimensional shapes, including their definitions.
The students were given 30 proof problems. Starting from the first 10 problems, students tried to prove them individually or by discussion. For each problem, there might be different types of proof. The teacher circled around, providing small hints to students who asked for help. If the students could not get a problem done, they did it as an assignment for the next session.
Next, the teacher checked all the students' assignments and carefully chose a particular problem (or asked the students whether there was any problem to be discussed), together with 3 students who had solved it in different ways. These 3 students wrote their complete proofs on the board.
The teacher, together with the students, analyzed these students' proofs, pointing out their mistakes, if any. In the last step, the teacher asked the students which way of proving is the most effective, the most comprehensible, the 'easiest', etc.
This process continued until all problems had been proven.
Throughout the course, students could use the dynamic power of GSP to explore and arrive at conjectures.
Initially, students felt scared and not confident in learning geometry, and it appeared that a lack of geometry concepts was the problem. Slowly, along with proving problems, they comprehended the concepts and gained confidence. At the end of the semester, there was a huge difference in their perception of geometry. They concluded that mastering the basic concepts was the most fundamental part of learning geometry. In facing problems, they should know what is given and what is asked, use their intuition or reasoning to choose the appropriate rules or properties and apply them towards the goal, and use their skills to write down an accurate and rigorous proof. We believe that by writing proofs accurately and rigorously, students understand the underlying concepts and ideas.
Some benefits from this instruction:
The dynamic geometry software helps students to sharpen their intuition and reasoning.
Proofs are open-ended problems, meaning there are many ways to prove.
Proofs can be considered a problem-solving procedure, so they are a good exercise in approaching problems and making use of proper mathematical tools.
To solve proofs rigorously, one must have comprehensive, well-mastered concepts.
Students improve how to write proofs accurately by thinking logically.
This approach is student-centered and process-oriented learning, where students communicate with each other scientifically.
Finally, the students found that learning geometry was fun, exciting, and challenging, even for the less successful students, and they felt ecstatic once a problem was proven. What is important is that students learn with pleasure, and that will enhance their curiosity to learn geometry!
References
Burger, W. F. and Shaughnessy, J.M. (1986). Characterizing the van Hiele Levels of Development in
Geometry. Journal for Research in Mathematics Education 17 :31-48.
Hanna, G. (1995). Some pedagogical Aspects of Proof. Interchange 21: 6-13.
Harbeck, S.C.A. (1973). Experimental Study of the Effect of Two Proof Formats in High School
geometry on Critical Thinking and Selected Student Attitudes. Ph.D diss. Dissertation Abstracts
International 33 :4243A.
Harskamp, E. and Suhre, C. (2006). Improving Mathematical Problem Solving: A Computerized
Approach. Computers in Human Behavior, 22, 801-815.
Ireland, S.H. (1974). The Effects of a One-Semester Geometry Course, Which Emphasizes the Nature
of Proof on Student Comprehension of Deductive Processes. Ph.D diss. Dissertation Abstracts
International 35 :102A-103A.
Martin, R.C. (1971). A Study of Methods of Structuring a Proof as an Aid to the Development of Critical
Thinking Skills in High School Geometry. Ph.D diss. Dissertation Abstracts International 31
:5875A.
Moore, R.C. (1994). Making the Transition to Formal Proof. Educational Studies in Mathematics, 27,
249-266.
NCTM. (2000). Principles and Standards for School Mathematics. Reston, VA: The National Council
of Teachers of Mathematics, Inc.
Senk, S. L. (1985). How Well Do Students Write Geometry Proofs? Mathematics Teacher 78
(September 1985):448-56.
Silver, E.A., et al. (2005). Moving from Rhetoric to Praxis: Issues Faced by Teachers in Having
Students Consider Multiple Solutions for Problems in the Mathematics Classroom. Journal of
Mathematical Behavior, 24, 287-301.
Summa, D.F. (1982). The Effects of Proof Format, Problem Structure, and the Type of Given
Information on Achievement and Efficiency in Geometric Proof. Ph.D diss. Dissertation Abstracts
International 42 :3084A.
Usiskin, Z. (1982) Van Hiele Levels and Achievement in Secondary School Geometry. Final report of
the Cognitive Development and Achievement in Secondary School Geometry Project. Chicago:
University of Chicago, Department of Education.
Van Akin, E. F. (1972). An Experimental Evaluation of Structure in Proof in High School Geometry.
Ph.D diss. Dissertation Abstracts International 33 (1972):1425A.
Willoughby, S. (1990). Mathematics education for a changing world. ASCD: Alexandria, VA.
Ziemek, T.R (2010). Evaluating the Effectiveness of Orientation Indicators with an Awareness of
Individual Differences. Ph.D dissertation. Univ of Utah.
Abstract: This research describes a mathematical model of inflammation of the dental pulp. The reaction can be either reversible or irreversible pulpitis, depending on the intensity of stimulation, the severity of the damaged tissue, and the host response. It causes pain ranging from the lightest, namely sensitive-teeth complaints, to the most severe, spontaneous pain that is difficult to localize. The aim of this research is to obtain the characteristic function of the level of inflammation of the dental pulp (reversible and irreversible pulpitis) based on histogram analysis of periapical radiographs.
1. Introduction
A mathematical model is a description of a system using mathematical concepts and language. The
process of developing a mathematical model is termed mathematical modelling. Mathematical models
are used not only in the natural sciences (such as physics, biology, earth science, meteorology) and
engineering disciplines (e.g. computer science, artificial intelligence), but also in the social sciences
(such as economics, psychology, sociology and political science); physicists, engineers, statisticians,
operations research analysts, economists and medical professionals (dentists) use mathematical models most
extensively. A model may help to explain a system and to study the effects of different components,
and to make predictions about behaviour.
Mathematical models can take many forms, including but not limited to dynamical systems,
statistical models, differential equations, or game theoretic models. These and other types of models
can overlap, with a given model involving a variety of abstract structures. In general, mathematical
models may include logical models, as far as logic is taken as a part of mathematics. In many cases, the
quality of a scientific field depends on how well the mathematical models developed on the theoretical
side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical
models and experimental measurements often leads to important advances as better theories are
developed. (Sudradjat, 2013)
The main limitation of conventional intraoral radiography for dentoalveolar disease imaging is that it represents a 3D structure in a 2D image. This limitation also occurs with caries, pulp, and periodontal imaging (Tyndall and Rathore, 2008). Technological development, especially in computer and information technology, has affected dental radiology (White and Goaz, 2004). Radiologic imaging, one of the benefits of this development, is able to detect 70% of lesions. In digital imaging there are two basic forms: indirect digital imaging and direct digital imaging (Langlais, 2004). Indirect digital imaging began to be used after reports that there were no differences between indirect and direct digital imaging in measuring demineralization below the enamel surface. The same procedure was also reported as
successful by Eberhard et al. in monitoring in vitro dental demineralization, and by Ortman et al. in detecting alveolar bone defect changes with bone loss of about 1-5%.
This paper presents a review of a mathematical model of inflammation of the dental pulp. The reaction can be either reversible or irreversible pulpitis, depending on the intensity of stimulation, the severity of the damaged tissue, and the host response. It causes pain ranging from the lightest, namely sensitive-teeth complaints, to the most severe, spontaneous pain that is difficult to localize. The aim of this research is to obtain the characteristic function of the level of inflammation of the dental pulp (necrosis, pulpitis and normal) based on histogram analysis of periapical radiographs.
A digital image is a continuous image f(x, y) that has been mapped into a discrete image, including its properties (i.e., spatial coordinates and brightness level). A digital image is an M × N matrix whose row and column entries correspond to pixel values, as shown in equation (1) (Munir, R., 2004):
f(x, y) = [ f(0, 0)      f(0, 1)      …  f(0, N−1)
            f(1, 0)      f(1, 1)      …  f(1, N−1)
            ⋮            ⋮                ⋮
            f(M−1, 0)    f(M−1, 1)    …  f(M−1, N−1) ]   (1)
It is usual to digitize the values of the image function f(x, y), in addition to its spatial coordinates. This process of quantisation involves replacing a continuously varying f(x, y) with a discrete set of quantisation levels. The accuracy with which variations in f(x, y) are represented is determined by the number of quantisation levels that we use; the more levels we use, the better the approximation.
Conventionally, a set of n quantisation levels comprises the integers 0, 1, 2, …, n−1; 0 and n−1 are usually displayed or printed as black and white, respectively, with intermediate levels rendered in various shades of grey. Quantisation levels are therefore commonly referred to as grey levels. The collective term for all the grey levels, ranging from black to white, is a greyscale.
For convenient and efficient processing by a computer, the number of grey levels n is usually an integral power of two. We may write
n = 2^K   (2)
where K is the number of bits used for quantisation. K is typically 8, giving images with 256 possible grey levels ranging from 0 (black) to 255 (white) (Phillips, 1994).
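The quantisation step can be sketched as follows, mapping a continuous intensity in [0, 1] to n = 2^K grey levels; the input range and rounding convention are assumptions made for this illustration.

```python
# Quantizing a continuous image value into n = 2**K grey levels.
import numpy as np

K = 8
n = 2 ** K                       # 256 grey levels: 0 (black) .. 255 (white)

def quantize(f, levels=n):
    """Map values in [0, 1] to integer grey levels 0 .. levels-1."""
    f = np.clip(f, 0.0, 1.0)
    return np.minimum((f * levels).astype(int), levels - 1)

print(quantize(np.array([0.0, 0.5, 1.0])).tolist())   # -> [0, 128, 255]
```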
Quantitative analysis shows that the average grey scale pixel value can be used to assess remineralization of dental caries and lesion status. Quantitative values correlate with lesion status in subjective analysis: larger values indicate remineralization of the lesion, lower values indicate demineralization, and grey scale values close to 128 indicate a stable lesion. The quantitative grading of a dental caries lesion does not depend on the observer, because the operator's job is limited to deciding the ROI, and the software automatically shows the grey scale pixel values (Carneiro LS, et al., 2009).
We can get the ROI (Region of Interest) value for grading demineralization below the enamel surface by setting the caries region as the ROI. In some cases, lesion boundaries become hard to set because the radiolucency of the caries lesion is not well defined, especially when there is overlapping. We cannot fully avoid these issues in daily clinical routine, but in designing research it is important to ensure operator precision in measurement, to avoid ROI variation caused by the operator choosing a bigger or smaller lesion (Wenzel A, 2002). We then continue by measuring grey scale pixel values using a
histogram in the selected area. We can also see the average and standard deviation of greyscale pixel values in the software (Carneiro LS, et al., 2009).
The basic modeling process is shown in Figure 1.
This model describes the development of a mathematical model that can provide the basis for a decision support system to aid dentists (or patients) in making decisions about how often to perform (or receive) intraoral periapical radiographs. The model, which describes the initiation and progression of dental pulp inflammation, uses a simple descriptive method, namely forming a characteristic function whose three levels are necrosis, pulpitis, and normal (Walton and Torabinejad, 2008).
Following Walton and Torabinejad (2008), if $x$ is the mean grayscale value (2), then we obtain the following characteristic function:
$$f(x)=\begin{cases}\text{necrosis}, & 0 \le x < 64\\ \text{pulpitis}, & 64 \le x < 128\\ \text{normal}, & x \ge 128\end{cases} \qquad (3)$$
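For concreteness, characteristic function (3) can be written directly as code. This is a sketch; the exact placement of the boundary values 64 and 128 (which side of each threshold belongs to which class) is an assumption, since the printed inequalities are ambiguous.

```python
# Direct coding of characteristic function (3): classify the pulp
# condition from the mean grayscale value x of the selected ROI.
# Boundary conventions (64 -> pulpitis, 128 -> normal) are assumptions.
def classify_pulp(x):
    """Map a mean grayscale value (0-255) to a pulp condition,
    following the three levels of Walton and Torabinejad (2008)."""
    if 0 <= x < 64:
        return "necrosis"
    if 64 <= x < 128:
        return "pulpitis"
    if x >= 128:
        return "normal"
    raise ValueError("grayscale value must be non-negative")

print(classify_pulp(40), classify_pulp(100), classify_pulp(180))
```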
The study was conducted with a simple descriptive method and accidental sampling. Data were obtained from the results of clinical examination; periapical radiographs with a diagnosis of reversible or irreversible pulpitis were then digitized using Matlab V.7.0.4, which yields the histogram graph from which the characteristic function of dental pulp inflammation can be determined. To validate model (1), we gathered a sample of 100 intraoral periapical radiographs of pulp pathology cases at the Department of Dentomaxillofacial Radiology, Dental Hospital, Padjadjaran University, Bandung, West Java, Indonesia, from September to December 2012, using accidental sampling.
From the study of 100 samples, 29 samples were categorized as normal, 30 as pulpitis, 30 as necrosis, and 1 was unclear; the results are presented in Tables 1-3.
The conclusion of this research is that dental pulp inflammation can be described by analysis of the histogram graph of a periapical radiograph: the pulpitis image is more radiolucent than normal pulp and more radiopaque than necrotic pulp (1). The results showed that the level of dental pulp inflammation based on the grayscale value $x$ is normal for $x \ge 128$, pulpitis for $64 \le x < 128$, and necrosis for $0 \le x < 64$.
References
Carneiro LS, Nunes CA, Silva MA, Leles CR, and Mendonc EF. 2009. In vivo study of pixel grey measurement
in digital subtraction radiography for monitoring caries remineralization. Dentomaxillofacial Radiology
Journal, 38, 73-78. Available online at http://dmfr.birjournals.org (accessed 17 April 2012).
Langlais, R. P. 2004. Exercises in Oral Radiology and Interpretation, 4th edition. Missouri: W.B. Saunders Company. Pp. 67-68.
Michael Shwartz, Joseph S. Pliskin, Hans-Göran Gröndahl and Joseph Boffa, 1987, A Mathematical Model of
Dental Caries Used to Evaluate the Benefits of Alternative Frequencies of Bitewing Radiographs
Management Science 1987 33:771-783; doi:10.1287/mnsc.33.6.771.
Munir, R. 2004. Pengolahan Citra Digital dengan Pendekatan Algoritmik. Bandung : Penerbit Informatika.
Phillips, D. 1994. Image Processing in C. Kansas : R&D Publications, Inc.
Sudradjat, 2013, Model and Simulation, Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran.
Tyndall and Rathore, 2008 .Cone-beam CT diagnostic applications: caries, periodontal bone assessment, and
endodontic applications. Dent Clin North Am. 2008 Oct;52(4):825-41, vii. doi: 10.1016/j.cden.2008.05.002.
Walton, E. Richard and Torabinejad, Mahmoud. 2008. Prinsip dan Praktik Ilmu Endodonsia, Ed 3. Translated by Narlan Sumawinata; Indonesian-language editor: Lilian Juwono. Jakarta: Penerbit Buku Kedokteran EGC.
Wenzel A. 2002. Computer-automated caries detection in digital bitewings: consistency of a program and its
influence on observer agreement. J Dent Res, 81, 590-593. Available online at
http://www.medicinaoral.com.
Goaz, P.W. and White, S.C., 1994, Oral Radiology: Principles and Interpretation. Missouri: Mosby. Pp. 1-5.
Whaites, Eric. 2007. Essentials of Dental Radiology Principle and Interpretation. St. Louis : Mosby Inc.
Wilhelm Burger , Mark J. Burge, 2008, Principles of Digital Image Processing, Springer-Verlag London Limited
2009
Abstract: Transportation is a very important requirement for human life, because humans in everyday life often move from one place to another, both over short and long distances. Aircraft are the prime choice today for long-distance travel, especially international travel. This has prompted many airline companies to offer international routes. Ticket price competition has also arisen among airlines, so every airline must be able to control its aircraft operational costs. The selection of an aircraft route from a national airport to an international destination airport must therefore be considered at the lowest cost. In this paper, the aircraft route with the lowest operating cost is selected using network analysis. The results of the analysis are expected to provide an overview of one method by which the aircraft route with the lowest operating cost can be selected.
1. Introduction
Transportation is an essential requirement for human life today. Rapid technological advances have driven the need for people to become more mobile (Azizah et al., 2012), and humans are always looking for ways to shorten the journey from one place to another. As transportation facilities became more advanced, people were no longer satisfied with land and sea transportation (Lee, 2000; Bazargan, 2004) and turned to air transport. With aircraft, people can travel from one place to another much faster than by land or sea (Forbes & Lederman, 2005). The existence of air transport has led a variety of flight service companies to emerge; the various airlines offer not only many destinations but also varied competitive fares (Gustia, 2010).
Airlines are currently given freedom in their management, so they can freely manage their service systems and flights, and even set their own fares (Neven et al., 2005). This freedom to manage service systems, flight routes, and fares has stimulated the rapid growth of new airline companies, and competition in the airline market is increasingly tight (Sarndal & Statton, 2011). Consequently, every aviation service company must be increasingly careful and cautious in cost control, by implementing good management (Prihananto, 2007). One element of good management is the selection of aircraft routes. Route selection strongly influences the effectiveness and efficiency of flight planning, since every route the company runs must be feasible and meet the flight requirements. These requirements include, among others, that each required flight leg be served by the optimal combination of aircraft, the number of aircraft planned to fly, and the turn-around time needed for each aircraft (Yan & Young, 1996). The fulfillment of these requirements is reflected in the total operating cost of each flight of the aircraft (Bower & Kroo, 2008).
Operational costs affect the financial capability of the airline company: the higher the operating costs, the lower the profit, and conversely, the lower the operating costs, the higher the profit (Bae et al., 2010). Analysis is therefore needed to determine the selected aircraft route (Renaud & Boctor, 2002). Based on the above, this paper aims to analyze the selection of the international route that gives the lowest operational cost. The selection of the international route with the lowest cost is analyzed using network analysis; as a numerical illustration, the selection of the flight route with the lowest operational cost is discussed in Section 3 of this paper.
2. Theoretical
In this section we explore the theoretical basis, which includes: selection of the mode of transportation, international aviation, aircraft operating costs, and the cheapest flight route problem.
The behavior of public transport service users in selecting transport modes is generally determined by three factors: trip characteristics, traveler characteristics, and the characteristics of the transportation system. Trip characteristics include distance and travel purpose; traveler characteristics include, among others, income level, ownership of a mode of transportation, and employment. Characteristics of the transport system include relative travel time, relative travel expense, and relative service level. Viewed from the side of the transport provider, mode-selection behavior can be influenced by a change in the characteristics of the transportation system (Prihananto, 2007).
Ortuzar (1994) states that the selection of the mode of transport is a very important part of transportation planning models, because mode selection plays a key role for policy makers and transportation providers, especially in air transport. According to Morlok (1998), the main factor affecting public transport services is travel time or travel speed, while other quality factors can be ignored. Based on this theory of mode-choice behavior, airline companies should be able to select aircraft routes while considering these three characteristics.
In the world of aviation, a distinction is made between civil aircraft and state aircraft; this distinction is regulated under the 1919 Paris Convention, the 1928 Havana Convention, the 1944 Chicago Convention, the 1958 Geneva Convention, and the United Nations Convention on the Law of the Sea (UNCLOS). Various national laws, such as those of the United States, Australia, the Netherlands, Britain, and Indonesia, also distinguish civil aircraft from state aircraft (Prihananto, 2007).
The difference between civil aircraft on the one hand and state aircraft on the other is based on the authority under which each type of aircraft is used by each agency. The distinction is important because, under international law, the treatment of civil aircraft differs from that of state aircraft: state aircraft have certain immunity rights not held by civil aircraft. This treatment is in line with the 1919 Paris Convention, the 1944 Chicago Convention, the 1958 Geneva Convention, and the 1982 UNCLOS UN Convention mentioned above (Prihananto, 2007).
From the above description, international civil aviation is flight conducted by civil aircraft that carry nationality and registration marks and that, in times of peace, may cross the airspace of other member states of the international civil aviation organization (Prihananto, 2007).
Aircraft financing can basically be categorized into two kinds: non-operating costs and operating costs. Non-operating costs have nothing to do with the operation of the aircraft, while operating costs are the costs of operating it (Prihananto, 2007).
Operating costs consist of direct operating costs (DOC) and indirect operating costs (IOC). The DOC is directly related to the cost of flying a plane, while the supporting IOC is heavily influenced by the airline company's management policy, although the IOC requirement can be estimated. Both types of operating cost (DOC and IOC) are factors in considering the type of aircraft that will be operated on a route that has been selected (Bazargan, 2004).
Direct operating costs are all expenses directly related to and dependent on the type of aircraft operated; they change with a different aircraft type. Direct operating costs can be grouped as follows (Bazargan, 2004; Prihananto, 2007):
Flight operating cost is the cost incurred in connection with the operation of the aircraft. This component includes several elements, namely crew costs, fuel costs, leasing costs, and insurance costs.
Maintenance cost is the cost incurred as a result of aircraft maintenance, consisting of labor costs and material costs.
Depreciation and amortization costs. Depreciation is an expense due to the declining nominal value or price of the aircraft over time since the product came out, while amortization is a periodic expense allowance for costs such as cabin crew training and pre-development costs associated with the operation or development of new aircraft.
Indirect operating costs are all fixed costs not affected by changes in aircraft type, because they do not depend directly on the operation of the aircraft. These costs consist of station and ground costs (the cost of handling and servicing aircraft on the ground), passenger service costs, ticketing, sales and promotion costs, and administrative costs (Prihananto, 2007).
The lowest-cost problem is the problem of searching for the path or route with the lowest cost from the airport of origin to the destination airport. Such problems commonly arise in air transport networks, both national and international (Haouari et al., 2009; Clarke et al., 1997). Since the lowest-cost problem has many applications, the discussion will start from the model in general. Suppose there is a network of flight costs from the international airport of origin (origin airport) to the point of destination (destination airport). Such a network of flight costs between airports is generally as shown in Figure-1.
Suppose the cost of a flight from airport $i$ to airport $j$ is expressed as $d_{ij}$; the problem is to find the lowest total cost of flights from the origin airport to the destination airport, where $d_{ij}$ is defined for every pair $i \ne j$. If an airport is not connected directly with another airport, that connection is given the value $d_{ij} = \infty$. The problem is to determine temporary cost labels for the airports and then to find the fixed (permanent) costs. Once the fixed cost toward the destination airport has been obtained, the lowest cost from the origin airport to the destination airport follows (Wu and Coppins, 1981).
The procedure to determine the lowest cost can be carried out in the following stages:
Step 1: Give the destination airport cost $y_i$ a value of 0 (zero).
Step 2: Calculate the temporary costs of the other airports toward the destination airport; the costs of airports connected directly to the destination airport are charged directly. If an airport cannot reach the next destination airport directly, it is given the value $\infty$, a very large value.
Step 3: From the temporary costs calculated, select the least cost and declare it the permanent cost to the destination airport. If two temporary costs have the same value, select either one as the least cost to the destination airport.
Step 4: Repeat the calculation toward the airport of origin. From this calculation select $\min_j (d_{ij} + y_j)$, which gives the route with the lowest cost.
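The four steps above amount to a backward labeling scheme: give the destination a permanent cost of zero, then repeatedly improve each airport's temporary label $y_i = \min_j(d_{ij} + y_j)$ until no label changes. A minimal sketch (the arc costs below are a small hypothetical network, not the paper's Table-1):

```python
INF = float("inf")

def lowest_cost_labels(arcs, destination):
    """Backward labeling: arcs is {(i, j): cost}; returns the lowest
    cost y[i] from every airport i to the destination airport."""
    nodes = {n for arc in arcs for n in arc}
    y = {n: INF for n in nodes}
    y[destination] = 0                    # Step 1: destination label
    changed = True
    while changed:                        # Steps 2-4: relax temporary
        changed = False                   # labels until they are fixed
        for (i, j), cost in arcs.items():
            if cost + y[j] < y[i]:        # select min of d_ij + y_j
                y[i] = cost + y[j]
                changed = True
    return y

# Tiny hypothetical network: origin 1, destination 4.
arcs = {(1, 2): 5, (1, 3): 2, (2, 4): 4, (3, 4): 8, (3, 2): 1}
print(lowest_cost_labels(arcs, 4)[1])     # lowest cost 1 -> 4: 7
```

With non-negative arc costs this iteration always terminates, and the final labels are exactly the permanent costs of Step 3.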
The calculation of the lowest cost with the above procedure can also be formulated as a linear program. The problem of determining the route with the lowest cost is the same as the mathematical assignment or transshipment problem: if the origin airport is viewed as the source and the destination airport as the demand, then the other airports can be viewed as transshipment points or locations (Wu & Coppins, 1981).
When it is assumed that each arc on the route with the lowest cost to be traversed has a coefficient of 1 (one), the problem can be formulated as a special assignment problem, so that the formulation of the lowest-cost route model is (Wu and Coppins, 1981):
$$\text{Minimize } z = \sum_{i=1}^{I}\sum_{j=1}^{J} d_{ij}\,x_{ij}, \qquad i \ne j \qquad (1)$$
The interpretation of this model is that the airline company wants to determine the lowest cost from the origin airport to the destination airport. The constraints of the model state that only one route is to be taken from the airport of origin and one to the destination airport; similarly, only one route is taken between any pair of intermediate airports (Wu and Coppins, 1981). As an illustration of the lowest-cost route selection model, a numerical example is analyzed as follows.
3. Illustrations
This section discusses the description of the problem, problem solving using network analysis, and the formulation as a linear program, as follows.
Suppose an airline company wants to develop a flight that originates in Bandung and ends in Saint Petersburg, with transits planned in several countries. Suppose cost studies of the alternatives and the operating cost of each route have been conducted, as given in Table-1 below.
Based on the operational cost data given in Table-1, a network model of the operational costs of each route between cities/countries can be composed, as given in Figure-2 below.
Applying the procedure discussed in section 2.4 to determine the lowest-cost route between the airports proceeds as follows:
Step 1: $y_{11} = 0$.
Step 3: The operating cost from airport 5 to airport 11 through airports 6 and 9 is compared with the operating cost from airport 5 directly to airport 11, and the lower is chosen, giving $y_5 = \$115$. The operating cost from airport 6 to airport 11 through airport 9 is smaller than the operating cost through airport 5, so the route through airport 9 is chosen as the lowest-cost route, $y_6 = \$105$. Similarly, the operating cost from airport 7 to airport 11 gives $y_7 = \$115$, and the operating cost from airport 8 to airport 11 gives $y_8 = \$130$.
Based on the results of these calculations, the lowest operational cost is \$240. The airplane route with the lowest operating cost therefore passes through the cities/countries $y_1 \to y_4 \to y_6 \to y_9 \to y_{11}$, with operating costs of \$75 + \$60 + \$55 + \$50 = \$240. This route selection is based solely on operating costs; other factors beyond operational cost are not considered here and will be studied in subsequent research.
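The same result can be reproduced with a standard shortest-path computation. In the sketch below the arc-cost dictionary is transcribed from the objective function of section 3.2 (Table-1 itself is not reproduced in this text, so treat the numbers as transcribed assumptions):

```python
import heapq

# Arc costs of the flight network, transcribed from the LP objective
# in section 3.2 (assumed where the printed text is ambiguous).
COSTS = {
    (1, 2): 70, (1, 3): 65, (1, 4): 75, (2, 4): 50, (2, 5): 90,
    (3, 4): 35, (3, 8): 85, (4, 6): 60, (4, 7): 65, (5, 6): 40,
    (5, 11): 115, (6, 5): 40, (6, 7): 20, (6, 9): 55, (7, 6): 20,
    (7, 8): 45, (7, 9): 65, (8, 7): 45, (8, 10): 60, (9, 11): 50,
    (10, 11): 70,
}

def cheapest_route(costs, origin, dest):
    """Plain Dijkstra over the arc dictionary; returns (cost, path)."""
    graph = {}
    for (i, j), c in costs.items():
        graph.setdefault(i, []).append((j, c))
    heap = [(0, origin, [origin])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dest:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

print(cheapest_route(COSTS, 1, 11))   # -> (240, [1, 4, 6, 9, 11])
```

This confirms the hand calculation: the cheapest route runs through airports 1, 4, 6, 9, 11 at a total cost of \$240.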
The lowest-operating-cost route selection problem described in section 3.1 can also be formulated as a linear programming model, following the theoretical discussion in section 2.4 above. The objective function is formulated from the operational cost data given in Table-1, or equivalently from the network given in Figure-2, while the constraint functions are formulated from the directions of the network arcs shown in Figure-2, whose coefficients are described in Table-2 below.
The problem of selecting the aircraft route with the lowest operating cost can be expressed as a transshipment problem, so the linear programming formulation can be created using variables $x_{ij}$, where $x_{ij}$ denotes the route of the aircraft from airport $i$ to airport $j$. The objective function of the model is the minimization of operational cost from airport $i$ to airport $j$. Because the flight flow out of an intermediate airport equals the flow into it, the right-hand side of each intermediate constraint equals zero, while the constraint coefficients are $+1$ for a flight flowing out of an airport and $-1$ for a flight flowing into it.
Based on the operational cost data described in Table-1, or the alternative flight network given in Figure-2, the objective function of the linear programming model can be formulated as follows:
$$\text{Minimize } z = 70x_{1,2} + 65x_{1,3} + 75x_{1,4} + 50x_{2,4} + 90x_{2,5} + 35x_{3,4} + 85x_{3,8} + 60x_{4,6} + 65x_{4,7} + 40x_{5,6} + 115x_{5,11} + 40x_{6,5} + 20x_{6,7} + 55x_{6,9} + 20x_{7,6} + 45x_{7,8} + 65x_{7,9} + 45x_{8,7} + 60x_{8,10} + 50x_{9,11} + 70x_{10,11} \quad (\text{in dollars})$$
Subject to:
$x_{1,2} + x_{1,3} + x_{1,4} = 1$;
$-x_{1,2} + x_{2,4} + x_{2,5} = 0$;
$-x_{1,3} + x_{3,4} + x_{3,8} = 0$;
$-x_{1,4} - x_{2,4} - x_{3,4} + x_{4,6} + x_{4,7} = 0$;
$-x_{2,5} + x_{5,6} + x_{5,11} = 0$;
$-x_{4,6} - x_{5,6} + x_{6,7} + x_{6,9} = 0$;
$-x_{4,7} - x_{6,7} + x_{7,8} + x_{7,9} = 0$;
$-x_{3,8} - x_{7,8} + x_{8,10} = 0$;
$-x_{6,9} - x_{7,9} + x_{9,11} = 0$;
$-x_{8,10} + x_{10,11} = 0$;
$x_{5,11} + x_{9,11} + x_{10,11} = 1$;
$x_{ij} \ge 0, \quad i, j = 1, \ldots, 11$
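As a cross-check of the formulation, one can encode each flow-balance row ($+1$ for flow out, $-1$ for flow in) and verify that the route selected in section 3.1 satisfies every constraint and costs \$240. The arc list is transcribed from the objective function above and is an assumption where the printed text is ambiguous:

```python
# Flow-balance sanity check for the transshipment formulation.
ARCS = [(1, 2), (1, 3), (1, 4), (2, 4), (2, 5), (3, 4), (3, 8),
        (4, 6), (4, 7), (5, 6), (5, 11), (6, 5), (6, 7), (6, 9),
        (7, 6), (7, 8), (7, 9), (8, 7), (8, 10), (9, 11), (10, 11)]
COST = [70, 65, 75, 50, 90, 35, 85, 60, 65, 40, 115, 40, 20, 55,
        20, 45, 65, 45, 60, 50, 70]

def balance(node, x):
    """Flow out of `node` minus flow into `node` for solution x."""
    out_flow = sum(v for (i, j), v in zip(ARCS, x) if i == node)
    in_flow = sum(v for (i, j), v in zip(ARCS, x) if j == node)
    return out_flow - in_flow

# x_ij = 1 on the arcs of the claimed optimal route, 0 elsewhere.
route = [(1, 4), (4, 6), (6, 9), (9, 11)]
x = [1 if a in route else 0 for a in ARCS]

total = sum(c * v for c, v in zip(COST, x))
print(total, balance(1, x), balance(11, x))
```

The source emits one unit of flow ($+1$), the sink absorbs it ($-1$), every intermediate airport balances to zero, and the objective value is 240, matching the network analysis.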
Using the coefficients described in Table-2, the constraint functions are formulated as above. Solving the linear programming model yields the minimum total operating cost of \$240.
4. Conclusions
This paper has discussed the selection of international routes with the lowest operating costs using network analysis. Based on the problem description, the alternative route network of the aircraft is as given in Figure-2. The network analysis of Figure-2 selects the aircraft route from Bandung airport to Saint Petersburg airport through the airports of Singapore, Kuala Lumpur, and Moscow; the route has the lowest total cost of \$240. The lowest-operating-cost route selection problem can also be viewed as a transshipment problem and formulated as a linear programming model, whose solution likewise gives the lowest total operating cost of \$240.
References
Azizah, F., Rusdiansyah, A., & Indah, N.A. (2012). Perancangan Alat Bantu Pengambilan Keputusan Untuk
Penentuan Jumlah dan Rute Armada Pesawat Terbang. Kertas Kerja. Jurusan Teknik Industri, Institut
Teknologi Sepuluh Nopember, Surabaya.
Bae, K.H., Sherali, H.D., Kolling, C.P., Sarin, S.C., Trani, A.A. (2010). Integrated Airline Operations: Schedule
Design, Fleet Assignment, Aircraft Routing, and Crew Scheduling. Dissertation. Doctor of Philosophy in
Industrial and System Engineering. Faculty of the Virginia Polytechnic Institute and State University.
Blacksburg, Virginia.
Bower, G.C. & Kroo, I.M. (2008). Multi-Objective Aircraft Optimization for Minimum Cost and Emissions Over
Specific Route Networks. 26th International Congress of the Aeronautical Sciences. ICAS 2008.
Bazargan, M., (2004). Airline Operations and Scheduling. Hampshire: Ashgate Publishing Limited,
Burlington,USA.
Clarke, L., Johnson, E., Nemhauser, G., and Zhu, Z. (1997). The Aircraft Rotation Problem. Annals of Operations
Research 69(1997)33-46.
Forbes, S.J. & Lederman, M. (2005). The Role Regional Airlines in the U.S. Airline Industry. Working Paper.
Department of Economics, 9500 Gilman Drive, La Jolla, CA 92093-0508, USA.
Gustia, R.S. (2010). Penerapan Dynamic Programing Dalam Menentukan Rute Penerbangan International dari
Indonesia. Kertas Kerja. Program Studi Teknik Informatika, Institut Teknologi Bandung.
Haouari, M., Aissaoui, N. & Mansour, F.Z., (2009). Network flow-based approaches for integrated aircraft
fleeting and routing. European Journal of Operational Research, pp.591-99.
Lee, J.J. (2000). Historical and Future Trends in Aircraft Performance, Cost, and Emissions. Thesis. Master of
Science in Aeronautics and Astronautics and the Engineering System Division. Massachusetts Institute of
Technology.
Morlok, E.K. (1998). Introduction to Transportation Engineering and Planning. New York: McGraw-Hill Ltd.
Neven, D.J., Roller, L.H., & Zhang, Z. (2005). Endogenous Costs and Price-Cost Margin: An Application to the European Airline Industry. Working Paper. Graduate Institute of International Studies, University of Geneva.
Ortuzar, J.D. & Willumsen, L.G. (1994). Modelling Transport. Second Edition, New York: John Wiley & Sons.
Prihananto, D. (2007). Pemilihan Tipe Pesawat Terbang Untuk Rute Yogyakarta- Jakarta Berdasarkan Perkiraan
Biaya Operasional. Seminar Nasional Teknologi 2007 (SNT 2007). Yogyakarta 24 November 2007. ISSN:
1978-9777.
Renaud, J. & Boctor, F.F., (2002). A sweep-based algorithm for the fleet size and mix vehicle routing problem.
European Journal of Operational Research, pp.618- 28.
Sarndal, C.E. & Statton, W.B. (2011). Factors Influencing Operating Cost in the Airline Industry. Working Paper. Centre for Transportation Studies, The University of British Columbia.
Wu, N. & Coppins, R. (1981). Linear Programming and Extensions. New York: McGraw-Hill Book Company.
Yan, S. & Young, H.F. (1996). A Decision Support Framework for Multi-Fleet Routing and Multi-Stop Flight Scheduling. Transpn Res.-A, Vol. 30, No. 5, pp. 379-398, 1996.
Abstract: This study examines the effect of the scientific debate instructional strategy on the enhancement of students' mathematical communication, reasoning, and connection abilities in the concept of the integral. The study is quasi-experimental with a static group comparison design involving 96 students from a Department of Mathematics Education. Research instruments include a test of students' prior knowledge of mathematics (PAM), tests of mathematical communication, reasoning, and connection ability, as well as teaching materials. The data are analyzed using the Mann-Whitney U test, two-way ANOVA, and the Kruskal-Wallis test. The study finds that the enhancement of mathematical communication and reasoning abilities in students taught with scientific debate is not significantly different from that under conventional instruction.
The mathematical connection ability of students who follow instruction with the scientific debate strategy is better than that of students who follow conventional instruction. There is no difference in the average increase of students' mathematical communication and connection skills across levels of PAM, and no interaction between the instructional factor and the PAM factor on the increase in mathematical communication and connection skills. The enhancement of students' mathematical reasoning ability under scientific debate does not differ appreciably across PAM levels, whereas under conventional instruction it differs considerably. Under scientific debate, differences in students' educational background have no major effect on the enhancement of mathematical communication, reasoning, and connection ability, but under conventional instruction the effect is stronger: students with a Senior High School background enhanced their mathematical communication, reasoning, and connection abilities better than students from Islamic Senior High Schools.
1. Introduction
Integration and differentiation are two important concepts in mathematics and the two main operations in calculus. The principles of integration were formulated by Isaac Newton and Gottfried Leibniz in the 17th century by leveraging the close relationship between the antiderivative and the definite integral, a relationship that allows the actual value of an integral to be calculated more easily without using a Riemann sum. This relationship is called the fundamental theorem of calculus. Through the fundamental theorem of calculus, which they developed independently, integration is connected with differentiation, so that the integral can be defined as the antiderivative.
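The relationship described above can be illustrated numerically: a Riemann sum approximates the definite integral, while the fundamental theorem of calculus evaluates it exactly through an antiderivative. A small sketch with an arbitrary integrand:

```python
# Riemann sum vs. the fundamental theorem of calculus.
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f on [a, b] with n subintervals."""
    h = (b - a) / n
    return sum(f(a + k * h) for k in range(n)) * h

f = lambda x: x ** 2                 # integrand
F = lambda x: x ** 3 / 3             # an antiderivative of f

exact = F(2) - F(0)                  # fundamental theorem: 8/3
approx = riemann_sum(f, 0, 2, 100_000)
print(round(exact, 6), round(approx, 6))
```

As the number of subintervals grows, the Riemann sum converges to the value the antiderivative delivers in one step, which is exactly why the theorem makes integral evaluation so much easier.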
In France, the concept of the integral is introduced to secondary education students (17-18 years), presented in the traditional form of primitive functions. In 1972, integral calculus was introduced covering: the definition of the Riemann sum for numerical functions of a real variable on a bounded interval, and integrability theorems for continuous functions and monotone functions. After the reforms of 1982, the curriculum returned to viewing the integral as the primitive of a function and the area under a positive function, and introduced approximating the value of an integral with a variety of numerical methods.
In Indonesia, the concept of the integral is given to senior high school students (SMU) and in the Higher Education course Calculus 2. The integral abilities tested at the senior high school level and its equivalents are: (1) to calculate indefinite integrals, (2) to calculate definite integrals of algebraic and trigonometric functions, (3) to measure the area under curves, and (4) to calculate volumes of revolution. The abilities tested revolve around understanding of concepts and fall in the low-level category of high-order mathematical thinking, characterized by problems of the form: recall, apply a formula routinely, calculate, and apply simple formulas or concepts in simple or similar cases. The competency standard (SKL) that must be achieved, however, is to understand the concept of the integral for algebraic and trigonometric functions and to be able to apply it in solving problems.
Although the abilities tested are in the low-level category and not in accordance with the SKL, some studies show that even these low-level student learning outcomes for the integral concept fall in the low category compared to other mathematical material. The low ability of students to understand the concept of the integral was reported by Orton (1983): the average evaluation score on integral material was the lowest, namely 1.895 at the school level and 1.685 at the college level on a scale of 0 to 4, compared with other calculus material such as lines, limits, and derivatives. Sabella and Redish (2011) state that most college students in conventional classes have a superficial understanding and incomplete information about the basic concepts of calculus. Romberg and Tufte (1987) state that students view mathematics as a collection of static concepts and techniques to be worked step by step. In mathematics learning, students are only required to complete, to describe in graphic form, to locate, to evaluate, to define, and to quantify within an obvious model; they are rarely challenged to solve problems requiring high-order mathematical thinking (Ferrini-Mundy 627).
The results of the 2010 national exam (UN) trial given to 879 senior high school students in the city of Bandung showed that only 30.22% of students were able to answer the integral-concept items correctly; this certainly does not achieve group mastery. The 2011 UN test results, taken by 1578 students in the city of Bandung, also demonstrated that students' ability on the integral concept is still low: only 6.7% of students answered correctly, compared to other calculus concepts such as limits and derivatives at 42.3% and 11.5%. Students who do not achieve mastery of integral material will be affected when they continue their education in a Department of Mathematics or Mathematics Education. One cause of students' low ability on the concept of the integral is that mathematics learning is presented as basic concepts, explanation of concepts through examples, and completion exercises. The learning process generally follows the pattern of presentation available in the reference books. Such a learning process tends to encourage reproductive thinking, so the reasoning process that develops is more imitative. This situation gives little room to enhance high-order mathematical thinking and critical and creative thinking. Students tend to solve integral problems by looking at existing examples, so that when given a non-routine problem, students have difficulties.
Developing higher-order mathematical thinking ability is very important, because every discipline
and the world of work require a person to be able to: (1) present ideas through speech, writing,
demonstration, and visual depiction in a variety of different ways; (2) understand, interpret, and
evaluate ideas presented orally, in writing, or in visual form; (3) construct, interpret, and connect
different representations of ideas and relations; (4) make investigations and conjectures, formulate
questions, and draw conclusions and evaluate information; and (5) produce and present convincing
arguments (Secretary's Commission on Achieving Necessary Skills, 1991). These abilities are closely
related to mathematical communication, reasoning, and connections ability.
In mathematics education, communication, reasoning, and connections ability are higher-order
thinking abilities that students must possess to solve mathematical problems and the problems of
everyday life, in which critical, logical, and systematic thinking can be applied in any situation. This is
consistent with the character of mathematics as a valuable science, reflected in its role as a symbolic
language and a powerful communication tool that is short, solid, accurate, precise, and free of double
meanings (Wahyudin, 2003). This statement indicates that mathematics plays a very important role in
developing a person's patterns of thought, as a representation of the understanding of mathematical
concepts, as a communication tool, and as a tool serving other fields of science. Through mathematical
communication ability, students can exchange knowledge and clarify their understanding. The
communication process helps students construct the meaning and completeness of ideas and avoid
misconceptions. The communication aspect also helps students convey ideas both orally and in writing.
When students are challenged and asked to argue, to communicate the results of their thinking to others
both orally and in writing, they learn to explain and to convince others, to listen to and explain the ideas
of others, and in doing so they have the opportunity to develop their experience. Besides communication
ability, another ability that should be developed is reasoning ability. A person's reasoning ability can be
seen in the way he or she deals with life's problems. A person with high reasoning ability will always
be able to make quick decisions when solving the various problems of his or her life. This capability is
supported by the strength of reasoning needed to connect facts and evidence so as to arrive at a proper
conclusion. Mathematical reasoning ability is not only necessary to resolve problems in the field of
mathematics; it is also necessary to solve the problems one faces in life. Mathematical reasoning is
needed when a person is confronted with a problem in which arguments must be evaluated and a few
feasible solutions selected. This implies that when one faces a number of statements or arguments
related to a problem, mathematical reasoning ability is required to judge or evaluate those statements
before making a decision. Thus, a person's mathematical ability is used not only for calculation but also
to provide arguments or claims whose logical presentation ensures that the way of thinking is right. The
development of reasoning ability is therefore essential for every student, as preparation for analyzing
before deciding and for making arguments in one's own defense.
Another important ability that students should develop is mathematical connections ability. This
ability also appears in students' ability to communicate and to reason. Mathematical connections ability
is closely related to relational understanding. Relational understanding requires a person to understand
more than one concept and to relate those concepts to one another, while mathematical connections
ability is the ability to connect a wide range of ideas within mathematics, with other fields, and with the
real world.
The mathematical abilities developed above are in accordance with the mathematical competencies
proposed by Niss (in Kusumah, 2012: 3), namely: (1) mathematical thinking and reasoning, (2)
mathematical argumentation, (3) mathematical communication, (4) modeling, (5) problem posing and
problem solving, (6) representation, (7) symbols, and (8) tools and technology. NCTM (2000) has
likewise identified communication, reasoning, and problem solving as important processes in
mathematics instruction aimed at solving mathematical problems.
Mathematical communication, reasoning, and connections ability can only be achieved through
instruction that enhances these capabilities in the cognitive, affective, and psychomotor domains.
Suryadi's study (2005) on the development of higher-order mathematical thinking through an indirect
approach found two fundamental things that need further study and research: the relationship between
the student and the depth of the material, and the student-teacher relationship. That study found that, to
encourage students' mental action, the instructional process must begin with a presentation containing
problems that challenge students to think. The instructional process should also facilitate students in
constructing knowledge or concepts for themselves, so that students are able to rediscover knowledge
(reinvention).
One learning model that can meet these demands is scientific debate. This is supported by the
research of Legrand et al. (1986), which revealed that applying scientific debate in learning improved
students' understanding of the concept of the integral on the final exam. A further result was reported
by Alibert et al. (1987): with scientific debate, the majority of students attained mastery in
understanding the concept of the integral and, in addition, were able to explore their knowledge in
problems whose solution is not algorithmic.
In applying scientific debate, students are trained to communicate their knowledge and to sustain
their arguments, according to the truth of the underlying mathematical concepts, through debate. This
ability to argue spurs the development of mathematical reasoning and connections ability, because
students must think logically and systematically and must relate various concepts in order to sustain
their arguments. This is consistent with the theory of constructivism, which states that knowledge is
gained by students constructing it themselves through interaction, conflict, and re-equilibration
involving mathematical knowledge, other students, and a variety of problems. The interaction is
governed by a lecturer who chooses the fundamental problems. Based on the conditions above, the
researcher is interested in reviewing and analyzing the application of scientific debate, under the
research title "Scientific Debate Instruction to Enhance Students' Mathematical Communication,
Reasoning, and Connections Ability in the Concept of the Integral".
2. Problem Formulation
Based on the background above, the research problems can be formulated as follows:
a. Are there differences in the enhancement of students' mathematical communication, reasoning, and
connections ability between students who follow scientific debate instruction and students who
follow conventional instruction?
b. Is there an interaction between the instructional factor and students' mathematical prior ability
(PAM) in the enhancement of students' mathematical communication and connections ability?
c. Are there differences in the enhancement of students' mathematical reasoning ability between
students who follow scientific debate instruction and students who follow conventional instruction,
based on PAM?
d. Are there differences in the enhancement of students' mathematical communication, reasoning, and
connections ability between students who follow scientific debate instruction and students who
follow conventional instruction, based on educational background?
3. Research Objectives
4. Research Methods
This quasi-experimental study with a static group comparison design involved 96 students taking the
Calculus 2 course. For purpose (a), the data were analyzed using the normalized gain, with the formula:
Normalized gain (g) = (posttest score − pretest score) / (ideal score − pretest score).
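The formula above, together with the usual classification of gain scores into low (g < 0.3), middle (0.3 ≤ g < 0.7), and high (g ≥ 0.7) criteria (Hake, 1998), can be sketched in Python. The scores below are illustrative, not the study's data.

```python
def normalized_gain(pretest, posttest, ideal):
    """Normalized gain: fraction of the possible improvement actually achieved."""
    return (posttest - pretest) / (ideal - pretest)

def gain_category(g):
    """Common classification of normalized gain scores (Hake, 1998)."""
    if g < 0.3:
        return "low"
    if g < 0.7:
        return "middle"
    return "high"

# Example: a student scores 40 on the pretest and 76 on the posttest,
# out of an ideal score of 100.
g = normalized_gain(40, 76, 100)
print(round(g, 3), gain_category(g))
```

With these numbers the gain is 0.6, which falls in the middle criteria, matching the range of average gains (0.522-0.660) reported later in the paper.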
The hypothesis that there are differences in the enhancement of students' mathematical communication,
reasoning, and connections ability between students who follow scientific debate instruction and
students who follow conventional instruction was tested with the Mann-Whitney test (U). For purpose
(b) ANOVA was used, for purpose (c) the Kruskal-Wallis test, and for purpose (d) the Mann-Whitney
test (U).
5. Research Results
The subjects of the study were 94 students, of whom 60 came from senior high schools and 34 from
Islamic senior high schools. The enhancement of students' mathematical communication, reasoning,
and connections ability falls in the middle criteria, with average normalized gain scores of 0.660, 0.522,
and 0.635, respectively. The Mann-Whitney test (U) of the hypothesis that the enhancement of the
mathematical communication, reasoning, and connections ability of students who follow scientific
debate instruction is better than that of students who follow conventional instruction shows:
1) There was no difference in the enhancement of students' mathematical communication ability
between students who follow scientific debate instruction and students who follow conventional
instruction.
2) There was no difference in the enhancement of students' mathematical reasoning ability between
students who follow scientific debate instruction and students who follow conventional instruction.
3) There are differences in the enhancement of students' mathematical connections ability between
students who follow scientific debate instruction and students who follow conventional instruction.
In other words, the enhancement of the mathematical connections ability of students who follow
scientific debate instruction is better than that of students who follow conventional instruction.
Since the data on the enhancement of mathematical communication and connections ability showed a
normal distribution and homogeneous variance, the interaction between the instructional models and
PAM was analyzed using ANOVA. The calculation results are presented in Table 2.
Graphically, the interaction between instructional model and PAM in the enhancement of students'
mathematical communication and connections ability is shown in Figure 1.
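The interaction analysis above (instructional model × PAM) amounts to the interaction F-test of a two-way ANOVA. A minimal numpy/scipy sketch, assuming a balanced design (equal cell sizes) and using illustrative data rather than the study's:

```python
import numpy as np
from scipy.stats import f as f_dist

def interaction_test(data):
    """Interaction F-test of a balanced two-way ANOVA.
    data[i, j, k]: replicate k at level i of factor A (instructional model)
    and level j of factor B (PAM group)."""
    a, b, n = data.shape
    grand = data.mean()
    m_a = data.mean(axis=(1, 2))   # factor A level means
    m_b = data.mean(axis=(0, 2))   # factor B level means
    m_ab = data.mean(axis=2)       # cell means
    # Interaction sum of squares: cell deviations not explained by main effects.
    ss_ab = n * ((m_ab - m_a[:, None] - m_b[None, :] + grand) ** 2).sum()
    # Error sum of squares: within-cell deviations.
    ss_err = ((data - m_ab[..., None]) ** 2).sum()
    df_ab = (a - 1) * (b - 1)
    df_err = a * b * (n - 1)
    F = (ss_ab / df_ab) / (ss_err / df_err)
    p = f_dist.sf(F, df_ab, df_err)
    return F, p

# Illustrative gains: 2 models x 3 PAM levels x 10 students per cell.
rng = np.random.default_rng(1)
gains = rng.normal(0.6, 0.1, size=(2, 3, 10))
F, p = interaction_test(gains)
print(f"F = {F:.3f}, p = {p:.4f}")
```

A non-significant p-value here corresponds to the paper's finding of no model × PAM interaction; a dedicated routine such as statsmodels' `anova_lm` would give the full ANOVA table.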
From the graphical representation, it appears that for the groups of students with middle and low PAM,
the scientific debate model produced a larger average enhancement of mathematical communication
ability than conventional instruction, while for the group of students with high PAM the average
enhancement of mathematical communication ability was smaller than with conventional instruction.
The average enhancement of mathematical connections ability in each PAM group was always better
for students with scientific debate instruction than for the conventional instruction group. This indicates
that applying the scientific debate model has a better impact than conventional instruction on the
enhancement of students' mathematical connections ability. The average enhancements of mathematical
reasoning ability under the scientific debate model for the high, middle, and low PAM levels did not
differ significantly, whereas under conventional instruction the average enhancements of mathematical
reasoning ability for the high, middle, and low levels differed significantly.
The difference in students' educational background had a significant effect on the enhancement of
students' mathematical communication, reasoning, and connections ability. This means that the
enhancement of the mathematical communication, reasoning, and connections ability of students who
come from senior high schools was better than that of students from Islamic senior high schools.
6. Discussion
The factors discussed in this study are the instructional model (conventional and scientific debate),
prior mathematical knowledge (high, middle, and low), and the students' educational background
(senior high school and Islamic senior high school).
a. Instructional Model
The characteristics of an instructional model give an idea of how teaching and learning occur in the
classroom. From the literature on the scientific debate instructional model and conventional instruction,
the characteristics of each instructional model were obtained as shown in Table 4.
From the characteristics of the scientific debate instructional model, it appears that each student is
required to construct and understand knowledge independently. Students thus play a very large role in
understanding concepts, developing procedures, discovering principles, and applying these concepts,
procedures, and principles to solve a given problem. The lecturer's main role is that of a facilitator, who
should always support whatever development occurs in the students while the learning process takes
place.
To implement the scientific debate instructional model, the lecturer prepared teaching materials and
student worksheets (LKS) for the experimental class. The teaching materials developed in this study
were designed so that students could find concepts, procedures, and principles themselves and apply
them to solve a given problem. They were developed in such a way that students could achieve the
mathematical competencies relevant to the material being studied. In addition, the development of the
teaching materials was directed at developing students' mathematical communication, reasoning, and
connections ability optimally. The LKS were used to equip students as a reference of their own in the
debate and in problem solving. They contain not only the problems to be solved by the students but also
the concepts, procedures, and principles, as well as examples of applications that students can study in
preparation for the debate. In the scientific debate instructional model, students are required to solve
problems in written form, and they must also account for their answers in the debate. This is consistent
with Pugalee's (2001) statement that students must be trained to communicate their mathematical
knowledge, to provide arguments for each answer, and to respond to the answers given by others, so
that what is learned becomes more meaningful to them. In addition, Huggins (1999) stated that, in order
to enhance their conceptual understanding of mathematics, students can express mathematical ideas to
others.
The quantitative calculations showed no difference in the enhancement of students' mathematical
communication, reasoning, and connections ability between students who follow scientific debate
instruction and students who follow conventional instruction. Although the calculations did not indicate
a difference, we must still compare the scientific debate instructional model with conventional
instruction. The scientific debate model challenged students to learn actively. The question, then, is
whether active learning is better than passive learning. If the goal of students' learning is only the ability
to answer questions, which are generally cognitive, the results are not much different: the most actively
learning students correlate positively, but weakly, with academic achievement. Seen differently,
however, everyday life demands active study and work; active learning is more enjoyable and broadens
one's horizons, so such learning is very important. At the very least, it helps to realize that passive
human beings will not be needed in the days to come (Ruseffendi, 1991).
There are differences in the enhancement of students' mathematical connections ability between
students who follow scientific debate instruction and students who follow conventional instruction.
This means that students who receive scientific debate instruction show a better enhancement of
mathematical connections ability than those under conventional instruction. The rationale is that,
because students must solve applied mathematics problems, their knowledge gains more insight: they
interpret mathematics not only within mathematics itself but also in its role in other fields. This is
consistent with the research of Harnisch (in Qohar, 2011), which found that students can gain an
overview of the concepts and big ideas about the relations between mathematics and science, as well
as more experience. Additionally, NCTM (2000) states that mathematics is not a collection of separate
topics but a network of very closely related ideas. The problems solved by students can lead them to
improve their mathematical connections. Thus, students are trained to perform well in mathematics and
to see its linkages to other areas, so that they recognize the importance of mathematical connections
ability.
This is consistent with Piaget's statement that "knowledge is actively constructed by the learner, not
passively received from the environment" (Dougiamas, 1998). Piaget's statement means that students
do not passively receive knowledge from the environment but must be active in discovering knowledge
on their own, while the teacher's role is only to drive students to seek knowledge and to understand it
meaningfully. This is consistent with Geoghegan's (2005) statement that learning and teaching become
a reflective phenomenon based on the inter-connections between teachers and students acting together
as co-instructors in the search for meaning and understanding. Salomon and Perkins (in Dewanto, 2007)
suggested that 'acquisition' and 'participation' in learning relate to and interact with each other in a
synergic way.
b. Prior Mathematical Ability (PAM)
Students' intelligence can be measured by the cumulative achievement index (GPA); although it
sometimes misses, the GPA can be a guarantee that a student will be able to attend a particular school.
In this study, students' prior mathematical ability (PAM) is their ability in the Calculus 1 material. PAM
was used to gauge the readiness of students to receive the course material and to relate it to other subject
matter they had already received. Students' success in their lessons depends on this readiness. PAM was
classified into three groups: high, middle, and low.
The quantitative analysis shows that PAM has a significant effect on the enhancement of students'
mathematical communication ability. This is consistent with Arends' opinion (2008: 268) that students'
ability to learn new ideas relies on their prior knowledge and existing cognitive structures. Meanwhile,
according to Ruseffendi (1991), students' success in learning is almost entirely influenced by their
intelligence, readiness, and talent. The effect of PAM examined across the instructional models
(scientific debate and conventional) indicates that the difference in instructional models does not have
a significant effect on the enhancement of students' mathematical communication ability.
The quantitative analysis also shows that the difference in PAM level did not have a significant effect
on the enhancement of students' mathematical connections ability, whereas the difference in
instructional model did. This means that the average enhancement of the mathematical connections
ability of students under the scientific debate model is always better than that of students under
conventional instruction, indicating that applying the scientific debate instructional model produces a
better effect on students' mathematical connections ability than conventional instruction.
There was no interaction between the instructional model and PAM in the enhancement of students'
mathematical communication and connections ability. This result is consistent with Nana (2009), who
concluded that there is no interaction between the learning approach and students' PAM in mathematical
problem solving ability. Thus, the instructional model factor does not exert a strong influence on the
enhancement of students' mathematical communication and connections ability. The scientific debate
instructional model can enhance students' mathematical reasoning ability evenly across the various
levels of PAM. This is possible because, in the scientific debate model, each student is challenged to
put forward ideas and to defend his or her answers in the debate. This condition may spur an increase
in students' mathematical reasoning and deepen their understanding. Huggins (1999) states that, in order
to enhance their conceptual understanding of mathematics, students can express mathematical ideas to
others. According to Brenner (1998: 108) and Kadir (2008: 346), through discussions with teachers and
activity partners, students are expected to gain a better understanding of the basic concepts of
mathematics and to become better problem solvers. Increasing students' conceptual understanding of
mathematics will eventually lead to an increase in mathematical reasoning ability. Goldberg and Larson
(2006: 97) state that discussion can improve reasoning ability, human relations skills, and
communication ability.
The average enhancements of students' mathematical reasoning ability under conventional instruction
differed significantly across PAM levels. This means that, under conventional instruction, students'
PAM level has a strong influence on the increase in their reasoning ability. This is possible because,
under conventional instruction, students receive the teaching material passively, so that the
enhancement of reasoning relies on their prior knowledge. In addition, under conventional instruction
students rely on the textbook to deepen their knowledge. Yet according to Baroody (1993: 99), teachers
and textbooks often use words and symbols that have little meaning for children. Furthermore, Baroody
notes that students are rarely asked to explain their ideas in any form. These conditions develop students'
reasoning ability less than optimally, because students are not challenged to discover and construct
knowledge independently.
c. Educational Background
Students' educational background in this study grouped the students into those coming from senior high
school (SMU) and those from Islamic senior high school (MA). In the scientific debate group, the
difference in students' educational background did not have a strong influence on the enhancement of
students' mathematical communication, reasoning, and connections ability. The absence of such an
influence reflects the great adaptability of each student. Students' adaptability in learning is affected by
the application of scientific debate instruction, in which the instructional process confronts them with
applied problems. These applied mathematical problems are the focus and stimulus of students' learning
and a vehicle for developing problem solving ability. Instruction is thus focused on the students
(student-centered), the lecturer is a facilitator or coach, and new information or concepts are acquired
through self-directed learning. The importance of confronting students with such applied problems is
in accordance with Walsh's opinion (2008) that they can form a positive attitude, build creativity,
deepen understanding, and develop problem solving or investigative abilities that can be applied in
various fields of life.
Scientific debate instruction emphasizes students' learning activities. Students work collaboratively
to identify what they need in order to develop solutions, to find relevant sources, to share and synthesize
findings, and to ask questions that lead to further learning. In this case, the teacher acts as a facilitator
of students' learning. According to the CIDR (2004), as a facilitator the teacher can ask students
questions to sharpen or deepen their understanding of the relationships between the concepts they have
built. The lecturer seeks a balance between giving direct guidance and encouraging students toward
self-directed learning. These conditions trigger an even enhancement of students' mathematical
communication, reasoning, and connections ability, because scientific debate instruction challenges
students to have greater adaptability.
In the group of students with conventional instruction, the results showed that the differences in
students' educational background do influence the enhancement of students' mathematical
communication, reasoning, and connections ability. This is logical because, in conventional instruction,
lecturers actively explain the subject matter and give examples and exercises, while students act like
machines: they listen, take notes, and do the exercises given by the lecturer. In these circumstances,
students are not given much time to find knowledge on their own, because the learning is dominated
by the lecturer. Discussions are rarely held, so interaction and communication between students, and
between students and the lecturer, do not appear. As a result, students' mathematical communication,
reasoning, and connections ability develops less than optimally. Students encounter few mathematical
applications from social life, even though mathematical literacy is very important in today's information
era. Consequently, under conventional instruction, students' educational background cannot enhance
mathematical communication, reasoning, and connections evenly, because each student learns and
receives the course material according to his or her own ability.
7. Conclusion
Based on the problem formulation, results, and discussion presented above, the following conclusions
can be drawn:
a. The enhancement of students' mathematical communication, reasoning, and connections ability
under scientific debate instruction falls in the middle criteria. The enhancement of students'
mathematical communication and reasoning ability under scientific debate instruction is not
significantly different from that of the group of students under conventional instruction. The
enhancement of students' mathematical connections ability under scientific debate instruction is
better than under conventional instruction.
b. There is no interaction between PAM (high, middle, low) and the instructional model factor
(scientific debate and conventional) in the enhancement of students' mathematical communication
and connections ability.
c. The average enhancements of mathematical reasoning ability in the group of students under
scientific debate instruction did not differ significantly across PAM levels. In the group of students
under conventional instruction, the average enhancements of mathematical reasoning ability
differed significantly across PAM levels.
d. The difference in students' educational background has no influence on the enhancement of
mathematical communication, reasoning, and connections ability for students who follow scientific
debate instruction. In the group of students under conventional instruction, the difference in
students' educational background has a significant effect on the increase of mathematical
communication, reasoning, and connections ability.
8. References
Alibert, D., Legrand, M. & Richard, F. (1987). 'Alteration of didactic contract in codidactic situation'. Proceedings
of PME 11, Montreal, 379-386.
Alibert, D. (1988). 'Towards New Customs in the Classroom'. For the Learning of Mathematics, 8(2), 31-35.
Arends, R. I. (2008). Learning to Teach. New York: McGraw-Hill Companies, Inc.
Brenner, M. E. (1998). Development of Mathematical Communication in Problem Solving Groups by Language
Minority Students. Bilingual Research Journal, 22(2-4).
Ferrini-Mundy, J. & Graham, K. G. (1991). An Overview of the Calculus Curriculum Reform Effort: Issues for
Learning, Teaching, and Curriculum Development. American Mathematical Monthly, 98(7), 627-635.
Huggins, B., & Maiste, T. (1999). Communication in Mathematics. Master's Action Research Project, St. Xavier
University & IRI/Skylight.
http://blog.elearning.unesa.ac.id/m-saikhul-arif/makalah-pembelajaran-dengan-pendekatan-teori-
konstruktivistik.
http://www.distrodocs.com/16186-cara-belajar-efektif-belajar-akuntasi.
Kusumah, Y. S. (2012). Current Trends in Mathematics and Mathematics Education: Teacher Professional
Development in The Enhancement of Students' Mathematical Literacy and Competency. Makalah Seminar
Nasional: Universitas Pendidikan Indonesia.
Legrand, M., et al. (1986). 'Le debat scientifique'. Actes du Colloque franco-allemand de Marseille, 53-66.
NCTM (2000). Principles and Standards for School Mathematics. Reston, Virginia.
Niss, G. (1996). Goals of mathematics teaching. In A. J. Bishop, K. Clements, C. Keitel, J. Kilpatrick, & C.
Laborde (Eds.), International Handbook of Mathematical Education. Netherlands: Kluwer Academic
Publishers.
Orton, A. (1983). Students' Understanding of Integration. Educational Studies in Mathematics, 14, 1-18.
Polya, G. (1973). How to Solve It: A New Aspect of Mathematical Method. New Jersey: Princeton University
Press.
Pugalee, D. A. (2001). Using Communication to Develop Students' Literacy. Journal of Research in Mathematics
Education, 6(5), 296-299.
Romberg, T. A. & Tufte, F. W. (1987). Kurikulum Matematika Rekayasa: Beberapa Saran dari Cognitive Science.
Monitoring Matematika Sekolah: Latar Belakang Makalah.
Ruseffendi, E. T. (1991). Pengantar kepada Membantu Guru Mengembangkan Kompetensinya dalam Pengajaran
Matematika untuk Meningkatkan CBSA. Bandung: Tarsito.
Sabella, M. S. & Redish, E. F. Student Understanding of Topics in Calculus. University of Maryland Physics
Education Research Group. http://www.physics.umd.edu/perg/plinks/calc.htm.
Suryadi, D. (2005). Penggunaan Pendekatan Pembelajaran Tidak Langsung Serta Pendekatan Gabungan
Langsung dan Tidak Langsung dalam Rangka Meningkatkan Kemampuan Berfikir Matematik Tingkat Tinggi
Siswa SLTP. Disertasi pada PPS UPI: tidak diterbitkan.
Suryadi, D. (2010). Model Antisipasi dan Situasi Didaktis dalam Pembelajaran Matematika Kombinatorik
Berbasis Pendekatan Tidak Langsung. Bandung: UPI.
Wahyudin (2003). Kemampuan Guru Matematika, Calon Guru Matematika, dan Siswa dalam Mata Pelajaran
Matematika. Disertasi pada PPS UPI: tidak diterbitkan.
182
Proceedings of the International Conference on Mathematical and Computer Sciences
Jatinangor, October 23rd-24th, 2013
Abstract: In an effort to give policymakers food crisis alerts, IFPRI has developed a new tool for the early warning of extreme price variability of agricultural products. This tool measures excessive food price variability. By providing an early warning system that alerts the world to price abnormalities for key commodities in the global agricultural markets, policymakers can make better-informed plans and decisions, including when to release stocks from emergency grain reserves. The World Bank is currently developing a framework for monitoring the food price crisis at the global and national levels. This framework focuses on price and does not examine other factors that govern a food crisis. Its development is directed not at defining the time of a food crisis but at operational indicators that can monitor how close food prices are to a level categorized as a crisis. The indicators were used to predict the global food price spikes that occurred in June 2008 and February 2011. Analyses were conducted to select indicators that sounded an alert about the crisis; the best one was then used to monitor where current global prices stand with respect to the selected crisis threshold. The weakness of the monitoring frameworks of the World Bank and IFPRI above is that we may never see a rice price hike and yet still see a rice crisis; we may have no natural disaster but still have a rice crisis caused by mismanagement. To overcome this weakness of the models studied in ten papers, the authors propose a different approach, called the Quasi Static Rice Crisis Prediction Model and Justifiable Action.
Keywords: crisis, quasi static, interval forecasting, supply forecasting
1. Introduction
The shrinkage of rice fields in Indonesian areas such as Tangerang, Serang, Bekasi, Karawang, Purwakarta, Bandung and Bogor, the regional centers of national rice production, is much higher than in other regions. Thousands of industries have been established in these areas. In Bekasi, thousands of hectares of rice fields have been converted to industry, even rice fields with technical irrigation. The Central Bureau of Statistics indicates that the paddy field conversion rate has reached 110 thousand hectares per year since 2000, while new rice fields are opened at only about 30-52 thousand hectares per year. In Kabupaten Bandung, the shrinkage of rice fields is up to 30 hectares per year, with paddy fields converted into residential or industrial complexes (http://oc.its.ac.id). According to the Bandung mayor's reports, in 2012 the city of Bandung had 14,725 hectares of land specifically planted with rice. Rice production in the city of Bandung was 4.5% of the total requirement (Hidayat Yuyun, 2012). The rice resilience of Bandung lasts at most 7 years from 2011; in the 7th year the supply of rice is still considered sufficient to prevent social unrest due to rice shortage. In the 8th year, the city of Bandung is predicted to be in a rice crisis, that is, public turmoil associated with rice shortage. It must be realized that the 8th-year rice crisis occurs even if the program of extending cultivation to gardens and dry fields and intensifying all existing land was carried out in 2011. If the program was not carried out in 2011, the predicted figures will be even worse and the rice crisis will occur sooner. These predictions assume constant conditions of rice supply, a constant rate of decrease in production (due to decreased land and/or reduced productivity), and rice prices constant as they are today (Hidayat Yuyun, 2011). Sooner or later the rice crisis will happen, so accurate information about the predicted crisis time in Bandung becomes very pressing.
To overcome these issues, the aim of this research is to identify the rice crisis time in Bandung, Indonesia. To support this aim, the authors outlined five specific objectives: (1) to develop rice crisis criteria; (2) to forecast rice demand in Bandung; (3) to forecast rice supply in Bandung; (4) to determine a simultaneous demand-supply-price model of rice in Bandung; and lastly (5) to forecast the rice crisis time in Bandung. However, in this paper the discussion is limited to objective 1 and the methods used to achieve it. The results of this study are expected to provide wide and deep insight into the rice scarcity situation in Bandung, Indonesia, and to give these benefits: (1) preventive policy to slow down or delay a rice crisis in the city of Bandung; and (2) a clear agricultural strategy for the Bandung city government, oriented to strengthening rice resilience.
The urgency of this study is that the food crisis is not a matter of whether, but of when. This thought is inspired by the theory of Malthus: demand follows a geometric progression, while supply behaves arithmetically. This is a consequence of the idea that the human population grows faster than food production. Malthusian theory clearly emphasizes the importance of balance between the geometrically growing population and the arithmetically growing supply of food. Malthusian theory has in fact questioned the carrying capacity of the environment: land, as a component of the natural environment, is not able to provide enough agricultural produce to meet the needs of a growing population, and the carrying capacity of the land declines as the human burden grows. In line with Malthusian theory, sooner or later a crisis will occur; thus there is a strong basis for conducting research on the great question: when will the rice crisis happen? Perhaps the crisis could happen tomorrow or in the near future. The thought that a rice crisis could occur from 2012 is also supported by data on the low internal production of Bandung, at 4.5% of the residents' requirement. Other data strengthening this idea show that the paddy field crisis occurred because rice farming is not financially attractive. The data show that the local minimum wage in Bandung is IDR 1,271,625 per worker per month, while the wage of farm laborers in Bandung is IDR 395,390 per person per month. This means that the wages of farm laborers in Bandung are far below the Bandung minimum wage. With farm wages so much smaller than the minimum wage, how can we expect the supply of rice in the city of Bandung to increase? The trend is a decline, and this is where the Department of Agriculture and Food Security of Bandung has a role in seeking to raise rice resilience. Of course, if a rice crisis will occur in the near future, effective action must be taken quickly, and that demands knowing when the rice crisis will occur. The prediction that the city of Bandung will face a crisis time in 2018, as reported in (Hidayat Yuyun, 2011), functions as a trigger for more profound research, given the preliminary nature of those studies, which show that it is possible to predict the rice crisis time. Previous research has the drawback of assuming that the conditions of rice supply, the rate of production decline [due to decreased land and/or reduced productivity], and rice prices are constant as they are now. Of course this is unrealistic.
The Limitations of the Study: (1) this study is geographically limited to the city of Bandung in the Republic of Indonesia; (2) it assumes that the rice demand behavior of people in Bandung is "coded in their DNA": modifying demand behavior is impossible or has a small probability and is not challenging, and the prediction of the rice crisis time is determined when no other food substitute is available; and (3) data for the study were collected from a number of secondary sources, except for the model of price and number of rioters; most of the data for the analysis were obtained from various publications of the Indonesian Central Bureau of Statistics.
To show the originality of the research, the authors have efficiently, and of course effectively, studied the literature in the following papers:
(IRRI - International Rice Research Institute, 2008); (David Dawe & Tom Slayton, 2010); (Saifullah A., 2008); (Won W. Koo MHK, Gordon W. Erlandson, 1985); (Dawe D., 2010) Can the Next Crisis Be Prevented; (Parhusip U., 2012) Supply and Demand Analysis of Rice in Indonesia; (Cuesta J., 2012) Global Price Trends; (ATENEO ECONOMICS ASSOCIATION, 2008); (Berthelsen J., 2011); and (Maximo Torero, 2012).
2. Methodology
This research has limitations and is not to answer a lot of things. In other words, a state that we take
care of is not to be the cause of the poor management of the rice riots. It means when people come to
the streets what the overwhelming problem is, if the overwhelming problem is the rice, now it's our
responsibility.
Step 1. Crisis Definition
What is a crisis anyway? There is no standard definition. In this paper, a crisis is defined as the situation when the police cannot control a riot caused totally or partially by rice scarcity. Riots caused by factors other than rice scarcity are beyond our concern. We regard this as the best and most robust definition.
Step 2. Riot Control Model
The next question is: when will it happen? The immediate answer requires a riot control model, because we worry about an uncontrollable riot causing the crisis. So we must know the riot control model and its validity period. This model is not for the present moment; its validity period must cover the time when the crisis happens, so it involves a time variable.
Step 3. People Stress Model
Assuming we have a riot control model, what next? Riots are somehow caused by people's stress. The crucial things in developing a people stress model are how many people are rioting and under, or because of, what conditions. So the questions include magnitude and duration. Why are people stressed? In this research the authors assume that the rice price causes the people's stress. As a consequence, we must develop a price model.
Step 4. Price Econometric Model
We need a price model, and we do not want to model it with a time series approach, because the forecast is sometimes too late and sometimes contaminated. Another reason is that price does not indicate scarcity or abundance. Does a cheap rice price show that there is a lot of rice? An example is when the dollar was held at a price of IDR 2,000: one day it jumped from IDR 2,000 to IDR 9,000. Time series price data lose the dynamics of the dollar; the series is contaminated, and the price dynamics cannot be seen through the time series.
The most important thing to realize is that price is an effect, not a cause, so time series modeling of prices comes too late. That is why we use an econometric model instead of a time series model in forecasting the price dynamics. There are two reasons for not forecasting price with the time series method: first, contamination, and second, lateness. Too late because price is just an effect, and contaminated because changes in state-regulated (administered) prices mean the observed price is not the true price. Economically speaking, price is determined by the interaction of demand and supply. The rice price is controlled by a dynamic system with numerous factors, and a time series is not the appropriate way of analyzing it. Prices are controlled by at least the demand and supply variables. As is well known, the price of a commodity and the quantity sold are determined by the intersection of the demand and supply curves for that commodity. We must therefore develop a simultaneous-equation model involving demand, supply, and price.
Step 5. Time Series Forecasting Model for Demand and Supply
As a consequence of the econometric model in Step 4, the authors should seek the best time series forecasting models for the demand and supply of rice separately. The demand and supply forecasts produced in Step 5 will be the input to Steps 4, 3 and 2, ending with the activity of determining when the rice crisis happens.
As an integral part of modeling rice crisis forecasting, this paper shows a genuine approach to rice demand forecasting in Bandung city. Here is the process.
2.2.1 Data
The data used are the rice supply to Bandung city (internal and external) from 2004 to 2012, compiled from three sources: the Bandung Central Bureau of Statistics, the West Java Sub Division of the Logistics Agency, and the Department of Agriculture and Food Security in Bandung (Indonesia).
a. Naive Model
The equation of the naive model is:
F_{t+1} = X_t . (2.1)
b. Naive Rate of Change Model
The equation of the naive rate of change model is [20]:
F_{t+1} = X_t (X_t / X_{t-1}) . (2.2)
c. Simple Linear Regression Model
The equation is:
F_t = a + b(t) . (2.3)
d. Double Moving Averages Model
The equations used are:
S'_t = (X_t + X_{t-1} + X_{t-2} + ... + X_{t-N+1}) / N . (2.4)
S''_t = (S'_t + S'_{t-1} + S'_{t-2} + ... + S'_{t-N+1}) / N . (2.5)
a_t = S'_t + (S'_t - S''_t) = 2S'_t - S''_t . (2.6)
b_t = (2 / (N - 1)) (S'_t - S''_t) (2.7)
F_{t+m} = a_t + b_t m . (2.8)
e. Single Exponential Smoothing
The equations of Single Exponential Smoothing are written as follows [21]:
F_{t+1} = α X_t + (1 - α) F_t (2.9)
e_t = X_t - F_t (2.10)
f. Double Exponential Smoothing: Brown's One-Parameter Linear Method
The equations used to implement the method are shown below [21]:
S'_t = α X_t + (1 - α) S'_{t-1} (2.11)
S''_t = α S'_t + (1 - α) S''_{t-1} (2.12)
a_t = S'_t + (S'_t - S''_t) = 2S'_t - S''_t (2.13)
b_t = (α / (1 - α)) (S'_t - S''_t) (2.14)
F_{t+m} = a_t + b_t m (2.15)
g. Double Exponential Smoothing: Holt's Two-Parameter Method
S_t = α X_t + (1 - α)(S_{t-1} + b_{t-1}) (2.16)
b_t = γ (S_t - S_{t-1}) + (1 - γ) b_{t-1} (2.17)
F_{t+m} = S_t + b_t m (2.18)
h. Triple Exponential Smoothing: Brown's One-Parameter Quadratic Method
b_t = (α / (2(1 - α)^2)) [(6 - 5α) S'_t - (10 - 8α) S''_t + (4 - 3α) S'''_t] (2.23)
c_t = (α^2 / (1 - α)^2) (S'_t - 2S''_t + S'''_t) (2.24)
F_{t+m} = a_t + b_t m + (1/2) c_t m^2 (2.25)
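As a concrete illustration, Brown's one-parameter linear method, equations (2.11)-(2.15), can be sketched in Python as follows. This is a minimal sketch: initializing both smoothed statistics to the first observation is our own assumption, not something stated in the paper.

```python
def brown_linear_forecast(x, alpha, m):
    """Brown's one-parameter linear method, equations (2.11)-(2.15).
    x: observed series, alpha: smoothing parameter, m: steps ahead."""
    s1 = s2 = x[0]                                # assumed initialization
    for xt in x:
        s1 = alpha * xt + (1 - alpha) * s1        # (2.11) first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2        # (2.12) second smoothing
    a = 2 * s1 - s2                               # (2.13) level
    b = alpha / (1 - alpha) * (s1 - s2)           # (2.14) trend
    return a + b * m                              # (2.15) m-step forecast
```

For a constant series the forecast reproduces the constant, while for a trending series it extrapolates the smoothed level and trend beyond the last observation.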
3. Forecasting Quality
Forecasting quality is determined by three critical parameters, namely Accuracy, Precision, and Visibility. These three parameters are the measures for selecting the best forecasting methods.
Visibility is the ability of a model to predict the future. Measures of the accuracy and precision of forecasting methods vary depending on how far ahead the method can predict. The visibility of the forecasting models in this study is 3 years, so the 4th year and beyond need re-examination.
Accuracy is the degree of closeness to the actual value. Accuracy is a must; it exists in any forecasting activity. Forecasting accuracy is represented by the forecast error, the difference between the actual value and the forecast value of the time series, which we call the error measure. The forecast error is simply e_t = Y_t - F_t, regardless of how the forecast was produced, where Y_t is the actual value at period t and F_t is the forecast for period t. The most commonly used metric is the Mean Absolute Percentage Error (MAPE) = mean(|p_t|), where the percentage error is given by p_t = 100 e_t / Y_t [12].
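The error measure just defined can be computed directly; a minimal sketch, assuming all actual values are nonzero:

```python
def mape(actual, forecast):
    """MAPE = mean(|p_t|) with p_t = 100 * e_t / Y_t and e_t = Y_t - F_t."""
    pct = [abs(100 * (y - f) / y) for y, f in zip(actual, forecast)]
    return sum(pct) / len(pct)
```

For example, mape([100, 200], [90, 220]) averages a 10% under-forecast and a 10% over-forecast to 10.0.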
Percentage errors have the advantage of being scale independent, so they are frequently used to compare forecast performance between different data series. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. Accuracy without precision is not meaningful. Forecast precision is associated with the width of the forecast interval: a very wide forecasting interval indicates low precision, so the narrower the forecast interval, the better. This study forecasts the demand variable in interval format so that the precision of the results can be assessed.
In this paper, the focus has been on fitting a forecasting model as closely as possible to the time series data. The ultimate test is of course to see how the forecasts fit the future, but such tests can only be done retrospectively. Therefore it is common practice to do "blind tests" by splitting the data into two series: the first part is used to fit the model to the data, and the recent part of the time series is used as a hold-out test, to see how accurately the model forecasts the "unknown" future. The model that most accurately describes the hold-out series is then selected for making the actual forecast. The previous examples may be regarded as this last step.
The rationale behind this practice is the belief that the model that most accurately prolongs its fitted pattern in the initial series into the hold-out series will also be the most accurate when prolonging the pattern from the complete data series into the real future. Just selecting the model that best fits the complete time series may result in selecting a model that "over-fits" the data, incorporating random fluctuations that do not repeat themselves [11]. Our stand is that we do not believe in accuracy at the model building phase; we believe in accuracy at the testing phase. Whatever the performance measure at the model building phase, it means nothing to us, because models should be tested in the way they will be used. How will we test the models? We use the hold-out set as the testing ground, and this study makes sure the model is tested in the same way.
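The blind-test procedure described above can be sketched as follows. The candidate-model interface (a function from a training series and horizon to a list of forecasts) is our own illustration, not the paper's code, and MAPE on the hold-out is used as the score:

```python
def select_by_holdout(series, models, n_holdout):
    """Fit each candidate on the initial part of the series, score it by
    MAPE on the held-out tail, and return the best-scoring model name.
    `models` maps a name to a function (train, horizon) -> forecasts."""
    train, holdout = series[:-n_holdout], series[-n_holdout:]
    scores = {}
    for name, forecast_fn in models.items():
        preds = forecast_fn(train, n_holdout)
        scores[name] = sum(abs(100 * (y - f) / y)
                           for y, f in zip(holdout, preds)) / n_holdout
    return min(scores, key=scores.get)
```

On a strongly trending series, for instance, a naive last-value model will typically score better on the hold-out than a historical-mean model.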
Sources: Regional Planning Development Board of Bandung in co-operation with the Central Bureau of Statistics of Bandung, the Bandung logistics agency, and the Department of Agriculture and Food Security of Bandung.
Look at the data plot in Figure 1. It can be seen that the time series data show an upward trend pattern. Test results also indicate stationarity of the time series data.
4.3. Selecting the Best Forecasting Model Based on the Model Building Data Set [First Stage]
Applying the forecasting methods to the first set of data (model building) yields the MAPE values. Table 2 gives a recapitulation of the MAPE values for nine models based on annual rice supply data. The MAPE has received some criticism (see Rob J. Hyndman, 2006).
Table 2 shows that the largest MAPE is produced by the double exponential smoothing model, Holt's two-parameter method. The smallest MAPE value, 5.15, is given by the simple linear regression model. The model that most accurately describes the hold-out series is then selected for making the actual forecasts [11]. We have nine candidates; how do we choose the best model of the nine? The model will be used to predict 3 years ahead, where 3 is the visibility number. We do not need a tracking signal to measure visibility: because the test results show we can predict 3 years ahead accurately, we claim 3 to be the visibility number. Why do we not just use a visibility of 3 years and look for the model with the smallest MAPE? Because a smaller MAPE is not necessarily better. We must be careful with the hold-out ground: we need a factual basis for believing the future is a copy of the hold-out, and there is still the threat of over-fit. The consideration used is that we should not choose a model whose error is too small or too big; in other words, not too fit, not too loose. This is the ultimate problem in forecasting. To overcome this difficulty we used the control chart concept from statistical quality control. A control chart can be used to screen out models with extreme values of forecast error. Following that mechanism we get the results shown in Figure 2, where a control chart is created from the nine MAPE values to get rid of the extreme values of MAPE.
[Figure 2. Control chart of the MAPE values of the nine forecasting models]
The control chart is formed by an upper control limit (UCL), a central limit (CL) and a lower control limit (LCL). The control limits are created using the following formula:
UCL = X̄_MAPE + σ_MAPE = 13.87 + 6.86 = 20.74
The control chart shows four models whose MAPE values fall outside the control limits, namely: the Naive model, the Naive Rate of Change model, Simple Linear Regression, and the Double Exponential Smoothing model (Holt's two-parameter method). These four models are not used in the next stage, while the other models (Double Moving Averages, Single Exponential Smoothing, Double Exponential Smoothing: Brown's one-parameter linear method, Triple Exponential Smoothing: Brown's one-parameter quadratic method, and Chow's Adaptive Control method) are used in the next step.
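This screening step can be sketched as follows. From UCL = 13.87 + 6.86 = 20.74 we read the control limits as the mean MAPE plus or minus one standard deviation; that one-sigma width is our assumption, not explicitly stated in the paper:

```python
def screen_models(mape_by_model):
    """Keep only models whose MAPE lies within mean +/- one (population)
    standard deviation control limits (LCL <= MAPE <= UCL)."""
    values = list(mape_by_model.values())
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    lcl, ucl = mean - sd, mean + sd
    return {m: v for m, v in mape_by_model.items() if lcl <= v <= ucl}
```

A model with an extreme MAPE, high or low, is dropped before the precision stage, which is exactly the "not too fit, not too loose" consideration.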
4.4. Forecasting Models with Good Accuracy and Precision (Hidayat Yuyun, 2013)
The models that pass selection phase 1 are then tested twice using the hold-out data set (the darkened data). A precision assessment is then performed on the 5 models that are within the control limits. As already stated, forecast precision is associated with the bandwidth of the forecast results: a wide forecasting interval indicates low precision, and the narrower the forecasting interval, the better the precision, so we select the best model using the smallest-bandwidth criterion. Precision calculations using the concept of interval forecasting are presented in Table 3.
Having obtained the bandwidth value of each model, the models are then selected based on these values. For this purpose, control charts are again used to dispose of forecasting models that have extreme bandwidth values. Here is the control chart.
Based on the control chart above, there are two models whose bandwidth is out of control, namely the Single Exponential Smoothing model and Triple Exponential Smoothing (Brown's one-parameter quadratic method); these models are not used in the later stages. There are 3 models that can be used to predict the rice supply to the city of Bandung. The models that qualify are Double Moving Averages, Double Exponential Smoothing (Brown's one-parameter linear method), and Chow's Adaptive Control. Chow's Adaptive Control is the best model because it has the smallest bandwidth.
Using Chow's Adaptive Control method, the forecast results are presented below in the form of intervals.
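The smallest-bandwidth criterion used above can be sketched as follows, with each model's forecasts represented as (lower, upper) interval pairs; the interface is our own illustration:

```python
def mean_bandwidth(intervals):
    """Average width of a model's forecast intervals (lower, upper);
    a narrower average bandwidth indicates better precision."""
    return sum(u - l for l, u in intervals) / len(intervals)

def best_by_bandwidth(intervals_by_model):
    """Pick the model whose forecast intervals are narrowest on average."""
    return min(intervals_by_model,
               key=lambda m: mean_bandwidth(intervals_by_model[m]))
```

In the paper's setting, Chow's Adaptive Control would be the model returned, since it has the smallest bandwidth among the surviving candidates.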
4.6. Conclusions
What is a crisis anyway? There is no standard definition. In this paper a crisis is defined as the situation when the police cannot control a riot caused totally or partially by rice scarcity. Riots caused by factors other than rice scarcity are outside our concern.
The weakness of the existing forecasting frameworks is that we may never see a price hike and yet still see a crisis; we may have no natural disaster but still have a crisis caused by mismanagement.
This paper outlines a different approach whereby good forecasts are found by screening out models using control charts. The advantage is that it avoids extreme error values on the test data and thus mitigates the over-fitting problem. The paper also advises making forecasts in interval format to obtain the precision of the forecasting models.
We found that Chow's Adaptive Control method shows the best results for both accuracy and precision.
Acknowledgements
We would like to thank all the people who prepared and revised previous versions of this document.
References
Abraham, Bovas and Ledolter, Johannes. (1983). Statistical Methods For Forecasting, John Wiley &
Sons, Inc. Canada.
ATENEO ECONOMICS ASSOCIATION (AEA). (2008). Analyzing the Rice Crisis in the Philippines, 31 May 2008.
Berthelsen J. (2011). Anatomy of a Rice Crisis. Global Asia, Vol. 3, No. 2: 6.
Cuesta J. (2012). Global Price Trends - Toward a New Crisis? April 2012.
David Dawe & Tom Slayton. (2010). The Rice Crisis - Markets, Policies and Food Security. The World Rice Market Crisis of 2007-2008. 2010 (FAO): 368.
Dawe D. (2010). The Rice Crisis - Markets, Policies and Food Security. Can the Next Rice Crisis Be Prevented. 2010; 393 (The Food and Agriculture Organization of the United Nations and Earthscan).
Hidayat Yuyun. (2011). Data Base Produksi Pangan Kota Bandung 2011.
Hidayat Yuyun. (2012). Data Base Produksi Pangan Kota Bandung 2012.
Hidayat, Yuyun. (2013). Rice Demand Forecasting with sMAPE Error Measures, Integrated Part of the Framework for Forecasting Rice Crises Time in Bandung-Indonesia. Proceeding: The International Conference on Applied Statistics, 2013. Department of Statistics, Universitas Padjadjaran, Indonesia.
http://oc.its.ac.id, Sawah Digusur Petani Menganggur
http://international.cgdev.orgt, Asian Rice Crisis Puts 10 Million or More at Risk: Q&A with Peter
Timmer, April 21, 2008.
Hanke, John E. and Wichern, Dean W. (2005). Business Forecasting, Eighth Edition. Pearson Education, Inc., New Jersey.
IRRI-INTERNATIONAL RICE RESEARCH INSTITUTE. (2008).Responding to the Rice Crisis.
2008 (IRRI):20.
Martumpal Chandra P.S., Drs. Yuyun Hidayat, MT.,(2013). Menentukan Model Peramalan Terbaik
Untuk Meramal Suplai Beras Kota Bandung.
Makridakis, Spyros., Wheelwright, Steven C., Mcgee, Victor E. (1999). Metode dan Aplikasi
Peramalan, .Edisi Kedua. Jakarta :Binarupa Aksara.
Maximo Torero. (2012). International Food Policy Research Institute (IFPRI), Food Security Portal, Rice Excessive Food Price Variability Early Warning System, 2012.
Parhusip U. (2012). Supply and Demand Analysis of Rice in Indonesia [1950-1972]. (Department of Agricultural Economics, Michigan State University).
Rasmus Rasmussen. (2004). On Time Series Data and Optimal Parameters, Omega 32 (2004) 111-120.
Rob J. Hyndman. (2006). Another Look at Forecast-Accuracy Metrics for Intermittent Demand, Foresight, Issue 4, June 2006.
Saifullah A. (2008). The Rice Crisis - Markets, Policies and Food Security. Indonesia's Rice Policy and Price Stabilization Programme: Managing Domestic Prices during the 2008 Crisis. (The Food and Agriculture Organization of the United Nations and Earthscan).
Won W. Koo MHK, Gordon W. Erlandson. (1985). Analysis of Demand and Supply of Rice in Indonesia. 1985 (Dept. of Agricultural Economics, North Dakota Agricultural Experiment Station, North Dakota State University): 24.
Abstract: Asset and liability management is very important for any bank or other financial institution. Accordingly, banks and other financial institutions should have a system that can formulate a function connecting entrepreneurs or owners of capital with the real business sector. This paper studies and formulates a model that deals with the problem of global optimization of a portfolio under asset-liability models. The formulation covers modeling the distribution of asset returns, the liability equation, the Value-at-Risk risk measure, and the local and global optimization equations. Furthermore, local and global optimum solutions are found using a genetic algorithm. The result expected from this research is a model that can be used effectively in the management of assets and liabilities.
1. Introduction
In general, financial institutions such as banks, insurance companies, mutual fund companies, pension fund management companies, pawnshops, and others are essentially intermediary institutions, either directly or indirectly, between depositors or owners of capital (shareholders) and employers or the real sector. A financial institution receives deposits and/or capital from shareholders, which are then distributed again in the form of loans, investments and the like, which can turn a profit. The profit gained is then partially distributed to depositors and/or shareholders in the form of interest and/or dividends. Depositors and/or shareholders are willing to give up their money because they believe that the financial institution is able to professionally choose investment alternatives that can generate attractive profits.
The investment selection process itself should be done carefully, because an error in the selection of investments may mean the financial institution cannot meet its obligations to pay interest and dividends to depositors and/or shareholders. A common investment by an asset-liability management committee is shares in the capital market. In investing, the asset-liability management committee will usually deal with investment risk. To anticipate movements of investment risk, risk can be measured with a quantile approach, better known as Value-at-Risk (VaR). Investment risk basically cannot be eliminated, but it can be minimized. The strategy often used to minimize investment risk is to form portfolios. The essence of forming a portfolio is allocating funds over a variety of investment alternatives, that is, investment diversification, so that the overall investment risk can be minimized.
Investment diversification effectively reduces risk if the investor is able to form efficient portfolios. A portfolio is categorized as efficient when it lies on the efficient surface. Efficient portfolio selection is influenced by the risk preference level of each investor, so along the efficient surface curve there will be many locally optimal (individual) portfolios. But among the local optimum portfolios there will be a global optimum. Therefore, with risk measured using Value-at-Risk (VaR), this paper formulates a portfolio that can maximize return and minimize the level of risk, both locally and globally, under asset-liability characteristics, better known as Mean-VaR portfolio optimization under asset-liability models. The portfolio model is solved using a genetic algorithm. As a numerical illustration, the models are used to analyze some stocks traded on the Indonesian capital market. The goal is to obtain the proportion of funds allocated to each analyzed stock.
2. Methodology
This section discusses several mathematical models which are very useful for formulating the Mean-VaR portfolio model under asset-liability characteristics and finding its solution. The models discussed include: calculation of stock returns, the stock return distribution model, asset-liability models, the Mean-VaR portfolio optimization model, and genetic algorithms.
Suppose P_it is the price of stock i (i = 1, ..., N, with N the number of stocks analyzed) at time t (t = 1, ..., T, with T the number of observed stock price data). The return r_it of stock i at time t can be calculated using the following equation:
r_it = ln P_it - ln P_{i,t-1} (1)
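Equation (1) can be computed directly over a price series; a minimal sketch:

```python
import math

def log_returns(prices):
    """Log returns of equation (1): r_t = ln P_t - ln P_{t-1}."""
    return [math.log(p1) - math.log(p0)
            for p0, p1 in zip(prices, prices[1:])]
```

An unchanged price gives a return of zero, and a price multiplied by e gives a return of exactly one.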
The stock return data will then be analyzed, the distribution model estimated, and the expected value and variance of each obtained as follows.
Suppose $R_{it}$ is the random variable of the return of stock $i$ ($i = 1, \ldots, N$, with $N$ the number of stocks analyzed) at time $t$ ($t = 1, \ldots, T$, with $T$ the number of stock price observations), which has a certain continuous distribution. The function $f(r_{it})$ is the probability density of the continuous random variable $R_{it}$, defined over the set of all real numbers $\mathbb{R}$, when: 1) $f(r_{it}) \geq 0$ for all $r_{it} \in \mathbb{R}$; 2) $\int_{-\infty}^{\infty} f(r_{it})\,dr_{it} = 1$; and 3) $P(a \leq R_{it} \leq b) = \int_a^b f(r_{it})\,dr_{it}$.
The expectation or mean value of $R_{it}$ is $\mu_{it} = E[R_{it}] = \int_{-\infty}^{\infty} r_{it} f(r_{it})\,dr_{it}$, and the variance of $R_{it}$ is $\sigma_{it}^2 = Var[R_{it}] = E[(R_{it} - \mu_{it})^2] = \int_{-\infty}^{\infty} (r_{it} - \mu_{it})^2 f(r_{it})\,dr_{it}$. The expectation and the variance have several important properties used in this study, which are: i) $E[pR_{it} + qR_{jt}] = pE[R_{it}] + qE[R_{jt}]$; and ii) when $R_{it}$ and $R_{jt}$ are random variables with joint probability distribution $f(r_{it}, r_{jt})$, then $Var[pR_{it} + qR_{jt}] = p^2 Var[R_{it}] + q^2 Var[R_{jt}] + 2pq\,Cov(R_{it}, R_{jt})$, where $Cov(R_{it}, R_{jt}) = E[(R_{it} - \mu_{it})(R_{jt} - \mu_{jt})]$.
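The two moment properties above can be checked numerically; the sketch below uses simulated return series (all figures are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
r_i = rng.normal(0.001, 0.02, 500)   # simulated returns of stock i
r_j = rng.normal(0.002, 0.03, 500)   # simulated returns of stock j
p, q = 0.4, 0.6

# population covariance, matching the E[(R_i - mu_i)(R_j - mu_j)] definition
cov_ij = ((r_i - r_i.mean()) * (r_j - r_j.mean())).mean()

mean_lhs = np.mean(p * r_i + q * r_j)
mean_rhs = p * r_i.mean() + q * r_j.mean()                              # property (i)

var_lhs = np.var(p * r_i + q * r_j)
var_rhs = p**2 * np.var(r_i) + q**2 * np.var(r_j) + 2 * p * q * cov_ij  # property (ii)
```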
Suppose $L_0$ is the initial liability and $L_1$ the liability value after one period. The growth of the liabilities is given by the random variable $R_L = (L_1 - L_0)/L_0$. Generally, $R_L$ depends on changes in the structure of interest rates, inflation, and real wages. Suppose also that $A_0$ is the initial value of the assets and $A_1$ the asset value after one period. Assume that all investment opportunities $i = 1, \ldots, N$ are risky. Suppose the investment strategy for the funds (deposits) is conducted by forming a portfolio $\mathbf{x}$. Therefore, the value of the assets after one period is given by $A_1 = A_0[1 + R_A(\mathbf{x})]$, where
$R_A(\mathbf{x}) = \sum_{i=1}^{N} x_i R_i$, with $x_i$ the weight of asset $i$ relative to the liabilities (Gerstner et al. [4]; Keel & Muller).
It is assumed that the vector of asset means is $\boldsymbol{\mu}^T = (\mu_1, \ldots, \mu_N)$, with $\mu_i = E(R_i)$, $i = 1, \ldots, N$. The covariance matrix of the assets is $\boldsymbol{\Sigma} = (\sigma_{ij})$, $i, j = 1, \ldots, N$, with $\sigma_{ij} = Cov(R_i, R_j)$. The asset-liability covariance vector is $\boldsymbol{\gamma}^T = (\gamma_1, \ldots, \gamma_N)$, with $\gamma_i$, $i = 1, \ldots, N$, the covariance between stock $i$ and the economic index. It is assumed that $f_0 = 1$. The portfolio weight vector is $\mathbf{x}^T = (x_1, \ldots, x_N)$ with $\sum_{i=1}^{N} x_i = 1$, or $\mathbf{x}^T\mathbf{e} = 1$, where $\mathbf{e}^T = (1, \ldots, 1)$ is the unit vector. Based on these assumptions, the surplus mean of equation (5) can be expressed as
$$\mu_S = \boldsymbol{\mu}^T\mathbf{x} - \mu_L, \qquad (6)$$
and the surplus variance as
$$\sigma_S^2 = \mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x}. \qquad (7)$$
The Value-at-Risk ($VaR$) of the portfolio surplus can be formulated as
$$VaR_S = -(\mu_S - z_\alpha\sigma_S) = z_\alpha(\mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x})^{1/2} - (\boldsymbol{\mu}^T\mathbf{x} - \mu_L), \qquad (8)$$
where $z_\alpha$ is the percentile of the standard normal distribution for the significance level $\alpha$ (Khindanova & Rachev [9]; Tsay [13]).
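Equations (6)-(8) translate directly into code. The sketch below evaluates the surplus VaR for a candidate weight vector; the function name and all numerical inputs are illustrative, not values from the paper:

```python
import numpy as np

def surplus_var(mu, Sigma, gamma, mu_L, var_L, x, z_alpha=1.645):
    """Value-at-Risk of the portfolio surplus, per equations (6)-(8)."""
    mean_s = mu @ x - mu_L                          # surplus mean, eq. (6)
    var_s = x @ Sigma @ x + var_L - 2 * gamma @ x   # surplus variance, eq. (7)
    return z_alpha * np.sqrt(var_s) - mean_s        # VaR_S, eq. (8)

mu = np.array([0.004, 0.003])                  # hypothetical asset means
Sigma = np.array([[0.0004, 0.0001],
                  [0.0001, 0.0009]])           # hypothetical covariance matrix
gamma = np.array([0.0001, 0.0002])             # hypothetical asset-liability covariances
x = np.array([0.5, 0.5])                       # candidate weights, summing to one

risk = surplus_var(mu, Sigma, gamma, mu_L=0.002, var_L=0.0003, x=x)
```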
A surplus portfolio $S^*$ is said to be (Mean-VaR) efficient if there is no surplus portfolio $S$ with $\mu_S \geq \mu_{S^*}$ and $VaR_S \leq VaR_{S^*}$ (Panjer et al. [10]; Sukono et al. [11]). Selection of an efficient surplus portfolio can be performed using the objective function $\max\{2\tau\mu_S - VaR_S\}$ subject to $\sum_{i=1}^{N} x_i = 1$, that is:
Maximize $\;2\tau(\boldsymbol{\mu}^T\mathbf{x} - \mu_L) - \{z_\alpha(\mathbf{x}^T\boldsymbol{\Sigma}\mathbf{x} + \sigma_L^2 - 2\boldsymbol{\gamma}^T\mathbf{x})^{1/2} - (\boldsymbol{\mu}^T\mathbf{x} - \mu_L)\}$
Subject to $\;\mathbf{e}^T\mathbf{x} = 1$, $\qquad (9)$
where $\tau \geq 0$ is the risk tolerance factor (or risk aversion factor).
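For a given risk tolerance, the objective of equation (9) can be evaluated at any candidate weight vector as in the following sketch (the helper name `objective` and all numerical inputs are illustrative assumptions, not values from the paper):

```python
import numpy as np

def objective(x, mu, Sigma, gamma, mu_L, var_L, tau, z_alpha=1.645):
    """Equation (9): 2*tau*mu_S - VaR_S, to be maximized over x with sum(x) = 1."""
    mean_s = mu @ x - mu_L                                     # surplus mean, eq. (6)
    sigma_s = np.sqrt(x @ Sigma @ x + var_L - 2 * gamma @ x)   # surplus std, eq. (7)
    return 2 * tau * mean_s - (z_alpha * sigma_s - mean_s)     # eq. (9)

mu = np.array([0.004, 0.003])
Sigma = np.array([[0.0004, 0.0001], [0.0001, 0.0009]])
gamma = np.array([0.0001, 0.0002])
x = np.array([0.5, 0.5])
value = objective(x, mu, Sigma, gamma, mu_L=0.002, var_L=0.0003, tau=12.8)
```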
total_gen = (number of genes in a chromosome) × (total population). The positions of the genes to be mutated are selected by generating a random number between 1 and the integer total_gen; if the random number generated is less than the mutation-rate variable ($\rho_m$), that position is selected as a sub-chromosome for mutation. Completing the mutation process completes one iteration of the genetic algorithm, also called a generation. This process is repeated until a predetermined number of generations is reached, and ultimately the chromosome that optimizes the objective function is obtained.
The pseudocode of the genetic algorithm in general is given in Figure-1 below.
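As a rough sketch of the generation loop described above, one might write the following (truncation selection, arithmetic crossover, and single-gene Gaussian mutation are illustrative design choices, not necessarily those of the paper):

```python
import random

def normalize(w):
    """Rescale a weight vector so its components sum to one."""
    s = sum(w)
    return [v / s for v in w]

def genetic_optimize(fitness, n_assets, pop_size=40, generations=60,
                     mutation_rate=0.1, seed=42):
    """Maximize `fitness` over nonnegative weight vectors x with sum(x) = 1."""
    rng = random.Random(seed)
    pop = [normalize([rng.random() for _ in range(n_assets)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # rank by objective value
        parents = pop[: pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            t = rng.random()                     # arithmetic crossover
            child = [t * ai + (1 - t) * bi for ai, bi in zip(a, b)]
            if rng.random() < mutation_rate:     # mutate a single gene
                i = rng.randrange(n_assets)
                child[i] = max(child[i] + rng.gauss(0.0, 0.1), 1e-9)
            children.append(normalize(child))    # restore the sum-to-one constraint
        pop = parents + children
    return max(pop, key=fitness)

# toy objective: favors the second asset, which has the higher "mean"
best = genetic_optimize(lambda x: 0.01 * x[0] + 0.02 * x[1], n_assets=2)
```

Keeping the top half of each generation (elitism) guarantees the best chromosome found so far is never lost, so the best fitness is non-decreasing across generations.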
3. Illustrations
The asset data analyzed in this paper are accessible through the website http://www.finance.go.id//. The data comprise six (6) stocks and the rupiah exchange rate against the US dollar, for the period January 2, 2010 to June 4, 2013. The stocks are INDF, DEWA, AALI, LSIP, ASII, and TURB, denoted respectively by the symbols $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$, while the rupiah exchange rate against the US dollar is denoted by $D_L$. The stock price data include the opening, highest, lowest, and closing prices, but only the closing prices are analyzed. In general, the stock prices and the rupiah exchange rate against the US dollar fluctuate up and down; in fact, sometimes they rise sharply and then fall again, and sometimes they drop sharply and then rise again. For both the stock price data and the exchange rate, the return values are determined using equation (1). Subsequently, the distribution of each return series, for both the stocks and the exchange rate, is estimated as discussed in Section 3.2 below.
In this section, the distributions of the return data of the stocks $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ and of the rupiah exchange rate against the US dollar $D_L$ are estimated. Estimation is performed using the Maximum Likelihood Estimator method, and the goodness of fit of the distribution models is tested using the Anderson-Darling (AD) statistic. Based on the estimated distributions, the mean estimators $\hat{\mu}_i$ and variance estimators $\hat{\sigma}_i^2$ ($i = 1, \ldots, 6$ and $L$) can be obtained. The distribution estimation, the fit tests, and the estimation of the parameter values $\hat{\mu}_i$ and $\hat{\sigma}_i^2$ ($i = 1, \ldots, 6$ and $L$) were done with the assistance of the Minitab 14 software. The results of the estimation process and fit tests are given in Table-1 below.
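In place of Minitab, the maximum-likelihood fit and the Anderson-Darling statistic for a normality check might be sketched as follows (simulated data in place of the paper's return series; a small $A^2$ value is consistent with the fitted normal model):

```python
import math
import random

def norm_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2) via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def anderson_darling_normal(data):
    """A^2 statistic for normality, with mu and sigma fitted by maximum likelihood."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in data) / n)  # MLE uses 1/n
    z = sorted(norm_cdf(v, mu, sigma) for v in data)
    s = sum((2 * i + 1) * (math.log(z[i]) + math.log(1.0 - z[n - 1 - i]))
            for i in range(n))
    return -n - s / n

random.seed(3)
sample = [random.gauss(0.002, 0.02) for _ in range(500)]  # stand-in return series
a2 = anderson_darling_normal(sample)  # small values are consistent with normality
```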
The estimated mean values $\hat{\mu}_i$ and variances $\hat{\sigma}_i^2$ will then be used to form the mean vector and covariance matrix for the portfolio optimization modeling, as discussed in Section 3.3 below.
In this section, the Mean-VaR portfolio optimization is conducted using genetic algorithms. The portfolio optimization model whose solution is determined refers to equation (9). First, using the mean column $\hat{\mu}_i$ of Table-1, the stock mean vector is formed: $\boldsymbol{\mu}^T = (0.004501\ \ 0.002873\ \ 0.001580\ \ 0.002693\ \ 0.009728\ \ 0.001510)$. Second, using the variance column $\hat{\sigma}_i^2$ of Table-1 together with the covariance values between the stocks, the covariance matrix $\boldsymbol{\Sigma}$ is formed as follows:
Third, because six stocks were analyzed, the unit vector is defined as $\mathbf{e}^T = (1\ 1\ 1\ 1\ 1\ 1)$. Fourth, based on the calculation of the covariances between the stock returns and the return of the rupiah exchange rate against the US dollar, the asset-liability covariance vector is formed: $\boldsymbol{\gamma}^T = (0.000542\ \ 0.000371\ \ 0.000447\ \ 0.000626\ \ 0.000812\ \ 0.000724)$.
Furthermore, substituting the estimated values $\hat{\mu}_L$ and $\hat{\sigma}_L$, together with the vectors $\boldsymbol{\mu}^T$, $\mathbf{e}^T$, $\boldsymbol{\gamma}^T$ and the matrix $\boldsymbol{\Sigma}$, into equation (9), the composition of the weight vector $\mathbf{x}^T = (x_1, \ldots, x_6)$ that maximizes the objective function (9) can be calculated.
The optimization is carried out using a genetic algorithm, and the results are given in Table-2 below.
Table-2 (continued). Columns: risk tolerance $\tau$; weights $x_1$-$x_6$; their sum; portfolio mean; portfolio VaR; a further reported value; and the mean-to-VaR ratio.
 τ     x1     x2     x3     x4     x5     x6     Σxi    mean   VaR    ·      ratio
14.8 0.0049 0.0361 0.1478 0.2053 0.0115 0.5944 1.0000 0.0019 0.0160 0.2601 0.1202
15.0 0.0163 0.1952 0.2865 0.1512 0.0031 0.3477 1.0000 0.0020 0.0116 0.2564 0.1760
15.2 0.0274 0.0174 0.4892 0.0142 0.0296 0.4222 1.0000 0.0019 0.0136 0.2649 0.1408
15.4 0.0165 0.0347 0.1858 0.0008 0.0454 0.7168 1.0000 0.0020 0.0190 0.2693 0.1050
15.6 0.0009 0.1823 0.0268 0.0241 0.0420 0.7239 1.0000 0.0021 0.0196 0.2683 0.1090
15.8 0.0174 0.0665 0.4197 0.0383 0.0051 0.4530 1.0000 0.0018 0.0138 0.2788 0.1280
16.0 0.0615 0.0324 0.1890 0.0549 0.0073 0.6550 1.0000 0.0019 0.0175 0.2811 0.1073
16.2 0.0168 0.0383 0.4727 0.0231 0.0173 0.4317 1.0000 0.0018 0.0137 0.2835 0.1323
16.4 0.0691 0.0605 0.3054 0.0137 0.0029 0.5485 1.0000 0.0019 0.0153 0.2862 0.1218
16.6 0.0046 0.1248 0.3112 0.1576 0.0125 0.3894 1.0000 0.0020 0.0119 0.2822 0.1688
16.8 0.0240 0.0677 0.4645 0.0163 0.0264 0.4012 1.0000 0.0019 0.0130 0.2880 0.1500
17.0 0.0279 0.0224 0.5071 0.0915 0.0173 0.3338 1.0000 0.0019 0.0120 0.2916 0.1587
17.2 0.0672 0.0360 0.3427 0.0331 0.0019 0.5190 1.0000 0.0018 0.0147 0.2989 0.1250
17.4 0.0475 0.0199 0.4232 0.0303 0.0381 0.4410 1.0000 0.0021 0.0132 0.2933 0.1558
17.6 0.0264 0.0495 0.4280 0.0132 0.0205 0.4624 1.0000 0.0019 0.0140 0.3035 0.1340
17.8 0.0134 0.2300 0.2525 0.0759 0.0038 0.4244 1.0000 0.0020 0.0135 0.3016 0.1487
18.0 0.0180 0.0311 0.3048 0.1272 0.0352 0.4838 1.0000 0.0021 0.0134 0.3022 0.1542
18.2 0.0430 0.2299 0.2554 0.0048 0.0056 0.4614 1.0000 0.0020 0.0143 0.3075 0.1416
18.4 0.0072 0.0662 0.3502 0.1341 0.0208 0.4214 1.0000 0.0020 0.0124 0.3110 0.1588
18.6 0.0418 0.0575 0.2842 0.1474 0.0012 0.4678 1.0000 0.0019 0.0132 0.3167 0.1455
18.8 0.0220 0.1401 0.3664 0.0746 0.0108 0.3862 1.0000 0.0020 0.0122 0.3172 0.1608
19.0 0.0438 0.1669 0.0734 0.0034 0.0118 0.7007 1.0000 0.0020 0.0190 0.3249 0.1037
19.2 0.0061 0.1532 0.4306 0.0386 0.0268 0.3446 1.0000 0.0020 0.0121 0.3207 0.1687
19.4 0.0161 0.0198 0.4002 0.3096 0.0018 0.2525 1.0000 0.0020 0.0103 0.3242 0.1933
19.6 0.0500 0.0314 0.4100 0.0823 0.0080 0.4184 1.0000 0.0019 0.0128 0.3328 0.1481
19.8 0.0244 0.0842 0.1338 0.1775 0.0161 0.5640 1.0000 0.0020 0.0151 0.3312 0.1357
20.0 0.0624 0.0377 0.3820 0.1233 0.0004 0.3942 1.0000 0.0019 0.0121 0.3373 0.1593
Based on the optimization process whose results are shown in Table-2, it appears that each different risk-tolerance value yields a different composition of investment allocation weights. Because the weight compositions differ, the resulting portfolio return and Value-at-Risk also differ. In the portfolio optimization process above, the global optimum of the portfolio is achieved at a risk tolerance of 12.8, with the allocation of investment funds in $S_1$, $S_2$, $S_3$, $S_4$, $S_5$, and $S_6$ being 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2795, respectively. The global optimum portfolio produces an expected portfolio return of 0.0020, with a Value-at-Risk of 0.0103. At the global optimum, the ratio between portfolio return and Value-at-Risk, 0.1967, is the largest among all the ratio values. This can of course be used as a reference for investors in making investment decisions on the stocks analyzed.
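Selecting the row with the largest return-to-VaR ratio, as described above, is a simple reduction over the table. The sketch below mixes the quoted optimum with two rows from Table-2:

```python
# (tau, expected return, VaR) triples: the 12.8 row matches the values quoted
# in the text; the other two rows are taken from Table-2
rows = [(12.8, 0.0020, 0.0103), (14.8, 0.0019, 0.0160), (16.0, 0.0019, 0.0175)]

best = max(rows, key=lambda r: r[1] / r[2])  # largest return-to-VaR ratio
```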
4. Conclusion
In this paper we used a genetic algorithm to find the global optimum of the Mean-VaR portfolio under asset-liability models. At the global optimum position, the expected portfolio return obtained is 0.0020, with a Value-at-Risk of 0.0103. The global optimum is obtained when the portfolio weights $x_1$, $x_2$, $x_3$, $x_4$, $x_5$, and $x_6$ are 0.0123, 0.0395, 0.3524, 0.3118, 0.0044, and 0.2795, respectively, with a risk tolerance of 12.8.
References
Caglayan, M.O. & Pinter, J.D. (2010). Development and Calibration of Currency Market Strategies by Global Optimization. Working Paper. Faculty of Economics and Administrative Sciences and Faculty of Engineering, Ozyeğin University, Kusbakisi Caddesi No. 2, 34662 Altunizade, Istanbul, Turkey. mustafa.caglayan@ozyegin.edu.tr, janos.pinter@ozyegin.edu.tr
Dowd, K. (2002). An Introduction to Market Risk Measurement. John Wiley & Sons, Inc., New Delhi, India.
Elton, E.J. & Gruber, M.J. (1991). Modern Portfolio Theory and Investment Analysis, Fourth Edition. John Wiley & Sons, Inc., New York.
Froot, K.A., Venter, G.G. & Major, J.A. (2007). Capital and Value of Risk Transfer. Working Paper. Harvard Business School, Boston, MA 02163. http://www.people.hbs.edu/kfroot/ (downloaded December 2012).
Gerstner, T., Griebel, M. & Holtz, M. (2007). A General Asset-Liability Management Model for the Efficient Simulation of Portfolios of Life Insurance Policies. Working Paper. Institute for