This work is subject to copyright. All rights are reserved, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, re-use of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and
storage in data banks. Duplication of this publication or parts thereof is permitted only
under the provisions of the copyright law 1965, in its current version, and permission for
use must always be obtained from UBICC Publishers. Violations are liable to prosecution
under the copyright law.
© UBICC Journal
Printed in South Korea
UBICC Publishers
Guest Editor’s Biography
Dr. Al-Dahoud is an associate professor at Al-Zaytoonah University, Amman, Jordan.
He received his PhD from La Sapienza/Italy and Kiev Polytechnic/Ukraine in 1996.
He has worked at Al-Zaytoonah University since 1996. He has served as a visiting
professor at many universities in Jordan and the Middle East, and as a supervisor of
master's and PhD degrees in computer science. He established the ICIT in 2003 and has
been its program chair ever since. He was the Vice President of the IT committee in the
Ministry of Youth/Jordan in 2005 and 2006. Al-Dahoud was the General Chair of ICITST-
2008, June 23–28, 2008, Dublin, Ireland (www.icitst.org).
He has directed and led many projects sponsored by NUFFIC/Netherlands:
- The Tailor-made Training 2007 and "On-Line Learning & Learning in an Integrated
Virtual Environment" 2008.
His hobby is conference organization, so he participates in the following conferences as
general chair, program chair, session organizer or member of the publicity committee:
- ICITs, ICITST, ICITNS, DepCos, ICTA, ACITs, IMCL, WSEAS, and AICCSA
Journal activities: Al-Dahoud has served as Editor-in-Chief, guest editor or editorial
board member of the following journals:
Journal of Digital Information Management, IAJIT, Journal of Computer Science, Int. J.
Internet Technology and Secured Transactions, and UBICC.
He has published many books and journal papers, and has participated as a speaker in
many conferences worldwide.
UBICC Journal
Volume 4, Number 3, July 2009
Ivana Berkovic
University of Novi Sad, Technical Faculty “Mihajlo Pupin”, Zrenjanin, Serbia
berkovic@tf.zr.ac.yu
Branko Markoski
University of Novi Sad, Technical Faculty “Mihajlo Pupin”, Zrenjanin, Serbia
markoni@uns.ns.ac.yu
Jovan Setrajcic
University of Novi Sad, Faculty of Sciences, Novi Sad, Serbia
bora@if.ns.ac.yu
Vladimir Brtka
University of Novi Sad, Technical Faculty “Mihajlo Pupin”, Zrenjanin, Serbia
vbrtka@tf.zr.ac.yu
Dalibor Dobrilovic
University of Novi Sad, Technical Faculty “Mihajlo Pupin”, Zrenjanin, Serbia
ddobrilo@tf.zr.ac.yu
ABSTRACT
Within the software life cycle, program testing is very important, since fulfillment of
the specification, design and application requirements must be proven. All definitions
related to program testing share the same aim: to answer the question, does the
program behave in the required way? One of the oldest and best-known methods for
constructive testing of smaller programs is symbolic program execution: one way to
prove that a given program is written correctly is to execute it symbolically. A
branched program may be translated into declarative form, i.e. into a clause sequence,
and this translation may be automated. The method comprises a transformation part
and a resolution part. This work describes a general framework for investigating the
problem of program correctness using the method of resolution refutation. It is shown
how the rules of program logic can be used in the automatic resolution procedure.
Examples of the realization in the LP prolog language are given (without limitation to
Horn clauses and without finite failure). The process of Pascal program execution in
the LP system demonstrator is shown.
Program testing is a process that must be executed as systematically as possible in
order to provide an adequate reliability and quality certificate.
Within the software lifespan, program testing is one of the most important activities,
since fulfillment of the specification, design and application requirements must be
checked. According to Mantos [2], big software producers spend about 40% of their
time on program testing. In order to test large and complicated programs, testing must
be as systematic as possible. Therefore, of all testing methods, the only one that must
not be applied is the ad hoc testing method, since it cannot verify quality and
correctness with respect to the specification, construction or application. Testing first
certifies whether the program performs the job it was intended to do, and then how it
behaves in different exploitation conditions. The key element in program testing is
therefore its specification, since, by definition, testing must be based on it. A testing
strategy includes a set of activities organized in a well-planned sequence of steps,
which finally confirms (or refutes) fulfillment of the required software quality. Errors
are made in all stages of software development and have a tendency to propagate: the
number of errors revealed may rise during design and then increase several times
during coding. According to [3], the program-testing stage costs three to five times
more than any other stage in the software life span.
In large systems, many errors are found at the beginning of the testing process, with a
visible decline in the error percentage as errors in the software are mended. There are
several different approaches to program testing; one of our approaches is given in [4].
A testing result may not be predicted in advance. On the basis of testing results it may
be concluded how many more errors are present in the software.
The usual approach to testing is based on requirements analysis. The specification is
converted into test items. Apart from the fact that incorrigible errors may occur in
programs, specification requirements are written at a much higher level than testing
standards. This means that, during testing, attention must be paid to many more details
than are listed in the specification itself. Due to lack of time or money, often only parts
of the software are tested, or only the parts listed in the specification.
The structural testing method belongs to another strategy of testing approaches, the
so-called "white box" (some authors call it transparent or glass box). The criterion of
the usual "white box" approach is to execute every executable statement during testing
and to write every result into a testing log. The basic strength of all these testings is
that the complete code is taken into account, which makes it easier to find errors, even
when software details are unclear or incomplete.
According to [5], testing may be descriptive and prescriptive. In descriptive testing,
testing of all test items is not necessary; instead, the testing log records whether the
software is hard to test, whether it is stable, the number of bugs, etc. Prescriptive
testing establishes operative steps helping software control, i.e. dividing complex
modules into several simpler ones. There are several measurements of complex
software. An important criterion in measurement selection is equality (harmony) of
applications. It is popular in commercial software application because it guarantees
the user a certain level of testing, or the possibility of so-called internal action [6].
There is a strong connection between complexity and testing, and the methodology of
structural testing makes this connection explicit [6]. Firstly, complexity is the basic
source of software errors. This holds in both an abstract and a concrete sense. In the
abstract sense, complexity above a certain point exceeds the ability of the human mind
to perform exact mathematical manipulation. Structured programming techniques may
push these barriers back, but may not remove them completely. Other factors, listed in
[7], indicate that the more complex a module is, the more probable it is that it contains
an error. In addition, above a certain complexity threshold, the probability of an error
in the module rises progressively. On the basis of this information, many software
purchasers limit the number of cycles (software module cyclicity, McCabe [8] 1) in
order to increase total reliability. On the other hand, complexity may be used directly
to distribute testing effort over input data by connecting complexity and the number of
errors, in order to aim testing at finding the most probable errors (the "lever"
mechanism, [9]). In structural testing methodology, this distribution means precisely
determining the number of testing paths needed for every software module being
tested, which is exactly the cyclomatic complexity. Other usual criteria of "white box"
testing have the important flaw that they may be fulfilled with a small number of tests
for arbitrary complexity (under any possible meaning of "complexity") [10].
1 McCabe: a measure based on the number and structure of the cycles.
The program correctness demonstration and the programming of correct programs are
two similar theoretical problems, which are very meaningful in practice [11]. The first
is resolved within program analysis and the second within program synthesis, although
because of the connection that exists between program analysis and program synthesis
a reciprocal interference of the two processes is noticed. Nevertheless, when it is a
matter of the automatic methods that are to prove correctness and of the methods of
automatic program synthesis, the difference between them is evident.
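The McCabe cyclomatic number mentioned above can be computed directly from a control flow graph as V(G) = E − N + 2P (edges, nodes, connected components). The following is our own minimal sketch, not code from the paper; the function name and the example graph are assumptions:

```python
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe cyclomatic complexity V(G) = E - N + 2P of a control flow graph."""
    return len(edges) - num_nodes + 2 * num_components

# Example: an if-then-else nested in a while loop.
# Nodes: 0 entry, 1 loop test, 2 if test, 3/4 branches, 5 join, 6 exit.
edges = [(0, 1), (1, 2), (1, 6), (2, 3), (2, 4), (3, 5), (4, 5), (5, 1)]
print(cyclomatic_complexity(edges, 7))  # 3: two decisions plus one
```

The result, 3, matches the number of linearly independent paths, i.e. the number of test paths structural testing methodology would require for this module.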
In reference [12] the initial possibility is described of automatic synthesis of simple
programs using the resolution procedure of automatic demonstration of theorems
(ADT), more precisely the resolution procedure of deducing an answer to a request.
The demonstration that a request of the form (∃x)W(x) is a logical consequence of the
axioms that determine the predicate W and the (elementary) program operators
provides that the variable x in the response obtains the value that represents the
requested composition of (elementary) operators, i.e. the requested program. The
works of Z. Manna [13] consider in detail the problems of program analysis and
synthesis using the resolution procedure of demonstration and deduction of the
response.
A different research tendency is the axiomatic definition of the semantics of the
programming language Pascal in the form of specific deduction rules of program
logic, described in [14,15]. Although the concepts of the two mentioned approaches
are different, they share the same characteristic: a deductive system on a predicate
language. In fact, it is a matter of realization in a special predicate calculus that is
based on deduction in a formal theory. With this, the problem of program correctness
is related to automatic checking of (existing) proofs of mathematical theorems. The
two approaches mentioned above and their modifications are based on that kind of
concept.

2. DESCRIPTION OF THE METHOD FOR ONE-PASSAGE SYMBOLIC
PROGRAM TESTING

The method is based on the transformation of a given Pascal program into a sequence
of prolog clauses, which comprise the axiomatic base for the deductive resolution
mechanism of the BASELOG system [10]. For a given Pascal program, by a single
passage through the resolution procedure of the BASELOG system, all possible
outputs of the Pascal program are obtained in symbolic shape, together with the paths
leading to every one of them. Both parts, the transformation part and the resolution
part, are completely automated and naturally attached to each other. When the
resolution part has finished, a sequence of paths and symbolic outputs is read out for
the given input Pascal program. The transformation of programming structures and
operators into a sequence of clauses is realized by models depending on the concrete
programming language. Automation covers the branching IF-THEN and IF-THEN-
ELSE structures, as well as the WHILE-DO and REPEAT-UNTIL cyclic structures,
which may be mutually nested. This paper reviews the possibilities of working with
one-dimensional sequences and programs within a Pascal program. The number of
passages through cyclic structures must be fixed in advance using a counter.
During the testing process of a given (input) Pascal program both parts are involved,
transformation and resolution, in the following way. The transformation part:
• ends by producing a sequence of clauses, or
• demands forced termination, depending on the input Pascal program.
Impossibility of generating a sequence of clauses in the transformation part indicates
that the Pascal program has incorrect syntax, i.e. that there are mistakes in syntax or in
logical structure (destructive testing). In this case, since the axiomatic base was not
constructed, the resolution part is not activated and the user is prompted to mend the
Pascal program's syntax. In the case that the transformation part finishes by generating
a sequence of clauses, the resolution part is activated with the following possible
outcomes:
Ra) it ends by giving a list of symbolic outputs and corresponding Pascal program
routes, or
Rb) it ends with a message that it could not generate the list of outputs and routes, or
Rc) it does not end and demands forced termination.
Ra) By comparing the symbolic outputs and routes with the specification, the user may
• declare the given Pascal program correct, if the outputs are in accordance with the
specification (constructive testing), or
• if a discrepancy of some symbolic expression from the specification has been found,
conclude that there is a semantic error in the Pascal program (destructive testing) at
the corresponding route.
Rb) Impossibility of generating a list of symbolic expressions in the resolution part
means that there is a logical-structural error in the Pascal program (destructive
testing).
Rc) A too-long run or an (unending) cycle means that there is a logic and/or semantic
error in the Pascal program (destructive testing).
In this way, by using this method, the user may be assured of the correctness of a
Pascal program or of the presence of syntax and/or logic-structure semantic errors. As
opposed to existing methods of symbolic testing of programs, an important feature of
this method is the single passage, provided by a specific property of OL-resolution
[11] with marked literals, on which the resolution module of the BASELOG system is
founded.
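The transformation part's mapping of Pascal control structures into the functional notation used by the clauses (s for sequence, d for assignment, ife, if, wh, ru) can be sketched mechanically. This is our own illustration under an assumed tuple encoding of the program tree, not the actual BASELOG transformer:

```python
# Each Pascal construct maps to one functional term:
#   v := e              -> d(v,e)         begin S1; S2 end     -> s(S1,S2)
#   if B then S1 else S2 -> ife(B,S1,S2)  while B do S         -> wh(B,S)
#   repeat S until B     -> ru(S,B)
def to_term(node):
    kind = node[0]
    if kind == "assign":
        return f"d({node[1]},{node[2]})"
    if kind == "seq":
        return f"s({to_term(node[1])},{to_term(node[2])})"
    if kind == "ifelse":
        return f"ife({node[1]},{to_term(node[2])},{to_term(node[3])})"
    if kind == "while":
        return f"wh({node[1]},{to_term(node[2])})"
    if kind == "repeat":
        return f"ru({to_term(node[1])},{node[2]})"
    raise ValueError(f"unknown construct: {kind}")

# i := 0; p := x; while b do begin i := t1; p := t2 end
prog = ("seq", ("assign", "i", "0"),
        ("seq", ("assign", "p", "x"),
         ("while", "b", ("seq", ("assign", "i", "t1"), ("assign", "p", "t2")))))
print(to_term(prog))  # s(d(i,0),s(d(p,x),wh(b,s(d(i,t1),d(p,t2)))))
```

The resulting term is exactly the kind of constant that stands for the whole program in the clauses derived later.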
… where Im (implication) is a predicate symbol, and P, S, R, Q are variables.

P2 (consequence rule): from R⇒P and {P}S{Q} infer {R}S{Q};
we write Im(R,P) ∧ K(P,S,Q) ⇒ K(R,S,Q).

P3: from {P}S1{R} and {R}S2{Q} infer {P}begin S1; S2 end{Q};
K(P,S1,R) ∧ K(R,S2,Q) ⇒ K(P,s(S1,S2),Q),
where s is a function symbol, and P, S1, S2, R, Q are variables.

P4: from {P∧B}S1{Q} and {P∧~B}S2{Q} infer {P}if B then S1 else S2{Q};
K(k(P,B),S1,Q) ∧ K(k(P,n(B)),S2,Q) ⇒ K(P,ife(B,S1,S2),Q),
where k, n, ife are function symbols.

P5: from {P∧B}S{Q} and P∧~B ⇒ Q infer {P}if B then S{Q};
K(k(P,B),S,Q) ∧ Im(k(P,n(B)),Q) ⇒ K(P,if(B,S),Q),
where k, n, if are function symbols.

P6: from {P∧B}S{P} infer {P}while B do S{P∧~B};
K(k(P,B),S,P) ⇒ K(P,wh(B,S),k(P,n(B))),
where k, n, wh are function symbols.

P7: from {P}S{Q} and Q∧~B ⇒ P infer {P}repeat S until B{Q∧B};
K(P,S,Q) ∧ Im(k(Q,n(B)),P) ⇒ K(P,ru(S,B),k(Q,B)),
where k, n, ru are function symbols.

Transcription of other programming logic rules is also possible.

Axiom A(τ):
A1 K(t(P,Z,Y),d(Z,Y),P) (assignment axiom)

The formal theory τ is determined by the quadruple (S(τ), F(τ), A(τ), R(τ)), where
S(τ) is the set of symbols (alphabet) of the theory τ, F(τ) is the set of formulas
(correct words in the alphabet), A(τ) is the set of axioms of the theory τ (A⊂F), and
R(τ) is the set of derivation rules of the theory τ. A deduction (proof) of the formula B
in the theory τ is a finite sequence B1, B2, ..., Bn (Bn is B) of formulas of this theory
such that every element Bi of the sequence is either an axiom or is deduced by
application of some deduction rule Ri∈R from some preceding elements of the
sequence. In that case B is said to be a theorem of the theory τ, and we write ⊢τ B [17].

Suppose S(τ) is the set of symbols and F(τ) the set of formulas of a predicate calculus.
Then the deduction rules R(τ) can be written in the form Bi1 ∧ Bi2 ∧ ... ∧ Bik ⇒ Bi
(Ri), where Bij and Bi are formulas from F(τ). Let κ be a first-order predicate
calculus; then it is valid:

R(τ), A(τ) ⊢κ B iff ⊢τ B (1)

i.e. B is a theorem of the theory τ if and only if B is deducible in the calculus κ from
the set R(τ) ∪ A(τ).

Let S be a special predicate calculus (first-order theory) with its own axioms
A(S) = R(τ) ∪ A(τ) (the deduction rules of S are the deduction rules of the calculus
κ). Then A(S) ⊢κ B iff ⊢S B, so that (1) can be written:

⊢S B iff ⊢τ B (2)

This means that the deduction of a theorem B in the theory τ can be replaced by a
deduction in the special predicate calculus S, which has its own axioms
A(S) = R(τ) ∪ A(τ).

We assume that s is a syntax unit whose (partial) correctness is being proven for a
certain input predicate U and output predicate V. Within the theory S one proves

⊢S (∃P)(∃Q)K(P,s,Q)

where s is a constant for presentation of the given program. The program is written in
functional notation with the symbols: s (sequence), d (assignment), ife (if-then-else),
if (if-then), wh (while), ru (repeat-until). To the starting set of axioms A(S), the
negation of the statement is added. The result of refutation by the resolution procedure
has the form /Im(Xθ,Yθ)/ ∨ Odgovor(Pθ,Qθ) (Odgovor = "answer"), where Xθ, Yθ,
Pθ, Qθ are the values for which the refutation succeeded, meaning that for these
values a proof is found. But this does not yet mean that the given program is partially
correct. It is necessary to establish that the input and output predicates U, V are in
accordance with Pθ, Qθ, and also that Im(Xθ,Yθ) is really fulfilled for the domain
predicates and terms. Accordance means confirmation that
(U ⇒ Pθ) ∧ (Qθ ⇒ V) ∧ (Xθ ⇒ Yθ) is valid. There are two ways to establish
accordance: manually, or by an automatic resolution procedure. Realization of these is
not possible within the theory S, but it is possible within a new theory, which is
defined by the predicates and terms that are part of the program s and the input-output
predicates U, V. Within this theory U, P, Q, V, X, Y are not variables, but formulae
with domain variables, domain terms and domain predicates. This method concerns
derivation within a special predicate calculus based on deduction within a formal
theory; thus the program correctness problem is associated with automatic checking of
(existing) proofs of mathematical theorems.

Implementation of theorem-proving programs was at first confined to mathematics.
When it was seen that other problems could be presented as theorems to be proven,
application possibilities were found in areas such as program correctness, program
generation, query languages over relational databases, and electronic circuit design.
As for the formal presentation in which a theorem is being proven, it may be
propositional calculus, first-order predicate calculus, or higher-order logic. Theorems
in propositional calculus are simple for contemporary provers, but propositional
calculus is not expressive enough; higher-order logics are extremely expressive, but
have a number of practical problems. Therefore first-order predicate calculus is
probably the most used. Regarding the technique of automatic theorem proving, most
investigations have been done on resolution rules of derivation; resolution is a very
important derivation rule with the completeness property.

Now we can formulate the following task: a sequence of formulas B1, B2, ..., Bn
(Bn is B, Bi different from B for i<n) of the theory τ is given. Demonstrate that this
sequence is a deduction of the formula B in the theory τ.

One way of solving this problem is to verify that the given sequence corresponds to
the definition of deduction in the theory τ. The other way is to use (1), i.e. (2): if we
demonstrate that

R(τ), A(τ) ⊢κ B1 ∧ B2 ∧ ... ∧ Bn (3)

that is sufficient to conclude that B1, B2, ..., Bn is a deduction in τ. It is also sufficient
to demonstrate that R(τ), A(τ) ⊢κ Bi for i = 1, 2, ..., n; with this, (3) is demonstrated.
The demonstration of (3) can be carried out by resolution refutation of the set
R(τ) ∪ A(τ) ∪ {~B1 ∨ ~B2 ∨ ... ∨ ~Bn}, or by n refutations of the sets
R(τ) ∪ A(τ) ∪ {~Bi}.

Notice that for the conclusion that B1, B2, ..., Bn is a deduction in τ it is not enough to
demonstrate R(τ), A(τ) ⊢κ (B1 ∧ B2 ∧ ... ∧ Bn−1 ⇒ Bn), i.e. it is not enough to carry
out a resolution refutation of the set R(τ) ∪ A(τ) ∪ {B1, B2, ..., Bn−1} ∪ {~Bn},
because this demonstrates only that Bn is deducible in τ under the supposition that
B1 ∧ B2 ∧ ... ∧ Bn−1 is deducible in τ. Whenever B1, B2, ..., Bn really is a deduction
in τ, (B1 ∧ B2 ∧ ... ∧ Bn−1 ⇒ Bn) will be correct, but the converse is not always
valid: it can happen that (B1 ∧ B2 ∧ ... ∧ Bn−1 ⇒ Bn) is deducible in τ, but that
B1 ∧ B2 ∧ ... ∧ Bn−1 is not deducible in τ (see example 1’). Likewise, the
demonstration of R(τ), A(τ) ⊢κ Bn, which can be realized by resolution refutation of
the set R(τ) ∪ A(τ) ∪ {~Bn}, means that Bn is a theorem of τ, i.e. that Bn is deducible
in τ, but this is not enough for the conclusion that B1, B2, ..., Bn is a deduction in τ
(except for the case that …)
The LP system generates the refutation. Among the input clauses are the consequence
rule ~IM(Y1,V1)~K(X1,U0,Y1)K(X1,U0,V1)&, the assignment axiom
K(t(X1,Z1,Y1),d(Z1,Y1),X1)&, and the added negation ~O(X1,V1)&. The refutation
starts from the central clause

:/O(X1,V1)~K(X1,s(h,g),Y1)~K(Y1,w(b,s(d(i,t1),d(p,t2))),V1)&

and proceeds through resolvents at LEVEL=1 up to LEVEL=12, reporting:

number of resolvents generated = 10
maximal obtained level = 11
level where the empty clause is generated = 11
DEMONSTRATION IS PRINTED

[The full level-by-level listing of lateral clauses and resolvents appears as
two-column printer output in the original and is abbreviated here.]
The final resolvent, at LEVEL=12, is

/IM(k(X1,b),t(t(X1,i,t1),p,t2))O(t(t(X1,i,0),p,x),k(X1,ng(b)))&

Now we need to prove compliance, i.e. that

(Xθ ⇒ Yθ) ∧ (U ⇒ Pθ) ∧ (Qθ ⇒ V)

is in effect. Passing to the domain level we obtain

(U ⇒ X1[i→0, p→x]) ∧ (X1 ∧ (i<=n) ⇒ X1[i→i+1, p→p/i]) ∧ (X1 ∧ ¬(i<=n) ⇒ V).

Putting X1: p = x · ∏_{j=0}^{i} (j − 1), we obtain the following correct implications:
be realized. The added axioms describe characteristics of domain predicates and
operations and represent necessary knowledge that is to be communicated to the
deductive system. The existing results described above imply that kind of knowledge,
but this appears to be a notable difficulty in practice.

ACKNOWLEDGEMENTS
The work presented in the paper was developed within the IT Project "WEB portals
for data analysis and consulting," No. 13013, supported by the government of the
Republic of Serbia, 2008–2010.

7. REFERENCES

[1] Marks, David M., "Testing Very Big Systems", New York: McGraw-Hill, 1992.
[2] Manthos A., Vasilis C., Kostis D., "Systematically Testing a Real-Time Operating
System", IEEE Trans. Software Eng., 1995.
[3] Voas J., Miller W., "Software Testability: The New Verification", IEEE Software,
1995.
[4] Perry, William E., "Year 2000 Software Testing", New York: John Wiley & Sons,
1999.
[5] Whittaker J.A., Agrawal K., "A Case Study in Software Reliability Measurement",
Proceedings of Quality Week, paper no. 2A2, San Francisco, USA, 1995.
[6] Zeller A., "Yesterday, my program worked. Today, it does not. Why?", Passau,
Germany, 2000.
[7] Markoski B., Hotomski P., Malbaski D., Bogicevic N., "Testing the Integration
and the System", International ZEMAK symposium, Ohrid, FR Macedonia, 2004.
[8] McCabe, Thomas J., Butler, Charles W., "Design Complexity Measurement and
Testing", Communications of the ACM 32, 1992.
[9] Markoski B., Hotomski P., Malbaski D., "Testing the Complex Software",
International ZEMAK symposium, Ohrid, FR Macedonia, 2004.
[10] Chidamber S., Kemerer C., "Towards a Metrics Suite for Object Oriented
Design", Proceedings of OOPSLA, July 2001.
[11] Whittaker J.A., "What is Software Testing? And Why Is It So Hard?", IEEE
Software, vol. 17, no. 1, 2000.
[12] Nilsson N., "Problem-Solving Methods in Artificial Intelligence", McGraw-Hill,
1980.
[13] Manna Z., "Mathematical Theory of Computation", McGraw-Hill, 1978.
[14] Floyd R.W., "Assigning Meanings to Programs", in Proc. Symp. in Applied
Math., Vol. 19, Mathematical Aspects of Computer Science, Amer. Math. Soc.,
pp. 19–32, 1967.
[15] Hoare C.A.R., "Proof of a Program: FIND", Communications of the ACM 14,
pp. 39–45, 1971.
[16] Hoare C.A.R., Wirth N., "An Axiomatic Definition of the Programming
Language Pascal", Acta Informatica 2, pp. 335–355, 1973.
[17] Markoski B., Hotomski P., Malbaski D., Obradovic D., "Resolution Methods in
Proving the Program Correctness", YUJOR (an international journal dealing with
theoretical and computational aspects of operations research, systems science and
management science), Beograd, Serbia, 2007.
[18] Myers G.J., "The Art of Software Testing", New York: Wiley, 1979.
[19] Chan F., Chen T., Mak I., Yu Y., "Proportional Sampling Strategy: Guidelines
for Software Test Practitioners", Information and Software Technology, Vol. 38,
No. 12, pp. 775–782, 1996.
[20] Beck K., "Test Driven Development: By Example", Addison-Wesley, 2003.
[21] Runeson P., Andersson C., Höst M., "Test Processes in Software Product
Evolution—A Qualitative Survey on the State of Practice", J. Software Maintenance
and Evolution, vol. 15, no. 1, 2003, pp. 41–59.
[22] Rothermel G. et al., "On Test Suite Composition and Cost-Effective Regression
Testing", ACM Trans. Software Eng. and Methodology, vol. 13, no. 3, 2004,
pp. 277–331.
[23] Tillmann N., Schulte W., "Parameterized Unit Tests", Proc. 10th European
Software Eng. Conf., ACM Press, 2005, pp. 253–262.
[24] Charlton N., "Program Verification with Interacting Analysis Plugins", Formal
Aspects of Computing, Vol. 19, Iss. 3, p. 375, London, Aug 2007.
Ahid Al-Shbail
Al al-bayt University, Jordan
ahid_shbail@yahoo.com
Adnan M. Al-Smadi
Al al-bayt University, Jordan
smadi98@aabu.edu.jo
ABSTRACT
Detection tools such as virus scanners have performed poorly, particularly when
facing previously unknown viruses or novel variants of existing ones. This study
proposes an efficient and novel method based on arbitrary-length control flow
graphs (ALCFG) and similarity of the aligned ALCFG matrices. The metamorphic
viruses are generated by two tools, namely the Next Generation Virus Creation Kit
(NGVCK0.30) and the Virus Creation Lab for Windows 32 (VCL32). The results
show that all the generated metamorphic viruses can be detected by using the
suggested approach, while fewer than 62% are detected by well-known antivirus
software.
permutation methods[6]. In 2006 Rodelio and others Output: op matrix of size n×4 (this matrix contains
use code transformation method for undoing the the jump instructions and the labels)
previous transformations done by the virus. Code 1- Load the matrix op[n][4] from the file x. Where
transformation is used to convert mutated the opcode i, is stored at the row i, the column
instructions into their simplest form, where the op[i][1] will be used to store the labels (for
combinations of instructions are transformed to an simplicity we will consider each label as an
equivalent but simple form [7]. Mohamed and others opcode), the column op[i][2] will be used to
use engine-specific scoring procedure that scans a store the instructions (mov ,jmp, add,…). The
piece of code to determine the likelihood [8]. column op[i][3] will be used to store the first
Bruschi, Martignoni, and Monga proposed a detection method based on control flow graph matching. Mutations are eliminated through code normalization, and the problem of detecting viral code inside an executable is reduced to a simpler problem [9]. Wong and Stamp experimented with Hidden Markov models to detect metamorphic malware. They concluded that, in order to avoid detection, metamorphic viruses also need a degree of similarity with normal programs, which is very challenging for the virus writer [10].

3. THE PROPOSED METHOD

This section introduces new procedures to extract a partial control flow graph of any binary file. Two main points are considered during the development of the suggested algorithms: the first is to reorder the flow of the code by handling "jmp" and "call" instructions, and the second is to use one symbol for all alternative and equivalent instructions. The output of Algorithm 1 is stored in the matrix ALCFG and contains an arbitrary number of nodes. Moreover, the sequence of the nodes is represented by symbols to be used in the similarity measurement.

Algorithm 1: Construction of an Arbitrary Length Control Flow Graph (ALCFG)
Input: Disassembled portable executable file (x), the number of file lines (n), the start location (j), the required number of nodes (m).
Output: ALCFG m×m matrix and a node sequence array NodeSeq containing m nodes
Steps:
1- Call prepare op matrix (the size of the op matrix is n×4)
2- Call prepare the matrices Labels and JumpTo (the sizes are c×2 and e×3)
3- Call construct the matrix ALCFG

Algorithm 2: Prepare op matrix (the size of the op matrix is n×4)
Input: Disassembled portable executable file (x), the number of file lines (n), the start location (j), the required number of nodes (m).
... operand; the column op[i][4] will be used to mark the rows that are processed (assume the default value is 0).
2- Delete the rows that do not contain label or jump instructions (jump instructions such as call, ret, jmp, ja, jz, je, ...). In this step a special action must be taken if the "ret" instruction is directly preceded by a push instruction; in this case "ret" is replaced by "jmp" and its operand is replaced by the value which was pushed.
3- Rename all the conditional jump instructions to the names in Table 1.
4- Add to the end of the matrix a row containing op[n+1][2]="end"
5- Delete the rows that contain an inaccessible label (this means that op[i][3] does not equal this label for any i)
6- Delete the rows that contain an unreachable operand (this means that op[i][1] does not equal this operand for any i)

Algorithm 3: Prepare the matrices Labels and JumpTo
Input: op matrix of size n×4
Output: The matrix Labels of size c×2 and the matrix JumpTo of size e×3

Do the following while count <= m
  If op[j][4]=1 then
    stack2.pop j
    if j = -1 then stack1.pop j
    if j = -1 then break
  else if op[j][2]="call" then
    stack1.push j+1; j=z+1 where op[z][1]=op[j][3]
  else if op[j][2]="ret" then
    stack1.pop j
  else if op[j][2]="jmp" then
    j=z+1 where op[z][1]=op[j][3]
  else if op[j][2]="A", "N", ... or "L" then
    stack2.push z, where op[z][1]=op[j][3]
    JumpTo[e][1]=op[j][3]; JumpTo[e][2]=m; JumpTo[e][3]=op[j][2]
    m=m+1; e=e+1; j=j+1
  else if op[j][1] <> "null" then  // label
    Labels[c][1]=op[j][1]; Labels[c][2]=m
    c=c+1; m=m+1; j=j+1
  else if op[j][2]="end" and m<=count then
    stack2.pop j
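As a concrete illustration of the row-filtering described in the normalization steps above (keep only label/jump rows, and rewrite a "push x; ret" pair as "jmp x"), the following minimal sketch may help. The op-row layout [label, opcode, operand, flag] follows the paper; the function name and all sample rows are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the normalization pass; sample data is invented.
JUMPS = {"call", "ret", "jmp", "ja", "jz", "je", "jne", "jb"}

def normalize(op):
    out = []
    for i, (label, opcode, operand, flag) in enumerate(op):
        # special case: "ret" directly preceded by "push x" becomes "jmp x"
        if opcode == "ret" and i > 0 and op[i - 1][1] == "push":
            out.append([label, "jmp", op[i - 1][2], flag])
        # otherwise keep only jump instructions and labeled rows
        elif opcode in JUMPS or label is not None:
            out.append([label, opcode, operand, flag])
    return out

ops = [[None, "mov", "eax", 0],
       ["L1", "add", "ebx", 0],
       [None, "push", "L1", 0],
       [None, "ret", "", 0]]
print(normalize(ops))  # -> [['L1', 'add', 'ebx', 0], [None, 'jmp', 'L1', 0]]
```

The plain "mov" and "push" rows are dropped, while the labeled row survives and the push/ret pair collapses into an unconditional jump, mirroring step 2.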
The following is the skeleton signature of Z0mbie III, which consists of the sequence of the first 10 nodes NodeSeq and the matrix ALCFG:

NAHEKKKKAA

ALCFG(10×10) = [sparse 10×10 0/1 adjacency matrix; individual entries omitted]

4. SIMILARITY MEASURE FUNCTION

To detect the metamorphic viruses that preserve their control flow graph during propagation, we can simply compare ALCFG matrices; but if the control flow graph is changed during propagation, a similarity measure function must be used. Unfortunately, the current similarity measurement functions, such as Euclidean distance, Canberra distance, or even measurements based on neural networks, cannot be used; the reason is the random insertion and deletion in the node sequence of the generated control flow graph. In this section we propose a new similarity measure function to detect the metamorphic viruses. Consider the following definitions:

Definition 2: The diagonal sub-block of size m×m of the matrix ALCFG, which has size n×n, is the matrix A, denoted by A ⊑ ALCFG, where the first row and column start at i+1<n, the last row and column end at i+m<=n, and i is any integer number less than n.

Definition 3: Let ALCFGp denote the ALCFG matrix of size n×n of the program P, and ALCFGV denote the ALCFG matrix of size m×m of the virus V.

Definition 4: The matrices ALCFGS and ALCFGV are similar if the following conditions are satisfied:
1- Alignment(NodeSeqS, NodeSeqV) = c ≥ T
2- DelMis&Comp(ALCFGS, ALCFGV) = 1

We will denote the similarity measure function by ϕ, such that ϕ(ALCFGs, ALCFGv) = c if both conditions are satisfied, and 0 otherwise.

Definition 5: The program P is infected by the virus V if and only if ϕ(ALCFGs, ALCFGv) = c, where ALCFGs ⊑ ALCFGp.

For simplicity we will focus on viruses that use simple entry point infection, therefore i=0. However, our approach can be applied to any obfuscated entry point.

Algorithm 5: Check whether the program P is infected by the virus V or not.
Input: The program P, the matrix ALCFGV, and a threshold T, where V is a virus in the database
Output: "Yes" if infected, "No" if the program is not infected
1- Disassemble the program P (in this study the software IDA Pro 4.8 is used, but this process can be implemented and embedded in a single piece of software)
2- Call Algorithm 1 to find ALCFGp and NodeSeqp (in this study the first sub-block is processed, which is equivalent to the simple entry point; however, to check all possible entry points we would have to process every m×m sub-block of the matrix ALCFGp)
3- Call Algorithm 6 to find the percentage c and the sequence A
4- If c ≥ T then
     Call Algorithm 7 to delete the mismatched nodes and compare the matrices
     If Algorithm 7 returns 1 then
       Return "Yes"
     Else
       Return "No"
   Else
     Return "No"

Algorithm 6: The alignment of two sequences, Alignment( , )
Input: The sequences NodeSeqS and NodeSeqV
Output: The percentage c and the sequence A, where c represents the percentage of matched nodes out of the total number of nodes, and A contains the indices of the mismatched nodes
1- Apply the Needleman-Wunsch-Sellers algorithm to the sequences NodeSeqS and NodeSeqV
2- Store the indices of the mismatched nodes in the array A
3- Find c = number of matched nodes × 100 / total number of nodes

Algorithm 7: Delete the mismatched nodes and compare, DelMis&Comp( , )
Input: ALCFGS, ALCFGV, and the mismatched sequence A.
Output: 0 or 1
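The two subroutines just declared can be sketched minimally as follows. This is an illustration, not the authors' implementation: plain Needleman-Wunsch global alignment with assumed scores (match +1, mismatch/gap -1) stands in for the Needleman-Wunsch-Sellers step, 0-based indexing is assumed, and the delete-and-compare step simply drops the mismatched indices from both matrices before testing equality.

```python
# Hedged sketch of the alignment and delete-and-compare ideas; scoring
# parameters and index conventions are assumptions.
def align(s, v, match=1, penalty=-1):
    """Global alignment; returns (c, mismatched indices in s)."""
    n, m = len(s), len(v)
    F = [[0] * (m + 1) for _ in range(n + 1)]          # DP score table
    for i in range(n + 1): F[i][0] = i * penalty
    for j in range(m + 1): F[0][j] = j * penalty
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + (match if s[i-1] == v[j-1] else penalty),
                          F[i-1][j] + penalty,
                          F[i][j-1] + penalty)
    i, j, matched, mism = n, m, 0, []                  # traceback
    while i > 0 and j > 0:
        if F[i][j] == F[i-1][j-1] + (match if s[i-1] == v[j-1] else penalty):
            if s[i-1] == v[j-1]: matched += 1
            else: mism.append(i - 1)                   # mismatch with a symbol
            i, j = i - 1, j - 1
        elif F[i][j] == F[i-1][j] + penalty:
            mism.append(i - 1); i -= 1                 # mismatch with a gap
        else:
            j -= 1
    return matched * 100 // max(n, m), mism

def del_mis_and_comp(alcfg_s, alcfg_v, mismatched):
    """Drop the mismatched row/column indices from both matrices, compare."""
    def drop(mat, idxs):
        keep = [k for k in range(len(mat)) if k not in idxs]
        return [[mat[r][c] for c in keep] for r in keep]
    return 1 if drop(alcfg_s, set(mismatched)) == drop(alcfg_v, set(mismatched)) else 0

c, mismatches = align("NAHAEKKKKA", "NAHEKKKKAA")
print(c)  # 90, matching the paper's worked example
```

Running the alignment on the paper's two skeleton signatures reproduces the reported c = 90.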
1- If a mismatch is with a gap, then delete row i and column i from the matrix ALCFGS for every i in the mismatched nodes, and delete the last rows and columns from ALCFGV, where the number of deleted rows and columns equals the number of gaps
2- If a mismatch is with a symbol, then delete row i and column i from both matrices ALCFGS and ALCFGV for every i in the mismatched nodes.
3- Rename the matrices to ALCFGs^d and ALCFGv^d.
4- If ALCFGs^d = ALCFGv^d then
     Return 1
   Else
     Return 0

The most expensive step in the previous algorithms is the Needleman-Wunsch-Sellers algorithm, which can be implemented in m^2 operations, so the total complexity of all procedures is O(n)+O(m^2). Therefore the suggested method is much faster than the previous methods; for example, finding the isomorphic subgraph in [9] is a well-known NP-complete problem.

To illustrate the suggested similarity measure function, assume that we want to check whether the program P is infected by the virus Z0mbie III or not. Assume that the threshold is T=70 and m=10 (note that to reduce false positives we must increase the threshold and the number of processed nodes). The first 10 nodes extracted from P and the ALCFG matrix (the skeleton signature of P) are:

NAHAEKKKKA

ALCFGs = [sparse 10×10 0/1 adjacency matrix; individual entries omitted]

By using Algorithm 6, the nodes of P are aligned with the nodes of Z0mbie III as follows:

NAHAEKKKK A -
NAH -EKKKKA A

c = number of matched nodes × 100 / total number of nodes = 9 × 100 / 10 = 90 > T.

The mismatches occur with gaps; therefore column 4 and row 4 must be deleted from ALCFGS, and column 10 and row 10 must be deleted from ALCFGV. Since the matrices after deletion are identical, we conclude that the program P is infected by a modified version of Z0mbie III, and ϕ(ALCFGs, ALCFGv) = 90%.

5. IMPLEMENTATION

The metamorphic viruses are taken from the VX Heavens search engine and generated by two tools, namely the Next Generation Virus Creation Kit (NGVCK0.30) and the Virus Creation Lab for Windows 32 (VCL32) [11]. Since the output of the kits was already in asm format, we used Turbo Assembler (TASM 5.0) for compiling and linking the files to generate exes, which are later disassembled using IDA Pro 4.9 Freeware Version. Algorithm 4 is implemented using MATLAB 7.0. NGVCK0.30 has an advanced assembly source-morphing engine, and all variants of the viruses generated by NGVCK have the same functionality but different signatures. In this study, 100 metamorphic viruses are generated using NGVCK: 40 viruses are used for analysis and 60 viruses are used for testing; let us call the first group A1 and the second group T1. After applying the suggested procedures on A1, we note that all the viruses in A1 have just seven different skeleton signatures when T=100 and m=20, four different skeletons when T=80 and m=20, and three different skeletons when T=70 and m=20. The T1 group is tested using 7 antivirus products; the results are obtained using the on-line service [12]. 100% of the generated viruses are recognized by the proposed method and by McAfee, but none of the viruses are detected by the rest of the software. Another 100 viruses are generated using VCL32, all of them obfuscated manually by inserting dead code, transposing code, reassigning registers, and substituting instructions. The generated viruses are divided into two groups, A2 and T2: A2 contains 40 viruses for analysis and T2 contains 60 viruses for testing. Again, 100% of the generated viruses are detected by the proposed method, 84% are detected by Norman, 23% are detected by McAfee, and 0% are detected by the rest of the software. Figure 5 shows the average detection percentage of the metamorphic viruses in T1 and T2.

6. CONCLUSION

Antivirus software tries to detect viruses using various static and dynamic methods; however, all the existing methods are inadequate. To develop new, reliable antivirus software, some problems must be fixed. This paper suggested new procedures to detect metamorphic viruses by using arbitrary-length control flow graphs and node alignment. The suspected files are disassembled, the opcodes encoded, the control flow analyzed, and the similarity of the matrices measured by a new similarity measurement. The implementation of the suggested approach shows that all the generated metamorphic viruses can be detected, while less than 62% are detected by other well-known antivirus software.

[Figure 5: Average detection percentage of the metamorphic viruses in T1 and T2 for the proposed method and the tested antivirus products, including ClamAV, Norman, McAfee, Symantec, and Kaspersky.]

REFERENCES

[1] M. Christodorescu, J. Kinder, S. Jha, S. Katzenbeisser, and H. Veith: Malware Normalization, Technical Report #1539, Department of Computer Sciences, University of Wisconsin, Madison, (2005).
[2] F. Perriot: Striking Similarities: Win32/Simile and Metamorphic Virus Code, Symantec Corporation, (2003).
[3] E. Konstantinou: Metamorphic Virus: Analysis and Detection, Technical Report RHUL-MA-2008-02, Department of Mathematics, Royal Holloway, University of London, (2008).
[4] A. Lakhotia, A. Kapoor, and E. U. Kumar: Are metamorphic computer viruses really invisible?, part 1, Virus Bulletin, pp. 5-7, (2004).
[5] P. Szor: The Art of Computer Virus Research and Defense, Addison Wesley Professional, 1st edition, pp. 10-33, (2005).
[6] R. Ando, N. A. Quynh, and Y. Takefuji: Resolution based metamorphic computer virus detection using redundancy control strategy, In WSEAS Conference, Tenerife, Canary Islands, Spain, Dec. 16-18, (2005).
[7] R. G. Finones and R. T. Fernande: Solving the metamorphic puzzle, Virus Bulletin, pp. 14-19, (2006).
[8] M. R. Chouchane and A. Lakhotia: Using engine signature to detect metamorphic malware, In WORM '06: Proceedings of the 4th ACM workshop on Recurring malcode, New York, NY, USA, pp. 73-78, (2006).
[9] D. Bruschi, L. Martignoni, and M. Monga: Detecting self-mutating malware using control flow graph matching, In DIMVA, pp. 129-143, (2006).
[10] W. Wong and M. Stamp: Hunting for metamorphic engines, Journal in Computer Virology, vol. 2 (3), pp. 211-229, (2006).
[11] http://vx.netlux.org/ last access March (2009).
[12] http://www.virustotal.com/ last access March (2009).
ABSTRACT
Reliability designers often try to achieve a high level of system reliability. This paper considers the problem of system reliability optimization for complex systems. Maximizing system reliability subject to component-criticality and cost constraints is introduced as the reliability optimization problem (ROP). A procedure that determines the maximal reliability of non-series, non-parallel system topologies is proposed. In this procedure, the system components to be improved are chosen according to their criticalities. To determine the optimal system reliability, an adapted ant colony algorithm (ACA) is used to evaluate the system's reliability. The algorithm has been thoroughly tested on benchmark problems from the literature. Our numerical experience shows that our approach is promising, especially for complex systems, and the proposed model proves to be robust with respect to its parameters.
Key Words: System reliability, Complex system, Ant colony, Component criticality.
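The adapted ant colony algorithm named in the abstract rests on two standard ACO mechanics: trails are chosen with probability proportional to their pheromone intensity, and each step evaporates all trails while reinforcing the chosen one. The sketch below is a minimal, generic illustration; the evaporation rate, deposit amount, and all numbers are assumptions, not the paper's parameters.

```python
# Generic ACO trail bookkeeping (illustrative values only).
import random

def choose_trail(pheromone, rng=random.random):
    # roulette-wheel choice: probability proportional to trail intensity
    total = sum(pheromone)
    r, acc = rng() * total, 0.0
    for i, tau in enumerate(pheromone):
        acc += tau
        if r <= acc:
            return i
    return len(pheromone) - 1

def update(pheromone, chosen, deposit=1.0, rho=0.5):
    # evaporation shrinks every trail; only the chosen trail is reinforced
    return [(1 - rho) * tau + (deposit if i == chosen else 0.0)
            for i, tau in enumerate(pheromone)]

tau = [1.0, 1.0, 1.0]
tau = update(tau, choose_trail(tau))
print(tau)  # the chosen trail holds 1.5, the others evaporate to 0.5
```

Repeated trips make reinforced trails ever more likely to be chosen, which is the positive-feedback loop the paper exploits for guiding ants toward critical components.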
the reliability of each component or adding redundant components [8]. Of course, the second method is more expensive than the first; our paper considers the first method. The aim of this paper is to obtain the optimal system reliability design under the following constraints:
1: A basic linear cost-reliability relation is used for each component [7].
2: Criticality of components [9]. The designer should take this into account before building a reliable system; according to component criticality, the reliability improvements go toward the most critical component. A component's criticality can be derived from the effect of its failure on system failure. The position of a component plays an important role in its criticality; we call this the index of criticality.

2 SYSTEM RELIABILITY PROBLEM

2.1 Literature review
Many methods have been reported to improve system reliability. Tillman, Hwang, and Kuo [10] provide a survey of optimal system reliability. They divided optimal system reliability models into series, parallel, series-parallel, parallel-series, standby, and complex classes. They also categorized optimization methods into integer programming, dynamic programming, linear programming, geometric programming, generalized Lagrangian functions, and heuristic approaches. The authors concluded that many algorithms have been proposed, but only a few have been demonstrated to be effective when applied to large-scale nonlinear programming problems, and none has proven to be generally superior. Fyffe, Hines, and Lee [11] provide a dynamic programming algorithm for solving the system reliability allocation problem. As the number of constraints in a given reliability problem increases, the computation required to solve the problem increases exponentially; to overcome these computational difficulties, the authors introduce the Lagrange multiplier to reduce the dimensionality of the problem. To illustrate their computational procedure, the authors use a hypothetical system reliability allocation problem consisting of fourteen functional units connected in series. While their formulation provides a selection of components, the search space is restricted to solutions where the same component type is used in parallel. Nakagawa and Miyazaki [12] proposed a more efficient algorithm in which surrogate constraints are obtained by combining multiple constraints into one. To demonstrate the efficiency of their algorithm, they also solve 33 variations of the Fyffe problem; of the 33 problems, their algorithm produces optimal solutions for 30. Misra and Sharma [13] presented a simple and efficient technique for solving integer-programming problems such as the system reliability design problem. The algorithm is based on function evaluations and a search limited to the boundary of resources. In the nonlinear programming approach, Hwang, Tillman, and Kuo [14] use the generalized Lagrangian function method and the generalized reduced gradient method to solve nonlinear optimization problems for the reliability of a complex system. They first maximize complex-system reliability with a tangent cost-function and then minimize the cost subject to a minimum system reliability. The same authors also present a mixed integer programming approach to solve the reliability problem [15]; they maximize the system reliability as a function of the component reliability level and the number of components at each stage. Using a genetic algorithm (GA) approach, Coit and Smith [16], [17], [18] provide a competitive and robust algorithm to solve the system reliability problem. The authors use a penalty-guided algorithm which searches over feasible and infeasible regions to identify a final, feasible optimal, or near optimal, solution; the penalty function is adaptive and responds to the search history. The GA performs very well on two types of problems: redundancy allocation as originally proposed by Fyffe et al., and randomly generated problems with more complex configurations. For a fixed design configuration and known incremental decreases in component failure rates and their associated costs, Painton and Campbell [19] also used a GA-based algorithm to find a maximum reliability solution satisfying specific cost constraints; they formulate a flexible algorithm to optimize the 5th percentile of the mean time-between-failure distribution. In this paper, ant colony optimization will be modified and adapted: the measure of criticality will guide the ants toward their nest, and the ranking of critical components will be taken into consideration to choose the most reliable components, which are then improved until the optimal reliability of the system's components is reached.

2.2 Ant colony optimization approach
The ant colony optimization (ACO) algorithm [20, 21], which imitates the foraging behavior of real ants, is a cooperative population-based search algorithm. While traveling, ants deposit an amount of pheromone (a chemical substance). When other ants find pheromone trails, they decide to follow the trail with more pheromone, and while following a specific trail, their own pheromone reinforces the followed trail. Therefore, the continuous deposit of pheromone on a trail maximizes the probability of that trail being selected by subsequent ants. Moreover, ants that use short paths to the food source return to the nest sooner and therefore quickly mark their paths twice, before other ants return. As more ants complete shorter paths, pheromone accumulates
faster on shorter paths, and longer paths are less reinforced. Pheromone evaporation is the process of decreasing the intensities of pheromone trails over time. This process is used to avoid local convergence (the strong influence of old pheromone is avoided to prevent premature solution stagnation), to explore more of the search space, and to decrease the probability of using longer paths. Because ACO has been proposed to solve many optimization problems [22], [23], our idea is to adapt this algorithm to optimize system reliability, especially for complex systems.

3 METHODOLOGY

3.1 Problem definition
3.1.1 Notation
In this section, we define all parameters used in our model.
Rs : reliability of the system
Pi : reliability of component i
qi : probability of failure of component i
Qn : probability of failure of the system
n : total number of components
ICRi : index of criticality measure
ICRp : index of criticality for a path to the destination
ISTi : index of structure measure
Ct : total cost of components
Ci : cost of component i
Cc : cost for improvement
P(i)min : minimum accepted reliability value
ACO:
 : start node for an ant
 : next node chosen
τi : initial pheromone trail intensity
τi(old) : pheromone trail intensity of a combination before update
τi(new) : pheromone trail intensity of a combination after update
 : problem-specific heuristic of a combination
ηij : relative importance of the pheromone trail intensity
 : relative importance of the problem-specific heuristic for the global solution
 : index for component choices from the set AC
 : trail persistence for the local solution
 : number of best solutions chosen for offline pheromone update

3.1.2 Assumptions
In this section, we present the assumptions under which our model is formulated.
1: There are many different methods used to derive the expression for the total reliability of a complex system, each derived for a certain system topology; we state our system expressions according to the methods of papers [3-5].
2: We used a cost-reliability curve [7] to derive an equation expressing each component's cost according to its reliability; the total system cost is then additive in the costs of the constituent components. See Fig. (1).

[Figure 1: cost-reliability curve. Reliability rises from P(i)min toward 1 as a component's cost grows from Ci toward Ct.]

As shown in Fig. 1, by equating the slopes of the two triangles we can derive equation (1) as follows:

   Cc(i) = Ct * (pi - p(i)min) / (1 - p(i)min),   i = 1, ..., n.   (1)

3: In [9], the calculation of ICRi and ISTi for each component is derived from its structural measure, which is given by

   (2)

where

   (3)

4: Every ICRi must be lower than an initial value ai. This value is a minimum accepted level of the criticality measure for every component.
5: After the complex system is represented mathematically, a set of paths is available from the specified source to the destination; each of those paths is ranked according to the criticalities of its components.

3.2 Formulation of the problem
The objective function, in general, has the form:

   Maximize Rs = f(P1, P2, P3, ..., Pn)

subject to the following constraints:
1. ICRi : i = 1, 2, ..., n
2. To ensure that the total cost of the components does not exceed the proposed cost value, equation (4) can be used:

   Sum(i=1..n) Ci * (Pi) <= Ct, with Pi(min) > 0   (4)

Note that this set of constraints permits only positive component costs.
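The cost relation of Eq. (1) and the budget constraint of Eq. (4) can be sketched together. This is a hedged reading of the garbled formulas, reconstructed from the slope-equality argument (improvement cost grows linearly from 0 at P(i)min up to the budget Ct at reliability 1, and a design is feasible only if the summed costs stay within the budget); the numbers are illustrative assumptions.

```python
# Reconstructed Eq. (1): Cc(i) = Ct * (p - p_min) / (1 - p_min)
def improvement_cost(p, p_min, ct):
    return ct * (p - p_min) / (1.0 - p_min)

# Budget check in the spirit of Eq. (4): total improvement cost <= Ct
def feasible(ps, p_mins, ct_budget):
    total = sum(improvement_cost(p, pm, ct_budget) for p, pm in zip(ps, p_mins))
    return total <= ct_budget

print(round(improvement_cost(0.95, 0.9, 10.0), 6))  # halfway to 1 -> 5.0
print(feasible([0.95, 0.92], [0.9, 0.9], 10.0))
```

Pushing a component's reliability halfway from its minimum toward 1 consumes half the budget under this linear model, which matches the two-triangle slope argument around Fig. 1.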
4 MODEL CONSTRUCTION

[Figure 2: Flow diagram of the adapted ant system. The loop tests "ants reached destination?"; on No it iterates again, on Yes the optimized values are collected.]

Subject to:
   ... <= 3
   P(1)min=0.9, P(2)min=0.9, P(3)min=0.8, P(4)min=0.7, P(5)min=0.8.

3. We choose the cost-reliability curve to permit the distribution of cost depending on the ranking of components according to their criticality. The model was built in such a way as to reduce the failure of the most critical components; this is done by increasing the reliability of the most critical components, which tends to maximize the overall reliability, which is our goal. We summarize our results in Table (1) and Table (2).

4. Eq. (6): update the pheromone according to the criticality measure, which can be calculated as the product of the components' criticality values.

Table 1: Reliabilities of the Bridge system
Component   New value   ICRi rank
p2          0.9         3
p3          0.8         4
p4          0.9998      2
p5          0.8         5
Rs          0.9999

Table 2: Costs of the Bridge system
Cost   Value in units
C1     9.9988
C2     8.8888
C3     7.7777
C4     9.9978

[Figure 4: Delta system]

Using the same procedures as in the bridge problem, we obtain the following optimization problem for the delta system given in Fig. 4:

   Max. Rs = P1 + P1*P2 - P1*P2*P3

Subject to
1. ICRi calculated for i = 1, 2, 3.

[Figure 5: Mesh system]

This system has more components and is larger. The objective function for the mesh system is:

   Max. Rs = (p6*p7) + (p1*p2*p3*(1-p6)) + (p1*p2*p3*p6*(1-p7)) + (p1*p4*p7*(1-p2)*(1-p6)) + (p1*p4*p7*p2*(1-p6)*(1-p3)) + (p3*p5*p6*(1-p7)*(1-p1)) + (p3*p5*p6*p1*(1-p7)*(1-p2)) + (p1*p2*p5*p7*(1-p3)*(1-p4)*(1-p6)) + (p2*p3*p4*p6*(1-p1)*(1-p5)*(1-p7)) + (p1*p3*p4*p5*(1-p2)*(1-p6)*(1-p7))

Subject to:
1. ICRi calculated for i = 1, 2, ..., n.
2. Sum(i=1..7) Ci * (Pi) <= 6.6
   P(i)min = 0.5, i = 1, 2, 3, ...

Table 6: Reliabilities of the Mesh system
Component   New value   ICRi rank
P1          0.5         5
P2          0.5         4
P3          0.5         3
P4          0.5         7
P5          0.5         6
P6          0.9999      1
P7          0.9999      2
Rs          0.9997

Table 7: Costs of the Mesh system
Cost   Value in units
C1     0.4444
C2     0.4444
C3     0.4444
C4     0.4444
C5     0.4444
C6     0.9998
C7     0.9997
Ct     4.22

When parameters such as the initial pheromone are changed in the delta case and biased toward component 2, the results become as shown in Table 8. The component reliabilities were P1=0.2, P2=0.3 and P3=0.3, with ACO parameter values of 10, 2 and 10.

Table 8: Effects of Ant colony parameters
Cost   Value
C1     0.7777
C2     0.9997
C3     0.999
Ct     14.777
Component   Computed value   ICRi rank
P1          0.3              1
P2          0.9999           2
P3          0.9999           3
Rs          0.9999

It is clear that the solution is biased toward the path through components 2 and 3 rather than component 1, because of their initial pheromone values.

6 CONCLUSION

We propose a new, effective algorithm for the general reliability optimization problem using ant colony optimization. The ant colony algorithm is a promising heuristic method for solving complex combinatorial problems. To solve the complex system design problem:
1. We must formulate a system that correctly represents the real system, with all paths from source to destination, by choosing an efficient reliability estimation method.
2. For the best maximization of total reliability and minimization of total system cost, take the components into consideration according to their criticality, then improve the most critical components gradually.
3. The index of criticality achieves maximum system reliability with minimum cost according to the reliability of the system topology.
4. Resolving the model without the index of criticality also gives maximum reliability and minimum cost, but this method ignores the topology of the system.
5. The ant colony algorithm is improved by the previous experience given by the index of criticality, which guides where the ants deposit pheromone on a trail; this maximizes the probability of that trail being selected by the next ants. Moreover, the ants then use more reliable paths. Our numerical experience shows that our approach is promising, especially for complex systems.

7 REFERENCES

[1] A. Lisnianski, H. Ben-Haim, and D. Elmakis: "Multistate System Reliability Optimization: an Application", in Levitin, Gregory (book), USA, pp. 1-20, ISBN 9812383069, (2004).
[2] S. Krishnamurthy, A. P. Mathur: On the estimation of reliability of a software system using reliabilities of its components, In: Proceedings of the ninth international symposium on software reliability engineering (ISSRE '97), Albuquerque, p. 146, (1997).
[3] T. Coyle, R. G. Arno, P. S. Hale: Application of the minimal cut set reliability analysis methodology to the gold book standard network, In the commercial and industrial power systems technical conference, pp. 82-93, (2002).
[4] K. Fant, S. Brandt: Null convention logic, a complete and consistent logic for asynchronous digital circuit synthesis, In: the international conference on application specific systems, architectures, and processors (ASAP '96), pp. 261-273, (1996).
[5] C. Gopal H., Nader A.: A new approach to system reliability, IEEE Trans. on Reliability, 50(1), pp. 75-84, (2001).
[6] Y. Chen, Z. Hongshi: "Bounds on the Reliability of Systems With Unreliable Nodes & Components", IEEE Trans. on Reliability, vol. 53, no. 2, June (2004).
[7] B. A. Ayyoub: "An application of reliability engineering in computer networks communication", AAST and MT Thesis, p. 17, Sep. (1999).
[8] S. Magdy, R. Schinzinger: "On Measures of Computer Systems Reliability and Critical Components", IEEE Trans. on Reliability, (1988).
[9] B. A. Ayyoub, M. Baith Mohamed, ElAlem: "An Application of Reliability Engineering in Complex Computer System and Its Solution Using Trust Region Method", WSES, Software and Hardware Engineering for the 21st Century book, p. 261, (1999).
[10] A. Tillman, C. Hwang, K. Way: "Optimization Techniques for System Reliability with Redundancy, A Review", IEEE Transactions on Reliability, vol. R-26, no. 3, pp. 148-155, August (1977).
[11] D. E. Fyffe, W. W. Hines, N. K. Lee: "System Reliability Allocation and a Computational Algorithm", IEEE Transactions on Reliability, vol. R-17, no. 2, pp. 64-69, June (1968).
[12] Y. Nakagawa, S. Miyazaki: "Surrogate Constraints Algorithm for Reliability Optimization Problems with Two Constraints", IEEE Transactions on Reliability, vol. R-30, no. 2, pp. 175-180, June (1981).
[13] K. Behari Misra, U. Sharma: "An Efficient Algorithm to Solve Integer-Programming Problems Arising in System-Reliability Design", IEEE Transactions on Reliability, vol. 40, no. 1, pp. 81-91, April (1991).
[14] C. Lai Hwang, F. A. Tillman, W. Kuo: "Reliability Optimization by Generalized Lagrangian-Function and Reduced-Gradient Methods", IEEE Transactions on Reliability, vol. R-28, no. 4, pp. 316-319, October (1979).
[15] F. A. Tillman, C. Hwang, W. Kuo: "Determining Component Reliability and Redundancy for Optimum System Reliability", IEEE Transactions on Reliability, vol. R-26, no. 3, pp. 162-165, August (1977).
[16] D. Coit, A. E. Smith: "Reliability Optimization of Series-Parallel Systems Using a Genetic Algorithm", IEEE Transactions on Reliability, vol. 45, no. 2, pp. 254-260, June (1996).
[17] D. W. Coit, A. E. Smith: "Penalty Guided Genetic Search for Reliability Design Optimization", Computers and Industrial Engineering, vol. 30, no. 4, pp. 895-904, (1996).
[18] W. David Coit, E. Alice Smith, M. David
ABSTRACT
An imposing number of lossy compression techniques used in medicine represents a challenge for the developers of a Picture Archiving and Communication System (PACS). How does one choose an appropriate lossy medical image compression technique for a PACS? The question is no longer whether to compress medical images in a lossless or lossy way, but rather which type of lossy compression to use. The number of quality evaluations and criteria used for evaluating a lossy compression technique is enormous. The mainstream quality evaluations and criteria can be broadly divided into two categories, objective and subjective; they evaluate the presentation (display) quality of a lossy compressed medical image. There are also a few quality evaluations which measure the technical characteristics of a lossy compression technique. In our opinion, technical evaluations represent an independent and invaluable category of quality evaluations. The conclusion is that quality evaluations from each category measure only one quality aspect of a medical image compression technique. Therefore, it is necessary to apply a representative of each group to acquire a complete evaluation of a lossy medical image compression technique for a PACS. Furthermore, a correlation function between the quality evaluation categories would simplify the overall evaluation of compression techniques. This would enable the use of medical images of the highest quality while engaging optimal processing, storage, and presentation resources. The paper represents preliminary work, an introduction to future research aimed at developing a comprehensive quality evaluation system.
quality evaluation of different compression techniques from a PACS point of view.
During our work on a PACS for a lung hospital, we tried to adopt image compression for medical images which achieves the highest compression ratio with minimal distortion in the decompressed image. We also needed image compression suitable for telemedicine purposes. We consulted the technical studies in search of quality evaluations of image compression techniques. The sheer number of studies is overwhelming [14, 15]. There is no unique quality evaluation suitable for the various compression techniques and the different applications of image compression [16, 17]. In most cases the studies focus only on the presentation (display) quality of the lossy compressed medical image; the technical features of a compression technique are usually ignored.
This paper represents preliminary research. Its purpose is to identify all the elements needed to evaluate the quality of a compression technique for a PACS. We identified three categories of quality evaluations and criteria: presentation-objective, presentation-subjective, and technical-objective. An overview of the technical studies led us to the conclusion that quality evaluations from each category measure only one quality aspect of an image compression technique. To perform a complete evaluation of a medical image compression technique for a PACS, it is necessary to apply a representative of each category. A correlation function between the representatives of each category would simplify the overall evaluation of compression techniques. The 3D evaluation space introduced by this paper is a 3D space defined by this correlation function and the quality evaluations used. Our goal is to develop an evaluation tool based on the 3D evaluation space, which is expected for 2011. All the elements of the quality evaluation system are identified in the paper.
The organization of the paper is as follows: section 2 gives a short overview of the lossy compression techniques used in the medical domain; section 3 describes the quality evaluations used to measure the quality of compression techniques; the 3D evaluation space is discussed in section 4; section 5 concludes the paper.

2 LOSSY COMPRESSION OF MEDICAL ...

2. the entire medical image is compressed in a lossy way, targeting the "visually lossless" threshold.
The first group offers selective lossy compression of medical images. Parts of the image containing diagnostically crucial information (regions of interest, ROI) are compressed in a lossless way, whereas the rest of the image, containing unimportant data, is compressed in a lossy way. This approach enables a considerably higher compression ratio than ordinary lossy compression [18, 19]. Larger regions of the medical image contain unimportant data which can be compressed at higher compression rates [19]. The downside of this approach is its computational complexity (an element of technical-objective evaluation). Each ROI has to be marked before compression, and even for images of the same modality, ROIs are rarely in the same place. ROIs are identified either manually by a qualified medical specialist or automatically, based on a region-detection algorithm [20]. The goal is to find a perfect combination of automated ROI detection algorithms and selective compression techniques.
Over the years, various solutions for ROI compression of medical images have emerged, which differ in the image modalities used, ROI definitions, coding schemes, and compression goals [20]. Some of them are: an ROI-based compression technique with two multi-resolution coding schemes reported by Strom [19], a block-based JPEG ROI compression and an importance schema coding based on wavelets reported by Bruckmann [18], a motion-compensated ROI coding for colon CT images reported by Bokturk [21], a region-based discrete wavelet transform reported by Penedo [22], and a JPEG2000 ROI coding reported by Anastassopoulos [23].
The second group of lossy compression techniques applies lossy compression over the entire medical image. Considerable efforts have been made in finding and applying the visually lossless threshold. Over the years, various solutions have emerged which differ in the goals imposed on a compression technique (for a particular medical modality or for a group of modalities) and in the compression techniques used (industry standards or proprietary compression techniques).
Some of the solutions presented over the years are: a compression using predictive pruned tree-structured vector quantization reported by Cosman
IMAGES [17], a wavelet coder based on Set Partitioning in
Hierarchical Trees (SPIHT) reported by Lu [24], a
Over the past decades an imposing number of wavelet coder exploiting Human Visual System
lossy compression techniques have been tested and reported by Kai [25], a JPEG coder and wavelet-
used in medical domain. Industry approved standards based trelliscoded quantization (WTCQ) reported by
have been used as often as the proprietary Slone [10], a JPEG2000 coder reported by Bilgin
compressions. On the part of the image affected, they [26].
can be categorized in two groups: Although the substantial effort has been made to
1. medical image regions of interest (ROI) are develop a selective lossy compression of medical
compressed losslessly while the rest of the image images, the industry standards that apply lossy
background is compressed lossy, compression on the entire medical image are
commonly used in PACS.
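The first (selective) approach can be sketched as a mask-based split: a hypothetical helper that separates the ROI pixels, which would go to a lossless coder, from the background pixels, which would go to a lossy coder (the coders themselves are not shown):

```python
import numpy as np

def split_by_roi(img, mask):
    """Split an image into ROI and background parts using a binary mask.

    The ROI part would be routed to a lossless coder and the background
    to a lossy coder; summing the two parts reconstructs the original.
    """
    roi = np.where(mask, img, 0)
    background = np.where(mask, 0, img)
    return roi, background
```

This is only the partitioning step; a real selective codec must also store the mask (or the ROI geometry) so the decoder can merge the two streams.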
3 QUALITY EVALUATIONS

Significant effort has been made to solve the problem of measuring digital image quality, with a limited amount of success [13, 14]. Various studies have tried to develop new metrics or to adapt existing ones for medical imaging [5, 6, 7, 8, 17]. The quality evaluations used can be broadly categorized as [5, 17]:
• objective quality evaluations – based on a mathematical or a statistical model, which is easy to compute and rate,
• subjective quality evaluations – based on a subjective observer evaluation of the restored image, or on questionnaires with numerical ratings.
These categories can be further sub-categorized, but this falls out of the scope of the paper [5, 17].

The quality evaluations proposed measure the presentation (display) quality of the lossy compressed medical image. Therefore, they can be categorized as presentation-objective and presentation-subjective quality evaluations. Although these quality evaluations have been devised for image quality measurement, they can also be used for the evaluation of lossy compression techniques. The quality of the reconstructed image should not be the only criterion for the adoption of a compression technique for PACS. The quality evaluation of medical image compressions for PACS is inseparable from the technical aspects of the system. A lossy compression can uphold remarkable presentational quality (objective and subjective) of medical images but with high technical demands. In some cases these technical demands are not achievable and in most cases they are too expensive. In many countries this would impose too high a price for PACS. Evaluations measuring image compression quality from the technical point of view can be categorized as technical-objective quality evaluations.

3.1 Presentation-objective evaluations
Presentation-objective evaluations represent the most desirable way to measure image quality. They are based on a mathematical model, and are usually easy to compute. Their main advantage is objectivity [27]. The numerical distortion evaluations like mean squared error (MSE), Eq. (1), signal-to-noise ratio (SNR), Eq. (2), or peak signal-to-noise ratio (PSNR), Eq. (3), are commonly used [6]:

MSE = (1/(m·n)) · Σ_{i=1..m} Σ_{j=1..n} [f(i, j) − f′(i, j)]²   (1)

SNR = σx² / MSE,  where  σx² = (1/(m·n)) · Σ_{i=1..m} Σ_{j=1..n} (f(i, j) − f̄)²  and  f̄ = (1/(m·n)) · Σ_{i=1..m} Σ_{j=1..n} f(i, j)   (2)

PSNR = (2^b)² / MSE   (3)

These measures fail to capture local degradations and do not provide precise descriptions of image degradations [5, 27]. Still, many studies use these quality evaluations to rate their implementations of lossy medical image compression techniques.

The quality of the lossy compressions studied in [9, 24, 25, 26, 28] was measured by these numerical distortion evaluations. For example, Chen [9] used PSNR to evaluate a proprietary DCT based SPIHT compression, the original SPIHT and JPEG2000. The DCT based compression achieved the highest PSNR values for the tested medical images, which indicated that it is more suitable for medical imaging than the other two compression techniques.

Beside scalar numerical evaluations, graphical evaluations such as Hosaka plots and Eskicioglu charts, and evaluations based on the HVS model have been used [14, 15, 29]. Their applicability in the medical domain has been reported in [6, 27].

Also, hybrid presentation-objective metrics have been studied for the medical domain. Przelaskowski [27] proposed a vector quality measure reflecting diagnostic accuracy, Eq. (4):

HVM = Σ_{i=1..6} α_i · V_i   (4)

Each value V_i represents one presentation-objective measure. The vector measure was designed to include the formation of a diagnostic quality pattern based on the subjective ratings of local image features. This quality measure represents a way of combining presentation-objective and presentation-subjective evaluations. An evaluation of lossy JPEG2000 compressed medical images found that a compression ratio of 20:1 is diagnostically acceptable.

3.2 Presentation-subjective evaluations
Presentation-subjective evaluations have been used to evaluate lossy compressed medical images more often than presentation-objective ones [30]. Presentation-subjective evaluations are based on the observer's subjective perception of reconstructed image quality [5]. The subjective quality of a reconstructed medical image can be rated in many ways [5]. In some studies, the observer is presented with several reconstructed versions of the same image. The observer has to guess the image compression level and to order the sample images from the least compressed to the most compressed [5, 31]. If the difference between the original image and the reconstructed image at some level of compression is
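The scalar distortion measures of section 3.1, Eqs. (1)–(3), can be sketched as follows (a minimal sketch assuming b-bit grayscale images stored as NumPy arrays; PSNR is given as the plain ratio of Eq. (3), with the common decibel form noted in a comment):

```python
import numpy as np

def mse(f, g):
    """Mean squared error between original f and reconstruction g, Eq. (1)."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    return np.mean((f - g) ** 2)

def snr(f, g):
    """Signal-to-noise ratio, Eq. (2): variance of the original over MSE."""
    return np.var(f.astype(np.float64)) / mse(f, g)

def psnr(f, g, b=8):
    """Peak signal-to-noise ratio, Eq. (3).

    Studies often report this on a logarithmic scale,
    i.e. 10 * log10 of this ratio, in decibels.
    """
    return (2 ** b) ** 2 / mse(f, g)
```

Higher SNR and PSNR (and lower MSE) indicate a reconstruction closer to the original, but, as noted above, these global measures say nothing about local degradations.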
process only a limited number of medical images and images of limited size [2]. Also, these devices usually use wireless networks whose capabilities are far beneath those of wired ones [2]. Therefore, to view a medical image on these devices it is necessary to have images scaled to the display size of the mobile device. This could have a negative impact on PACS storage space [36], but it is minimized when the scaled medical images are acquired from the same image codestream as the original sized image, i.e. when streaming of medical images is used [37, 38]. Image streaming is a process of gradual buildup of an image by resolution or by pixel accuracy [28]. It enables the extraction of a lower-resolution image from the codestream.

The architecture of a modern PACS is described by Fig. 2. Beside high class hospitals, the system contains less equipped hospitals in rural areas and medical mobile devices.

These are all reasons for adopting lossy compression of medical images for a PACS, but they are also restrictions which anyone developing a PACS system should consider. They represent technical-objective criteria for evaluating a medical image compression technique for a PACS. The parameters of the criteria are the overall cost of the system equipment, storage and network requirements, the cost of implementing the compression technique, compression/decompression speed, the streaming capability of the compression technique, the image modalities suitable for the compression, and the compression ratio achieved under a certain quality assumption. Technical studies comparing different compression techniques evaluated several things [39, 40, 41, 42]:

• Compression speed [39, 40, 41, 42]. The studies measured the time elapsed while the sample image was compressed to a target compression ratio, and the time elapsed during decompression. This time has an impact on the overall performance of the system because it can cause data transmission delay. Better PACS performance is achieved if decompression time is minimized, because decompression occurs more often than compression. Therefore, retrieval oriented compression techniques are common for medical imaging.

• Memory and processor power used [40]. The study measured the amount of memory and processor power used during the compression and decompression process. The values measured inform about the overall complexity of the compression technique, which influences the overall cost of the system. Higher requirements imply higher cost.

• Compression ratio [40]. The influence of a compression technique on storage requirements is expressed as the achievable compression ratio. It is measured with respect to image presentation quality, like the numerical distortion measures of section 3.1. Storage requirements influence the overall cost of PACS.

• Functionalities of a compression technique [41]. Most applications require other features beside the quality and coding efficiency of the compression technique. The technical-objective quality evaluations in the consulted studies did not evaluate the functionalities of a compression technique numerically. Rather, they used some method of description. Santa-Cruz [41] provided a functionality matrix that indicated the supported features in each compression technique and an appreciation of how well they are fulfilled, Table 1. They compared the JPEG2000, JPEG-LS, JPEG, MPEG-4 VTC, and PNG compression techniques. A set of features (functionalities) is included in Table 1. A "+" mark indicates whether the functionality is supported.
Table 1: Functionality matrix – various functionalities of different compression techniques are compared [41].
The more "+" marks, the more efficiently or better the functionality is supported by the compression technique.

• Error resilience [40, 41]. It is important to measure the error resilience of compressed images sent over network transmission channels. This is tested by transmitting the compressed data over a simulated noisy channel.

The quality evaluation of medical image streaming has not been studied in the consulted literature. Streaming of medical images is an important issue for a PACS trying to achieve mobile health (and ubiquitous healthcare, also) and it should be considered during quality evaluation. Because it is supported by a limited number of compression techniques, the quality evaluation should indicate whether streaming is supported or not. If a compression technique supports image streaming, the quality of the extracted low-resolution images should be evaluated.

An important issue concerning the technical aspects of medical image compression techniques is whether to use industry wide standards or to develop a proprietary compression technique [43]. The second approach could lead to more efficient compression techniques, but in the long term it would prove more costly. It could compromise PACS communication with equipment and networks not supporting the proprietary compression technique [43]. The long term archives of medical images could be compromised if the system transitions to another compression technique. The use of industry approved standards can reduce the cost and risk of using compression.

4 3D QUALITY EVALUATION SPACE

Quality evaluations of lossy compression techniques differ in many ways. They differ in whether they consider the application for which the compressed image is used. Some quality evaluations measure only the performance of the compression technique while others measure only the presentation quality of the restored image. Overall, there is no quality evaluation which measures all the elements of a medical image compression technique.

When evaluating medical image compression techniques it is important to measure the quality of the restored images. Presentation evaluations (objective and subjective) measure the presentation (perceptual) quality of a restored image. The values obtained are used to compare the quality of the compressed images and to observe which compression technique achieves a higher compression ratio under the same quality assumption. It is easier to compute the presentation-objective measure, which is usually presented as a scalar or a vector. These values are comparable and it is easy to determine which compression technique is better: the one having the bigger value. They fail to measure precise (local) characteristics of the restored image, i.e. they do not consider the medical application of the compression technique. On the contrary, the presentation-subjective evaluations consider the medical application of the compression technique, but they are harder to obtain and cost more than presentation-objective evaluations. The presentation-subjective evaluations are harder to interpret and compare, and they are dependent on the observer's knowledge, experience and perception. The advantage of presentation-subjective evaluations is that they are recommended by official medical organizations which consider compression of medical images (like CAR).

The presentation quality evaluations fail to measure the technical aspects of a compression technique. Beside restored image quality, it is necessary to obtain technical information about the compression technique, like: efficiency (compression speed, achievable ratio, transmission possibilities), error resilience, features (image streaming), and implementation cost and maintenance. The technical-objective quality evaluations measure the technical elements of a compression technique. There are several important technical-objective evaluations measuring different features of a compression technique. The issue is how to correlate them into one value.
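One simple way to reduce a functionality matrix like Table 1 to comparable numbers is to count the "+" marks per technique. A toy sketch follows; the feature names and mark counts are illustrative placeholders, not the actual Table 1 entries:

```python
# Hypothetical functionality matrix in the spirit of Table 1.
# Each technique maps a feature to its number of "+" marks
# (0 means the feature is not supported). Values are placeholders.
MATRIX = {
    "JPEG2000": {"lossless": 3, "progressive": 3, "ROI": 3, "error resilience": 3},
    "JPEG":     {"lossless": 1, "progressive": 2, "ROI": 0, "error resilience": 1},
}

def score(technique):
    # Reduce a technique's row to one comparable number by
    # summing its "+" marks.
    return sum(MATRIX[technique].values())

# Techniques ordered from most to least supported functionality.
ranked = sorted(MATRIX, key=score, reverse=True)
```

Summing marks weights all features equally; a real evaluation would likely weight features by their importance for the target PACS, which is exactly the correlation problem raised above.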
To obtain the complete quality evaluation of a medical image compression technique, it is necessary to use all three previously described quality evaluations: presentation-objective, presentation-subjective, and technical-objective. Only then will the observers adopting a medical image compression for PACS have complete insight into a given compression technique. This will present the medical staff with the highest quality medical images while engaging optimal processing, storage, and presentational resources.

The complete quality evaluation could be improved if there is a correlation function between the quality evaluations used, such as the one described by Eq. (5):

ev = f(a·po, b·ps, c·to)   (5)

The variables po, ps, and to represent the values obtained by applying the presentation-objective, presentation-subjective, and technical-objective quality evaluations. The factors a, b, and c are weighting factors ranging from 0 to 1, used to define the influence of a particular quality evaluation. A value of 0 cancels the influence of that evaluation.

Ideally, the result of the correlation function should be a scalar which defines the quality of a compression technique in a simple and comparable way, a higher value indicating better quality. Unfortunately, it is more realistic to expect that the result of the correlation function would be a vector which defines the quality of a compression technique in a space defined by the presentation-objective, presentation-subjective, and technical-objective evaluations, Fig. 3. A higher vector intensity indicates better quality.

One possible combination for the 3D quality evaluation space would be the use of PSNR [6] (as it is the one most often used), Eq. (3), or the Przelaskowski [27] vector measure (because it combines several objective measures), Eq. (4), for the presentation-objective evaluation, ROC analysis [6] (as it is the subjective measure used most often) for the presentation-subjective evaluation, and the functionality matrix [41] (being the most comprehensive technical evaluation) for the technical-objective evaluation. The proposed combination is still under review.

5 CONCLUSION

This paper represents preliminary research. We identified three categories of quality evaluations and criteria: presentation-objective, presentation-subjective and technical-objective. To obtain the complete quality evaluation of a medical image compression technique, it is necessary to use all three categories of quality evaluations.

The development of a comprehensive evaluation of all the aspects of a compression technique would ease the task of adopting a medical image compression for a PACS. Our future research will include devising a technical-objective correlation function which will uniformly present the results of technical-objective quality evaluations. The major focus of our future research will be devising a correlation function between all the groups of quality evaluations. We strive to achieve a quality evaluation space like the one described by Fig. 3, which would represent an environment for simple and comprehensive evaluation of medical image compression techniques for PACS.

In the case of the PACS for the lung hospital we did not have time to wait for the development of the 3D quality evaluation space. Therefore, we adopted the compression technique that in our opinion (drawn from numerous technical studies) offered the most: JPEG2000 compression [36, 37, 38, 39, 40, 41, 42]. It would be interesting to see whether this decision correlates with the results in the 3D quality evaluation space for compression of medical images.
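The weighted correlation function of Eq. (5) can be sketched as follows. The paper leaves f unspecified, so this sketch takes one possible reading: component-wise weighting producing the 3D vector discussed around Fig. 3, with the normalized scores po, ps, to supplied by the evaluator:

```python
import math

def evaluation_vector(po, ps, to, a=1.0, b=1.0, c=1.0):
    """One reading of Eq. (5): weight each quality evaluation.

    po, ps, to are the presentation-objective, presentation-subjective
    and technical-objective scores; a, b, c are weights in [0, 1].
    A weight of 0 cancels the influence of that evaluation.
    """
    return (a * po, b * ps, c * to)

def vector_intensity(ev):
    """Length of the evaluation vector in the 3D quality space;
    a higher intensity indicates better overall quality."""
    return math.sqrt(sum(v * v for v in ev))
```

For example, two techniques scored on normalized [0, 1] scales could be ranked by comparing their vector intensities, which collapses the vector back to the scalar case the text describes as ideal.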
ABSTRACT
The possibility of failure is a fundamental characteristic of distributed applications.
The research community in fault tolerance has developed several solutions mainly
based on the concept of replication. In this paper, we propose a fault tolerant
hybrid approach in multi-agent systems. We have based our strategy on two main
concepts: replication and teamwork. In this work, we calculate the criticality of each
agent and then divide the system into two groups that use two different replication
strategies (active, passive). In order to determine the agent criticality, we introduce
a multi-level method for criticality evaluation using agent plans and dependence
relations between agents.
Keywords: agent local criticality, agent external criticality, hybrid approach,
decision agent, action criticality.
Here we review some important works dealing with fault tolerance in multi-agent systems.

Hagg [2] proposes a strategy for fault tolerance using sentinels. The sentinel agents listen to all broadcast communications, interact with other agents, and use timers to detect agent crashes and communication link failures. So, sentinels are guardian agents which protect the multi-agent system from failing into undesirable states. They have the authority to monitor the communications in order to react to faults. The main problem with this approach is that the sentinels are also subject to faults.

Kumar et al. [1] introduce a strategy based on the Adaptive Agent Architecture. This strategy uses teamwork to recover a multi-agent system from broker failures. This approach does not deal completely with agent failures since only some agents (the brokers), or part of them, can be replicated.

A strategy based on transparent replication is proposed by [3]. All messages going to and from a replicated group are funneled through the replicate group message proxy. This work uses the passive replication strategy.

These approaches apply the replication mechanism according to the static strategy, which allows replication at design time. But recent applications, mainly those which use multi-agent systems, are very dynamic, a fact that makes it too difficult to determine the critical agents at design time. There are other proposed works that use the dynamic replication strategy, such as:

Guessoum et al. [4] introduce an automatic and dynamic replication mechanism. They determine the criticality of an agent using various data such as processing time and the role taken by the agent in the system. This mechanism is specified for adaptive multi-agent systems. They focus their work on the platform DIMA [8].

Almeida et al. [9] propose a method to calculate the criticality of an agent in a cooperative system. They use the agent plan as the basic concept in order to determine critical agents. This work uses the framework DARX [10].

These two works use dynamic replication, which allows replication at processing time. This strategy requires the criticality calculation. The agent criticality is defined as the impact of a local failure of an agent on the whole system [11]. The dynamic strategy is more important than the static one when dealing with dynamic applications, but it must use a mechanism able to determine when it is necessary to duplicate agents.

Agents are subject to failures that can cause the whole system to fail. We propose an approach to introduce fault tolerance in dynamic multi-agent systems by the use of two main concepts: replication and teamwork. Under our approach the two replication strategies are used (active and passive). Since we deal with dynamic multi-agent systems, we will use dynamic replication, which means that agents are not duplicated at the same time and in the same manner. The question that arises, therefore, is which agents are to be replicated?

4 THE CRITICALITY EVALUATION

The agent criticality, denoted CX, is defined as the impact of a local failure of the agent X on the dysfunction of the whole system. An agent that causes a total failure of the system will have a strong criticality. The criticality evaluation in our approach is realized at two main levels:
• The local level: here we determine the agent criticality using its plan of actions.
• The external level: in order to achieve its current goal the agent does not only use its own data but also relies on other agents. So, we try to evaluate the agent external criticality using the relations between agents.

4.1 Agent Local Criticality
In order to calculate the agent local criticality, we define an agent according to the model proposed by [12]. Each agent is composed of the following elements:
• Goals: the goals an agent wants to achieve.
• Actions: the actions the agent is able to perform.
• Resources: the resources an agent has control over.
• Plans: the plan represents the sequence of actions that the agent has to execute in order to achieve a certain goal.

4.1.1 Agent Plan
We consider that each agent knows the sequence of actions that it has to execute in order to achieve
its current goal. Therefore, we propose the use of a graph to represent the sequence of actions, called the agent's plan. These plans are established for the short term because the environment considered is dynamic. The graph that we use in this work is
inspired by that proposed by [9]. The agent plan is represented by a graph where the nodes represent actions and the edges represent relations between actions. These relations are the logical functions AND and OR. A node n which is connected to k other nodes (n1, n2, ..., nk) using AND edges represents an action that will be achieved only if all its following actions are executed. However, a node n connected to its k followers using OR edges represents an action that is achieved if only one following action is executed. The work proposed in [5] uses a different description of the agent plan and proposes the existence of internal and external actions. However, we are interested in the actions which are executed by the agent (local actions). Thus, according to our description, an agent X will be represented as follows (Figure 1):

[Figure 1: the plan graph of agent X – a root action A connected to its following actions B1, B2, ..., Bk by AND edges, with OR edges appearing lower in the graph.]

• The number of necessary resources required for the execution of an action can also be a factor in determining the initial criticality of an action. When an action requires many resources to be executed, it introduces a strong criticality.
• Hardware data also influence the action initial criticality.
• Finally, according to the application field, the designer can determine semantic information that can define the initial criticality of an action.
Thus, at design time each action A has a value called the initial criticality, denoted CIA.

4.1.4 Action Dynamic Criticality
The dynamic criticality of an action, denoted CD, is defined as the value attributed to an action according to its position in the agent plan. There is one factor that can influence the action criticality, which is the set of its following actions.

We use the function MULTIPLICATION to represent the influence of the following actions on the considered action when they are connected using AND edges. We have indicated that when an action A is connected to its followers (B1, B2, ..., Bk) by AND edges, the achievement of A implies that all its following actions are achieved. If we represent the actions with a group of sets we will have the following result:

A = (B1 ∩ B2 ∩ ... ∩ Bk)
CA = CIA + (CB1 * CB2 * ... * CBk)
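The local criticality over such a plan graph can be sketched as a recursive evaluation. This is a minimal sketch implementing only the AND rule stated above, CA = CIA + (CB1 * ... * CBk); the excerpt does not give the OR rule, so OR edges are not modeled here:

```python
def action_criticality(action, plan, initial):
    """Dynamic criticality of an action in an agent plan.

    `plan` maps an action to the list of its followers connected by
    AND edges; `initial` maps an action to its initial criticality CI.
    Following the text, for AND followers:
        C_A = CI_A + (C_B1 * C_B2 * ... * C_Bk)
    An action with no followers keeps its initial criticality.
    """
    followers = plan.get(action, [])
    if not followers:
        return initial[action]
    product = 1.0
    for b in followers:
        product *= action_criticality(b, plan, initial)
    return initial[action] + product
```

For example, an action A with CI_A = 1 and AND followers B1 (criticality 2) and B2 (criticality 3) gets criticality 1 + 2 * 3 = 7, so actions whose many followers are themselves critical accumulate high values.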
CONT: The Controller.
1: The Criticality Evaluation.
2: Pass the Criticality C.
3: Decision.
4: GT = 1.
5: Establish contract with the Supervisor.
6: GT = 2.
7: Establish contract with the Controller.

5.2 The Supervisor
This agent allows the active replication. During execution time, the critical agent periodically transmits its current state to the supervisor; the latter gives permission messages in order to validate the replica's execution.

The supervisor also allows failure detection. This service makes it possible to detect whether an agent is still alive, given that it does not function in a synchronous environment [14]. The supervisor achieves this service with the use of a clock that initializes the control messages sent to the critical agents. Each activated (critical) replica has a failure timer which gives the maximum time allowed for the agent to answer. If the agent does not give an answer, a failure is detected. Upon failure detection, the supervisor creates a replica and the follower takes over from the failed agent. The supervisor's services are represented by the following diagram (Figure 5).

[Sequence diagram between the supervisor, the critical agent and the active replica.]

Figure 5. The sequence diagram for the supervisor.
1: Establish contract.
2: Active replication process.
3: Current state's message.
4: Permission message.
5: Controlling message.
6: Yes.
7: Answer.
8: No.
9: T > Max Time.
10: Agent recovering.

5.3 The Controller
The controller is the non-critical agent group's manager; it allows agent replication using the passive strategy. This agent verifies and detects failures among its group's agents using the same technique employed by the supervisor. Upon the detection of a failure, the passive replica becomes active and another passive replica is added. The controller's sequence diagram is represented as follows (Figure 6):

[Sequence diagram between the controller, a non-critical agent and the passive replica.]

Figure 6. The sequence diagram for the controller.
1: Establish contract.
2: Passive replication process.
3: Current state's message.
4: Controlling message.
5: Yes.
6: Answer.
7: No.
8: T > Max Time.
9: Replica activated + agent recovering.

6 CONCLUSION

This article proposes a rich approach for fault tolerance in dynamic multi-agent systems based on replication and teamwork. We use the two strategies (active and passive) with only one strong replica existing at a time; this reduces the load. In order to guarantee failure detection and system control, three other agents are added.

In further work, we are interested in proposing a more formal model for criticality calculation and in validating our approach through implementation.

7 REFERENCES

[1] S. Kumar, P. R. Cohen, H. J. Levesque: The adaptive agent architecture: achieving fault-tolerance using persistent broker teams. The Fourth International Conference on Multi-Agent Systems (ICMAS 2000), Boston, MA, USA, July 7-12, 2000.
[2] S. Hagg: A Sentinel Approach to Fault Handling
ABSTRACT
This paper presents a modified partition fusion technique for multifocus images for improved
image quality. In the conventional partition fusion technique, image sub-blocks are selected for
the fused image based on their clarity measures. The clarity measure of an image sub-block can be
determined by the second order derivative of the sub-image. The performance of these clarity
measures is insufficient in a noisy environment. In the modified technique, before dividing the
image into sub-images, it is filtered through a linear phase 2-D FIR low pass digital filter to
overcome the effect of noise. The modified technique uses the choose-max selection rule to select the
clearer image block from the differently focused source images. The performance of the modified
technique is tested by calculating the value of RMSE. It is found that EOL gives the lowest RMSE
with unequal block sizes while SF gives the lowest RMSE with equal block sizes when used as the
clarity measure in the modified partition fusion technique.
Where
fxx +fyy =−f(x −1, y −1) −4f(x −1, y) −f(x −1, y +1) −4f(x, y −1) Fig-1 (D), (H)
+20f(x, y) −4f(x, y +1) −f(x +1, y −1) −4f(x +1, y) −f(x +1, y +1)
4. Spatial frequency (SF): Strictly speaking
frequency is not a focus measure. It is a modified
version of the Energy of image gradient (EOG).
Spatial frequency is defined as:
SF = RF2 + CF2 Fig-1 the performance of second order derivative in
Where RF and CF are row and column frequencies presence of various degree of noise.
respectivly:
Fig-1(A) shows the ramp edge profile of an image separating a black region and a white region. The entire transition from black to white represents a single edge. In Fig-1(A) the image is free of noise and its grey level profile is sharp and smooth. Fig-1(B-D) are corrupted by additive Gaussian noise with zero mean and standard deviations of 0.1, 1.0 and 10.0 intensity levels respectively, and their grey level profiles show the noise added on the ramp as ripple effects. The images in the second column are the second derivatives of the images on the left. Fig-1(E) shows two impulses representing the presence of an edge in the image. Fig-1(F-H) show that as the noise in the image increases, detection of the impulses becomes difficult, making it nearly impossible to detect the edge in the image. This shows that a focus measure using the second order derivative also fails to identify the best focused image in a noisy environment. Thus, for selection of the best focused image, removal of noise is essential before applying the fusion technique.

The proposed focusing technique uses a linear-phase 2-D FIR low pass digital filter to remove the noise from the differently focused images. The filter is designed with the Parks-McClellan algorithm [19], [20]. The Parks-McClellan algorithm uses an equiripple or least squares approach over sub-bands of the frequency range and Chebyshev approximation theory to design filters with an optimal fit between the desired and actual frequency responses. The filters are optimal in the sense that the maximum error between the desired and actual frequency responses is minimized. Filters designed this way exhibit an equiripple behaviour in their frequency responses and are therefore sometimes called equiripple filters; due to this equiripple nature they exhibit discontinuities at the head and tail of their impulse responses. These filters are applied in the existing fusion algorithm before partitioning the image, as shown in Fig-3. The source images are passed through a 2-D FIR low pass filter of order 4 with the characteristic shown in Fig-2. For these low pass filtered images the conventional focus measures Variance, Energy of image Gradient, Energy of Laplacian and Spatial frequency are computed.

Fig.2: Perspective plot of linear phase 2-D FIR lowpass digital filter

Setup for proposed algorithms

A schematic diagram of the proposed image fusion method is shown in Fig-3. The paper proposes a modification for obtaining the best focus measure in a noisy environment by use of a filter at Step 2 of the existing algorithm used by Li et al. [8]. The fusion method consists of the following steps:

Step 1. Decompose the differently focused source images into blocks. Denote the ith image blocks of the source images by Ai and Bi respectively.

Step 2. Filter the images through a 2-D FIR low pass filter for removal of noise.

Step 3. Compute the focus measure of each block, and denote the results for Ai and Bi by MiA and MiB respectively.

Step 4. Compare the focus measures of the two corresponding blocks Ai and Bi and construct the ith block Di of the composite image as

Di = Ai if MiA > MiB,
Di = Bi if MiB > MiA.

Step 5. Compute the root mean square error (RMSE) of the composite image with respect to a reference image, for the proposed modified algorithm of partition fusion with the 2-D low pass filter:

RMSE = √( Σ_{x=1}^{M} Σ_{y=1}^{N} [R(x, y) − F(x, y)]² / (M × N) )

[Fig-3 schematic: source images → 2-D FIR LPF → partitioned images → activity level measure → combining by choose max]
Fig.3: Schematic diagram for evaluating proposed focusing technique in Multi-focus image fusion
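The steps above can be sketched compactly. This is a minimal illustration, not the authors' code: it uses variance as a placeholder focus measure (any of the measures discussed would fit) and omits the 2-D FIR pre-filtering of Step 2 for brevity:

```python
import numpy as np

def fuse_choose_max(a, b, block=8, measure=np.var):
    """Steps 1, 3 and 4: partition both source images into blocks, compute a
    focus measure per block, and keep the block with the larger measure.
    (Step 2, the 2-D FIR low pass pre-filter, is omitted in this sketch.)"""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    fused = np.empty_like(a)
    for i in range(0, a.shape[0], block):
        for j in range(0, a.shape[1], block):
            ai = a[i:i + block, j:j + block]
            bi = b[i:i + block, j:j + block]
            # Choose-max selection rule: take the clearer source block.
            fused[i:i + block, j:j + block] = ai if measure(ai) > measure(bi) else bi
    return fused

def rmse(reference, fused):
    """Step 5: root mean square error between reference R and composite F."""
    r = np.asarray(reference, dtype=float)
    f = np.asarray(fused, dtype=float)
    return float(np.sqrt(np.mean((r - f) ** 2)))
```

With two complementary defocused halves of the same scene, the fused result should recover the all-in-focus reference block by block, driving the RMSE toward zero.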
where R and F are the reference image and the composite image respectively, with size M × N pixels.

Table-1 shows the RMSE of fused images using different focus measures for equal block sizes of the source images, and Table-2 shows the RMSE of fused images for unequal block sizes. Table-1 shows that fused images using SF as the focus measure give the lowest RMSE values, and Table-2 shows that for unequal block sizes EOL performs better than the other clarity measures when used in the modified partition fusion technique. The analysis of Table-1 shows that the RMSE of the fused image decreases with increasing block size of the sub image only with SF. Analysis of Table-2 shows that the RMSE of the fused image decreases with increasing block size for all clarity measures, because a larger image block gives more information for measuring clarity.

5. CONCLUSION

In this paper a modified method of image fusion was used, considering the capabilities of various focus measures for distinguishing clear image blocks from blurred image blocks. Experimental results show that the preprocessed, 2-D FIR low pass filtered images in the modified method provide better performance in terms of low RMSE than the previous methods of information fusion. From the results it is also concluded that the performance of the image fusion method depends on the block size taken during the partitioning of the source images. The experiment shows that EOL gives low RMSE with unequal block sizes while SF gives low RMSE with equal block sizes. How to choose the image block size is an issue that will be investigated in future work.
Table-1
Evaluation of different focus measures with equal block sizes on basis of RMSE

Block   |        Partition fusion method          | Modified partition fusion method (LPF images)
size    | Variance  EOG     EOL     SF      VI    | Variance  EOG     EOL     SF
4×4     | 4.5814    3.9383  3.6437  3.9383  4.1383| 0.9514    0.9514  0.9301  0.9514
8×8     | 4.3658    4.0264  3.1466  3.9292  4.2110| 0.9606    0.9373  0.8686  0.9373
16×16   | 4.7037    4.7720  3.4659  4.0517  3.9574| 1.1872    1.1820  1.1561  0.8827
32×32   | 4.4221    4.6485  3.0888  3.8506  3.6183| 1.2043    1.1382  1.1531  0.8949
64×64   | 4.6588    4.6000  3.8727  3.8927  3.4368| 1.2194    1.2248  1.2013  0.8744

Numbers in bold and italic indicate the lowest RMSE obtained over different block sizes.
Table-2
Evaluation of different focus measures with unequal block sizes on basis of RMSE
Fig. 4: Left focused image. Fig. 5: Right focused image. Fig. 6: Middle focused image.
Fig. 7: All focused image (reference image). Fig. 8: Fused image formed from variance. Fig. 9: Fused image formed from EOG (16×32).
Vanja Luković
Information technology
Danijela Milošević
Information technology
Goran Devedžić
Information technology
ABSTRACT
This paper analyses a broad scope of research papers dealing with the process of integrating biomedical
ontologies with the FMA reference ontology. Namely, we want to investigate the applicability of this process
in the development of the OBR-Scolio application ontology for the pathology domain of the spine, more
specifically the scoliosis domain. Such an ontology is one of the many objectives in the realization of the
project named "Ontological modeling in bioengineering" in the domain of orthopedics and physical medicine.
Hence, the domain of the FMA is the anatomy of the idealized human body. The FMA uses a hierarchy of classes of anatomical entities (anatomical universals) which exist in reality through their instances. The root of the FMA's anatomy taxonomy (AT) is Anatomical entity and its dominant class is Anatomical structure. Anatomical structure is defined as a material entity which has its own inherent 3D shape and which has been generated by the coordinated expression of the organism's own structural genes. This class includes material objects that range in size and complexity from biological macromolecules to whole organisms. The dominant role of Anatomical structure is reflected by the fact that non-material physical anatomical entities (spaces, surfaces, lines and points) and the body are conceptualized in the FMA in terms of their relationship to anatomical structures.

4. OBR ONTOLOGY

Figure 1. Ontology of Biomedical Reality OBR

The root of OBR is the universal Biological entity (Fig. 1). A distinction is then drawn between the classes Biological continuant and Biological occurrent, the definitions of which are inherited from BFO [3]. The class Biological continuant is subdivided into the classes Organismal continuant, which includes entities that range over single organisms and their parts, and Extra-organismal continuant, which includes entities that range over aggregates of organisms. Accordingly, the class Biological occurrent is subdivided into the classes Organismal occurrent and Extra-organismal occurrent, which include processes associated with single organisms and their parts and processes associated with aggregates of organisms, respectively.

The class Organismal continuant is subdivided into the classes Independent organismal continuant and Dependent organismal continuant. Extrapolating from the FMA's principles, Independent organismal continuants have mass and are material, whereas Dependent organismal continuants are immaterial and do not have mass.

The OBR ontology distinguishes anatomical (normal) from pathological (abnormal) material entities. Accordingly, the class Independent organismal continuant is subdivided into the classes Material anatomical entity and Material pathological entity. The class Material anatomical entity is subdivided into the classes Anatomical structure and Portion of canonical body substance, on the basis of the possession or non-possession of inherent 3D shape. Within the class Anatomical structure the OBR ontology makes a distinction between canonical anatomical structures, which exist in the idealized organism, and variant anatomical structures, which result from an altered expression pattern of normal structural genes without health related consequences for the organism. The class Material pathological entity is likewise subdivided into the classes Pathological structure and Portion of pathological body substance, on the basis of the possession or non-possession of inherent 3D shape. Pathological structures result from an altered expression pattern of normal structural genes with negative health consequences for the organism.

The class Dependent organismal continuant is subdivided into the classes Immaterial anatomical continuant, Immaterial pathological continuant and Physiological continuant. Although the existence of immaterial anatomical and pathological spaces and surfaces and of anatomical lines and points depends on corresponding independent continuant entities, they are dependent continuants. Besides these, the classes Function, Physiological state and Physiological role and the classes Malfunction, Pathological state and Pathological role also belong to Dependent organismal continuant, because their entities do not exist without corresponding independent continuant entities.

Functions are certain sorts of potentials of independent anatomical continuants for engagement and participation in one or more processes through which the potential becomes realized. The function is a continuant, since it endures through time and exists even during those times when it is not being realized. Whether or not a function becomes realized depends on the physiological or pathological state of the associated independent anatomical continuant. A physiological or pathological state is a certain enduring constellation of values of an independent continuant's aggregate physical properties. These physical properties are represented in the Ontology of Physical Attributes (OPA), which provides the values for the physical properties of organismal continuants. Namely, the states of these continuants can be specified in terms of specific ranges of attribute values.

The independent continuants that participate in a physiological or pathological process may play different roles in the process (e.g. as agent, co-factor, catalyst, etc.). Such a process may transform one state into another (for example a physiological state into another physiological state, or into a pathological state).

The class Organismal occurrent is subdivided into the classes Physiological process and Pathological process. A physiological process causes transformations of one physiological state
5. RADLEX TERMINOLOGY

de novo approach, would involve a series of deletions and additions of links (Figure 3, left) from the FMA reference ontology. For example, the is_a link of the class Anatomical structure is deleted from Material anatomical entity and then added directly to Anatomical entity. Both Physical anatomical entity and Material anatomical entity are then deleted from the FMA taxonomy. Besides that, FMA types representing microscopic entities which are not relevant to radiology, such as Cell, Cardinal cell part, Biological macromolecule and Cardinal tissue part, are also deleted from Anatomical structure. These operations can be carried out at all levels of the hierarchical tree.

The hierarchical trees of the OBR ontology class Pathological structure and of its subclasses Subdivision of pathological organ system, Subdivision of pathological skeletal system and Subdivision of pathological axial skeletal system, from which all subclasses not relevant for the pathological domain of the spine are deleted, are illustrated in Fig. 5, Fig. 6, Fig. 7 and Fig. 8.
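The link edits described above can be pictured on a toy child-to-parent map. The class names follow the text, but the taxonomy representation and the helper functions are hypothetical illustrations, not part of the FMA or RadLex tooling:

```python
# Toy is_a taxonomy as a child -> parent map (hypothetical sketch).
is_a = {
    "Physical anatomical entity": "Anatomical entity",
    "Material anatomical entity": "Physical anatomical entity",
    "Anatomical structure": "Material anatomical entity",
    "Cell": "Anatomical structure",
}

def move_link(taxonomy, cls, new_parent):
    """Delete the existing is_a link of cls and add one under new_parent."""
    taxonomy[cls] = new_parent

def delete_class(taxonomy, cls):
    """Remove a class from the taxonomy (its is_a link disappears with it;
    re-hanging any orphaned children is left out of this sketch)."""
    taxonomy.pop(cls)

# The is_a link of Anatomical structure is re-attached to Anatomical entity,
# then the intermediate classes and radiology-irrelevant types are dropped.
move_link(is_a, "Anatomical structure", "Anatomical entity")
delete_class(is_a, "Material anatomical entity")
delete_class(is_a, "Physical anatomical entity")
delete_class(is_a, "Cell")
```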
8. REFERENCES

[2] Online available at: http://www.loucnr.it/DOLCE.html

Ontologies for Bioinformatics: Principles and Practice, pp 59-117, New York: Springer.

[11] Online available at: http://www.rsna.org/radlex

[12] Langlotz CP 2006: RadLex: a new method for indexing online educational materials, Radiographics 26:1595-1597.

[13] Rubin DL 2007: Creating and curating a terminology for Radiology: Ontology Modeling and Analysis, J Digit Imaging.

[14] Marwede D, Fielding M and Kahn T 2007: RadiO: A Prototype Application Ontology for Radiology Reporting Tasks, Proc AMIA 2007, Chicago, IL, pp 513-517.

[15] FMA Online available at: http://fma.biostr.washington.edu