

Miloš Arsenijević, Žarko Mijajlović (eds.)

COLLECTED WORKS IN LOGIC OF


ALEKSANDAR KRON
On the Occasion of the 80th Anniversary of his Birth

Beograd 2017
SRPSKO
FILOZOFSKO
DRUŠTVO
COLLECTED WORKS IN LOGIC OF ALEKSANDAR KRON
On the occasion of the 80th anniversary of his birth

SABRANI SPISI IZ LOGIKE ALEKSANDRA KRONA


povodom 80 godina rođenja

Editors / Urednici
Miloš Arsenijević, Žarko Mijajlović

Publishers / Izdavači

The Serbian Philosophical Society


Srpsko filozofsko društvo
Studentski trg 1, Beograd, Srbija

For the publisher
Slobodan Kanjevac,
President of the Serbian Philosophical Society
Prepress
e-edition: Žarko Mijajlović

Printed by
Dijamantprint, Beograd, 254 pages, 24 cm

Print run
300

ISBN 978-86-81349-42-7
Miloš Arsenijević, Žarko Mijajlović (eds.)

COLLECTED WORKS IN LOGIC OF


ALEKSANDAR KRON
On the Occasion of the 80th Anniversary of his Birth

Beograd 2017
Aleksandar Kron
(1937 - 2000)
CONTENTS

Foreword 9
Miloš Arsenijević, Žarko Mijajlović

Uvodna reč 11
Miloš Arsenijević, Žarko Mijajlović

Aleksandar Kron 13
Miodrag Kapetanović

Aleksandar Kron in relevance logic 15


Slobodan Vujošević

Papers of Aleksandar Kron

A note on E 21
Notre Dame Journal of Formal Logic (NDJFAM)
Volume XIII, Number 3, July 1972, 424-426.

Deduction theorems for relevant logics 25


Zeitschr. f. math. Logik und Grundlagen d. Math.
Bd. 19, S. 85-92, 1973.

Preference and choice 31


Theory and Decision 6, 185-196, 1975.
(joint work with Veselin Milovanović)

Deduction theorems for T, E and R reconsidered 45


Zeitschr. f. math. Logik und Grundlagen d. Math.
Bd. 22, S. 261-264, 1976.

An analysis of causality 49
Manninen and Tuomela (eds.), Essays on Explanation and Understanding, 159-82, 1976.
D. Reidel Publishing Company, Dordrecht-Holland.

Logika i vreme 73
Kultura (36/37), 296-310 (1977)

Gentzen formulations of two positive relevance logics 89


Studia Logica XXXIX, 4, 381-403, 1980.

Entailment and quantum logic 113


(joint work with Zvonko Marić and Slobodan Vujošević)
Enrico G. Beltrametti and Bas C. van Fraassen (eds),
Current Issues in Quantum Logic, Plenum Press, N. Y., pp. 193-207, 1981.

A constructive proof of a theorem in relevance logic 129


Zeitschr. f. math. Logik und Grundlagen d. Math., Bd. 31, S. 423-430, 1985.

Four relevant Gentzen systems 137


(joint work with Steve Giambrone)
Studia Logica XLVI, 1, 55-71, 1987.
Temporal modalities and modal tense operators 155
Pavković (ed.), Contemporary Yugoslav Philosophy: The Analytic Approach, 175-183. 1988.
Kluwer Academic Publishers.

Decidability and interpolation for a first-order relevance logic 165


in P. Schroeder-Heister and K. Došen eds., Substructural Logics,
Oxford University Press Inc., New York, 153-177, 1993.

Identity and permutation 191


Publications de l'Institut Mathématique
Nouvelle série, tome 57 (71), 165-178, 1995.

Identity, permutation and binary trees 205


FILOMAT (Niš), Algebra, Logic & Discrete Mathematics, 9:3, 765-781, 1995.

The law of assertion and the rule of restricted permutation 223


Logical Investigations, Vol. V, 33-35, 1998.
Institute of Philosophy, Russian Academy of Sciences

Between TW and RW 227


Publications de l'Institut Mathématique
Nouvelle série, tome 63 (77), 9-20, 1998.

A semantics for the first quartet by T. S. Eliot 239


Algebra and Logic, Vol. 38, No. 4, 209-222, 1999.

Classifications, translations of titles into Serbian, and Internet addresses assigned to papers 253
FOREWORD

Eighty years have passed since the birth of the distinguished Serbian philosopher and logician Aleksandar Kron (1937-2000). On this occasion and in his honour, the Serbian Philosophical Society and the Mathematical Institute of the Serbian Academy of Sciences and Arts (SASA) decided to publish his collected papers on mathematical logic.

Professor Kron has a special place in Serbian philosophy and mathematical logic. He was the first to teach mathematical logic continuously at the Department of Philosophy, University of Belgrade, where he was Professor of Logic from 1967 until the end of his life. Educated as a philosopher at the University of Belgrade, he focused on mathematical logic very early. In the late 1960s and the beginning of the 1970s he established close connections with some of the world's leading logicians, first in Amsterdam, with E. W. Beth and A. Heyting, and then in Pittsburgh, with A. R. Anderson and N. D. Belnap. In Pittsburgh he acquired a lifelong interest in relevant logic, though his interests in logic remained rather broad, encompassing, in particular, quantum logic, the analysis of causality, the logic of decision making and temporal logic. All this is well illustrated by the papers collected in this book.

In the late 1960s the outstanding Serbian mathematician and logician Slaviša Prešić founded, with several other Belgrade mathematicians and philosophers, the Seminar for Mathematical Logic at the Mathematical Institute of SASA. Aleksandar Kron is remembered as one of the leading members of the Seminar. Outgoing and informal in direct communication, and still quite young in those days, he was embraced as a highly respected teacher by his not much younger colleagues. He created in the Seminar an enthusiastic atmosphere full of scientific curiosity and excitement. As a professor, Kron established several graduate courses and special seminars in mathematical logic, especially in proof theory and non-classical logics, which had a crucial impact on many later university professors, including Kosta Došen, Milan Božić, Zoran Marković, Slobodan Vujošević, Đorđe Vukomanović and both editors of this book. Practically every undergraduate or graduate student interested in mathematical logic was influenced by him, even if he was not their formal adviser. His influence was so strong that we believe some of us from this generation would not have become logicians had we not met him. He did not leave direct successors in the areas of logic he worked in, but in some sense very many of us from this generation are his followers.

Kron founded and was the first President of the Society for Pure and Applied Logic in Belgrade. He died relatively young in Debrecen, where he was Professor of Logic in Computer Science at the Lajos Kossuth University, on June 25, 2000, at the age of 62, still in full stride of his remarkable career.
This book contains all Kron's papers in mathematical logic. Papers on relevance logic make up the most important part of this collection. We remind the interested reader that relevance logic, also called relevant logic, is a family of non-classical logics requiring the antecedent and consequent of an implication to be relevantly related. Relevance logic thus aims to capture aspects of implication ignored by the material implication of classical logic, in particular the requirement of a relevant connection between the antecedent and consequent of a true implication. Logics of this type are usually counted among the substructural or modal logics.
Thanks to Miloš Adžić, who helped us in collecting the articles, the reader will have an opportunity to read these works in their original form, as this book is a phototype edition. The book also contains a biographical note on Aleksandar Kron written by Miodrag Kapetanović, as well as an overview by Slobodan Vujošević of relevance logic, the subject of most of Kron's papers. The papers are arranged in the order in which they were originally published.

Editors

Beograd, 2017

UVODNA REČ

This year marks the eightieth anniversary of the birth of Aleksandar Kron (1937-2000), an outstanding Serbian philosopher and logician. On this occasion, the Serbian Philosophical Society and the Mathematical Institute of SASA are publishing this collection of his papers in mathematical logic.

In many respects Professor Kron occupies a special place in Serbian logic. He was the first to teach mathematical logic continuously, as a separate subject, at the Department of Philosophy of the Faculty of Philosophy in Belgrade, from 1967 until his sudden death in 2000. Having completed his studies in philosophy at the Faculty of Philosophy in Belgrade, he turned to mathematical logic very early. In the late 1960s and early 1970s he established close contacts with leading world logicians, first in Amsterdam, with E. W. Beth and A. Heyting, and then in Pittsburgh, with A. R. Anderson and N. D. Belnap. During his one-year stay in Pittsburgh he developed a lasting interest in relevance logic, although the range of his interests in logic remained very broad, encompassing quantum logic, the analysis of causality, decision theory and temporal logic, all of which is illustrated by the papers collected in this book.

In the late 1960s the distinguished Serbian mathematician Slaviša Prešić founded, together with several fellow mathematicians and philosophers, the Seminar for Mathematical Logic at the Mathematical Institute of SASA, in which Aleksandar Kron is remembered as one of the leading participants. Open and informal in direct communication, as he always was, and quickly embraced as a teacher by his not much younger colleagues, Kron knew how to create an atmosphere full of curiosity and enthusiasm. As a professor, he shaped several graduate courses and seminars in mathematical logic, above all in proof theory and non-classical logics, which was of crucial importance for future logicians and professors in this field, including Kosta Došen, Milan Božić, Zoran Marković, Slobodan Vujošević, Đorđe Vukomanović and both editors of this book. Practically every student interested in mathematical logic, even if Kron was not his supervisor, came under his influence. That influence was such that many of us would not have taken up logic, or at least not in the way we do, had we not met Kron. Although he left no direct successor in the field he was primarily interested in, in a certain sense many of us may be regarded as his followers.

Kron was responsible for founding the Society for Pure and Applied Logic, of which he was also the first president. He died suddenly, at the height of his career, on June 25, 2000, in his sixty-third year, in Debrecen, where he was staying as Professor of Logic at the Department of Computer Science of the Lajos Kossuth University.

This book contains all of Kron's scientific papers in mathematical logic. The most significant part consists of the papers on relevance logic, which belongs to the non-classical, substructural logics and whose most characteristic feature is the requirement that, unlike material implication, there be a relevant and not merely a truth-functional connection between antecedent and consequent.

Thanks to the phototype edition of the texts, for whose collection we are grateful to Miloš Adžić, the reader will be able to read them in their original form. The book also contains a biographical note written by Miodrag Kapetanović, as well as a broader overview of relevance logic by Slobodan Vujošević, which deals with the subject of most of Kron's papers. All the articles are arranged chronologically, by year of publication.

Miloš Arsenijević and Žarko Mijajlović, editors

Beograd 2017

ALEKSANDAR KRON

Miodrag Kapetanović

Aleksandar Kron was born in 1937 in Vršac, a small town in Vojvodina (an autonomous region of Serbia). Kron finished grammar school in Zrenjanin in 1956, came to Belgrade to study philosophy, graduated in 1960 and became an assistant at the Department of Philosophy. He received his Ph.D. in philosophy in 1965 and was appointed assistant professor of logic two years later. Kron taught mathematical logic at the Department continuously until 1999 (eventually as a full professor), when he left for Debrecen to succeed the late A. Dragalin. He died there in the summer of 2000 of a heart attack. As a visiting scholar Kron stayed in Oxford, Berkeley, Bloomington and elsewhere, and was most welcome all over the former Yugoslavia as well. Besides logic, Kron also taught philosophy and methodology of science. His pedagogical work is clearly reflected in the university textbooks Logic and Elementary Set Theory, and he is a coauthor of a textbook in logic for grammar schools.

Although well known as a philosopher and a teacher of modern logic in particular, Kron made his name in the field of mathematical logic. His Ph.D. thesis deals with connections between many-valued logic and probability theory. He spent the 1963/64 academic year at the University of Amsterdam, supervised by E. W. Beth and A. Heyting, and learned about intuitionism, among other things. An important event in his intellectual life was the visit to the University of Pittsburgh (as a postdoctoral fellow) in 1970/71, where he met A. R. Anderson and N. Belnap, the inventors of relevant logics, systems that lack the structural rule of weakening. Kron appreciated the significance of these logics, and the result was a number of papers over the years containing original ideas and techniques on the subject. On the other hand, he studied properties of Grishin logic, a predicate logic lacking another structural rule, called contraction. All this can be seen as an anticipation of the systematic study of substructural logics such as linear logic.

Another subject of interest for Kron was quantum logic, the study of logical systems related to quantum mechanics and, in particular, the problem of its axiomatization. His papers in decision theory, preference and choice, the analysis of causality, the logic of probability and the philosophy of science should also be mentioned, especially because they clearly illustrate Kron's intellectual curiosity and wide range of interests.

Although working at the Faculty of Philosophy, Kron was very much engaged with the Mathematical Institute in Belgrade. He was an active member of the Seminar for Mathematical Logic (founded in the sixties by S. B. Prešić) from the very beginning. In 1989 Kron became the chairman of the seminar and led it (with Đ. Vukomanović) until his death in 2000. He was the president of the Serbian Philosophical Society and the founder of the Association for Pure and Applied Logic of Serbia.

Kron married three times and is survived by his daughter Ana from his first marriage. He possessed a superb collection of classical music and was an admirer of J. S. Bach. He also expressed himself as a writer: a fine collection of his short stories, Triumphal Arch of Death, appeared in 1996. A communist in his youth, Kron ended as an active opponent of the regime of Slobodan Milošević.

The author is much indebted to Djordje Vukomanović for providing relevant articles and data.

ALEKSANDAR KRON IN RELEVANCE LOGIC
Slobodan Vujošević

The most important event in the career of Aleksandar Kron (1937–2000), logician and professor of philosophy at the University of Belgrade, was the visit to the University of Pittsburgh in the early 1970s. There he met Alan Ross Anderson and Nuel Belnap at the time when their "Entailment: The Logic of Relevance and Necessity" was in preparation, so Kron was present while relevance logic was in full growth. In Pittsburgh he entered the subject and remained in it up to the end of his life. This can be seen not only in the number of his papers in relevance logic, but also in the other papers across his wide range of interests: the idea of relevance is incorporated in his works in quantum logic, the analysis of causality and decision theory.

Relevance logic was invented in order to avoid the paradoxes of material implication, such as p → (q → p), and the paradoxes of strict implication, such as p → (q ∨ ¬q). Modal logic, and in particular the notion of strict implication, arose on the same grounds. What is wrong with these paradoxes is that the antecedent and consequent of a true implication can have completely different contents. Since a logical system has nothing to do with the notion of content, it is necessary to find the formal aspects of a true implication that are ignored by material and strict implication. The formal principle of relevance logic that forces theorems and inferences to preserve content is the variable sharing principle. It says that no implication can be proved in a propositional relevance logic if its antecedent and consequent do not have at least one propositional variable in common. Likewise, no inference can be shown valid if the premises and conclusion do not share at least one propositional variable.
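The condition itself is a simple syntactic check. A minimal sketch of it in Python (an illustration added here, not taken from Kron's papers; the tuple representation of formulas is assumed for the example):

    # Sketch: checking the variable sharing condition for an implication A -> B.
    # Formulas are nested tuples such as
    # ('->', ('var', 'p'), ('or', ('var', 'q'), ('not', ('var', 'q')))).

    def variables(formula):
        """Collect the propositional variables occurring in a formula."""
        if formula[0] == 'var':
            return {formula[1]}
        collected = set()
        for sub in formula[1:]:
            collected |= variables(sub)
        return collected

    def shares_variable(antecedent, consequent):
        """Necessary condition for A -> B to be a relevance-logic theorem."""
        return bool(variables(antecedent) & variables(consequent))

    p, q = ('var', 'p'), ('var', 'q')
    # p -> (q -> p) shares p, so variable sharing alone does not rule it out
    print(shares_variable(p, ('->', q, p)))            # True
    # p -> (q v ~q) shares nothing, so no relevance logic proves it
    print(shares_variable(p, ('or', q, ('not', q))))   # False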

The implementation of variable sharing in a formal system requires a radical departure from the semantics of classical logic. Moreover, this principle is only a necessary, not a sufficient, condition for a formal system to be understood as a relevance logic: it does not eliminate all of the paradoxes of material and strict implication. So relevance logic goes a different way and provides a notion of relevant proof in terms of the real use of premises.
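To illustrate: the positive paradox p → (q → p) passes the variable sharing test, yet it has no relevant proof, for to obtain q → p one would assume q and derive p, and since the hypothesis q is never used in deriving p, the implication introduction step is blocked in a relevant natural deduction system.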

Relevance logics are proof-theoretically oriented, so it is easier to present their syntax in the form of natural deduction systems or sequent calculi than in the form of Hilbert-style systems. Relevance can be ensured by imposing certain restrictions on the rules of a natural deduction system or by removing the rules that allow the introduction of arbitrary formulae on the right or on the left side of a sequent in the Gentzen calculus. The central systems of relevance logic are the logic R of relevant implication and the logic E of relevant entailment [Anderson, Belnap, 1975]. The logic R had been known long before the work of Anderson and Belnap, but they gave it its definite form and invented the logic E, together with a number of other relevance logics close to both R and E. Among the theorems provable in the logic R are, for example:

Identity: A → A,
Suffixing: (A → B) → ((B → C ) → (A → C )),
Assertion: A → ((A → B) → B),
Contraction: (A → (A → B)) → (A → B),
Distribution: (A ∧(B ∨C )) → ((A ∧B) ∨(A ∧C )),
Contraposition: (A → ¬B) → (B → ¬A),
Double negation: ¬¬A → A.
In the logic E of relevant entailment the implication was supposed to be a strict relevant implication. To achieve that, Anderson and Belnap defined the logic E so that in it one can prove the theorem of
Entailment: ((A → A) → B) → B.
Assertion is not provable in E, so R and E are different logics. If we present them as Hilbert-style systems, with modus ponens A → B, A ├─ B and adjunction A, B ├─ A ∧ B as rules, then the logic E is the logic R minus assertion plus entailment. The logics R and E satisfy variable sharing and the conditions on the real use of premises, so their implications are relevant. If we add the necessity operator □ to R, together with the axioms □(A → B) → (□A → □B) and (□A ∧ □B) → □(A ∧ B), and the necessitation rule A ├─ □A, we obtain the modal logic of strict relevant implication NR. But the logic NR is different from E since, under a natural translation, there are theorems of NR that are not theorems of E, so there are at least two logics that are candidates for the proper logic of strict relevant implication.
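A short derivation, added here for illustration, shows why Entailment holds in R:
1. A → A                                   (Identity)
2. (A → A) → (((A → A) → B) → B)           (Assertion, with A → A in place of A)
3. ((A → A) → B) → B                       (1, 2, modus ponens)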

Among the problems with the logics R and E, we shall first mention the problem of the deduction theorem. After a long discussion in their Entailment, Anderson and Belnap concluded that for the logics E and T (a relevance logic different from R and E) there is no deduction theorem in the standard form. Kron [1973, 1976] found what is known as the best form of the deduction theorem for the relevance logics T, E and R. The deduction theorem is crucial for the implication of every logical system, so this is one of Kron's most notable achievements in relevance logic. Charles Kielkopf [1977] named certain derivations Kron derivations, and the corresponding deduction theorems for relevance logics Kron deduction theorems.
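For reference, the form established in the 1973 paper reprinted below reads: if there is a proof of a wff B from hypotheses A_1, ..., A_n in E, P or R, in the restricted sense defined there, then A_1 →. A_2 →. ... →. A_n → B is a theorem of E, P or R respectively (the dots abbreviate parentheses, with association to the left).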

Another unusual property of the propositional logics R and E is that they are undecidable [Urquhart 1984]. These facts motivated Kron's investigation of subsystems of R and E, such as systems without distribution, without contraction or without involutive negation. Since the axiom of distribution is behind the failure of decidability of R and E, and because of the natural proof-theoretical way in which relevant distributionless logics arise, Došen [1993] believes that the central relevant logic should be R minus distribution, and that its modal extension, NR minus distribution and without the axiom (□A ∧ □B) → □(A ∧ B), is a better candidate for the logic of strict relevant implication than the logic E. But there are semantical arguments in favor of distributivity: the semantics evaluates sentences as true or false in each world and treats conjunction and disjunction extensionally [Belnap 1993]. Kron believed in distributivity and tried to overcome the undecidability of relevance logic by removing the contraction axiom. The presence of contraction corresponds to allowing premises to be used more than once. In the systems of natural deduction with subscripts for relevance logic, subscripts serve to control the relevant use of premises and are finite sets. In the absence of contraction, subscripts also serve to count the number of occurrences of premises, so they become finite multisets. These systems are not at all easy, and Kron was careful and patient enough to develop and improve specific subscript techniques for Gentzen formulations of several relevant logics in the neighborhood of the positive fragments of the logics R and T without contraction [Kron 1978, 1980; Giambrone and Kron 1987]. For these systems Kron proved cut elimination theorems and obtained decision procedures.
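The difference is easy to picture as bookkeeping. A sketch in Python, added here with invented helper names and not Kron's own formulation: with contraction the subscripts combine by set union, so repeated uses of a premise collapse, while without contraction they combine as multisets and every use is counted.

    # Sketch (assumed representation): premises are tracked by indices.
    from collections import Counter

    def combine_with_contraction(s1, s2):
        # set union: using a premise twice leaves no trace
        return set(s1) | set(s2)

    def combine_without_contraction(m1, m2):
        # multiset sum: the number of occurrences of each premise is counted
        return Counter(m1) + Counter(m2)

    print(combine_with_contraction({1}, {1, 2}))               # {1, 2}
    print(combine_without_contraction({1: 1}, {1: 1, 2: 1}))   # Counter({1: 2, 2: 1})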

Kron used similar subscript techniques to solve the identity problem for the "weakest relevance logic", posed by Belnap in the early 1960s [Kron 1985]. The logic is formulated in a language that contains propositional variables, parentheses and implication, has modus ponens as a rule, and has the following axioms

Prefixing: (B → C) → ((A → B) → (A → C)),

Suffixing: (A → B) → ((B → C) → (A → C)).

Belnap asked for a proof that no instance of the identity axiom A → A is a theorem of this logic or, to put it differently, that if A → B and B → A are provable in the logic with identity, prefixing and suffixing, then A and B are the same formula. The problem remained open for about twenty years and was solved in 1983 by R. K. Meyer and E. Martin, using algebraic and semantical methods. Kron solved the problem constructively, in a sequent system that contains this weakest relevance logic.

Decidability is not the only argument in favor of weaker relevance logics. There is another feature that makes relevance logic without contraction attractive: in the first-order predicate version of this logic one can formulate a consistent naive set theory (with the unrestricted comprehension axiom). The motivation came from what is known as Grishin's logic, the first-order predicate calculus without the contraction axiom [Grishin, 1974]. Grishin's logic is decidable and distributivity is not provable in it. Kron defined a relevance variant of Grishin's logic. This variant is close to quantificational R without contraction and, in contrast to Grishin's logic, it is distributive; Kron was not willing to give up distributivity. The logic has a Gentzen-style formulation and it is a decidable first-order logic with the Craig-Lyndon interpolation property [Kron 1993].
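Why contraction is the critical rule here can be seen from Curry's paradox, recalled for context: with unrestricted comprehension one forms C = {x : x ∈ x → p}; then C ∈ C is equivalent to C ∈ C → p, so assuming C ∈ C one obtains C ∈ C → p and hence p, and a single application of contraction on the doubled use of that hypothesis turns this into a categorical proof of p, for arbitrary p. Without contraction this argument breaks down, which is what leaves room for a consistent naive set theory over such logics.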

The semantics of relevance logic is a Kripke-style semantics, but with a ternary accessibility relation R between worlds and with the semantic clause for implication:
A → B holds in the world a if and only if, for all worlds b and c,
if R(a, b, c) and A holds at b, then B holds at c.
But the use of the ternary relation is not sufficient to avoid all the paradoxes of strict implication. This brings us to the non-classical clause for negation. It requires an operator * on worlds such that
¬A is true in a if and only if A is false in the world a*.
Once again, it is difficult to interpret this quite formal clause.
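A minimal model of this kind can be written down directly. The sketch below is added here for illustration; the worlds, the relation R and the valuation are invented for the example.

    # Sketch: evaluating implication in a ternary-relation model.
    R = {('a', 'b', 'c')}                 # the single triple R(a, b, c)
    holds = {('p', 'b'), ('q', 'c')}      # atomic valuation: p at b, q at c

    def true_at(formula, world):
        op = formula[0]
        if op == 'var':
            return (formula[1], world) in holds
        if op == '->':
            _, antecedent, consequent = formula
            # A -> B holds at a iff for all b, c with R(a, b, c):
            # A at b implies B at c.
            return all(not true_at(antecedent, y) or true_at(consequent, z)
                       for (x, y, z) in R if x == world)
        raise ValueError('connective not covered in this sketch')

    print(true_at(('->', ('var', 'p'), ('var', 'q')), 'a'))  # True
    print(true_at(('->', ('var', 'p'), ('var', 'p')), 'a'))  # False: p at b, not at c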

Kron did not study semantics himself, but he stimulated investigation in this direction. Kosta Došen and Milan Božić worked with a ternary relation R such that R(a, b, c) means a·b ≤ c, where · is a binary operation on worlds that corresponds to intensional conjunction and ≤ is a partial ordering on worlds, so that the semantic clause for implication becomes:
A → B holds at the world b if and only if, for every world a,
if A holds at a, then B holds at the world a · b.

Using this interpretation of relevant implication, Božić [1983] gave a transparent classification of the semantics of relevance logic, promising for a model theory of this logic.

References

Anderson, Alan Ross and Belnap, Nuel D. Jr


1975 Entailment: The Logic of Relevance and Necessity, Vol. I, Princeton University Press.
Belnap, Nuel D. Jr
1993 Life in the undistributed middle, in P. Schroeder-Heister and K. Došen eds.,
Substructural Logics, Oxford University Press Inc., New York.
Božić, Milan
1983 Contribution to the semantics of relevance logics, Doctoral thesis. University of
Belgrade, Belgrade.
Došen, Kosta
1993 A historical introduction to substructural logics, in P. Schroeder-Heister and K.
Došen eds., Substructural Logics, Oxford University Press Inc., New York.
Giambrone, Steve and Kron, Aleksandar
1987 Four relevant Gentzen systems, Studia Logica, Vol. 46, No. 1, pp. 55–71.
Grishin, Vyacheslav Nikolaevich
1974 A nonstandard logic and its application to set theory, in Studies in Formalized
Languages and Nonclassical Logics (in Russian), Nauka, Moscow, pp. 135–171.
Kielkopf, Charles
1977 Formal Sentential Entailment, University Press of America.
Kron, Aleksandar
1973 Deduction theorems for relevant logics, Zeitschrift für Mathematische Logik
und Grundlagen der Mathematik, Bd. 19, Heft 1, pp. 85–92.
1976 Deduction theorems for T, E and R reconsidered, Zeitschrift für Mathematische Logik
und Grundlagen der Mathematik, Bd.22, pp. 261–264.
1978 Decision procedure for two positive relevance logics, Reports on Mathematical Logic,
Vol. 10, pp. 61–78.
1980 Gentzen formulations of two positive relevance logics, Studia Logica, Vol. 39, No 4,
pp. 381–403.
1985 A constructive proof of a theorem in relevance logic, Zeitschrift für Mathematische
Logik und Grundlagen der Mathematik, Bd. 31, pp. 423–430.
1993 Decidability and interpolation for a first-order relevance logic, in P. Schroeder-
Heister and K. Došen eds., Substructural Logics, Oxford University Press Inc., N.Y.
Urquhart, Alasdair
1984 The undecidability of entailment and relevant implication, The Journal of Symbolic
Logic, 49, pp. 1059–1073.

COLLECTED WORKS IN LOGIC OF
ALEKSANDAR KRON

Notre Dame Journal of Formal Logic
Volume XIII, Number 3, July 1972
NDJFAM

A NOTE ON E

ALEKSANDAR KRON

Since there is no characteristic matrix for E so far, there is no possibility of investigating whether E has the finite model property in the sense of [1]. The aim of this note is to prove that for any wff D of E there is a finite set of wffs having properties similar to some properties of a finite model.

I shall suppose that E is formulated as in [2] or [3], but I shall write ¬ for negation instead of ~. Let X_1, X_2, ... be the sequence of all finite non-empty sets of wffs of E. If X_i = {A_1, ..., A_n}, i = 1, 2, ..., then X̄_i shall denote the wff A_1 & ... & A_n. Let us write X̄ instead of X̄_i. X will be called consistent iff ⊬_E ¬X̄; X is inconsistent iff ⊢_E ¬X̄. Clearly, if X is consistent, then for no wff B, ⊢_E X̄ → B & ¬B.

Lemma 1. For any X, B and C, if X is consistent and ⊢_E X̄ → B ∨ C, then either X ∪ {B} or X ∪ {C} is consistent.

Proof. Suppose that the contrary is the case. Then we have both ⊢_E ¬(X̄ & B) and ⊢_E ¬(X̄ & C). By adjunction we obtain ⊢_E ¬(X̄ & B) & ¬(X̄ & C) and thus ⊢_E ¬(X̄ & B ∨ X̄ & C). But then we easily derive ⊢_E ¬(X̄ & (B ∨ C)) and ⊢_E ¬X̄ ∨ ¬(B ∨ C). Since ⊢_E X̄ → B ∨ C, we have ⊢_E ¬(B ∨ C) → ¬X̄. Therefore, ⊢_E ¬X̄, contrary to the assumption of the lemma.

Lemma 2. For all X, B, C and D, if ⊢_E X̄ → B ∨ C and ⊬_E X̄ → D, then either ⊬_E X̄ & B → D or ⊬_E X̄ & C → D.

Proof. Suppose that both ⊢_E X̄ & B → D and ⊢_E X̄ & C → D. We first easily obtain ⊢_E (X̄ & B) ∨ (X̄ & C) → D and then ⊢_E X̄ & (B ∨ C) → D. Since ⊢_E X̄ → B ∨ C, we have ⊢_E X̄ → X̄ & (B ∨ C) and thus ⊢_E X̄ → D, contrary to the hypothesis of the lemma.

Let D be an arbitrary wff of E, let P⁺(D) be the set of all subformulae of D, let P⁻(D) be the set of all negations of the wffs of P⁺(D) and let P(D) = P⁺(D) ∪ P⁻(D). Furthermore, let X(D) = {C_j ∨ ¬C_j : C_j ∈ P⁺(D)}, 1 ≤ j ≤ r, where r is the number of subformulae of D. In the sequel I shall consider only the members Y_1, Y_2, ... of the sequence X_1, X_2, ... satisfying the following two conditions:

Received June 26, 1971


(1) X(D) ⊆ Y_k,
(2) Y_k ⊆ X(D) ∪ P(D),

1 ≤ k ≤ 2²ʳ. If Y_m ⊆ Y_n, then Y_n is called an extension of Y_m. Thus, every Y_k is an extension of X(D). Let us write Y instead of Y_k, 1 ≤ k ≤ 2²ʳ, and C instead of C_j, 1 ≤ j ≤ 2r, and let us introduce Y′, Y″, Z, etc., for the same purpose.

A set Y will be called D-normal iff it is consistent and for every C ∈ P⁺(D) either C ∈ Y or ¬C ∈ Y.

Lemma 3. For any consistent Y there is a D-normal extension Z.

Proof. Since X(D) ⊆ Y, we have ⊢_E Ȳ → C ∨ ¬C, for all C ∈ P⁺(D). By Lemma 1, either Y′ = Y ∪ {C} or Y″ = Y ∪ {¬C} is consistent. Since X(D) is finite, repeating the same argument we can show that there is a D-normal extension Z of Y.

I shall note that the preceding lemma states only the existence of a D-normal extension Z of Y; it does not provide a construction of Z given Y.

Let M_D be the set of all D-normal extensions of X(D). Obviously, M_D is not empty. Let us say that C ∈ P(D) is valid in M_D iff C ∈ Y for all Y ∈ M_D; it is refutable in M_D iff there is a Y such that C ∉ Y.

Lemma 4. For all C ∈ P(D), if ⊬_E C, then C is refutable in M_D.

Proof. If ⊬_E C, then ⊬_E X̄(D) → C. But ⊢_E X̄(D) → C ∨ ¬C. Therefore, by Lemma 2, ⊬_E X̄(D) & ¬C → C. I have to show that X(D) ∪ {¬C} is consistent. Suppose that the contrary is the case. Then ⊢_E ¬X̄(D) ∨ ¬¬C and by the rule γ (see [4]), since ⊢_E X̄(D), we have ⊢_E ¬¬C and thus ⊢_E C, contrary to the hypothesis that ⊬_E C. By Lemma 3 there is a D-normal extension of X(D) ∪ {¬C}. Therefore, there is a Y ∈ M_D such that C ∉ Y, and C is thus refutable in M_D.

Corollary. If C ∈ P(D) is valid in M_D, then ⊢_E C.

Lemma 5. For all C ∈ P(D), if ⊢_E C, then C is valid in M_D.

Proof. Suppose that C is not valid in M_D. Then there is a Y ∈ M_D such that ¬C ∈ Y. Obviously, ⊢_E Ȳ → ¬C. But ⊢_E Ȳ → ¬C →. C → ¬Ȳ and thus ⊢_E C → ¬Ȳ. Now if ⊢_E C, we have ⊢_E ¬Ȳ and Y is inconsistent, which is impossible, since Y ∈ M_D. Therefore, ⊬_E C, and this proves the lemma.
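To see the construction at work on the smallest possible case (an illustration added to this edition, not part of Kron's note): take D to be the propositional variable p. Then P⁺(D) = {p}, X(D) = {p ∨ ¬p}, and the D-normal extensions of X(D) are Y′ = {p ∨ ¬p, p} and Y″ = {p ∨ ¬p, ¬p}; both are consistent, so M_D = {Y′, Y″}. Neither p nor ¬p is valid in M_D, in agreement with Lemma 4, since neither is a theorem of E, while p ∨ ¬p, which is provable, belongs to every member of M_D by construction.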

REFERENCES

[1] Harrop, R., "On the existence of finite models and decision procedures for propositional calculi," Proceedings of the Cambridge Philosophical Society, vol. 54 (1958), pp. 1-13.
[2] Anderson, A. R., and N. D. Belnap, Jr., Entailment, to appear.
[3] Belnap, N. D., Jr., "Intensional models for first degree formulas," The Journal of Symbolic Logic, vol. 32 (1967), pp. 1-22.
[4] Meyer, R. K., and J. M. Dunn, "E, R and γ," The Journal of Symbolic Logic, vol. 34 (1969), pp. 460-474.

University of Beograd
Beograd, Yugoslavia

Zeitschr. f. math. Logik und Grundlagen d. Math.
Bd. 19, S. 85-92 (1973)

DEDUCTION THEOREMS FOR RELEVANT LOGICS

by Aleksandar Kron in Belgrade (Yugoslavia)

In this paper we define adequate concepts of a proof of a wff B from hypotheses A_1, ..., A_n in systems E, P and R of relevant logic, and we prove corresponding deduction theorems in the following form: if there is a proof of a wff B from hypotheses A_1, ..., A_n (in E, P or R), then A_1 →. A_2 →. ... →. A_n → B is a theorem (of E, P or R respectively). Such a deduction theorem has been proved for R (cf. [4] and [5]) but not for E or P.

1. Let us start with E. We shall assume that E is given axiomatically as in [2] or [3]. In writing wffs of E, P or R we shall omit parentheses and use dots; in restoring omitted parentheses association shall be to the left.

A proof of B from hypotheses A_1, ..., A_n is a list of wffs B_1, ..., B_m, where B is B_m and, for any 1 ≤ i ≤ m, B_i is one of n (not necessarily distinct) hypotheses A_1, ..., A_n or else an axiom of E or else a consequence of predecessors by adjunction or modus ponens, such that conditions (i), (ii), (iii) and (iv) are satisfied.

(i) Finite (ordered) t-tuples of numerals (b_1, ..., b_t) may be prefixed to the steps B_1, ..., B_m of the proof so as to satisfy the following rules:
(a) if B_i is the l-th hypothesis in the list A_1, ..., A_n, 1 ≤ l ≤ n, then (l) is prefixed to B_i;
(b) if B_i is an axiom which is not a hypothesis, then the 0-tuple ( ) is prefixed to B_i;
(c) if B_i is a consequence of B_j and B_k by adjunction and (c_1, ..., c_r) is prefixed both to B_j and B_k, then (c_1, ..., c_r) is prefixed to B_i as well;
(d) if B_i is a consequence of B_j and B_k by modus ponens and (d_1, ..., d_p) and (h_1, ..., h_q) are prefixed to B_j and B_k respectively, then (c_1, ..., c_r) is prefixed to B_i, where {c_1, ..., c_r} = {d_1, ..., d_p} ∪ {h_1, ..., h_q} and c_1 < c_2 < ... < c_r.

(ii) If (a_1), ..., (a_n) are prefixed to A_1, ..., A_n respectively, then (a_1, ..., a_n) is prefixed to B_m.

(iii) The application of adjunction is not permitted in the derivation of B_i from B_j and B_k if B_j and B_k have different t-tuples prefixed.

(iv) Suppose that B_j is B_k → B_i and that (d_1, ..., d_p) and (h_1, ..., h_q) are prefixed to B_j and B_k respectively. The application of modus ponens is not permitted if the following four conditions are satisfied:
(e) 0 < p, (f) 0 < q, (g) h_q < d_p, (h) B_k is not of the form F → G.

If there is a proof of B from hypotheses A_1, ..., A_n, then we write A_1, ..., A_n ⊢_E B. If n = 0, B_1, ..., B_m is a proof of B in the ordinary sense.
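The prefixing conditions are pure bookkeeping over tuples of hypothesis indices. The following sketch, added here for illustration with invented function names, mirrors rules (c) and (d):

    # Sketch of the tuple bookkeeping in conditions (i)(c) and (i)(d).
    # Prefixes are tuples of hypothesis indices in increasing order.

    def prefix_adjunction(t1, t2):
        # rule (c) together with condition (iii): adjunction is allowed
        # only when both premises carry the same prefix
        if t1 != t2:
            raise ValueError('adjunction not permitted: different prefixes')
        return t1

    def prefix_modus_ponens(d, h):
        # rule (d): the conclusion gets the ordered union of the two prefixes
        return tuple(sorted(set(d) | set(h)))

    print(prefix_modus_ponens((1, 3), (2, 3)))   # (1, 2, 3)
    print(prefix_adjunction((1, 2), (1, 2)))     # (1, 2)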
In the sequel we shall need the following lemma, the proof of which is by an easy induction and is therefore left to the reader.

Lemma 1.1. Let B_i be a wff occurring in a proof of B from hypotheses A_1, ..., A_n and let (c_1, ..., c_r) be prefixed to B_i, r ≥ 1; then (1.1.1) for each c_w, 1 ≤ w ≤ r, there is a unique hypothesis A_l in the list A_1, ..., A_n such that (c_w) is prefixed to A_l; and (1.1.2) c_1 < c_2 < ... < c_r.

Now we shall prove

Theorem 1.2. Let A_1, ..., A_n ⊢_E B and let B_i be a wff occurring in the given proof of B from A_1, ..., A_n. If (c_1, ..., c_r) is prefixed to B_i, r ≥ 1, and (c_1), ..., (c_r) are prefixed respectively to C_1, ..., C_r occurring among A_1, ..., A_n, then ⊢_E C_1 →. C_2 → ... →. C_r → B_i; if r = 0, then ⊢_E B_i.
Proof. We shall proceed by induction on i. Suppose that the theorem holds for all wffs preceding B_i in the given proof B_1, ..., B_m and let us consider B_i, 1 ≤ i ≤ m.

(0) If r = 0, the 0-tuple ( ) is prefixed to B_i. Hence, either B_i is an axiom of E and thus ⊢_E B_i, or it is obtained by a rule of inference from the preceding wffs of the proof having the prefix ( ). By the induction hypothesis these wffs are theorems of E; hence, so is B_i.

Suppose that r ≥ 1. We can distinguish several cases.

(1) B_i is the l-th hypothesis among A_1, ..., A_n. Then the theorem follows since ⊢_E B_i → B_i.

(2) If B_i is a consequence of B_j and B_k by adjunction and (c_1, ..., c_r) is prefixed to B_i, then by (iii) (c_1, ..., c_r) is prefixed both to B_j and B_k. By the induction hypothesis and (1.1.1) (the uniqueness condition), if (c_1), ..., (c_r) are prefixed respectively to C_1, ..., C_r occurring among A_1, ..., A_n, then ⊢_E C_1 →. C_2 → ... →. C_r → B_j and ⊢_E C_1 →. C_2 → ... →. C_r → B_k. It is a matter of routine to prove that ⊢_E C_1 →. C_2 → ... →. C_r →. B_j & B_k.

(3) If B_i is a consequence of B_j and B_k by modus ponens and B_j is B_k → B_i, then we can distinguish four subcases:
(3e) p = 0, (3f) q = 0, (3g) h_q ≥ d_p, (3h) B_k is of the form F → G.

(3e) Since p = 0, ( ) is prefixed to B_j and by (0), ⊢_E B_j. By (d) of (i), {c_1, ..., c_r} = {h_1, ..., h_q} and by (1.1.2) both c_1 < c_2 < ... < c_r and h_1 < h_2 < ... < h_q; hence (c_1, ..., c_r) = (h_1, ..., h_q). Suppose that (c_1), ..., (c_r) are prefixed respectively to C_1, ..., C_r among A_1, ..., A_n. By the induction hypothesis and (1.1.1), ⊢_E C_1 →. C_2 → ... →. C_r → B_k. Using transitivity (prefixing) (cf. [1], p. 42, or [2], § 4.2), it is easy to prove that ⊢_E C_1 →. C_2 → ... →. C_r → B_i.

(3f) Since q = 0, ( ) is prefixed to B_k. By (d) of (i), {c_1, ..., c_r} = {d_1, ..., d_p} and by (1.1.2) both c_1 < c_2 < ... < c_r and d_1 < d_2 < ... < d_p; hence (c_1, ..., c_r) = (d_1, ..., d_p). Suppose that (c_1), ..., (c_r) are prefixed respectively to C_1, ..., C_r occurring among A_1, ..., A_n. By the induction hypothesis and (1.1.1),
⊢_E C_1 →. C_2 → ... →. C_r →. B_k → B_i.
On the other hand, ⊢_E B_k → B_i →. B_k → B_i and ⊢_E B_k by (0). Hence, by the rule (δ), which is provable in E (cf. [1], p. 39), ⊢_E B_k → B_i → B_i. Using transitivity (prefixing) it is easy to prove ⊢_E C_1 →. C_2 → ... →. C_r → B_i.

(3g) Without loss of generality we may assume that p > 0 and q > 0. Suppose that (d_1, ..., d_p), (h_1, ..., h_q) and (c_1, ..., c_r) are prefixed to B_j, B_k and B_i respectively. We know that {c_1, ..., c_r} = {d_1, ..., d_p} ∪ {h_1, ..., h_q}, c_1 < c_2 < ... < c_r, d_1 < d_2 < ... < d_p and h_1 < h_2 < ... < h_q. Suppose furthermore that (d_1), ..., (d_p), (h_1), ..., (h_q), (c_1), ..., (c_r) are prefixed to D_1, ..., D_p, H_1, ..., H_q, C_1, ..., C_r respectively, occurring among A_1, ..., A_n. By (1.1.1), if c_w = d_u, then C_w = D_u, and if c_w = h_v, then C_w = H_v, 1 ≤ u ≤ p, 1 ≤ v ≤ q, 1 ≤ w ≤ r. By the induction hypothesis,
⊢_E D_1 →. D_2 → ... →. D_p →. B_k → B_i and ⊢_E H_1 →. H_2 → ... →. H_q → B_k.
We want to prove that ⊢_E C_1 →. C_2 → ... →. C_r → B_i.
We shall show that ⊢_E T, where T is the wff
(D_1 →. D_2 → ... →. D_p →. B_k → B_i) →. (H_1 →. H_2 → ... →. H_q → B_k) →. C_1 →. C_2 → ... →. C_r → B_i.
Since c_r = max(d_p, h_q) and h_q ≥ d_p, either c_r = h_q and d_p < h_q or c_r = d_p = h_q. Hence either
(I) ⊢_E B_k → B_i →. H_q → B_k →. C_r → B_i
or
(II) ⊢_E D_p → (B_k → B_i) →. H_q → B_k →. C_r → B_i
respectively. Let us consider C_{r-1}, r > 1.
If c_r = h_q and d_p < h_q, then either c_{r-1} = d_p and h_{q-1} < d_p, or c_{r-1} = h_{q-1} and d_p < h_{q-1}, or c_{r-1} = d_p = h_{q-1}. Hence we have the following cases: either c_r = h_q, c_{r-1} = d_p and h_{q-1} < d_p; or c_r = h_q, c_{r-1} = h_{q-1} and d_p < h_{q-1}; or c_r = h_q and c_{r-1} = d_p = h_{q-1}.
In the first case we start with (I); using restricted permutation (cf. [1], p. 42, or [2], § 4.2) we obtain ⊢_E H_q → B_k →. B_k → B_i →. C_r → B_i. Then using transitivity (both prefixing and suffixing; cf. ibidem) and restricted permutation again we obtain ⊢_E D_p → (B_k → B_i) →. H_q → B_k →. C_{r-1} →. C_r → B_i.
In the second case we start with (I); using both forms of transitivity we obtain ⊢_E B_k → B_i →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
In the third case we start with (I) again; using both forms of transitivity and self-distribution we obtain ⊢_E D_p → (B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
If c_r = d_p = h_q, then either c_{r-1} = d_{p-1} and h_{q-1} < d_{p-1}, or c_{r-1} = h_{q-1} and d_{p-1} < h_{q-1}, or c_{r-1} = d_{p-1} = h_{q-1}. Hence we have the following cases: either c_r = d_p = h_q, c_{r-1} = d_{p-1} and h_{q-1} < d_{p-1}; or c_r = d_p = h_q, c_{r-1} = h_{q-1} and d_{p-1} < h_{q-1}; or c_r = d_p = h_q and c_{r-1} = d_{p-1} = h_{q-1}. In all of them we proceed as before, using (II) instead of (I), and we obtain either
⊢_E D_{p-1} → (D_p →. B_k → B_i) →. H_q → B_k →. C_{r-1} →. C_r → B_i
or
⊢_E D_p → (B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i
or
⊢_E D_{p-1} → (D_p →. B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i,
respectively. In a similar way we then consider C_{r-2}, ..., C_1; eventually we obtain ⊢_E T. (A precise formulation of the claim that ⊢_E T as well as an official inductive proof of it are straightforward.)
Returning to the proof of Theorem 1.2, we only note that ⊢_E C_1 →. C_2 → ... →. C_r → B_i follows from the induction hypothesis and ⊢_E T.
(3h) Without loss of generality we may assume that p > 0, q > 0 and h_q < d_p. Suppose that (d_1, ..., d_p), (h_1, ..., h_q) and (c_1, ..., c_r) are prefixed to B_j, B_k and B_i respectively. We know that {c_1, ..., c_r} = {d_1, ..., d_p} ∪ {h_1, ..., h_q}, c_1 < c_2 < ... < c_r, d_1 < d_2 < ... < d_p and h_1 < h_2 < ... < h_q. Suppose furthermore that (d_1), ..., (d_p), (h_1), ..., (h_q), (c_1), ..., (c_r) are prefixed respectively to unique D_1, ..., D_p, H_1, ..., H_q, C_1, ..., C_r occurring among A_1, ..., A_n. By (1.1.1), if c_w = d_u, then C_w = D_u, and if c_w = h_v, then C_w = H_v, 1 ≤ u ≤ p, 1 ≤ v ≤ q, 1 ≤ w ≤ r. By the induction hypothesis,
⊢_E D_1 →. D_2 → ... →. D_p →. F → G → B_i and ⊢_E H_1 →. H_2 → ... →. H_q →. F → G.
We want to prove that ⊢_E C_1 →. C_2 → ... →. C_r → B_i.
We shall show that ⊢_E T, where T is as in (3g) and B_k is F → G.
Since c_r = max(d_p, h_q) and h_q < d_p, c_r = d_p. Hence, by restricted permutation,
(III) ⊢_E D_p → (F → G → B_i) →. F → G →. C_r → B_i.
Let us consider C_{r-1}, r > 1. Since c_r = d_p, we have either c_{r-1} = d_{p-1} and h_q < d_{p-1}, or c_{r-1} = h_q and d_{p-1} < h_q, or c_{r-1} = d_{p-1} = h_q.
In the first case, using restricted permutation we obtain from (III)
⊢_E F → G →. D_p → (F → G → B_i) →. C_r → B_i
and then using both forms of transitivity we derive
⊢_E F → G →. D_{p-1} → (D_p →. F → G → B_i) →. C_{r-1} →. C_r → B_i.
Finally, using restricted permutation again,
⊢_E D_{p-1} → (D_p →. F → G → B_i) →. F → G →. C_{r-1} →. C_r → B_i.
In the second case, using transitivity, we easily obtain from (III)
⊢_E D_p → (F → G → B_i) →. H_q → (F → G) →. C_{r-1} →. C_r → B_i.
In the third case, using transitivity (prefixing), we obtain
⊢_E D_{p-1} → (D_p →. F → G → B_i) →. H_q →. F → G →. C_r → B_i.
But by self-distribution
⊢_E H_q → (F → G →. C_r → B_i) →. H_q → (F → G) →. C_{r-1} →. C_r → B_i
and hence by transitivity (suffixing)
⊢_E D_{p-1} → (D_p →. F → G → B_i) →. H_q → (F → G) →. C_{r-1} →. C_r → B_i.
In a similar way we then consider C_{r-2}, ..., C_1; eventually we obtain ⊢_E T. This proves the theorem for the case (3h). Again, a precise formulation of the claim that ⊢_E T and the corresponding inductive proof are left to the reader.
This completes the proof of Theorem 1.2. Because of (ii), we have

Corollary 1.2.1. If A_1, ..., A_n ⊢_E B, then ⊢_E A_1 →. A_2 → ... →. A_n → B.

Using modus ponens we prove

Corollary 1.2.2. If A_1, ..., A_n ⊢_E B and n ≥ 1, then A_1, ..., A_{n-1} ⊢_E A_n → B.
2. We now turn to P. It is known that P is obtained from E by omitting the axiom schemata A → A → B → B and (A → A → A) & (B → B → B) →. (A & B) → (A & B) → (A & B) and by adjoining the schemata A → A and B → C →. A → B →. A → C.

A proof of B from hypotheses A_1, ..., A_n is a list of wffs B_1, ..., B_m, where B is B_m and, for any 1 ≤ i ≤ m, B_i is one of n (not necessarily distinct) hypotheses A_1, ..., A_n or else an axiom of P or else a consequence of predecessors by adjunction or modus ponens, such that conditions (i), (ii) and (iii) of Section 1 are satisfied and (iv) is replaced by
(iv′) Suppose that B_j is B_k → B_i and that (d_1, ..., d_p) and (h_1, ..., h_q) are prefixed to B_j and B_k. If p = 0, the application of modus ponens is permitted without restrictions. If p > 0, the application of modus ponens is permitted only if q > 0 and the following two conditions are satisfied:
(iv′1) d_p ≤ h_q,
(iv′2) either (iv′2.1) for all w, either there is a u such that c_w = d_u and there is no v such that c_w = h_v, or there are u and v such that c_w = d_u = h_v; or (iv′2.2) for all w, either there is a v such that c_w = h_v and there is no u such that c_w = d_u, or there are u and v such that c_w = d_u = h_v, where 1 ≤ u ≤ p, 1 ≤ v ≤ q and 1 ≤ w ≤ r.
If there is a proof of B from hypotheses A_1, ..., A_n, then we write A_1, ..., A_n ⊢_P B. If n = 0, B_1, ..., B_m is a proof of B in the ordinary sense.
It is clear that Lemma 1.1 holds for P. Now we can prove

Theorem 2.1. Let A_1, ..., A_n ⊢_P B and let B_i be a wff occurring in the given proof of B from A_1, ..., A_n. If (c_1, ..., c_r) is prefixed to B_i, r ≥ 1, and (c_1), ..., (c_r) are prefixed respectively to C_1, ..., C_r occurring among A_1, ..., A_n, then ⊢_P C_1 →. C_2 → ... →. C_r → B_i; if r = 0, then ⊢_P B_i.

Proof. We proceed by induction on i. Suppose that the theorem holds for all wffs preceding B_i in the given proof of B from A_1, ..., A_n and let us consider B_i, 1 ≤ i ≤ m.
If r = 0, the theorem is proved as in Theorem 1.2.
If r ≥ 1, we have (1) and (2) exactly as in the proof of Theorem 1.2, with P and ⊢_P instead of E and ⊢_E. Instead of (3) we have


(3′) Suppose that B_i is a consequence of B_j and B_k by modus ponens, that B_j is B_k → B_i and that (d_1, ..., d_p), (h_1, ..., h_q) and (c_1, ..., c_r) are prefixed to B_j, B_k and B_i respectively. We know that {c_1, ..., c_r} = {d_1, ..., d_p} ∪ {h_1, ..., h_q}, c_1 < c_2 < ... < c_r, d_1 < d_2 < ... < d_p and h_1 < h_2 < ... < h_q. Suppose furthermore that (d_1), ..., (d_p), (h_1), ..., (h_q), (c_1), ..., (c_r) are prefixed to D_1, ..., D_p, H_1, ..., H_q, C_1, ..., C_r respectively, occurring among A_1, ..., A_n. By (1.1.1), if c_w = d_u, then C_w = D_u, and if c_w = h_v, then C_w = H_v, 1 ≤ u ≤ p, 1 ≤ v ≤ q and 1 ≤ w ≤ r. By the induction hypothesis,
⊢_P D_1 →. D_2 → ... →. D_p →. B_k → B_i and ⊢_P H_1 →. H_2 → ... →. H_q → B_k.
We want to prove that ⊢_P C_1 →. C_2 → ... →. C_r → B_i. If p = 0, the theorem is proved easily, using both forms of transitivity. Suppose that p > 0, q > 0 and that (iv′1) and (iv′2) are satisfied. If we can show that either ⊢_P T_1 or ⊢_P T_2, where T_1 and T_2 are respectively
(H_1 →. H_2 → ... →. H_q → B_k) →. (D_1 →. D_2 → ... →. D_p →. B_k → B_i) →. C_1 →. C_2 → ... →. C_r → B_i
and
(D_1 →. D_2 → ... →. D_p →. B_k → B_i) →. (H_1 →. H_2 → ... →. H_q → B_k) →. C_1 →. C_2 → ... →. C_r → B_i,
then the theorem is proved. Let us show that either ⊢ T_1 or ⊢ T_2. There are two cases:
(3′.1) (iv′1) and (iv′2.1) are satisfied; (3′.2) (iv′1) and (iv′2.2) are satisfied.
(3′.1) Since c_r = max(d_p, h_q) and d_p ≤ h_q, either c_r = h_q and d_p < h_q or c_r = d_p = h_q. Hence, either
(IV) ⊢_P H_q → B_k →. B_k → B_i →. C_r → B_i
or
(V) ⊢_P H_q → B_k →. D_p → (B_k → B_i) →. C_r → B_i
respectively. Let us consider C_{r-1}, r > 1. If (iv′2.1), we have one of the following subcases:
(3′.1.1) c_r = h_q, c_{r-1} = d_p and h_{q-1} < d_p,
(3′.1.2) c_r = h_q and c_{r-1} = d_p = h_{q-1},
(3′.1.3) c_r = d_p = h_q, c_{r-1} = d_{p-1} and h_{q-1} < d_{p-1},
(3′.1.4) c_r = d_p = h_q and c_{r-1} = d_{p-1} = h_{q-1}.
If (3′.1.1), we have ⊢_P B_k → B_i → (C_r → B_i) →. D_p → (B_k → B_i) →. C_{r-1} →. C_r → B_i and hence, using (IV) and transitivity (suffixing),
⊢_P H_q → B_k →. D_p → (B_k → B_i) →. C_{r-1} →. C_r → B_i.
If (3′.1.2), then
⊢_P H_q → B_k → (B_k → B_i →. C_r → B_i) →. H_{q-1} → (H_q → B_k) →. D_p →. B_k → B_i →. C_r → B_i
and hence using (IV) ⊢_P H_{q-1} → (H_q → B_k) →. D_p →. B_k → B_i →. C_r → B_i. Using self-distribution and transitivity (suffixing), we obtain
⊢_P H_{q-1} → (H_q → B_k) →. D_p → (B_k → B_i) →. C_{r-1} →. C_r → B_i.
If (3′.1.3), then
⊢_P D_p → (B_k → B_i) → (C_r → B_i) →. (D_{p-1} →. D_p →. B_k → B_i) →. C_{r-1} →. C_r → B_i
and hence using (V) and transitivity (suffixing)
⊢_P H_q → B_k →. D_{p-1} → (D_p →. B_k → B_i) →. C_{r-1} →. C_r → B_i.
If (3′.1.4), then using (V) and transitivity (prefixing) we first easily obtain
⊢_P H_{q-1} → (H_q → B_k) →. D_{p-1} →. (D_p →. B_k → B_i) →. C_r → B_i. Then using self-distribution and transitivity (suffixing), we derive
⊢_P H_{q-1} → (H_q → B_k) →. D_{p-1} → (D_p →. B_k → B_i) →. C_{r-1} →. C_r → B_i.
We then consider C_{r-2}, ..., C_1. It is clear that each time we consider one of these wffs, we have to increase either the number of D's or the number of both D's and H's. This can be done as in our consideration of C_{r-1}. Eventually, we obtain ⊢_P T_1.
(3′.2) We easily conclude that either c_r = h_q and d_p < h_q or c_r = d_p = h_q. Hence, either
(VI) ⊢_P B_k → B_i →. H_q → B_k →. C_r → B_i
or
(VII) ⊢_P D_p → (B_k → B_i) →. H_q → B_k →. C_r → B_i
respectively. Let us consider C_{r-1}, r > 1. Since (iv′2.2), we have one of the following cases:
(3′.2.1) c_r = h_q, c_{r-1} = h_{q-1} and d_p < h_{q-1},
(3′.2.2) c_r = h_q and c_{r-1} = d_p = h_{q-1},
(3′.2.3) c_r = d_p = h_q, c_{r-1} = h_{q-1} and d_{p-1} < h_{q-1},
(3′.2.4) c_r = d_p = h_q and c_{r-1} = d_{p-1} = h_{q-1}.
If (3′.2.1), then we have
⊢_P H_q → B_k → (C_r → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i
and hence, using (VI) and transitivity (suffixing),
⊢_P B_k → B_i →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
If (3′.2.2), then using (VI) and transitivity (prefixing) we obtain
⊢_P D_p → (B_k → B_i) →. H_{q-1} →. H_q → B_k →. C_r → B_i.
Now using self-distribution and transitivity (suffixing) we have
⊢_P D_p → (B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
If (3′.2.3), we have
⊢_P H_q → B_k → (C_r → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i
and hence using (VII) and transitivity (suffixing)
⊢_P D_p → (B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
If (3′.2.4), then using (VII) and transitivity (prefixing) we obtain
⊢_P D_{p-1} → (D_p →. B_k → B_i) →. H_{q-1} →. H_q → B_k →. C_r → B_i;
using self-distribution and transitivity (suffixing) we obtain
⊢_P D_{p-1} → (D_p →. B_k → B_i) →. H_{q-1} → (H_q → B_k) →. C_{r-1} →. C_r → B_i.
We then consider C_{r-2}, ..., C_1. Since (iv′2.2), each time we consider one of these wffs, we have to increase either the number of H's or the number of both D's and H's. This can be done as in our consideration of C_{r-1}. Eventually, we obtain ⊢_P T_2.
(A precise formulation of the claim that either ⊢ T_1 or ⊢ T_2 and the corresponding inductive proof are left to the reader.)
This completes the proof of Theorem 2.1.

Corollary 2.1.1. If A_1, ..., A_n ⊢_P B, then ⊢_P A_1 →. A_2 → ... →. A_n → B.

Corollary 2.1.2. If A_1, ..., A_n ⊢_P B, then A_1, ..., A_{n-1} ⊢_P A_n → B.
3. Finally, we consider R. It is known that R is obtained from E by adjoining the axiom schema A →. A → B → B.
A proof of B from hypotheses A_1, ..., A_n is a list of wffs B_1, ..., B_m, B is B_m, and for any 1 ≤ i ≤ m, B_i is one of n (not necessarily distinct) hypotheses A_1, ..., A_n or else an axiom of R or else a consequence of predecessors by adjunction or modus ponens, such that conditions (i), (ii) and (iii) of Section 1 are satisfied.
We can prove Theorem 3.1, which is as Theorem 1.2 with R and ⊢_R instead of E and ⊢_E. The proof of Theorem 3.1 is similar to that of Theorem 1.2, although easier, since ⊢_R A → (B → C) →. B →. A → C.
This completes our considerations of deduction theorems for E, P and R.

References
[1] Anderson, A. R., and N. D. Belnap, Jr., The pure calculus of entailment. J. Symb. Logic 27 (1962), 19-52.
[2] Anderson, A. R., and N. D. Belnap, Jr., Entailment, to appear.
[3] Belnap, N. D., Jr., Intensional models for first degree formulas. J. Symb. Logic 32 (1967), 1-22.
[4] Church, A., The weak theory of implication. Kontrolliertes Denken, München 1951.
[5] Moh Shaw-Kwei, The deduction theorems and two new logical systems. Methodos 2 (1950), 56-75.

(Eingegangen am 17. Januar 1972)

ALEKSANDAR KRON AND VESELIN MILOVANOVIĆ

PREFERENCE AND CHOICE*

ABSTRACT. A theory of preference and choice based on that of von Wright is developed whereby the choice of a state of affairs is determined by preferences between pairs of them. The method used is letting preferences eliminate states of affairs from the choice set according to axiomatized rules. Formal properties of extensions of von Wright's preference logic are investigated.

1. INTRODUCTION

This paper is a result of the authors' work on an engineering problem -


how to design an organizational role. For solving this problem it was
necessary to analyze the decision-making process, and in order to do that
we had to give a formal description of it.
A description of decision-making processes includes descriptions of
goals which the decision maker wants to attain, and of the process of
choosing among the alternatives available for attaining these goals.
The goals can be described by means of preferences. We assume that the reader is familiar with [1] and [2], where preferences between alternative states of affairs have been considered. The present authors have extended von Wright's system by adjoining additional principles concerning the choice between such states. The resulting systems, especially the basic W0, are discussed in this paper. From the purely formal standpoint it is obvious that any other preference logic could have been used instead of von Wright's.

2. PREFERENCE AND CHOICE

It is quite clear that having some preferences (in fact, having as many preferences as you like) does not mean that any choice between states of affairs has actually been made. On the other hand, it is trivial that preferences have something to do with choice. Firstly, if our choice between alternative states of affairs depends on our goals and these goals can be expressed by our preferences, then our goals ought to be viewed as

Theory and Decision 6 (1975) 185-196. All Rights Reserved
Copyright © 1975 by D. Reidel Publishing Company, Dordrecht-Holland

states of affairs preferred to some other states of affairs. Secondly, preferences are not sufficient for making a choice; we must be in a position to choose, we must be forced to choose or we must be willing to choose. Thirdly, in everyday circumstances our choice can be based on our preferences; we choose states of affairs which we prefer to other states of affairs at our disposal in these circumstances. Fourthly, to prefer a state of affairs to another state of affairs means to be ready (on some occasions) to perform an action in order to bring about the preferred state of affairs rather than the other one. Fifthly, as a direct consequence of the preceding remark, the state of affairs which we want to bring about must be feasible: there must be a possibility for us to actualize it. Hence, in every decision-making process the feasibility of states of affairs is taken into consideration. Nevertheless, feasibility is out of the scope of this paper; it will be analyzed elsewhere. Bearing in mind these trivialities, we propose to axiomatize von Wright's logic of preference and to extend it by adding new axioms, in accordance with these principles.

3. FORMULAE

Let W0 denote the basic axiomatic system under discussion. W0 is defined on the classical propositional calculus L, which we specify briefly. We write ⊃, ∧, ∨, ¬, ≡ for material implication, conjunction, disjunction, negation and equivalence respectively, and p, q, r, ... for propositional variables. The set of wffs of L is defined as usual. A, B, C, ... range over the set of wffs of L. The theorems of L are given by the following simple clause: every tautology is a theorem of L.
The reader is assumed to be familiar with the distinguished (perfect) disjunctive normal form (ddnf), as well as with some elementary facts about it. In particular, he should know that every non-contradiction has a ddnf and that a contradiction has no ddnf. We write dA for the ddnf of A, if it exists. We also use some pieces of set-theoretic notation. For all A and B, if A is a disjunct of dB, we write A ∈ dB; if all disjuncts of dA are among the disjuncts of dB, we write dA ⊆ dB; if both dA ⊆ dB and dB ⊆ dA, we write dA = dB; if A ∈ dB, then by dB − A we denote the result of deleting A from dB; if dB ⊆ dA, then by dA − dB we denote the result of deleting all members of dB from dA.
Now we define W0. The primitives of W0 are: binary predicates P and


o, propositional connectives of L and parentheses. (Note that we use the same symbols for connectives both in L and W0; no confusion will arise therefrom.) The set of wffs of W0 is defined as follows: A P B is a wff of W0; if dB ⊆ dA, then o(dA, dB) is a wff of W0 (we require that dA exists; if dB does not exist, o(dA, ) is a wff of W0); if X is a wff of W0, so is ¬(X) (if X is A P B, we write A P̄ B for ¬(X)); if X and Y are wffs of W0, so are (X ⊃ Y), (X ∧ Y), (X ∨ Y) and (X ≡ Y); nothing else is a wff of W0. Let X, Y, Z, ... range over the set of wffs of W0.

4. INTENDED INTERPRETATION

Needless to say, dA is a disjunction of states of affairs or, if the reader prefers, a disjunction of state descriptions. We can think of dA as of a set of possible alternative states of affairs, one of which may already be actualized.
Quite naturally, A P B is intended to mean "A is preferred to B". According to the principle of conjunction (or the axiom schema W3 of W0), this means (A ∧ ¬B) P (¬A ∧ B). If A ∧ ¬B and ¬A ∧ B are non-contradictions, then we can show that A P B means d(A ∧ ¬B) P d(¬A ∧ B). Thus A P B means that a disjunction of states of affairs is preferred to another disjunction of states of affairs. Furthermore, the principle of distribution gives us an exact meaning of the phrase "to prefer a disjunction of states of affairs to another disjunction of states of affairs". Now, what if either A ∧ ¬B or ¬A ∧ B is a contradiction? We have no intuitive understanding of what it could mean to prefer a contradiction to something else or to prefer a state of affairs to a contradiction. This problem is left open here.
Suppose that in a period of time t0 we consider n logically independent states of affairs p_i, 1 ≤ i ≤ n, each of which will or will not be actualized in some period t1 later than t0 (if p_i is not actualized in t1, we say that ¬p_i is). Obviously, then 2ⁿ compound states of affairs are logically possible, exactly one of which will be actualized in t1.
Let A_1, ..., A_2ⁿ be all logically possible states in t1. Suppose that it depends on our choice in t0 (and on our action) which state will be actualized in t1. Decision making is nothing but choosing a state which is intended to be actualized in t1 through our action. It may happen that some of A_1, ..., A_2ⁿ we are not able to bring about for various reasons;

35
188 ALEKSANDAR KRON AND VESELIN MI LO V A N O V I ~

for example, they might be physically or economically non-feasible. Thus


our first task will be to eliminate from further consideration all the non-
feasible Aj's, 1 ~<j<2 n. The remaining states are members of the disjunc-
tion dA we are confronted with in to, one o f which we have to choose as
our goal to be attained in t 1.
The process o f choosing takes place in the period to and our aim is to
choose the most preferred among the members of dA, if such a member
exists. This is done by considering pairs of members of dA. Let Ak and A t
be such a pair; if Ak is preferred to Al(Ak P As), As is eliminated from
further consideration. Consequently, tile process of choosing is an
elimination process at the end of which we expect to have a single member
of dA left - namely, the most preferred one. We assume that neither our
preferences nor our feasibility judgements change during to and that they
will remain the same during tl.
It may happen that we have no preferences for eliminating some alter-
natives; hence, it is possible for the result o f our choice to be a disjuc-
tion of states o f affairs. Such a situation could be called an incomplete
choice. (We may have no preference whatsoever to eliminate any alter-
native ;hence, the result o f our choice would be the disjunction we started
with.) Also, we can imagine that we have eliminated all members of dA.
All this is reflected in our notation; we write o(dA, dB) to show that
we were confronted with a disjunction dA of feasible states of affairs
and that the result o f the elimination process is dB, dB ~ dA. We have
o (dA, dA) after the non-feasible states are eliminated but the elimination
by means of preferences is not started. Also, there is no difficulty to
interpret o(dA, dA) as saying that so far we had no preferences which
would eliminate any disjunct of dA.,

5. AXIOMS
The axioms of W0 are given by axiom schemata. F o r propositional
connectives standing between wits of Wo we assume axioms of the classi-
cal propositional calculus. Proper axioms o f W0 are given by axiom
schemata as well. We first have schemata analogous to yon Wright's
principles. They are:
Wl APB~BPA
W2 APB^BPC=~APC

36
PREFERENCE A N D CHOICE 189

W3 (~vB) P(CvD)~.,(AA-ICATD)P('~AA-1BAC)A
(A A - I C A - 1 D ) P ( ~ A A ~ B A D) A
(BA-7 CA--ID) P(--IA A - 3 B A (7) A
(B A-'I C A " q D ) P(-'q A A ' I B A D)
W4 A P B'c~ (A A oO P (B A oO A (A A "q oO P (B A -"I oO,
where ~ is a propositional variable which does not occur in A or in B.
The first two schemata represent asymmetry and transitivity of P. W3
expresses the distributivity of P, typical for preferences not involving
risk; it corresponds to the operation of conjunction, too. W4 corre-
sponds to the principle of amplification, showing the unconditionality of
preferences.
The additional axoim schema concerning elimination is:

ol o(dA, dB) A At P Bj ~ o(dA, dC),


where Ai~dA, BjedB and dC is d B - B j .
ox is quite in accordance with our previous considerations. If we have
decided to bring about a feasible state of affairs described by a member of
dA, and if we have already eliminated dA - dB, dB ~ dA, using preferences,
then using the preference At P Bj we eliminate Bj.
The rules of inference are modus ponens (from Xand X=~ Yto infer Y)
and the following substitution rule: (a) let A ~ B be a tautology of L;
(b) let B contain no variable which does not occur in A; (c) let A P C(C P A)
occur in X; (d) let Y be obtained from X by substituting B P C(C P B)
for A P C(C P A) at that particular occurrence of A P C(C P A) in X;
then Xr Y is a theorem of Wo.
The reader familiar with [1] will notice that we have no schema
corresponding directly to von Wright's principle of conjunction. This
would be
W3' A P B.c~(A A "q B) P(T A A B).
But W3' is easily derived using W3, since A P B.~,(A v A ) P(BvB) is a
theorem of Wo.
We shall now prove the consistency of Wo.
Let f be a map from the set of all wffs of Wo onto {0, 1} such that
5.1 f(A PB)=0
5.2 f ( o ( d A , riB) = 0, whether or not dB exists.

37
190 A L E K S A N D A R KRON AND VESELI/q MILOVANOVI{~

5.3 f ( ~ (X)) = 1 - f (X)


5.4 If * is a binary connective, t h e n f (X* Y) is defined according
to the usual truth tables.
It is now easily proved that if X is a theorem of Wo, t h e n f Of)= 1, for
every valuation of subformulae of the form A P B of X. Hence, X and
--1 (X) cannot both be theorems of W0.

6. EXAMPLES

How does W 0 operate? To be quite precise, we assume the the reader is


familiar with the concept of a proof from hypotheses. Let us use Wo in
an analysis of an example. Suppose that a bachelor is making plans for
tomorrow evening. He takes into consideration three components: to
spend the evening with Jane (p), to spend the evening with Helen (q), to
watch television (r). But he prefers spending the evening with Jane to
watching television; hence p P r. Also he prefers q to r. On the other
hand, he knows that it would be catastrophic to spend the evening with
both Jane and Helen; briefly, --1 (p A q) P(p A q). Furthermore, suppose
that after combining p, q, r and their negations he sees that every possible
state of affairs is feasible. (Let us write down all of them in the usual
order, given in any textbook of elementary logic: p A q A r, ..., ~ p A -'1 q A
A "-1r, and let 1, 2, . . , 8 denote them respectively.) Finally, suppose that
he knows Wo and wants to use it in making his choice. Since he did not
choose as yet, he has the following hypotheses:
Hypl o(1 v . - . v 8 , 1 v...v8)
Hyp2 p Pr
Hyp 3 q Pr
Hyp4 7 ( p ^ q ) P(p A q ) .
Now using W1-W4 and the inference rules, from Hyp 2-4 he easily
obtains the following preferences between states of affairs:
2 P 3 , 2 P5, 3 P 1,4 P2, 4 P7, 5 P I, 6 P2, 6 P7, 7 P1
and 8 P 2.
Using ol and modus ponens several times, he derives
o(1 v.-.v 8 , 4 v 6 v 8).

38
PREFERENCE AND CHOICE 191

Thus, W0 "suggests" to our bachelor the conclusion


0(1 V . . . V 8 , ( p A ' 7 q A T r ) V(TpAqATr)V
v (T p A'TqATr)).
"Well", the bachelor may think, "Wo is a clever system: in any case it
suggests not to watch television! But what if I prefer watching television
to watching no television, i.e. r P-7 r ?" Then the bachelor will be sur-
prised, since adjoining the new hypothesis r P -7 r to Hyp 1-4 leads to a
contradiction, because r P -7 r is 1 P 2 A 3 P4 A 5 P 6 A 7 P 8. In this case his
conclusion would be
0(1 v . . . v 8, ).
The new hypotheses r P(p ^ q) does not help either, since
(p P r) v (q P r) =*. r-P(p ^ q)
is a theorem of W o.
If he wants to obtain the conclusion

0(1 v . . - v 8, 4 v 6)
which he might be expecting, he may adjoin the hypothesis
((p ^ T q ) v ( ~ p ^ p ) ) P ( 7 p ^ ~ q) t o H y p 1-4.

Another obvious way of obtaining the desired result would be to take


Hyp 1' instead of Hyp 1, where

Hyp 1': o(1 v . . . v 7, 1 v . . . v 7).

We give another example. Let p, q, r be as before, but suppose that our


bachelor has a different set of preferences. He prefers to spend the evening
with Jane than to spend the evening without Jane, he prefers to spend the
evening with Helen than to spend the evening without Helen, although he
prefers watching television to spending the evening with both Jane and
Helen. Hence, his hypotheses are:

Hyp 1 o(1 v . . . v S , l v - . . v 8 )
H y p 2 ' p P-n p
Hyp 3' q P 7 q
H y p 4 ' r P(p ^ q).

39
192 ALEKSANDAR KRON AND VESELIN MILOVANOVII~

Using W l - W 4 and the inference rules, he easily obtains

1 P3, 1 P5, 2 P 4 , 2 P 6 , 3 P2, 3 P7, 4 P 8 , 5 P2, 5 P 7 , 6 P 8


and 7 P 2.

Using O1 and modus ponens several times, he then derives

o(l v...v 8, 1),


namely,
o(1 v . . . v 8, p A q A r).

Thus, W o suggests him to spend the evening with both Jane and Helen
and to watch television! But suppose that the bachelor has no intention
to spend the evening with both Jane and Helen; consequently, he does
not take into consideration either p ^ q ^ r or p ^ q A "-1 r. Hence, in-
stead of Hyp 1 he takes

Hyp 1" o(3 v 4 v . . . v 8, 3 v 4 v . . . v 8).

His conclusion is now

o(3 v . . . v 8, 3 v 4 v 5 v 6).

Moreover, suppose that if he spends the evening with either Jane or


Helen, he does not watch television. In such a case he would take
Hypl" o(4v6v7v8,4v6v7 8)

and his conclusion would be

o(4v6v7v8,4v6 7).

7. TrlE SYSTEMS W 1

In a conversation with the first of the present authors, G. H. von Wright


suggested that W o should have an axiom schema which states explicitly
that feasible alternatives can be eliminated only by means o f preferences.
Let W1 be the system obtained from Wo by deleting o 1 and adjoining
axiom schemata of the form

02 o(dA, dB) =*. [(A i P B 1 v -.- v A k PB1) A ... ^


^ (A 1 P B z v ... v Ak P B 3 <~" o(dA, dC)],

40
PREFERENCE AND CHOICE 193

for all k and/, k>/1 and 1 <~l<~k, where dA is A1 v ... YAk, Ba ..... Bl~dA
and dC is obtained from dB by deleting all BsEdB, 1 <~j<~L
As a special case o f o 2, we have

o'2 o(dA, dB) =*, [(A1 PB~ v ... v Ak PB~) r o(dA, dC)],
where B i ~dB. Using propositional logic we easily derive o z. Hence, W1
contains Wo.
On the other hand, consider a schema o2, where BjCdB, for all 1 -<<j~<l.
Using propositional logic only, we derive

o(dA, dB) ~ [(.41PB: v ... v A,, P B 0 ^ -.- A


A (A1 PBt v . . . v Ak P B 3 ] ,
for all B j ~ d A - d B , 1 <~j<,<l. This is to say, for each eliminated member
B s o f d A there is a member Ai ofdA such that At PBj.
The intended interpretation o f W~ is the same as that o f Wo. The main
difference between Wo and W~ is that in W~ we explicitly make the
assumption that the elimination o f feasible alternatives is possible only by
means o f preferences.

8. SOME M E T A T H E O R E M S

We consider Wi and the case where the result of elimination is o (dA, ).


This means one of two things: either the set of our preferences is in-
consistent (8.2) or one o f our hypotheses is o (dA, dB), where dA # dB (8.3).
A set of wits of W 1 is called inconsistent iff S ~-X ^ 7 X, for some X.
Otherwise, S is consistent. Note that S is inconsistent iff S [-A PA, for
some A.

8.1. ~ o(dA, ) is a theorem of W1.


Proof. Consider a schema of the form 0 2,

o(dA, )=> [(A 1 P B l v . . . v A kPB1) A . . . A


^ (A1P Bk v ... v Ak P B~) ~ o(dA, )],

where dA is B i v - . . v Bk. Using propositional logic only, we derive

O(dA, ) => (-41 P B 1 v "." v A k P B 1 ) A " " A


^ (A~ PBk v.-. v A~ PB~).

41
194 ALEKSANDAR KRON AND VESELIN MILOVANOVI~

Using the distributivity of ^ over v and propositional logic, we obtain

o(dA, ) =~ [(A1 P B1 A ... A A1 P Bk) v ... v


v ( & P B~ A... A Ak P Be)].

It is clear that each member of the disjunction

(A1 PB1 ^ "- ^ A1 PBe) v -.. v (Ak PB1 A ... ^ A e PBe)

can be written in the form

B h PBI ^ "'" ^ Bi k PBe,

where {B h .... , B J ~ { B t , . . . , Bk}. NOW consider a B s, l<<.j<~k and let


Sj = {B jr,..., B j,} ~_ {Bi ..... Bk} be the set such that

BhPB 1A...AB~PBeF Bj.PBj, foralll<~s<~l.

Moreover, let B s be such that Sj is minimal. (Obviously, there is such a


S i, since {B1,..., Bk} is finite; of course, Sj is non-empty.) Since Sj G {B1,
.... Bk}, for each B~s there are Bsl, ..., Bsme{Bt, ..., Bk}, m>~l, such that

B h PB 1 ^ ... ^ B~k PB e l- B~p PBj~, for all 1 <~p ~< m.


Let Sm= {B~l.... , B~..}. Since P is transitive, S,, ~ Sj; since S s is minimal,
S,, = Sj. Hence, there is a B,p which is a Bs,, and we obtain
Bil P B1 ^ " " A Bi~ P Be I- B~, P Bs,, for some B , .
Therefore, each B** P B 1 ^ . . . A B , ~ P B k implies a contradiction. Using
propositional logic, we conclude that
(A1 P B ^ . . . ^ A~ P BD v - . . v ( & P BI ^ " " ^ Ae P Be) )-
FXA~X.
Hence,
o(dA, ) I- X A ~ X
and
I-'-1 o(dA, ).

8.2. F o r any set ofwffs o f W 1, the following claims are equivalent: (a) S
is inconsistent; (b) S b o ( d A , ).
Proof. By propositional logic and 8.1.

42
PREFERENCE AND CHOICE 195

8.3. Let S u {o(dA, dB)} be consistent; then

(a) s, o(dA, dB) o(dA, )


iff
(b) S, o(dA, dB) k (A1PB1 v . . . v AkPBI) ^ ""^
^ (A1 PB~ v - . - v Ak PBI),

where dA is A 1 v . . . v Ak and dB is B 1 v . . . v B~; and

(c) k > l.

Proof. I f (b) and (c), then (a) by 02.


I f (a), then

S, o(dA, dB) k [(A 1 P B1 v ... v Ak P Bx) ^ "" ^


A (A1 PBI v . . . v Ak PBl) r o(dA, )].

Hence, (b). I f k = 1, by the same argument as in the p r o o f o f 8.1, we have


S, o(dA, dB)FXA -'nX. But S w {o(dA, dB)} is consistent and this proves
8.3.

8.4. I f there is a m e m b e r ofdA in d A - d B , then

d(A A ~ B) PB, 0 (dA, dB) k o (dA, ).


Proof. Note first that d (A ^ --7B) is dA - dB. F r o m d (.4 n -7 B) P dB it
follows by W3 that for all A ~ d A - d B and for all Bj~dB we have

d(A A -1 B) P B, o(dA, dB) t" Aj PBj.


Using 02 we prove 8.4.

University of Belgrade
Filozofski fakultet

University of Novi Sad


Masinski Fakultet
NOTE

* We wish to thank Professor Georg Henrik yon Wright for his very useful criticism
of an earlier version of this paper, especially for his remarks on our previous intuitive
interpretation of o (dA, dB).

43
196 ALEKSANDAR KRON AND VESELIN MILOVANOVI(~

REFERENCES

[1] G. H. Von Wright, The Logic of Preference, Edinburgh University Press, Edin-
burgh, 1963.
[2] G. H. Von Wright, 'The Logic of Preference Reconsidered', Theory and Decision 3
(1972), 140-169.

44
DEDUCTION THEOREMS FOR T, E AND R RECONSIDERED
KRONin Beograd (Yugoslavia)
by ALEKSANDAR
In [11 we proved deduction theorems for systems T, E and R of relevance logic
(T is there called P). Here we re-define the concept of a proof from hypotheses and
we prove deduction theorems in a simpler and more general form.
Let S range over (T, E, R). A proof of B in S from hypotheses A,, . . .,.4,,is a list
of wffs B,, . . ., B,,,, where B is B , and for any 1 6 i m , Bi is one of n (not neces-
sarily distinct) hypotheses A,, . . ., A, or else an axiom of S or else a consequence
of predecessors by adjunction (AD) or modus ponens (MP) such that conditions (i);
(ii) and (iii) are satisfied.
(i) Finite (possibly empty) sets of numerals 1,2, . . . may be prefixed to t.he steps
B,, . . ., B,,,of the proof so as to satisfy the following rules (we assume that a, a,, . . .,
b, b,, . . ., c, c,, . . ., range over the set of sueh sets).
(a) If Bi is a hypothesis, then an arbitrary bi + 0 is prefixed to B i .
(b) If Biis an axiom of S which is not a hypothesis, then 0 is prefixed t o B,.
(c) If Biis a consequence of Bj and Bkby AD and b i is prefixed to both Bj and B A ,
then bi is prefixed to Bias well.
(d) If Biis a consequence of Bj and Bkby M P and bj and bk are prefixed to Bj and
Bk respectively, then bj u bk is prefixed to Bi.
(ii) The application of AD is not permitted in the derivation of Bi from Bj and
Bk if Bj and Bk have different prefixes.
(iii) Suppose that Bj is Bk --+ Bi and that bj and bk are prefixed to Bj and Bk re-
spectively. The application of M p is permitted only if the following further conditions
are satisfied.
Por T: if bj 0, then bk =# 0 and max (bk) 2 max (bj), where max (a) denotes the
greatest element of a.
For E: either (e) bj = 0 or (f) bk = 0 or (g) max (bk) 2 max (bj) or (h) Bk is of the
form BL --+ B:.
For R : no restriction (i.e. we assume only (i) and (ii)).
If in S there is a proof of B from hypotheses A,, ...,A,, ,then we write A,, ...,,4,, t-s B.
If n = 0, B,, . . ., B,, is a proof of B in the ordinary sense.
..
Lemma 1. Szsppoae that B,, . . .,B, is a proof iiz S of B from hypotheses A , , .,A,,
. . .
that b,, . ., b, are p e / i d to B,, . .,B,, respectively and that %, . ., a, are prefixed
. ..
to A , , . . , A n respectively (obviowly, a,, ., a, are among b l , . . ., b,,,). If
(*) for all 1 p n - 1, either a,,*=ap or a,n a p = 0,
then for dl 1 5 i 5 m , either a, E; bi or a, n bi = 0.

45
262 ALEKSANDAR KROK

P r o o f . Induction on the length of the proof. If Bi is a hypothesis, the lemma fol-


lows by (*). If Biis an axiom which is not a hypothesis, then bi = a, A bi = 0 by (b).
Suppose that for all 1 5 j, k k i ,
(1) either a, bj or a, A bj = 0
and
(2) either a, s b,,. or a, nbh: = 0
(induction hypothesis). If Biis a consequence of Bj and BId by AD, then the lemma
follows from (1) and (2), since bi = bj = b k . If Biis a consequence of Bjand B,; by
MP, suppose that a,, n bi = a,, A (bj u b,) = (a,,A bj) u (albr\ blJ 0. Then either +
+
a, n bj 0 or a,, n bk += 0. By (1) and (2), either a,, bj or a,, b,;. Therefore,
a,, b i .
T h e o r e m 2 (Deduction theorems). Suppose that
(1) A , , 1 . A,, Ap+i,. . . ) A , ,FS B ;
( 2 ) a , , . . ., a, are prefixed to A , , .. . , A , respectively and for all 1 t 5 p < n,
a, na,, = 0 ;
( 3 ) a = a, g b,,, = b += 0 is prefixed to .AI,+,,. . ,, A , ;
(4) inax ( b ) E a ;
then A , , . . .,A , t s (Ap+,& * * & A,,)-P B aizd b - a iu prefixed to (A,,, & - & A,,)-+ B.
* *

Proof. Let C,, . . ., C, be the list of all wffs among B,,. . ., B,, such that c, is
prefixed t o C, in the given proof, 1 q r and max (c,) E a. Obviously, C, is 3.
By Lemma 1, a g c q ; hence, max (a) = max (c,).
Let us construct the sequence A -P C,, . ..
, A + C,, where A is A p t , & - & A,,. --
We shall show how t o insert a finite number of additional wffs in this sequence so
that the resulting sequence is a proof of A -+ C, from hypotheses A , , . . ., A,, where
c, - a is prefixed to A -+ C,. The inserted wffs will be put in before each of the wffs
A -+ C, in order in such a way that, after completing the insertions as far as A -+ C,,
the whole sequence of wffs up to that point is a proof of A + C, from the hypo-
theses A , , . . .,A,.
Let us consider A -+ C, , q > 1, and suppose that the insertions have been completed
as far a.s A -i C,-,.
(2.1) If C, is a hypothesis, then it is a conjunct of A (otherwise, a n c , = a = 0,
violating (3) and (4)). Tf n - p = 1, A -+ C, is an axiom and no wff is inserted. If
n - p > I , insert in before A -i C, the sequence of wffs constituting a proof of A --+ C,
(there is such a proof in S).
( 2 . 2 ) C, cannot be a n axiom of S which is not a hypothesis.
(2.3) If C, is a consequence of C, and C, by AD, where c,, is prefixed to C,, , C, and
C,, 1 5 11, w < g , insert in before A + C, the following wffs: ( A S C,[) & ( A -+ C,.),
which can be inferred by AD from A + C,, and A + Ct3,already present in the se-
quence, and the axiom ( A -+ C,) & (A---i ct,) +. A + (Cth& Cu) of S. Theii A + C,
can be inferred by MP. Moreover, if cq - a is prefixed t o A + C, and to A + C,,
and if 0 is prefixed to the inserted axiom, then cq - a is prefixed to A -+ C,, as well.
(2.4) Suppose that C, is a consequence of C,, and C,. by MI', -here C,, is C, -+ C,.

46
DEDCCTIOX THEOREMS FOR T, E AND R RECONSIDERED 263

Let S = T ; then either (2.4.1) c, = 0 or (2.4.2) c, + 0 and max (cv)2 max (c,).
(2.4.1) A + C,. is already present in the sequence that we are constructing, since
a E c , , and c, - a = c, - a is prefixed to it. On the other hand, C, + C, occurs in
the proof B,, . . .,B , and does not depend on A, sin- a n c, = 8. Hence,
A,, .. .,A , C, -+ C,. Insert in before A + C, the wffs constituting the proof of
C, -+ C, in B,, . . . , B,,, then the axiom C,. + C, 4. A -+ C, 4. A -+ C, of T (0 is
prefixed to it) and finally the wff A + C, 4. A -+ C,, which can be inferred by MP
from C,. -+ C, and the inserted axiom, Now, 0 is prefixed to A + C,. -+. A -+ C,;
hence, A + C, can be inferred by MP and c, - a can be prefixed to it.
(2.4.2) Suppose that max (ctJ > max (c,). We have max (c,,) = max (c,) E a ; hence,
a c l , , by Lemma 1. On the other hand, max (a) # c,; hence, a nc, = 0, again by
Lemma 1.
Suppose now that max (cJ = max(c,) = max (c,) = max (a); then a c,, nc,,
by Lemma 1. Therefore, there are two cases to consider: (2.4.2.1)a c, and a nc, = 0
and (2.4.2.2) a E c,, nc,.
(2.4.2.1) A 4 G, is already present in the sequence that we are constructing end
c, - a is prefixed to it. On the other hand, C, -+ C, occurs in the proof B,, . . .,B ,
and does not depend on A, since a n c, = 0. Hence, A,, . . ., A , t T C, + C, and C,
is prefixed to C, -+ C,. There are two subcases: (2.4.2.1’) max (c, - a) max (c,)
-
and (2.4.2.1”) rnax (c, a) > max (c,).
(2.4.2.1’) Insert in before A + C, the axiom A -+ C,, +. C, -+ C, +.A 3 C, of T,
t,he wff C,, + C, -+.A 3 C, and then the wffs constituting the proof of C, 3 C, in
B,, . , ., B , from hypotheses A,, . . .,A,. Now, 0 is prefixed to the inserted axiom
and c,, is prefixed to C, -+ C,; hence, A + C, can be inferred by MP and c, u (c, - a)
can be prefixed to it. But c, w (c, - a) = c, - a, since a n C, = 0.

(2.4.1“) We proceed as in (2.4.1), but we prefix c, 0 to A + -+ C,, - + . A -+ C,.


-
Hence, (c, a) u c, = c, -
a can be prefixed to A + C,.

(2.4.2.2)Both A + (C,, + C,) and A -+ C, are already present in the sequence that
we are constructing and c, - a and c, - a are prefixed to them respectively. There
are two subcases:
(2.4.2.2’)max (cz, - a) 2 max (c, - a)and (2.4.2.2”) rnax (c, - a) < max (c, - a).
(2.4.2.2’) Insert in before A -+ C, the wffs constituting a proof of
A 3 (C, -+ C ,) -+.A -+ C, +. A -+ C,. where 0 is prefixed to this wff (there is such
a proof in T) and then the wff A --f C, +. A 4 C,. The last wff can be inferred by
hW and c, - a can be prefixed to it. Now A + C, can be inferred by MP and
(c,~- a) v (c~,- a ) = c,, - a can be prefixed to it.

(2.4.2.2”) Insert in before A + C, the wffs constituting a proof of


A -+ C,. -+ . A 3 (C,,-+ C,) 4. A -+ C, , where 0 is prefixed to it (there is such a proof
in T) and then the wff A + (Cz,-+ C,) +. A -+ C,. The last wff can be inferred by
M p and c, - a can be prefixed to it. Now, A -+ C, can be inferred by M P and
(c, - a ) v (c2,- a ) = cy - a can be prefixed to it.

47
264 ALEKSANDAR KRON

Let S = E. We have four cases to consider: (e) c, = 0, (f) c, = 0, (g) max (c,) 2
2 max (c,)and (h) C, is of the form C: -+ .C
:
(e) We proceed exactly as in (2.4.1).
.
(f) C, occurs in the proof B,, . ., B, and 0 is prefixed to it; hence, by (a) and (b)
of (i), kE C,. On the other hand, from the axiom C, -+ C, -+.C, + C, of E by the
rule (d), which is provable in E (cf. [2], p. 39), we obtain FE C, + C, + C,,. Insert
in before A + C, the wffs constit.uting the proof of C, + C, --+ C, (0 is prefixed to it),
then C,,-+ C, + C, -+.A + (C,3 C,) +. A + C,, an axiom of E (0 is prefixed to it)
and finally A + (C, + C,) +.A + C, which can be inferred by M P (0 is prefixed
to it). The wff A + (C, -+ C,) is already present in the sequence that we are construct-
ing and c, - a is prefixed to it; hence, A + C, can be inferred by MP and c , - a =
= c, - a can be prefixed to it.
(8) We may assume that c, + 0 and c, + 0. We proceed as in (2.4.2).
(h) We may assume that c, + 0 , c, #= 0 and max (c,) > max (c,). Hence, there is
only one case to consider: a c, and a nC, = 0. There is a proof of C,'. -+ C'; in
B,, . . .,B, where A is not used; hence, A , , . . . , A , t E Ci +.:C On the other hand,
A + (C: -+ C;' + C,) is already present in the sequence that we are construct,ing and
c, - a is prefixed to this wff. Insert in before A 3 C, the wffs constituting a proof of
the wff A + (C: C'; + C,) +. C; + C;' +. A
3 --tC, (there is such a proof in E) and
the wff Ci -+ C: -+.A + C, which can be inferred by M P (c, - a is prefixed to this
wff). Finally, insert in the wffs constituting the proof of C l + C
; from hypotheses
A,, . . .,A,. Obviously, A + C, can be inferred by M P and (c, - a) w c, = c, - a
can be prefixed to it.
Let S = R. We may assume that none of the restrictions (e), (f), (g) and (h) holds.
Then c, n a = 0 and there is a proof of C, from hypotheses A , , . . ., A,. The wff
A +. C, + C, is already present in the sequence that we are constructing and c, a -
is prefixed to it. Insert in before A -+ C, the wffs constituting the proof of C, in
.
B,, . ., B , from hypotheses A , , . . . , A , , then the axiom C, +. C, + C, + C , of R
(0 is prefixed to it) and then the wff C,, + C, -+ C,, which can be inferred hy MI?
(c, is prefixed to it). Now, A 3 C, can be inferred by M P and (c, - a) v c, = cq - a
can be prefixed to it.
This concludes the proof.

References
[1] KROIT,A., Deduction theorems for relevant logics. This Zeitschr. 19 (1973),85-92.
[2] ANDEBSON, A. R., and N. D. BELNAPJr., The pure oeloulus of Entctilment. J. Symb. Logic 17
(1962), 19-52.

(Eingegangen am 3. Januar 1975)

48
ALEKSANDAR KRON

AN ANALYSIS OF CAUSALITY

1. INTRODUCTION

The basic aim of this paper is to give a brief sketch of a formal theory of
causal relations. More precisely, our aim is (a) to show how the language
of the first-order predicate logic can be applied to an analysis of this kind
and (b) to discuss some both technical and philosophical aspects of such
an analysis.
Before we develop the formal machinery, let us describe the motives
for introducing some technical details to be given later.
If 'causation' is a name of a relation between 'cause' and 'effect' at
all, then we must ask what are the individuals which such a relation can
hold for. We suppose that causal relations are defined for (1) states of
affairs and (2) for changes of states of affairs. Let S1' S2' S3' and S4 denote
states of affairs. Then we can say that Sl is a cause of S2 and that S2 is an
effect of Sl; also, we can say that the change from Sl to S2 is a cause of
the change of S3 to S4 and that the change of S3 to S4 is an effect of the
change of Sl to S2' From (1) and (2) some other important meanings of
'cause' and 'effect' can be derived. We mention here only two of them.
From (1), as it will be seen later, we can derive a meaning of the phrase
'The property P is a cause of the property Q'. From (2) we derive the
meaning of the phrase 'The change C1 of the individual a is a cause of the
change C2 of a'. It can be seen that other meanings of 'cause' and 'effect'
can be derived as well, but we do not claim that our analysis exhaust the
whole field of such possibilities.
The two main problems in our analysis are how to understand a
(possible) state of affairs and how to represent it within first-order logic
or model theory, and how to define that a state of affairs has changed to
another state of affairs.
The first of our problems can be solved in various ways. Whevener we
talk of a state of affairs, we do that only with respect to some specified
relational structure m. Then a solution of our problem along Carnap's

Manninen and Tuomela (eds.) , Essays on Explanation and Understanding, \59-\82. All Rights Reserved.
Copyright © 1976 by D. Reidel Publishing Company, Dordrecht-Holland.

49
160 ALEKSANDAR KRON

line would be the following one. Let L be our first-order language con-
taining an individual constant for each individual in m:, and a relational
symbol for each relation in m:. Then the diagram D(m:) of m: could be
taken as a description of the actual state of affairs with respect to m:. (We
use the notation of Bell and Slomson's Models and Ultraproducts.)
There are many difficulties involved in such a definition. For example,
in this case L might be uncountable. Furthermore, in scientific and
ordinary discourse we neither require that a description of a possible
state of affairs contains only atomic sentences nor that it contains all
sentences of D(m:). Hence, in order to simplify the matter, we shall
suppose that any set K of first-order sentences defined in m: such that
m: 1= K partially describes a state of affairs with respect to m:. For example,
ifm:1=3v 1A, then 3V1A 'says' that there is an individual in m: such that A
is true for it. But this is a 'fact' about m:, isn't it? Consequently, m: itself
may be considered as a complex state of affairs and any set K of sentences
such that m: 1= K as a partial description of it. Thus, if m: 1= K, K partially
describes a sub-structure m:' of m:. Namely, if m: = <Ct, R), where Ct is a
non-empty set and R a set of relations defined on Ct, then m:' = <Ct, R'),
where R' is the set of relations 'mentioned' in K. Again, m:' is a state of
affairs partially described by K and 'contained' in the state of affairs m:
partially described by K as well.
What means that m: 1 has changed to m:z ? The consideration of the
concept of change is not the main issue of this analysis. But the fact that
m: 1 has changed to m: z can be represented by a sequence <m: 1 , t 1 ),
<m:z, t z ), where t1 and l z are moments of time. Thus, in 11 there is a state
of affairs m:1 and in t z a state of affairs m:z• Ifm:1 = <Ct1' R 1), m: z = <Ct z, R z )
and Ct1 = Ctz, we can say that the relations between individuals of Ct 1 have
changed, provided that R1 =l=Rz ; if R1 =R z and !Xl =I=!Xz, we have the same
relations in t 1 and t z, but they hold for different individuals.
Consider now a denumerable sequence of ordered pairs of this kind.
Such a sequence can be viewed as a sequence of states of a process or
a sequence of values of a two-place vector.
Our definition of partial descriptions of states of affairs may seem to be
weak: every consistent set of sentences is a description of a possible state
of affairs, i.e., it holds in a relational structure. In our further considera-
tions we shall define descriptions of a very special kind which will play
important roles in our analysis of causal relations.

50
AN ANALYSIS OF CAUSALITY 161

Let us ask what means that a state of affairs \!fl is a cause of a state of
affairs \!f2' Suppose that Kl and K2 are partial descriptions of \!fl and \!f2
respectively. Then our question can be answered by considering Kl and
m m
K 2 • Namely, l is a cause of 2 if there is a causal explanation of \!f2 in
terms of mi' This is only an incorrect way of expressing the fact that K2
can be causally explained in terms of K j • Of course, such an explanation
takes place within a theory T. Hence, the investigation of causal relations
must concentrate on T, Kl and K2 first. To such an investigation the next
part of our paper is devoted.

2. CAUSAL ORDERING

In a system of linear nonhomogeneous equations H. A. Simon defined


an asymmetrical relation interpreted by him as causal dependence. He
has developed an appropriate conceptual apparatus and proved a number
of interesting theorems concerning the connection between the causal
ordering of such a system and the identifiability concept [1].
Following the basic ideas of Simon, we develop an analogous apparatus
for an arbitrary set of formulas (wffs) with free individual variables. This
apparatus will be used in the definition of a causal explanation.
Let L be a first-order language with individual variables

and an arbitrary set of individual constants; if this set is denumerable,


we write for its elements

Let T be a closed first-order theory with equality (the axioms of T contain


no free variables) such that
* TrY vm3vnvm =J: Vn
and such that for every wff 3v"A there is an individual constant lin and

Thus, if mis a model of T, then mcontains at least two individuals.


We introduced ** in order to simplify the exposition; it can be omitted
if we use a purely model-theoretic language.

51
162 ALEKSANDAR KRON

DEFINITION 2.1. Let A(v 1, ... , Vn- 1, vn) be a wff with free variables
Vl' ... , Vn -l' Vn; Vn is real in A iff
T I- VV 1 ... Vv n- 13v nA and T I- VVl ... VVn-13vn -, A.
If Vn is real in A, then it 'matters' to A. In other words, if T is consistent,
then neither
T I- VV 1 ... VV n -lVvnA nor T I- -, 3v 1 ... 3vn _ 13v nA.
Let K be a non-empty set of wffs containing free variables, let K' be a
finite non-empty subset of K, let V' = {Vl' ... ' vn} be the set of real vari-
ables in K', let mK' and m V' be the numbers of elements in K' and V'
respectively. Then we give

DEFINITION 2.2. K is linear iff for every K'


(a) for every AeK' there is a real variable in A;
(b) mV'~mK';
(c) let k=mV'-mK', let I\K' be the conjuction of all elements of
K', let /(' be the set of universal closures of elements of K' with respect
to vir' ... ' vik' l~k~:.n and 1~I~k; if /(' satisfies (a) and (b), then
TI- VV it ... Vv ik 3!v jk +1 ••• 3!v j" I\K'.
We shall say that K is strongly linear iff it is linear and
(d) if T I- 1\ K' (iiit' ... , iiik' Cjk +l ' ••• , Cj,,),
T I- 1\ K' (h it' ... , h ik' dik +l ' ••• , dj,.)
and TI-iii/=fhi/ for some 1~I~k, then TI-cjr=fdjr for some k+l~
~r~n.
(a) and (b) need no special comment; to see what (b) 'says', it is
sufficient to note that if K contains A (v 1) and B(v 1), where V1 is the only
real variable in both A(Vl) and B(Vl), then K is not linear.
(c) means that for any choice of constants iiit' ... , ii jk we can define
unique values iiik+l, ... ,iij" of Vjk+l, ... ,Vj" respectively such that
TI-I\K'(iiit, ... , ii jk , ii jk+l' ... ' iij")' The restriction that /(' satisfies (a)
and (b) is necessary. Consider, for example,
K' = {A (v 1), B(v 1, Vl, V3)' C(Vl' Vl , V3, V4)}'
where Vt is the only real variable in A (Vt). Then k=mV' -mK' = 1. Now,
if we apply Definition 2.2. without restriction, then we can choose a value

52
AN ANALYSIS OF CAUSALITY 163

for any of the variables VI'"'' vn and the values of the remaining variables
will be determined uniquely. But if we apply Definition 2.2. to K" =
= {A (VI)}, then there is a unique value of VI' say aI' such that TI- A (at).
If in K' we choose a{ for VI' where TI-a 1 ;6a;, then there are unique
az, a3' a4 such that
T I- B (aI' a2' a3) A C (a;, az, a3, a4),
but it is not the case that
T I- A (aD A B (a;, az , a 3) A C (a;, az, a3' a4 ),
provided T is consistent. The restriction that K' must satisfy (a) and (b)
prevents us from choosing a value for VI in K', for
{Vv 1 A(v1), VVIB(Vl' V2, v3), VV 1 C(V 1 , VZ, V3' v4)}
contains a formula with no real variable.
(c) is the most important point of Definition 2.2. It defines what it
means (as we shall see soon) that a set of real variables depends on another
set of real variables. In our example, vz, V3 and V4 depend on VI' but VI
does not depend on vz, V3 or V4' This is the main ingredient of the sub-
sequent definition of Simon's causal dependence.
(d) does not exist in Simon's definition of a linear set, but we find
strongly linear sets to be important in some contexts and theorems.
There are interesting consequences of Definition 2.1. and Definition 2.2.
and we shall prove some of them. In the sequel K will always denote a
linear set, K', K", ... , Ko, K 1 , ... will denote finite non-empty subsets of K,
V, V', V", ... , Vo, VI' ... , will denote the sets of real variables in K, K',
K", ... , Ko, Kl>'" respectively. This notation will be extended in obvious
ways.

THEOREM 2.1. 1fT is consistent, then every linear set is consistent.


Proof. Let K be such a set. Using Definition 2.2. (c) and predicate logic,
we obtain

T I- 3v ik+ 1'" 3vin A K' ,


for every K'. If K is inconsistent, there is a finite inconsistent K'. Hence,
1--, AK' and 1--,3vik+l ... 3vinAK'. Therefore, T is inconsistent; this
proves the theorem.

53
164 ALEKSANDAR KRON

THEOREM 2.2. Let A(VI' ... , vn)eK, where VI' ... , Vn are the only free
variables in A; then VI' ... , VI are real in A.
Proof {A(v l , ... , vn)} is a finite subset of K. Hence, by Definition 2.2.
(c),
T I- "Iv iI ... "Iv in- 1 3!v inA (VI' ... , vn),
vhe{vl, ... ,vn}, 1~I~n.
Now, we easily derive

and
T I- "Iv iI .•• "Iv in-l 3v in VV k (A (V ii' ... , Vin- l ' Vk) ~ Vin = Vk)·
From the latter, using *, it follows that

TI- VViI ... VVin_l 3vin -, A (VI' ... , vn)·


This shows that Vin is real in A (VI' ... , Vn).

THEOREM 2.3. For any AeK, it is not the case that


TuK-{A}I-A,
provided that T is consistent.
Proof It is sufficient to prove the theorem for any finite subset K' of K.
For, suppose that for every finite K' £K and any AeK' it is not the case
that TuK'-{A}I-A, but that TuK-{A}I-A. Then there is a finite
K"£K-{A} such that TuK"I-A. Obviously, then TuK"'-{A}I-A,
where K"' =K" u {A}, contrary to our hypothesis.
Let mK' = 1; then K' - {A} = 0. Since A contains a real variable, it is
not the case that TI- A (remember that T is a closed theory).
Let mK' = n > 1, A e K', let m V' = k and let V" be the set of free variables
in K'-{A}. We shall distinguish two cases: (I) mV'=mV" and (II)
mV'>mV".
Case (I). Let K" =K'-{A} and let V' = V"={VI' ... ' Vk}. Suppose that

Tu K" I- A;
then by predicate logic
T I- 1\ K" <=> 1\ K' and T I- 1\ K" (vlk ) <=> 1\ K' (Vl k ) ,
where I\K"(v1k ) and I\K'(vl k) are obtained from I\K" and I\K' re-

54
AN ANALYSIS OF CAUSALITY 165

spectively, by substitution of VJk for vik. Since

and
T I- 3! vik_n+2 ... 3! vik _1 3 vik( /\ K" /\ Vvjk( /\ K" (vjJ =>
=> vik = VJk» ,
by the substitutivity of equivalence we obtain
T I- 3! vik - n+2... 3! vik_, 3vik( /\ K' /\ VVJk (/\ K' (vJJ =>
=> vik = vJk )
and

On the other hand, we obtain


T I- 3! vik _n+1 ••• 3! vik /\ K'.
Let B=3!vik_n+2 ... 3!vik/\K'. Then

and
T I- 3v ik-n+ I VvJk _n+I (B (vJk_n+ ,) => vik-n+ I = vJk - n+,),
where B(VJk_n+ 1) is obtained by substitution of vJk - n+I for vik-n+' in
B (vjk_n+' does not occur in B). By predicate logic,

T I- Vv ik-n+ I 3vJk _n+,v ik-n+ I =f. vJk _n+' =>


=> 3vJk _n+, • B(vJk_n+)
since vik _n+' is not free in .B(vJk _n+). Using *, we derive
T 1-. VV ik _n+,3! vik - n+2... 3! vik /\ K'
and thus T is inconsistent, contrary to the hypothesis of the theorem.
Hence, it is not the case that Tu K" I- A.
Case (II). Let K"=K'-{A}. Since mV"<mV', there are variables in
K' which do not occur in K". Let

and let

be the variables of A, k ~ m and n ~ 1. Suppose that


Tu K" I- A.

55
166 ALEKSANDAR KRON

Then, using predicate logic,

By Definition 2.2. (c),


T f- 3v I ... 3vm 1\ K" ;
hence,

By Definition 2.2. (c) and predicate logic,

Thus, T is again inconsistent, contrary to the hypothesis of the theorem.


This completes the proof.

DEFINITION 2.3. A finite non-empty K't;;.K is self-contained iff


m V' = mK'; K' is sectional iff m V' > mK'.
As a trivial consequence of Definition 2.3. we have

THEOREM 2.4. If K' is self-contained and V' = {VI" .. , VR}' then Tf-
f-3!Vl···3!v n I\K'.

THEOREM 2.5. If K' and K" are self-contained, then K' nK" is self-
contained.
Proof mK' +mK"-m(K' nK")=m(K' uK") and mV' +mV"-
-m(V'n V")=m(V'u V"). But mK'=mV' and mK"=mV". Hence,
mV' +mV" -m(K' nK")=m(K' uK"). On the other hand, m(K'uK")~
~m(V' u V")=mVK'uK", where VK'uK" is the set of variables in K' uK".
We get m(K' nK'')~m(V' n V''). But K' nK"t;;.K; hence, mVK'nK"~
~m(K' nK"), where VK'nK" is the set of free variables in K' nK".
Obviously, VK'"K"t;;. V' n V" and mVK'nK,,~m(V' n V"). Therefore,
mVK'nK"=m(V' n V")=m(K' nK"). This completes the proof.
From the proof of the preceding theorem we have

THEOREM 2.6. If K' and K" are self-contained, then K' uK" is self-
contained.

DEFINITION 2.4. A self-contained K' t;;.K is minimal iff it contains no


self-contained proper subset.

56
AN ANALYSIS OF CAUSALITY 167

THEOREM 2.7. If K' and K" are minimal, then K' nK" =0 and
V' n V" =0.
Proof K' n K" = 0 follows from Theorem 2.6. and Definition 2.4. Now,
m(K'uK")=mK' +mK" and meV' u V'')=mV' +mV" -mev' n V").
Since mY' =mK' and mV" =mK", we have that meV' n V''»O. This
implies m(K'uK"»m(V'u V")=mVK'uK", contrary to the hypothesis
that K is linear.

DEFINITION 2.5. Let M be the union of all minimal subsets of K. We


say that K is causally ordered iff
(a) M~K,
(b) for every non-empty K' £K there is a self-contained K" £K such
that K'£K".
That a linear K may be the union of minimal subsets is clear if

That a linear K may have no self-contained subset is clear if we take

K = {A (Vl' V2)' B(Vl' V2' V3), C(Vl' V2' V3' V4), ... }.
The importance of (a) and (b) will be seen from several theorems about
causally ordered K, which we prove below. The reader familiar with
Simon's original definition of a causally ordered system will notice that
Definition 2.5. coincides with Simon's definition if K is finite.

THEOREM 2.8. If K is causally ordered, then M ~0.


Proof Since K is non-empty, it contains a non-empty finite subset K';
by Definition 2.5. (b), there is a self-contained K" such that K' £K".
K" is either minimal or it contains a minimal subset.

THEOREM 2.9. If K'£K-M, then K' is sectional.


Proof If K' is self-contained, then K' contains a minimal subset.
Hence, it is not the case that K' £K - M.

THEOREM 2.10. If K' and K" are sectional and K' uK" is self-contained,
then either K' n M #/1 or K" n M ~ 0.
Proof Since K' uK" is self-contained, it is not the case that K' uK" £
£K-M. Hence, (K'uK")nM=(K' nM)u(K' nM)~0.

57
168 ALEKSANDAR KRON

THEOREM 2.11. If K' cK is minimal, then there is a self-contained


K" :;) K' such that K" n (K - M) #/J.
Proof Take any sectional K"'; if K' uK'" is self-contained, then the
theorem is proved; if K' uK'" is sectional, by Definition 2.5. (b) there
is a self-contained K", K' uK'" cK". This proves the theorem.

THEOREM 2.12. Let VM be the set offree variables in M and let VK - M


be the set offree variables in K-M. Then VMn VK - M :;60.
Proof By Definition 2.5. (b), Theorem 2.8. and Theorem 2.11., it
follows that there are a self-contained K" and minimal K I , ... , Kn> n~ 1,
such that
KI U··· U Kn c K", K' = K" - (KI U ••• U Kn)
contains no minimal subset and K"n(K-M):;60. Let V', VI"'" Vn be
the sets of real variables in K', K I , ... , Kn respectively. We have
mK" = m (KI U··· U Kn uK') = mKI + ... + mKn + mK' =
= m V" = m (V1 u ... u Vn U V') =
= mV1 + ... + mVn + mY' - m(VI u ... u V n) n V'),
since (KI u···uKn)nK'=0 and KjnKj=Vjn V j =0 by Theorem 2.5.,
l:r:;.i,j:r:;.n,i:;6j. (Note that (VIu···uVn)nV' is not necessarily empty,
for V' is not necessarily V" -(VI u ... u V n). Since mKj=mVj, we have
mK' = mY' - m«(VI u ... u V n) n V').
If mK' = m V', then K' is self-contained, but not minimal. Hence, it
contains a minimal subset, which is a contradiction. Thus, K' is not self-
contained. Hence, m V' > mK' and thus
m((V1 u ... u V n) n V') > o.
But (VI U •.• U Vn)n V's;;; VMn VK - M; this concludes the proof.

THEOREM 2.13. Let K' be sectional, let K I, ... , Kn be minimal, let


Ko = K1 U .•• u Kn and let Ko u K' be self-contained. Then
(a) every AeK' contains a Vk$ Vo and at least two free variables;
(b) V' n Vo :;60;
(c) if Voc V', then mY' -mK' =mVo ;
(d) V':;6Vo'

58
AN ANALYSIS OF CAUSALITY 169

Proof. {A} £ K' is sectional; hence, m VA> 1 and A contains at least


two free variables. On the other hand, if VA £ Yo, then m (Ko U {A}) >
>mVo, which is impossible. This proves (a).
(b) Analogous to the proof of Theorem 2.12.
(c) SinceKo uK' is self-contained, m(Ko u K')=m VKouK ' =mKo +mK'.
But mVKoUK.=mVO+mV' -m(Von V'). Since Voc: V', mVKoUK.=mV'.
Hence, mY' -mK' =mVKouK.-mK' =mKo.
(d) From (c), since K' ~0.

THEOREM 2.14. Let K' be sectional and let

choose individual constantsal' ... , an> substitute themforvl' ... , vn respective-


ly in K' and let K; be obtained in this way. Then K; satisfies (a), (b) and
(c) of Definition 2.2.
Proof. By Definition 2.5. (b) there is a self-contained K";:) K'. Since
K" is not minimal, it contains a proper minimal subset. Let Ko be the
union of all minimal subsets of K". Suppose that there is a VkE V" n
n (VM- yo); then there is a minimal Kl such that VkE Vl' We shall show
that Kl nK" =0.
Suppose that Kl n K" ~0. Kl n K" is self-contained, by Theorem 2.5.;
hence, it is minimal or it contains a minimal K 2 • By Theorem 2.7.,
either Kl n K" n Ko = 0 or K2 n Ko = 0. Thus, either Kl n K" £ K" - Ko or
K2 £ K" - K o, and it follows that Ko is not the union of all minimal
subsets of K". Hence, Kl n K" = 0.
Now, by Theorem 2.6. we have Vl n V" =0. This is contrary to our
hypothesis that there is a VkE Vl n V". Hence, V' n VM£ Yo'
This shows that for every sectional K' there is a sectional K"'2K' and
a union Ko of minimal subsets such that K'" u Ko is self-contained and
that Theorem 2.13. can be applied. By (a) of 2.13., every AEK; contains
at least one free variable. The condition (a) of Definition 2.2. is satisfied.
We have to show that mK;~mV;. Now, mK;=mK' and V' n VM£ yo;
hence, mK;+mKo~mV' +mVo-m(V' n yo) and mK;~mV'-m
(V' n VM)=mV;. Thus, K. satisfies (b) of Definition 2.2.
In order to show that K; satisfies (c) of Definition 2.2., suppose that

59
170 ALEKSANDAR KRON

and that the set K; of unjversal closures with respect to

satisfies (a) and (b) of Definition 2.2., where vliE V;, 1 ~i~k. Obviously,

Moreover, the set K' of universal closures of elements of K', with respect
to

satisfies (a) and (b) of Definition 2.2. We also have


mY' = mY; + meV' (\ VM ),
since V; (\ VM = 0; hence,
mY' -mK' = mY; -mK; + meV' (\ VM ) = k +n.
By Definition 2.2. (c)
T f- 'Iv! ... VVn 'Ivit ... 'Ivik 3! v ik+ I ••• 3! vim 1\ K' .
Using predicate logic, we easily obtain

Therefore, K; satisfies (c) of Definition 2.2.


COROLLARY. Let K be causally ordered and let (K-M)c be the set
of results of substitution of constants for variables of VM (\ VK - M in the
wffs of K-M. Then (K-M)c is linear.
The values of free variables in the wffs of (K-M)c depend on the
values of free variables in the wffs in M, but not vice versa. Of course,
we need individual constants in L in order to substitute them for the
variables in the wffs of M. This could be avoided in a purely model-
theoretic approach, but we do not insist on this point.
It is clear that (K-M)c contains at least one minimal subset (this
follows easily by Theorem 2.13. (c), since K is causally ordered). Let us
define Kl as (K - M)c and let Ml be the union of all minimal subsets of
Kl. Kl is not necessarily causally ordered. If it is, we can find the values
of the free variables in Ml, introduce individual constants for them and

60
AN ANALYSIS OF CAUSALITY 171

substitute the constants for the elements of VM l n VK 1_Ml in the wffs of


K1_M1 etc. We give a simple example. Let

K = {A (v t ), B(v 1, v 2 ), C (V2, V3), D (VI' V2 , v 3 , v4 )· .•. }.


Then
M= {A(Vl)}'
Suppose that we have TI- A (d1). Then
Kl = {B(d l , V2), C(V2' v3 ), D(d l , V 2 , V3' V4), ••• }
and
MI = {B(d l , V2)}'
Suppose that
T I- B(dl , d2 );
then

and

etc.
If K' is causally ordered, it is not necessary that K'+ 1 is causally ordered.
Hence, it is possible that our procedure of substituting the values of
variables (constants) obtained from the minimal subsets has to be stopped
(in case that M'+1 =K'+1) or that we have to substitute some arbitrary
constants for some variables of K'+ 1 (in case that for some sectional
subset of K' -M' there are only sectional supersets).
Now we define a finite or countable sequence of causally ordered sets
KO, Kl, ... , K', ... , starting with a causally ordered set K.

DEFINITION 2.6. Let K be causally ordered. Then KO=K; suppose


that K' is causally ordered and let M' be the union of minimal subsets of
K'; let

and let

be the set of individual constants such that for every minimal K: of K',
where

61
172 ALEKSANDAR KRON

T uK: I- /\ K; (ch' .. "' cik),


where cil is substituted for vii in /\K;, 1 ::;;;i::;;;k and cit' """' CikECMr.
Then K r+ 1 is obtained from K r- M r by substitution of cn for Vn in the
wffs of K r - M r , for all n. (For the sake of simplicity, we supposed that K
was countable.) K r is the derived set of order r.
We shall now prove an important theorem concerning a sequence of
causally ordered sets.

THEOREM 2.15. Let KO be causally ordered, let MO be the union of


all minimal subsets of KO, let K\ K2, ... , K r, ... be the (finite or infinite)
sequence of derived causally ordered sets furthermore, let M 1 , M 2, " ..
M r , ... be the unions of all minimal subsets of order 1,2, ... , r, ... re-
spectively; finally, let M"'=UrMr. Then
(a) Tu KO I- B

for all BEM'" and


(b) Tu M'" I- B'
for all B'EKO.
Proof Let us prove (a) first. We obviously have
Tu KO I- B

for all BEKo. Suppose that


Tu KO I- B

for all BEKr (induction hypothesis). Let K; be a minimal subset of order r.


We have

where V;={Vl"'" vn }. Hence, by **

In this way we obtain

T I- Vn = an
for all VnE Vw' Note that the minimal subsets of K r are disjoint as well
as the sets of their free variables and that all is unique. By induction

62
AN ANALYSIS OF CAUSALITY 173

hypothesis, for all

where

we have

Therefore,

and this proves (a).


(b) Let B' E KO; then there is a finite self-contained subset K~ r;;. KO
such that B' E K~. Let M~ be the union of all minimal subsets of K~;
obviously, M~r;;.Mo. Now, K~-M~ may be empty. If it is not empty,
(K~ -M~)r;;.Ko -Mo and after substituting constants from CMo for
variables of VM0I"I (KO _ MO) in the wffs of KO - MO, we obtain a finite
self-contained K; r;;. Kl, where K; is obtained from K~ - M~ by this sub-
stitution. Again, K; - M; may be empty, where M; is the union of all
minimal subsets of K;. If it is not, then there is a finite self-contained
K~ r;;. K 2 obtained from K; - M; by substution of constants from CMl
for variables from VM 11"1(Kl_Ml) in the wffs of K1_Ml, and so on.
Briefly, since K~ is finite, there is a finite sequence K~, K;, ... , K; of
self-contained subsets of KO, K1, ... , K' respectively such that K:+ 1,
O~s< r, is obtained from K~-M~ by the same substitution by which
K S+1 is obtained from KS_Ms. It is clear that there is always a K; such
that K; = M;. Hence,
T u M W I- " K; .
Suppose that
T u M W I- "K~,
O<S~ r (induction hypothesis). We also have

where M:- 1 is the union of all minimal subsets of K:_ l' Let

63
174 ALEKSANDAR KRON

Obviously,

T I- 3! V1 ... 3! Vn 1\ M;-1
and using **

Now, I\K~=I\K;-1(el1, ... ,eln)' where el1, ... ,eln are substituted for
V1, ... , Vn respectively in
1\ K;-1. We have therefore

Tu M()} I- 1\ K:- 1 •

In this way we obtain K~. This procedure can be applied to every B' EKo
and this proves (b).
There is an interesting consequence of Theorem 2.15. Let ~ be any
relational structure such that ~FxTuKo, where K r is causally ordered
for every r=O, 1, ... , x is a valuation and all variables of L are in yo.
Then x is unique. For, by Theorem 2.15., we easily obtain
~ FxTu KO iff ~ FxTu M()}.
It is clear that there is one and only one x such that ~ FxTuM"'.

DEFINITION 2.7. Let K; be a minimal subset of a derived K r , let


VkE V; and let Vk$ V~ for all K';, s<r, where K~ is a minimal subset of K S ;
then Vk is endogenous to K;; if elk has been substituted for Vk in K;, then
Vk is exogenous to K;.

DEFINITION 2.8. Let V; and V~ be sets of endogeonous variables in


K; and K; respectively, where K; and K~ are minimal and of order rand s
respectively; then V~ is directly causally dependent on V; iff there is at
least one VkE V; exogenous to V~, and we write V; ~ V~. V~ is indirectly
causally dependent on V; iff there are V;, ... , V~ such that
V; ~ V;, V; ~ V~, ... , V~ ~ V~.
In this case we write again V; ~ V;'.
Now we can define what means that a state of affairs is a cause of
another state of affairs.

DEFINITION 2.9. Let KO be causally ordered, let K; and K;' be derived

64
AN ANALYSIS OF CAUSALITY 175

minimal subsets of order rand s respectively, r<s, let

and v; --t V;' ;


let and ~2 be relational structures such that the relation symbols of
~1
K; and K;' are defined in ~1 and ~2 respectively. Then ~1 is a cause of
~2 and ~2 is an effect of ~1 iff

Obviously, in Definition 2.9. ~1 and ~2 are states of affairs, in ac-


cordance with our introductroy considerations.
From Definition 2.9. a meaning of the phrase "The property F is a
cause of the property G" can be derived. First of all, the relation --t be-
tween sets of endogenous variables can be defined to hold between sets
of wffs of minimal subsets. Let us write K; --t K;' iff V; --t V;', where,
for example,

Suppose that a property (relation, predicate) F(V1' ... , Vk) is defined as


I\K; and that a property G(Vk+l' ... , Vk+m) is defined as I\K;'. Then
we can say that F(V1' ... , Vk+m) is a cause of G(Vk+1' ... , Vk+m) if there are
a causally ordered KO and derived minimal subsets K; and K;' such that
K;--tK;'.
3. CHANGE

In this section we shall consider the concept of change. Let J O and KO


be causally ordered and let J 1 , ••• , J r , ••• and K 1 , ••• , K r , ••• be the sequences
of causally ordered sets derived from J O and KO respectively; let VO,
Vt, ... , Vr, ... and W O, W 1 , ... , Wr, ... be the corresponding sets of free
variables and let M O, Mt, ... , M r, ... and N°, Nt, ... , N r, ... be the cor-
responding unions of minimal subsets. Suppose that VO = WO and that
there is a mapping/from the set of all minimal subsets derived from J O
(including those in J O) to the analogous set of minimal subsets derived
from KO (including those in KO) and satisfying the following conditions:
(l) / is 1-1 (obviously,!: MOl --t N Ol ).
(2) if/(J;)=K~, then V;= W; and r=s.
(3) J;--+J;' iff/(J;)--t/(J;').

65
176 ALEKSANDAR KRON

Let J; c J' be a minimal subset such that


T I- A J; Af(J;).
~

It is clear that although J O and KO have the same causal ordering, the
relation symbols of J; differ from the relation symbols off (J;).
Now, let ~1 and ~2 be two relational structures such that (a) ~1 FxJO
and (b) ~2 Fyf(JO), where, obviously, if ~1 and ~2 have the same in-
dividuals, then x =p y. ~1 and ~2 are two different sates of affairs with the
same causal ordering. If we have (a) in a period tl of time and (b) in a
period t2 later than t 1, then we can say that ~1 has changed to ~2' There
is no difficulty to think of the transition from ~1 to ~2 as of a change in a
system S of individuals, where ~1 and ~2 are two consecutive states of S.
Of course, the transition from ~1 to ~2 is here not explained, but simply
introduced.

DEFINITION 3.1. Let J O, KO etc. be defined as above and let J;cJ r


be minimal and such that TI- AJ;~ A f(J;); if K;=f(J;), then we
write X(J;, K;) and, in order to simplify the exposition, we say that
J; has changed to K;; we write X(J°, KO) iff X(J;, K;) for some J; and
some r.
Now we shall prove several theorems concerning X and ~.

THEOREM 3.1. Let J O be causally ordered, let J\ ... , J r , ... be causally


ordered and derived from J O; let KO, K\ ... , K r , ... be obtained from the
preceding sequence by changing one and only one minimal subset J; of
order r, i.e., X(J;, K;). Thenfor all minimal J~' such that not J;~J;',
T u KO I- A J;' .
Proof. Every VkE VO is endogenous to one and only one minimal subset
J~. For, it is endogenous to no other minimal subset of order k, it is
substituted in all minimal subsets of higher order and it does not occur
in any minimal subset of order lower than k.
We shall distinguish two cases; (1) s:!l;.r and (2) s> r.
Case (1). Let VkE V;; then Vk is neither endogenous nor exogenous to
any minimal J;' different from J;, s:!l;. r. Obviously, TI- A J;' <:> A K;',
since J;' is K;' (this follows by a trivial induction). Therefore,
T u KO I- A J;' ,
by Theorem 2.15.

66
AN ANALYSIS OF CAUSALITY 177

Case (2). Suppose that not J;~J~/, r<s; then not K;~K~/. Let
p = r + 1, let K; be minimal and such that not K; ~ K; and let Np be the
union of all such K;. Then no VkE W; is exogenous to any K;. From the
proof of Theorem 2.15. (a) it follows that
T u KO I- B,

BEK' -N'. But K; is obtained by substitution of constants for variables


in some finite K' cK' -N' and no VkE W; occurs in the wffs of K'. The
relation symbols of K' are identical with the relation symbols in the
corresponding J Ie J' - M' and W' = V'. Hence,
Tu KO I- 1\ J' .

The exogenous variables of J; occur in the wffs of J ', as well as in a


finite M; not containing J; such that by Case (1) (since 1\ M; is 1\ N;)
T u KO I- 1\ M;.

But we have
Tu M; I- 1\ J;,
by Definition 2.6. and this proves the theorem for p = r + 1.
Consider J~/; it is obtained by substitution of constants (say a1 , ... , ak )
for variables (say V 1 , ... , V k , the exogenous variables of J~/) in a finite
Js - 1 cP- 1 _M s - 1 • Let r<p<s, suppose that not J;~J; and let Mp be
the union of all such J;, for all p. Then Vl'"'' VkE Vp and, in particular,
there is a finite union M~ c Mp of minimal subsets of different orders
such that v1 , .•. , VkE V~. Each J; is obtained by substitution from a finite
J p - 1 cJ P - 1 -M P - 1 • Let our induction hypothesis be
T u KO I- 1\ Jp and T u KO I- 1\ M"P'
for all, p, Jp and M~. Then we have
T u KO I- 1\ Js - 1 and T u KO I- 1\ M~/-l

for all Js - 1 and all M;'_~.


The exogenous variables of J~' occur in the wffs of some Js - 1 and in
the wffs of the corresponding M~/-l and do not causally depend on the
variables of J;. For that particular M~/-l we have
Tu KO I- 1\ M;'-l (al'"'' ak),

67
178 ALEKSANDAR KRON

by Definition 2.6. On the other hand, /\ J;' is /\ J5 - 1 (a l , ... , ak ). This


proves the theorem.
Theorem 3.1. can be stated differently, as follows. Suppose that the
hypotheses of the theorem are satisfied. If
m: Fx JO and m: F, K O ,
then x and y coincide at all places where the variables are not causally
dependent on the variables of J;.

THEOREM 3.2. Let the hypotheses of Theorem 3.1 be satisfied and let
XV;, K;) consist in changing one and only one wff A of J;. More precisely,
A EJ; is replaced by B, where B contains the same free variables as A does,
and K; is the result of this change. If T is consistent,

and both
TI- /\J;(al, ... ,ak) and TI- /\K;(bl, ... ,b k),
thenfor noj, TI-aj=b j, where l~j~k.
Proof. Let us first note that if Tl-aj=D j for allj, then from the hypoth-
eses of the theorem we can derive a contradiction. For, then
T I- /\ J; (a l , ... , a k) -¢;> /\ K; (a l , ... , a k),
while, on the other hand, from XV;, K;) it follows that
T I- /\ J; (a l , ••• , ak) <P /\ K; (a l , ... , a k)
and T is inconsistent. Hence, under the hypotheses of the theorem, it
is not the case that TI- a j=b j for allj. Now we shall show that if TI- aj=b j
for at least onej, then TI-aj=b j for allj.
Let VA be the set of free variables in A and let Vl be the set of free
variables in J;'=J;-{A}. We shall show that mVl -mJ;'=1. In order
to do that, it is sufficient to prove VAS;; Vl'
Suppose that the contrary is the case. Then for some j, vjEt: Vl' Hence
V A -VA nVl =tf0, i.e., VAnVlcVA and mVA-m(VAnVl»O. But
mV;=mVl +mVA -m(VAn Vl) and it follows that mV;>mVl and
mV;-l~mVl' Since mJ;'=mJ;-l=mV;-l, we obtain mJ:'~mVl
and mJ;' = m V l . Therefore, J; is not minimal and this proves VAS;; V l •
Now, mV;=mVl =mJ;. Hence, mVl-mJ;' = 1.

68
AN ANALYSIS OF CAUSALITY 179

Since J~ is minimal, J;' with respect to any variable satisfies (a) and
(b) of Definition 2.2. By Definition 2.2. (c),
(1) Tf- 'v'vj3!vt ... 3!vj_13!vj+l ... 3!Vk 1\ J;.
Since J;' is K;', if Thij=b j for at least onej and
(2) T f- 1\ J;' (aI' ... , ak) and T f- 1\ K;' (b l , ... , bk),

then Tf-aj=b j for all j. But (2) follows from the hypotheses of the
theorem. Therefore, for no j, Tf-aj=b j '

THEOREM 3.3. Let J O be causally ordered, let J1, ... , J', ... be causally
ordered and derived from J O; let KO, K1, ... , K', ... be obtained from the
preceding sequence by X(J;, K~) for exactly one J~, where X(J~, K~)
consists in replacing exactly one AEJ~ by B, VA = WB • Let

V~ = W~ = {VI' ... , Vk},


T I- 1\ J~ (aI' ... , ak) and T f- 1\ K; (b l , ... , bk ).
Suppose that V; ~ V;' directly, where

and suppose that


T I- 1\ J;' (a k +l' ... , aHm) and T I- 1\ K;' (b H 1, ... , bHm ).
If T is consistent and there is exactly one ViE V;
exogenous to V;', then for
no 1 :~:..j~m, Tf-aHj=b Hj .
Proof. J;' and K;' are obtained from finite J,cJ' - M' and K,cK' - M'
respectively, by substitution of constants for variables, where, obviously
J, is K,.
Let V r = W,={w 1 , ••• , Wn> VHI ' ... , Vk+m}, n';:?; 1. We know that if we
choose arbitrarily values for some n=mV,-mK,>O variables, then the
values of the remaining ones are determined uniquely. Suppose that

and

and that for no p, TI-cp=dp, q+ I ~p~n, and for no u, TI-au=b u '


k+l+ I ~u~k+m. We shall show that q+lx<n.

69
180 ALEKSANDAR KRON

If q + I ~ n, then we can choose arbitrarily values for n variables from


the list Wl"'" wq , Vk+ 1, ... , Vk+ I such that the values of the remaining
variables are not determined uniquely, contrary to Definition 2.2 (c).
Hence, q+l<n.
From the hypotheses of the theorem, we have that J;' is

and that K;' is

where cn is ai and I n is b i and ai and b i are substituted for Wn=ViE V;. By


Theorem 3.2. it is not the case that n·cn=Jn• Moreover, we have

and
T I- 1\ J,(c l , ... , Cn - l, bi' bk+h ... , bk+m)'
By the preceding argument, there is no j such that TI- ak+ j =b k+j (other-
wise we would have I ~ 1 and n - 1 +1< n). This completes the proof.
By an easy inductive argument Theorem 3.3. can be generalized to
indirect causal dependence, i.e., to the case where directly

and for each J:'+m+ 1 there is exactly one ViE V:'-tm exogenous to V:'+m+ 1,
O::=:;;m::=:;;s-l.
Suppose that the causal ordering is defined on a strongly linear set;
then we call such an ordering S-causal ordering.

THEOREM 3.4. Let J O be S-causally ordered, let J 1, ... , J', ... be S-


causally ordered and derived from J O; let KO, Kl, ... , K', ... be obtained
from the preceding sequence by X(J;, K;) for one and only one J; and let
this change be as in Theorem 3.2. Let
T I- 1\ J; (a l , ... , ak) and T I- 1\ K; (b l , ... , bk);
then for all J;' such that directly J; -? J;', if
T I- 1\ J;' (ak+l'"'' ak+m) and T I- 1\ K;' (b k+l, ... , bk+m),
then it is not the case that TI-ak+j=bk+j, at least for one 1 ::=:;;j::=:;;m.

70
AN ANALYSIS OF CAUSALITY 181

Now, for some 1 ~i~k, Vi is exogenous to both J~' and K~'; suppose that
J~' and K;' are obtained from J,c.J' -M' and K,c.K' -M', respectively,
by substitution of ai and b i for Vi' such that J;' is

andK;' is

From the hypotheses of the theorem,

and
T I- /\ K,(b l , ... , bk , bk+l' ... , bk+m)'
By Definition 2.2. (d), if for allj TI-ak+j=bk+j, then for all i TI-ai=hj,
contrary to Theorem 3.2. Therefore, at least for one j it is not the case
that TI-ak+j=bk+j'
Let us now reconsider X(J;, K;). In Definition 3.1. we have required
only that TI- /\J;~/\f(J;) and V;= W;. In Theorems 3.2., 3.3. and 3.4.
a specific concept of change is used where a single AEJ; is replaced
by B. Let us write Xl (J;, K;) for such a change. As a consequence of
Xl (J;, K;), in Theorems 3.3. and 3.4. we had that if J; ~ J~', then it is
not the case that TI- /\ J;' <;> /\ K;'. For, suppose TI- /\ J;' <;> /\ K;'; then, if

and
T I- /\ K;' (bk+l' ... , hk+m) ,
we derive TI-ak+j=bk+j, for all 1 ~j~m, contrary to these theorems.
If T is a complete theory in the sense that for all closed A either TI- A
or TI--,A, then Theorems 3.2., 3.3. and 3.4. can be strengthen such that
'not TI-aj=b/ (in 3.2.) and 'not TI-ak+j=bk+/ is replaced by 'TI-aj#
#h / and 'TI- ak+ j #bk+ / respectively. Moreover, in this case we would
also have TI-/\ J;' ~ /\ K;' and we would be allowed to write X(J;', K;').
Thus, if T is complete, as a consequence of Theorem 3.1. we have:
if X(J;, K;) and not J;~J;', then not X(J;', K;') (under the hypotheses
of the theorem). As a consequence of Theorems 3.3. and 3.4. we have:

71
182 ALEKSANDAR KRON

if X(J;, K;) and J; ~ J;' directly, then X(J;', K;') (under the hypotheses
of the corresponding theorem).

4. CONCLUDING REMARKS

In this section we shall try to explain what we mean by the phrase 'The
change of m1 m2 to is a cause of the change of m3 m4" to
m
LetJ° and KO have the same causal ordering, let (a) 1 F /\ J; (a1, ... , ak)
m
in fl' (b) 2 F /\ K;(ti;, ... , a~) in f2' (c) X(J;, K;) and let 12 be later than fl'
If(d)J;~J;', (e) m3 F/\J;'(b 1 , ... ,b ll ) in 11 and (f) m4F/\K;'(b~, ... ,b~)
m m
in f2' then we say that the change of 1 to 2 , i.e., X(J;, K;) is a cause
of the change of m3 m4, to i.e., of X(J;', K;').
Suppose that mhm2, m3, m4 are such that ai and a; denote the same
m m
individual ai in 1 and 2 respectively, 1 ~ i ~ k, and that bj and bj
denote the same individual b j in m3 m4
and respectively, l~j~n. If
(a), (b), (e) and (0, we can say that the individuals a h ... , ak (b 1 , ... , bll )
have changed, i.e., that they are in different relations in 11 and 12 , If,
moreover, (d) holds, then we can say that the change X(J;, K;) of in-
dividuals a1"'" ak is a cause of the change X(J;', K;') of individuals
b 1 , ... , b ll • Finally, if k=n and a 1 is b 1 , ... , a k is bk> then we say that the
change X(J;, K;) of a h ... , ak is a cause of the change X(J;', K;') of
a 1 , ... , ak'
This completes our analysis of causality.

University of Belgrade

BIBLIOGRAPHY

[1] H. A. Simon, 'Causal Ordering and Identifiability', in Studies in Econometric


Method, ed. by W. C. Hood and T. C. Koopmans, John Wiley & Sons, New York,
1953.

72
ALEKSANDAR KRBN
-

LOCIKA I VREME
Vreme je jedan od osnovnih pojmova kojim kao
Ua je proieto naSe saznanje i naSe razumevanje
sveta. Mada se ne moie dokazati da je nemo-
guCe definisati pojam vremena, takva definicija
ne postoji. U definioijama koje su filosofi pred-
lagaLi definijens obiEno sadrii termine koji za-
vise od pojma vremena. Na primer, za Aristotela
je vreme ,,mera kretanja u odnchsu na pre i
pcrsle", Pllotin ga definiSe kao ,,?hot duSe u kre-
tanju kojim prelazi iz jednog stanja delovanja
ili isku~stvau drugo", a Avgustin tvrdi da postoji
samo ,,sadasnjost proSlih stvari, seCanje; sadal-
njmt 1sadaSnjih stvari, videnje; sadaSnjost stvari
buduCih, ofekivanja".')
Iako je vreme pojam koji, izgleda, mora ostati
nedecinisan, filosofi, nauEnici i logifari su nasto-
jali da taj pojam objasne. Filosofi to Eine tako
Sto izgraduju teonije ili filosofske slsteme u koji-
ma pojam vremena zauzima odredeno mesto i
nalazi se u odredenim odnosima prema drugim
fil~osofskimkategorijama. Pronifubi u takve teo-
rije i sisteme, uoEavajuCi te odnose, mi razu-
mevamo njihwu poruku, pa tako izgleda da
razumemo i Sta je vreme. U tom razumevanju
m i ne zadovoljavamo samo naSu radovnalost i
fed za znanjem, veb i duboke emotivne, reli-
giozne i metafizieke potrebe. Razumevanjem
jedne fiiwsofije mi nalazimo odgovore na pitanja
koja nas dug0 mlufe i odnose se na naSu sud-
binu, koja se odvija u vremenu, i njenu svrhu.
U tom pagledu pourno je Hajdegerovo Bike i
vreme. To nije samo nastojanje da se odredi
jedan novi pojam biCa proietog traianjem. Ose-
CaiuCi ogrmnu razliku izmedu dva pojma -
biCa i vremena - koju nam je ostavila tradicija.
Hajdeger pokuSava ne samo da ie smanii. vet
da pokaie kako su te dve kategorije jedno te
isto. Ali EineCi to, uzimajuti kao polaziSte biCe
ljudskog iivota proieto trajanjem i prolaunoSfu,
on ocrtava perspektivu iz koie naSe trajanje i
niie niSta drugo do prolaznost bez utehe, ali da
nam uteha nije ni potrebna, ier ta prolaznost
i nije drugo do samo bite. Kada zanemarimo
raciomalnost Hajdegerovih analiza, onda to su-
') Ovaj napis nastao je na osnovu predavanja koja je
autor pod istim naslovom odriao u Domu omladine u
Beogradu 10. i 17. maja i 1. juna 1972. godine, i objav-
!jen je u Siroj verzjfi u fasopisu ,,IdeJeW, god. 111, br. 3
(str. 47-81), maj-juli 1972.

73
ALEKSANDAR K R O N

morno delo, tako nam se bar Eini, vrSi duboko


ljudsku funkciju: ono usmerava snagu naSih
osdanja u pravcu u kojem moiemo podneti
,,najviSe stvarnosti" ne traieEi nikakvu naknadu.
Naulka u prouravanju vremena ima mnogo ma-
nje emotivnih funkcija. Fizika, na primer, pro-
uEava kako merimo ono Sto Njutn naziva ,,rela-
tivno vreme", kako promenljiva t, oznaravajufi
neku opStu meru trajanja, pomaie da razumemo
odvijanje nekog procesa, kako se pojam vremena
odnosi prema pojmovima prostora i kretanja,
,i kako od njih zavisi. Pojam vremena tu takode
astaje nedefinisan. Cak i u klasiEnoj mehanici,
koja uiiva glas gotovo savrSene teorije u kojoj
niSta nije nedoreEeno, taj pojam ostaje neobjaS-
njen. Pogledajmo kako je Njutn razlikovao
apsolutno i relativno vreme i kako je pokuSao
da odredi ono prvo. ,,Apsolutno, istinito i mate-
matiEko vreme, samo po sebi, zbog svoje sop-
stvene prirode, teEe ravnomerno, bez obzira na
bilo Sta spoljagnje, i zove se drukEije trajanje;
relativno, prividno i obiEno vreme je neka opai-
ljiva i spoljaSnja (bez obzira da li je taEna ili
neravnomerna) mera trajanja, odredena pomofu
kretanja, koja se obiEno upotrebljava umesto
pravog vremena; to je Eas, dan, mesec, g ~ d i n a . " ~ )
F,izikalne teorije nam, u aajboljem ,sluEaju, kaiu
kako se ~pojamvremena mora upotrebljavati u
nauci i kako, u zavisnosti od te upotrebe, vreme
moiemo zamiSljati. Tako f e nam teorija relativ-
nosti refi da vreme nije apsolutno, da o vre-
menu moiemo govoriti samo kada smo prethod-
no def,inisali sistem merenja u kojem prc;:?a-
vamo neko kretanje. Fizika uzima u obzir vreme
kao neSto Sto je vei: d a b ,i Sto treba samo prila-
goditi odredenim svrhama. Nema nikakve sum-
nje da pojam vremena pre nauEne analize u
fizici nije isto Sto 1 pojam vremena posle te
analize, ali je jasno da fizika ne moie da objasni
Sam fenomen vremena kao nezto Sto je iz osnove
.stvoreno u samoj nauci.
Mada su filosofija i nauka mnogo uticale na
razwmevanje onoga Sto nazivamo vremenom,
one su mnogo manje uticale na naSe doiivlja-
vanje vremena, trajanja, prolaznosti i svega Sto
je povezano s ovim osnovnim predstavama na-
Seg svesnog iivota. U tom pogledu onaj svet
sebanja, mirisa i slika koji se pred nama odvija
u delu Prusta, moie viSe da nas uzbudi i ukaie
na prirodu tih [predstava negn bilo koia teorija
moderne fizike i vise od ma kojeg Hajdegerovog
dela. Treba, igleda, verovati d a postoji neka
osnovna intuicija vremena koju imamo u ,nagem
svakodnevnom iivotu i koju bismo mogLi nazvati
zdravoravumskim pojmom vremena. Taj pojam,
3 Alathematical Principleq of Natural Phtllosophy, ed.
Floiian Cajori, Berkeley, Calif., 1947, p. 6.

74
ALEKSANDAR KRON

ili predstava, bogat je sadriajem i utkan je


duboko u naSa oseEanja, a teSko se moie odvojiti
i od religioznih i slikovnih elemenata koji ga stal-
no prate. Cinjenica Sto niko, verovatno, a i sa sta-
noviSta zdravog razuma nije u stanju da vreme
odredi, me treba nimalo da nas zbwjuje; oaa
je sasvim spojiva s postojanjem takve intuicije.
Setimo se r e g Avgustinovih: ,,&a je, onda,
vreme? Ako me nLko ne pita, ja znam; ali ako
h o b da objasnlim onome ko pita, ja ne znam."
Ove te9koi.e na koje nailazi obifna svest navele
su mnoge naufnike, pisce i filosofe da tvrde
kako vreme ,,nije niSta stvarno", kako ono ne
,,postoj,i", kalko je to ,,samo naS na6in opaianja",
itd. Ttikva tvr8en-j~su irelevantna, jer n i a a
ne objasnjavaju; ona nam ne pomaiu da razu-
memo predstavu koju imamo o vremenu. Pret-
postavimo da mamo Sta znafi da vreme ,,ne
postoji" i da je to neka vrsta iluzije koj,oj svi
pcdleiemo; tada bi jog uvek preostajalo da se
objatsni poreklo i priroda te il,uzije, a to se ne
bi ,moglo ufinibi bez analize njenog sadriaja,
Sto 6e reEi pojma vremena. Tvrdnja da vreme
,,ne postoji" nije odgovor na pitanje Eta je
vreane. Postojalo ono ili ne, postavljaju se ista
pitanja. Calk i d a je vreme iluzija, mi je n e
moiemo razlikovati od stvarnwti.
LogiEka analiza pojma vremena moie se obav-
ljati na dva naeina. Prvi .se sastoji u ispitivaaju
i objahjavanju naSeg zdravorazumskog pojma
vremena. Neki filosofi koji s u okrenuti nauci
prigovorili bi da je taj pojam izvor velikog
broja nesporazuma, protivrefnosti i zastarelih
empiristifkih uverenja. Prema njihovom millje-
nju, moderna nauka je potpuno odbacila takvo
shvatanje vremena pokazavSi sve njegove nedo-
statke. P a ipak. pre nego Z'to se moiemo uveriti
u takvo tvrdenje i pre nego Sta sagledamo u
Eemu je doprinos moderne nauke, neophodno je
zapoEeti analizu upravo te zdravorazums~ke
predstave od koje, kako ovde vidimo, polazi i ta
m,oderna nauka, a nema sumnje da je oma pola-
ziite i za mnoga fi1,osofska razmatranja.
Metod koji s e u ovakvoj analizi ,primenjuje je
metod jeziEke analize i on se sastoji u ispitiva-
nju upotrebe vremenskih odredaba u obihorn
jeziku. Ma kakvo misljenje imali o filosofiji
obihog jezika, odbacivali je mi ili ne, Einjenica
je da nam ovakve analize pomaiu da izgradimo
bar osnovu na kojoj se moie zapofeti analiza
neke druge vrste. Prema VitgenStainovom sa-
vetu, u jeziekoj analizi gradimo logirku grama-
tiku, pravila za upotrebu izraza koji se odnme
na vremes) Da bismo ostvarili taj cili nije do-
voljno v d i t i logiEke veze izmedu razliritih vrsta
vremenskih izraza, veC je neophodno videti na-
3 L. Wittgenstein The Blue Book, Oxford, 1960. pp. 6,
26tt.

75
ALEKSANDAR KRON

Ein na ikoji wi izrazi odreduju neke naSe asnov-


ne pojmove. Sto vise u tom prouEavanju mapre-
dujemo, postaje sve jasnije zaSto ..vremev n e
moie d a s e definige: lpojmovi vremena proii-
maju sve druge pojmove koje upotrebljavamo
k'ada mislimo o evetu; oni su kao ,,koreni jednog
drveta dueboko ura.sli u tle naSeg pojmovnog
sistema, koji d r i e Evrsto razliti'te delove tog
t.1aU4).
Drugi naEin ma koji vreme lpmtaje p~edTIIet
logirke a~nalize jeste ,onaj koji Cemo u ovom
napisu prikazati. On se sastoji u primeni metoda
matematifke logike na pojmove koji se odnose
na vreme. Sami po sebi, ni formalni metodi ni
rezultati dobijeni 'pornoh njih u analizi vre-
mena nisu filosofski zanimljivi, ali interpreta-
cije formabnih teorija poenatih kao ,,vremeaske
logi~ke" znaEajne su za filosofiju. One vkazuju
na neofekivane veze koje gostoje izmedu razli-
Pitih vremenskih odredaba i na povezanast tih
odredxba s unobiEajenim modalnim ikategorijama.
Modalni lpojmovi nuino i mogute 'mogu d a se
o b j a s ~ e a, moida Ealk 'i da s e def~inisu,pod izves-
nim u,slovi~ma,ipomoCu ,nekih vremenskih odre-
daba. U ovom napitsu prikazatemo najvainije
prableme i reilultate s t v a r e n e u toj IogiEhoj
analizi.
Kako ~bis e magli adrediti d l j w i matematieke
logike u prcrufavanju pojma vremena? JoS je
Kant u K ~ i t i c i Eistog uma pisao: ,,Vreme je,
.
dakle, dato a priori . Na ovoj nuinosti a priori
zasniva se i moguhost apodiktitkih osnovn'ih
stavova o odnosima vremena, ili aksiome o vre-
menu uopSte."" Cilj matematicke logike je da
objasini sadriaj, ispita posledice i utvrdi h o s e
izmedu islkaza koji bi anogli posluiiti kao ,,a?:
diktifni asnovni stavovi o odnosima vremena, 111
aksiomi o vremenu uopBteU. Medutim, msi nismo
obavemi da prihvatimo filosofske osnove od
kojih je polazio Kant.
KlasiEna matematieka logika, o kojoj je najviSe
lrnjiga napisano, ne poznaje iskaze Eija istini-
tost zavisi od vremenskih odredaba. U tom smi-
slu je ona, kao i najveCi deo tradicicmallne logike,
,,vanvremenska".
Na ne-vremenskom karakteru istine moramo se
zadriati. Prema vrlo raiirenom shvatanju, svahi
iskaz moie bibi ili istinit ili laian. Mi ne moramo
mati koja od ovih alternativa je u pitanju, ali
utoliko golr po nas. Ako je jskaz istinit, onda
je t,o njegova bitna karakteristika koja se ne
menja; ako nam s e EQi da je to neSto Sto se
3 Richard Gale, The Language of Time, Humanities
Press. NEW York. 1968, pp. 5 - 4 .
') Prevod Nikole PopovtC-a, Beograd, 1932, s. 52.

76
ALEKSANDAR KRON
- - -

menja, onda to a i j e iskaz. Na primer, reEenica


,,Dams je sreda" nije iskaz, jer reE ,;dana,sH
nije odredena. Ova retenica postaje iskaz kada
je dolpluaimo reci'mo, ,,Danas, 19. jula 1973, je
sreda". Tek uz ovu odredbu izraz, koji je bez
nje iskazna funkcija, polstaje iskaz, pa je onda
ili istinit, ako je 19. jula 1972. zaista sreda, ili
laian, ak'o toga dana nije sreda. Ovalkva, u osno-
vi realistiEka koncepcija, bila je izgradena za
potrebe .matematike. Osnova za takvu koncep-
ciju bila je, verovatno, ne-vremenski ilntemre-
tirena kopula ,,je". Ako je objekat a u lklasi
X, ,onda je Ito taka bez obzira na vremenske
odredbe, jer m i n e zamidljamo ,klasu ikao nedto
Sto menja, gubi ili ,dobija elemente. h k o klasi X
oduzmemo ili dodamo elemenat b, dobicemo
novu klasu, recimo Y; ntko nece refi da je to
jod uvek tklzsa X lcoja je izgubila ili stekla
elemenat b. Jedna klasa je odredena u potpu-
nosti kada se znaju svi elementi te 'klase, a to
se moie znati kada ~postojineki kriterijum po-
moku kojeg za proizvoljni objekat r n o i m o utvr-
diiti ,da li toj klasi pripada. Klase X i Y su
identiEne samo ako ismaju iste elemente. T,o pro-
izlazl iz aksioma j.ednakosti i ekstenzilonalnosti,
osnovnih princiga ,teorije ,~kupova.Otuda je jas-
no d a se kogula ,,jeWmora ruzeti ne-vremenski:
da bi se saEuveo identitet jedne kla~seu nekoj
argumentaciji, nuino je ,pretpostaviti d a se njeni
elementi n e menjaju i da se n j i h w lbroj ne
menja.
Nema sumnje d a ise lovakva, ae-vremmlska upo-
treba kopule ,,je" moie nafi i u sva,kodnevnom
govoru i ru nauci. Na primer, kada kaiemn da
je fovek smrtaln, mi nemamo u vidu nikakvu
vremensku odredbu ovog ili cmog foveka. Bilo
bi, verovatno, pogresno tvrditi da m i pod tim
podrazumevamo da je Eovek uvek smrtan. Na-
prosto, to je jedna nevremenska odredba Cove-
ka; ona a i j e ni ,,veEnan ni ,,vanvremenska" niti
,,nadvremenskaW.Vaino je pitanje d a li s e sve
reEenice mogu izraziti bez vremenskih odredaba,
da li za svaku rerenicu koja sadrii vremenske
cdredbe moiemo nafi drugu koia ih ne sadrii
i koja ima isti smisao kao prva. Odgovor je
verovatno negativan. ,,Prevodenjen vremenskih
reEmica na ne-vremenske vodi vrlo neobienim
i nep~irodnimposledicama. Na primer, za reEe-
nicu .,JuEe je lpadala kiSa", na osnovu pravila
logike, imamo prevod ,,Svi dani koji su iden-
tiEni sa juEera5njim danom jesu dani u kojima
pada kiss". Tako izgleda da su vremenske od-
redbe relacije i svojstva, Sto nam izgleda sasvim
neprihvatljivo. Zanimljivo je da su veliki l ~ g i -
Eari ovoga veka, Rasl i Broud, to upravo i tvr-
dili. Medutim, izgleda da u modernoi ne-
-vremenskoj logici a e postoji padesan naEin da
se izraze vremenske odredbe, kao Sto je ,,juEe-
ragnji", koje ipak nisu obiEna svojstva i rela-

77
..- .---
- .-
ALEKSANDAR KRON

cije. Ne gostoji n i naEin d a se u toj logici ,p-


demo .izraze glagoli, jer ,,pada 'kiSa" nije svoj-
stvo nekog dana, kao Sto ni ,,padaVnije osobin~a
kiSe. Ne liEi nam na ,svojstvo a i ,;biti gadajuCa",
kao ~kada bismo rekli ,,KiSa je padajuCa" po
ugledu na ,,Petar je visok". Ostavljamo Eitaocu
da :Sam prosudi o bprirodnosti ovakvog ,,prevo-
denja". PodsetiCemo lsamo na jog jednu Einje-
nicu. Dugo ,su 1,ogifari i filosofi rasnravljaii o
tame da li se svi lsudovi mogu ixraziti u oblilh
subjekat-kopula-predikat, sve do 1931. godhe,
kada je na osnovu Gedelovih rezultata postal0
jasno da to nije mogufe. Logika 2redikat.ivnih
sudova je odluCujuCa, ali to nije logBa rela-
cionih sudova. Mcida Ce se do sliEnih ~ e w l t a t a
dloC'i i u pogledu vremenskih i me-vremenslkih
sudova. Za sada olstaje uverenje da se ~prvine
mogu svesti na druge. Zato je potrebna jedna
LogiEka teorija koja Ce ispitivati iskaze sa vre-
menslrim odredbama. Da iakva potreba postoji,
mogu se navesti joS i ovi razlozi: (1) moguke je
razvi%i vremensku logiku, a logika ne moie
ostati ravnoduSna prenla jednoj svojoj moguCoj
grani; (2) sa logifkog ,stanoviSta takvo istraii-
vanje je zanimljivo; (3) ono je korisno za filo-
sofska razmatralnja, posledice takve aaalize ima-
ju znaEaj i za nauku i za filosofiju.

Prethodnici. - Ne smemo biti sigurni da je


vremenska logika tekovina moderne logike. Vef
su antiEki logifari uoEili vezu izmedu vremen-
skih lodredaba i modalnih kategorija. StaviSe,
Diodor Kronos je imao teoriju vremenske impli-
kacije na kojoj bi mu pozavideli mnogi savre-
meni 10giEari.~)Sto se tiEe modalnih 'kategoi-ija,
Megarani i stoiEari su razvili dva razliEita shva-
tanja nuinosti i moguCnosti. U oba sluEaja ove
dve kategorije definisane su pomoCu vremenskih
odredaba. Prema pisanju Seksta Empirika i Bo-
etija, Diodor Kronos je smatrao (oko 300 godime
p.n.e.) da je sada moguCe ono Sto je sada ili ono
Sto Ce biti; da je sada nemoguEe ono S t o nije
sada i Sto nikada neCe biti; da je sada nziino
ono Sto je sada i Sto Ce uvek biti; da sada niie
nuino ono Bto sada nije ili neCe uvek biti.')

Medu antitkim logitarima nije oostoiala saglals-


nost. Prema jednom drucom m i S l i e n j ~ . ~sada
) ie
moguCe ono Sto ie vef bilo ili oao Sto je sada
ili m o Sto Ce biti, a sada je n u h o m o Sto je
uvek bilo, Sto je sada i Sto f e uvek biti.

3 I. M. Bochnnski, A . History of Formal Logic, trans.


and ed. Ivo Thomas, Unlv. of Notre Dame Press. 1961,
pp. 116133.

') Nicholas Rescher and Alasdair Urquhart. Temporal


Logic, Sprlneer 1971 n

78
ALEKSANDAR KRON

Zanlmljivo je d a je Megarsko-1stoiEka 9kola, u


lrojoj su se j'avile ove ideje, bila potpuno zane-
marena, delom zabo Sto je od njih ostalo vrlo
malo fragrnenata, a tdelom zato Sto s u docniji
komentatori intenpretirali njihovo uEenje u duhu
Aristotelove logike. To su Einili i ,modemi isto-
rifari logilke, .pa s e tako PrantluO) desilo da
uopSte ne primeti da je :kod njih reE o logici
iskaza; on je umesto isk'aznih promenljivih vi-
deo promenljive Eije su vred'nolsti termini. Pers
je prvi uoEi'o da je reE o logici is'kaza, a tek je
LukaSijeviE dao tafnu interpretaciju.lO)
StoiEari su zadui,ili logiku i svojom teorijom
predikacije, koja je znaEajlna za vremensku lo-
gi~ku.Njihove ideje prihvatili su srednjovekovni
arapski 1ogiEari. Avicena je produbio stoiElru lo-
giku vremenskih odredaba, a Diodorovu teoriju
implikacije razvio je u opStu teoriju katego-
riekog silogizma.ll) On je takode vrlo znaEajan
za vremensku interpretaciju modalnih kate-
gorija.
Evrop~kisrednjovekovni logifari au takode za-
sluini za razvoj vremenske logike. Medu njima
treba pomenuti Tomu Akvinskog, Viljema
Okamskog, Alberta od Saksonije i Jovana Bu-
ridana.
MatematiEka varijanta vremens'ke logike u pra-
vom slmislu poEinje 1947. jednim Elankorn polj-
skog matematiEalra LoSa.12) Njegove ideje s u pri-
hvatili i razvili Prajor i ReSer.
Kako zamiSljamo vreme? Pre nego Sto se upo-
znamo s rezultatima logifko-matematieke ana-
lize, vratimo s e za Easak zdwvorazumskirn
predstavama koje h a m o o vremenu. Razmatra-
nje tih predstava pomoCi Ce nam d a Tazumemo
t e rezultate.
Veruje se, moida pogregno, d a mi o vremenu
mislimo na dva razliEita nafina. Pre svega, vre-
me zamigljamo hao prolaienje. Dogadaji nam
se prikazuju kao proSli, sadaSnji ili buduCi, a
ove odredbe se neprekidno menjaju. Dogadaji
kao da ,rputuju" iz hduCnosti ka sadagnjosti,
nnda (se crstvaruju, a zatim prolaze, ,,putmju"
m e dublje i dublje u proSlost. Izgleda nam da
9, Geschlchte der Logik im Abendlande, 4 Bde., Leip-
zig, 1855-1980.

' 3 ,,Zur Geschichte der Aussagenlogik", Erkenntnis,


5, 1935136, pp. 111-131.

") Nicholas Rescher, Temporal Modalltles In Arabic


Logic, Reidel Pub. Co, Dordrecht, 1966.

Jerzy Los, ,,Pods.tawy Analizy Metodologicznej Ka-


'I)
nonow Milla", Annales Universitatis Mariae Curie-
-Sklodowska, ~ 0 1 .2. pp. 269--301.

79
-.
ALEKSANDAR KRON

s e sve viSe udaljavamo od proSlih dogadaja i da


se pribliiavamo buduCim. To dinamiEko doiiv-
ljavanje vremena leii u osnovi naSeg fnaEina na
koji o vrernmu govorimo - u osnmovi glagolskih
vremena - lkao i u mnogim pesaiekim i filo-
sofiskim izrekama o lrajanju i proglosti. Kao Sto
Cemo askoro videti, ono leii i u osnovi naSe
formalne analize pojma vremena.

S dmge strane, mislefi o vremernu, mi doiiv-


ljavamo izvesnu statiEku strukturu ili poredak.
Isti dogadaji koji isu prvo buduCi, zatim isadas-
nji, pa onda groili, aalaze se medusobno u jed-
norn odnosu koji se pri prolaienju a e menja:
oni su ili istovremeni ili prethode jedan drugo-
me. Ovaj odnos izmedu dogadaja zvaCemo ,,biti
raniji". On ureduje sve ldogadaie u niz u kojem
svaki od njih ima svoje nepromenljivo mesto.

Sve nas ovo navodi da pomiSljamo na dve


vrste vremenskih Einjenica. Prvo, postoje Finje-
nice o vremenskim odlnosima prethodenja i,
drugo, postoje einjenice o proSlosti, sadakjosti
i budubnosti lstih dogadaja. P ~ o vrstij tiinjenica
odgovara niz dogadaja koji se u literaturi zove
B+niz. Taj se niz ,,odvija" od ranijeg ka docni-
jem i an generise relaciju koju moiemo nazvati
ranije (kasnije) od. Drugoj vrsti vremenskih Ei-
njenica odgovara niz koji se z w e A-niz; cm
se odvija iz daleke proSlolsti kroz blisku proslost
do sadahjosti, a onda od sadagnjosti kroz blisku
ka ldadekoj budufnosti. B-niz se ne menja u
bom smislu Sto je odnos ranije (kasnije) stalan.
S druge strane, A-niz lpodrazumeva promeno,
jer se granica izmedu prollosti, sadagnjosti i
hduCnosti menja. Lako moiemo zamisliti dva
A-niza koji sadrie lsti dogadaj, ali je u j&om
od njih taj dogadaj u sadaSnjosti dok je u dru-
game u prGlosti. Odnos ranije (kasnije) zove
se B-relacija, a ' p l o i a j jednog dogadaja u A-
-nizu, njegovo prisustvo kao proSlog, sadaSnjeg
ili buduCeg dogadaja zove s e A-determinacijaJ3)

U B-nizu dogadaji n e mogu m m j a t i svoj polo-


taj. Platcmovo mdenje, na primer, ostaCe uvek
ranije u odnosu na Hegelovo rodenje. Aku je
izmedu t a dva dcugadaja bilo toliko i toliko
vremmskih jedinica do sada, onda f e taj broj
astati za Eitavu budubost. Dogadaji mogu me-
njati A-determinaciju; jedan buduCi dogadaj
postaje dogadaj u sadahjosti i #pretvara s e u
proSli dogadaj. PraSli dogadaj ostaje uvek pro-
Sli, on svoju A-determinaciju ne moie proine-
niti. Tako je Platoncuvo rodenje nekada Mlo u
dalekoj buduhosti, zatim je bilo sve manje bu-
duke d a bi se ru jednom trenutku ostvarilo i
'9 J. M. E. McTagart, ,,The Unreality of Time", Mind
XVIl (1908); The Nature of Existence, London, 1927,
Vol. 2, p. 271; Philosophical Studies, London, 1934.

80
ALEKSANDAR KRON

postal0 sadalnje, a onda se otisnulo u pmSlost


u koju sve dublje tone.
Nefemo se zadriavati n a analizi A-niza, jer
to nije cilj nalih razmatranja. UkazaCmo jog
samo na vezu lkoja postoji izmedu B-nizova i
A-determliaacije, ~lprksosvelikoj razlici, i na je-
dan cpojam koji je prilsutan u nalim razmislja-
njima o A-determinaciji.
Nije teSkso ~ a z u m e t i d a s e u A-detenniaaciji
granica izmedu prollosti i b u d u h m t i stalmo po-
mera. Svaki sadalSnji dogadaj mima svoju proSlost
i svoju buduCnost. Hegel je iiveo u Platonovoj
budhtnosti, a Platon u Hegelovoj proSlosti. U
odnasu na Hegellovo, Platanovo rodenje nije ni-
kafda bilo 1buduCi dogadaj. T B n a m nagoveStava
VRZU izmedu B-niza i A-determinacije: ako je
dogadaj d u B-nizu p r e ,dogadaja e, onda a e
,postoji A-niz u kojem je dogadaj d budufi do-
gadaj, a e nije. Dakle, A-determlinacija je nu-
i a n uslov pastojanja odredene B-relacije.
Svako razmilljmje o A-determinaciji podrauu-
meva ostvarivanje. BuduCi dmogadaji za obienu
svest nisu stvarni, v e t moguCi, verovatni ili
sigurni. Neki od njih be s e ostvalriti, ali to n e
mora biti sigurno, p a su u tom pogledu neiz-
vesni i neodredeni. Ako s e ostvare i postanu
proSli, onda se njihova priroda n e menja. SadaS-
njost je granica izmedu stvarnog i moguCeg. To
prisustvo ideje ostvarivanja ili stvarno,g u za-
miSljanju v r m e n s k o g ttoka pomaie nam d a ra-
zumemo zagto je joS tako d a m u s p o ~ t a v l j e ~ a
veza izmedu vremenskih odredaba i modalnih
(kategorija.
U matmatiEko-logiekoj aaalizi vremenskih od-
redaba bike prisutne obe *predstave o vremenu,
i B-nizovi i A-determinacija. U nekim sistemi.ma
one s u ehsplicitine, u drugima sarno se podra-
zumevaju, ali ,intuitivna interpretacija vreinen-
skih lagika nije lbez njih mogufa.
V r e m e koje se grana. - Predstavu o vremenu
koja je n'a,Sla ,svoj iskaz u A-nizovima i A-de-
terminaciji moramo d w n i t i . Lako moiemo za-
mislilti d a nam u iednom trenutku neki dogadaj
iagleda rnoguk i da Ce .se moida 1 desiti, ali da
to niie izvesno. Na primer, mogufe je .da tim
za koji navijamo pobedi i~dufenedelie. Pretpo-
s t a d m o d a nal tim nije pobedio, a da je pode
nekoliik'o d s n a prestao d a lpostoii, ier su svi
igraEi pregli u nek,i drugi ~klub.Tako se jedan
momf .buduCi dogadaj nij'e desio i vile se n e
moie desiti. Za takav dogadaj ne moZmno rekt
ni d3 je budufi ni .da je sadaSnii niti da j e
proSli: on vi'Be niie moguf. T,o s'amo pokazuie
d a odnos moguCeg ,i stvarnoq niie iscrplien
dosada5njnjom analiaom. U A-'determi~laciji do-

81
gadaji su buduCi, onda se ostvaruju i postaju
proSli. Ali ta predstava je suviSe jednostavna.
Zar fie bi bilo potrebno pored ,,stvamog" vre-
menskog toka posmatrati i tok pretvaranja mo-
gutih buducih dogadaja u stvarne ili u neostva-
rene? Zar n e bi bilo potrebno pored ,,stvarnogV
vremenskog toka posmatrat,i i tok pretvaranja
mogutih budutih dogadaja u stvarne ili u ne-
ostvarene? Zar neostvareni ne mogu joS uvek
biti mogufi ili nemoguti, zar nema ostvarenih
dogadaja koji vise nisu moguCi? Sve to poka-
zuje koliko je sloien odnos izmedu bzldueih,
sadaSnjih i proSlih dogadaja, i budutih sadaS-
njih i proSlih moguEnosti.
U ,nama je duboko usadeno uverenje da na
buduknost moiemo donekle uticati, ali da je
besmisleno i pomiSljati o uticaju na proslost.
Ona nam izgleda zavrSena i potpuno odredenn.
Dogadaji k,oji su se ,desili ne menjaju se, iako
se menja naSa slika i naSe znanje koje imamo
o njima. Ukoliko i govorimo d a je 8proSlost neod-
redena, da je neSto moLda bilo ovako ili onako,
onda se to odnosi samo na naSe znanje o pro-
S1,osti. OMgledno, naSe znanje o proglosti moie
biti budufe. DruikEije je s s budu6noSt.u; budufi
dogadaji su lsami neodredeni. Ovo uverenje, tako
nam bar izgleda, nije nikakva iluzija. NaSe
razlilkovanje pr03osti i tbu.duCnosti u vezi je s
razlik'om izmedu stvarnog i mogu.kg. Buduti
dogadaji s u mogufi ili neizbeini, ali joS uvak
nisu stvarni, jer SP nisu odigrali. ProSli dogadaji
su ,se odigrali i oni mi u tom smislu stvarni.
Kretanje, degavanje, pmmenu muopSte, moie,mo
razumeti kao pretvamnje mogufeg u stvarno,
kao ostvarivanje mogufnosti. BuduCnost n,ije
neodredena samo zato Sbo je ne znam,o. Svakako,
za ~ l e k edogadaje znamo da f e s e neminowo
desiti, ali n e znamo ni kada n i kako nsi gde,
jer joS ne postoje n u h i us1ov.i da s e oni dese.
S druge strane, mnoge dogadaje ,;stvaramo" tako
Sto ih zamiSljamo .i ielimo. Kao zamiSljeni do-
gadaji i ciljevi naSe akcije oni .su gri'mtni u
naSoj svesti i tako postaju mogufi dogadaji koji
pre toga nisu bili budufi. Istina, tkada se veC
d e s i l ~neSto sasv5m neoEekivano, neBto o remu
nismo uopSte pomigljali, o,seCamo s e kao d a smo
otkrili da je to bilo mogute. Medutim, izgleda
da je to drugi problem - problem mi'saone re-
konstrukcije prdlosti. Mogufi 'budufi dogadaji,
ciljevi naSe akcije, ostvaruju se u n a 5 m delo-
vanJu. Dakle, mi ostvarujemo buduenost koja
rjam nije u n q r e d data; ona nije neoidredena
samo zato Sto je ne znamo, vee i zato Sto je
u izvesnom smislu i nema.
Kad kaiemo d a ne znamo Sta Ce se i kako Ce
se desiti. onda je to drugi m i s a o od m o g u
kmojem ~kaiemod a n e znamo kako se neSto desi.10.
U p r v m slueaju znamo d a se to nije desilo, u

82
ALEKSANDAR KRON

drugome d a jeste. U prvom slueaju moiemo jog


pomisljati da utiEemo na ono Sto Ce se desiti,
u drugome znamo da je to nemoguCe i mi na
to i ne pomigljamo.
Keo gto rnoiemo zamigljati moguCe budube do-
gadaje, tako rnoiemo zamisljati i nizove buduCih
dogadaja. Na primer, ako je E O neki sadaSnji
dogadaj, onda moiemo zamisliti da Ce se posle
toga desiti prvo E l , pa E2, pa E4, ili d a Ce
..
dogadaji teCi redom E l , E3, EG, . , ill moida
E l , E2, E5, ... Sve cve mogukcsti moiemo
prikazati w a k o :

,,~2/~~
EO--El \~,5
\
E3-E6
Na wan dijagramu dagadaji E l i E2 se granaju,
ali se ne granaju EO i E3. Razlozi za ovaj naEin
izraiavalnja su otigledni. ImajuCi u vidu ovalzav
dijagram, moieano definisati uslovljenost doga-
daja E y dogadajem E x : E x uslovljava E y ako
se dogadaj E x ne graaa i E x je povezan sa E y .
To znaEi da se E y deSava ako se desi E x , jer
nema drugih altwnativa. U aaSem primeru EO
u~slovljavaE l , E3 uslovljava E6, ali E2 n e uslov-
ljava ni E4 ni E5.
Ako E x n e uslovljava E y , onda moiemo reCi d a
je E y u otvorenoj buduCnosti dogadaja E x .
Tada, ako s e E x desi, nije izvesno da Ce se
desiti i E y ; za E x moiemo reCi da ima razne
altennativne buduhosti. Tako je u nagem pri-
meru E2, E4 jedna alternativna buduhnost doga-
daja E l . Da li jedan dogadaj ima otvorenu
budubnost, nije pitanje samo naSeg znanja i
m d i zamigljanja, vet i pitanje s t r u h r e sveta
u kojem iivimo. E l moie biti uslovljeno doga-
dajem E O zato Sto medu njima postoji neka
uzmEna veza Eije prisustvo nije samo stvar
naSeg znanja.
U m u n o deterministi3kom svetu nijedan do-
gadaj nema otvorenu buduCnout, pa bismo mo-
i d a imali ovakav dijagram:
EO-El-E3----E6
U toan sluEaju bilo bi moguCe da je EO uzrok
E l , d a je E l umolk E3, da je E3 uzrolk E6. Ali
isto je talk0 lnogube zamisliti svet u kojem
izmedu ovih dogadaja nema nikakve u z r o h e
veze, mada je a n ipak deterministieki jer nije-
d a a drugi raspored ovih dogadaja nije moguC,
kao ni bilo koji drugi raspored drugih dogadaja.
Vratimo s e joS jednom nagem dijagramu. Pret-
postavimo da se ostvari niz dogadaja EO, E l ,
E2, E5. Tada je, naravno, EO ranije od E l , E l

83
ALEKSANDAR KRON

je ranije od E2 i E2 je ranije a d E5. Dogadaji


F3, E4 i E6 nisu se desili i mi ih moiemo sma-
trati prorpuStenim prilikama ili sreCnim sticajem
okolnost~i.Mi, d a li je E3 proSli dogadaj u od-
mow a a E5? Ne. Da bi neki dogadaj bio proSli
u odnosu na dati dogadaj, oni se moraju nalaziti
na istoj ,,granim,a moraju biti ispunjeni i neki
uslovi u pogledu ,,ostvarenosti". PoSto se u na-
Sem primeru ostvario niz EO, E l , E2, E5, u od-
nosu na E5 ~proSli su dogadaji E2, E l i EO.. .
Da se ostvario niz dogadaja EO, E l , E3, E6,
onda bi E3 bio proSli dogadaj u odnosu na E6
Std. Ali to s e nije desilo i E3 nije proSli dogadaj
u odnosu na E6. Razli'ka iznnedu ostvarenog i
onog Sto je bilo moguCe odraiava se i u nagem
naEinu izraiavanja: mi ne kaiemo da je E x
ranije od E y ili da je E x proSlo u odnosu na
E y , ako 6e E x i E y nisu ostvarili, vet da bi
E X bilo proSlo u odnosu na Ey.
I

U hrolnoloSkim i vremenskim logikama elementi


jedne razgranate strukture uvek predstavljaju
dogadaje ili trenutke, a grane su nizovi mogukih
dogadaja ili trenutaka. U dijagramu koji nam
:le posluiio kao ilustracija elementi razgranate
stmkture bili su dogadaji. Prajor'" je smatrao
da otvorene budutnolsti leie u prirodi samog
vremena, a ne u taku dogadaja. Pasmatrajmo
sledeCi dijagram:

/OE se dogada
-
0 0
n v\o E s e n e dogada

n mnaEava sadaSnji trenutak, a docnije deSa-


vanje grena se u v; jedna grana vodi dogadaju
E, druga dagadaju ne-E. Tako se E nalazi u
otvorenoj buduknosti i gredstavlja moguki do-
gadaj. Prajor misli da je v taPka grananja u
samom vzeunenekom toku. Prema tome, v nije
nikakav dogadaj, veC izvesna karakteristika
,,vremenskog kanala"15) kroz koji dogadaji
1pl.olaze.

ReSer i Arkart mitsle d a vreme treba zami,Sljati


kao l h e a r n i tok, a da u vremenu 'postoji gra-
nanje tokova dogadaja. Dakle, postoje razliEiti
moguti tokovi dogadaja u jednozn jedinom vre-
menu.

Ova dva shvatanja vrlo ,su razliEita i svako od


njih ima .svoje dalekoseine posledice. Na primer,
prema Pra jorovom shvatanju neodredenost bu-
t.
") A. N. Prior, Time and Modality, Oxford, 1957.

's)-Ovaj izraz je iz pomenute knjige ReSera ,i Arkarta.

84
- --
ALEKSANDAR KRON
- -

d v h o s t i , njena kontingmcija, data je ont~loS-


kom strukturom samog vremena. Zato jedan
iskaz o h d u C e m dogadaju ne moie biti ni isti-
qit ni laian; on mora biti neodreden u tom
pogledu. Prema drugom shvatanju, nemogut-
nost odredivanja istinonosne vrednosti iskaza o
budutim dogadajima leii u nemogu6nosti odre-
divanja posledica dogadaja koji se granaju, pa
tome nije razlog ~priroda vremena. Za iskaz o
nekom buduCem dogadaju ima s d s l a reCi da
je on istinit ili laian, mada ne moramo biti
u stanjv da kaiemo koja je vrednost u pitanju.
Iako Cemu to svakako saznati kada se taj doga-
daj desi, vet sada ima smisla reCi da nag iskaz
ima jednu od tih wednosti. Prema ReSer-Arkar-
lovom shvatanju, dakle, koniingentnost baduC-
nosti leZi u kauzalnoj strukturi toka dogadaja, a
ne u strukturi vremena.

Kada se uporede ova dva shvatanja, onda na


prvi pogled izgleda da ReSer-Arkartovo shvata-
nje ima prednost nad Prajorovim. Ono je, prvo,
jednostavnije. Drugo, u Prajorovom shvatanjv
nastaju teSkoCe u interpretiranju relacije L' ili
ranije (kasnije) od. U naSem prethodnom ras-
pravljanju bilo je sasvim prirodno Sto dogadaji
koji se nisu fostvarili ne moraju biti u relaciji U
(ako se nalaze na razliEitim granama razgranate
strukture). Medutim, Prajor misli da je U rela-
cija izmedu trenutaka ili vremenskih razmaka.
Kada bi vreme bilo linearno, poEeci svaka dva
razmaka bl se pok!apali ili bi poEetak jednog
preth3dio poEetku drugoga ili bi poEetak drugog
prethodio poEetku prvoga. Ako se vreme grana,
onda ima trenutaka za koje me vaii ni jedna od
p m e n u t i h alternativa, kao Sto je to sluEai i
kod neostvarenih dogadaja. Ali dok to izgleda
prirodno kada je re? o neostvarenim dogada-
jima, priliEno je Eudno kada je reE o vremen-
skim intervalima ili trenucima. Uostalom, niie
uvek u skladv s nagom intuicijom reCi da su
trenuci ili vremmski razmaci u relaciji U , ia!ro
je sasvim mprirodno reCi da su dogadaii u toj
relaciji. Na primer, kada se brod priblilava,
dim se vidi ranije nego Sto se vidi brod, ali
iako se kaie da je sreda pre Eetvrtka, me lraie
se da je sreda ranije od Eetvrtka, kao Sto se
ne kaie n i da je devet sati uiutru bilo raniie
od dvanaest sati. Razume se, umesto raniie, mi
kaiemo i pre, ali nije sigurno da ova reE ima
iskljufivo vremensko znaEenie; na primer, 5 je
pre 8 u nizu brojeva.

Pa ipak. Praiorovo stanoviSte izgleda prihvnt-


ljivo kada imamo na umu relativistieko vreme
modeme f.izike. U tom slufaju se grananie vre-
mena poklapa s grananjem dogadaia. M o m se
zamisliti sistemi koii se udaliavaiu jedan od
drugog tako da nikakva komunikacija medu nji-

85
ALEKSANDAR KRON

ma nije mogufa, pa ni uporedivanje vremena.


Za trenultke u jednom od tih si'stema ne moiemo
reCi ni da su raniji n i d a su kasniji nit4 da se
poklapaju s trenucima iz drugog sistema.

Svi s4istemi vremenske logike ~kojesmo razmo-


trlli jli cmpisali prilitno su siromaSni u odnosu
na nafin na koji upotrebljavamo vremenske od-
redbe. Oni nam niSta ne k a i u o tome da li je
v r m e diskretno, gusto ili kontinu~irano, da li
ima pofetak d kraj itd. Odgovori na neka od
ovih pitanja su znafajni za ~nauku.Za primenu
matematieke analize moie bibi znaEajno d a li
je vrerne kontinuirano ili diskontinuirano. Me-
dutim, mo~gufe je postavrti i druge zahteve u
pogledu prirode vremena. U daljem izlaganju
mi Eemo izloiiti i razmotriti samo neke od njih.

Pojam vremena ikoji upotrebljavamo u obiEnom


iivotu i u nauci, osim linearnosti, ima i mnoge
d m g e osobine. Tako je vrlo mnogo raspravljano
i knjiga napisano o ,,,kontijnuiranostli i diskonti-
nuiranasti" vremena, (njegovoj ,,deljivosti a bes-
konahost" i drugim odlihama. Na ialost, mnoge
takve analize su sasvim nepouzdane, jer su ne-
precizne. Na primer, ru njima tse retko razlikuje
kontinuiranost od gust~ine.
V ~ e m ei modalne kategorije. - Vef u pofetku
ovog aa'pisa pomenuli smo da su logigifari odavno
znali e a vezu izmedu modalnih kategorija nui-
nosti i moguEnosti s a vremenskim odredbama.
Da ispitamo taj odnos.
Fakazalo s e tako d a s e skoro sve modalne logike
luisovskog tipa mogu izvesti u akviru vremen-
skih logika, pa nam se Eini da modalne katego-
rije, bar m a k v e kakve su u modalnim siste-
mima, nisu ni potrebne. Mnogi autori to i tvrde.
Prema priliEno raSirenom misljenju, u Luisovim
sistemima auinost je iato Sto i ,,uvek isilnito",
s mogufnost isto Sto i ,,ponekad istinito". Ofi-
gledno, postojefi sistemi modalne dogike obja4-
niavaju samo neke osobine modalnih kategorija.
Mnoge druge osobine nisu uopSte m e t e u obzir.
Sto se vremenskih logilka tife. rezultati koje
smo prikazali zavise od naEina na koji su pret-
p o s t a v k ~o vremenu iskazane i od metoda koji
su primenjenl u analizi tih pretpostav1:i. Mi
smo se ogranieili samo na metode ilskaznog rm-
funa. Svi sistemi vremenske i modalne logike
bilci su zasnwani n a iskaznmn raEunu iz Prin-
cipia Mathematica. PolazeCi od nekih drugih,
neklasifnih rafuna, dobili bismo sasvim d m g e
rezultate. Ne znamo d a li bi oni bili bliii na.5o.i

86
ALEKSANDAR KRON

intuiciji. Uvodenje logike predikata veoma bi


obogatilo naSu analim, ali bi i izlaganje bilo
sloienije. Mi smo kvantifikovali samo vremen-
ske promenljive, (pa tako nismo imali raEun
predikata LI pravom smislu. Kvantifikacija dru-
gih promenljivih u ~prieustvumodalnih operatora
stvara izvesne teSkoCe, pa su se neki autori
protivili izgradnji modalne logike predikata.
n a n a s se 1ogiEa1~ia e ustruEavaju da to Eine, ali
su jog uvek ostali mnogi problemi u interpre-
tiranju takvih raEuna predikata.

87
88
AuDKSANDAK
Gentzen Formulations
keon
0? jwo pos^?ve Relevance Logics

Abstract. The author gentzenizes the positive fragments and R+ of relevant


T+
T and R using formulas with prefixes (subscripts). There are three main Gentzen
formulations of S+ e {T+, R+} called Wx S+, W2 S+ and G2 S+ . The first two have
the rule of modus ponens. All of them have a weak rule DL for disjunction introduction
on the left. DL is not admissible in S+ but it is needed in the proof of a cut elimination
.
theorem for G2 S+ Wi S+ has a weak rule of weakening Wx and it is not closed under
a general transitivity rule. This allows the proof that f- A in S+ iff \- A in Wx S+.
From the cut elimination theorem for G2 S+ it follows that if f- A in S+, then h A
in 6r2 ?>+ . In order to prove the converse, is needed. It contains modus
W2 S+ ponens,
transitivity, and a restricted weakening rule. G2 S+ is contained in W2 S+ and there
is a proof that h A inW2 S+ iff h- ^4 in ?Fi S+ .

The are gentze


positive fragments T+ and JB^ of T and R respectively
nized using wffs with prefixes (subscripts).
1. Proofs from hypotheses in $+ Let S+ range over
{T+,R+}.
The rules and axioms of S+ are given in [1], pp. 339-340: the axioms of

T+ are Al-All and R+ = T+ +E3. A prefix is a finite (possibly empty)


set of positive denote ... .
integers. We prefixes by a, ax, ..., b, b19 By
a prefixed wff (pf) we mean a wff with a prefix. Sometimes we write a (A)
instead of aA and we omit the prefix 0. Thus every wff is a pf and hence
we may assume that the axioms given in [1] are prefixed by 0 (notice that
0A can bean axiom of S+, but aA cannot unless a = 0). Let max (a) denote
the greatest element of a, if a # 0; if a =0, then max = 0
(a) by defini?
tion.

a proof bB in
By S+ of
from hypotheses <*>\AX,..., anAn, n^ 0,
we mean a finite
sequence of pfs&i-B, such that &m-Bm is bB and
...,bmBm
for any 1 < k < m h either a or else an axiom of S+
bkBk hypothesis
or else a consequence of predecessors or modus ponens.
by adjunction
Furthermore,
(1) if bkBk is a hypothesis, then bk i=-0;
(2) if o?JSj. is a consequence of ?^2?; and by and
bjBj adjunction
b{ =.bj, then 6? - &,.;
*
I wish to thank Professor Nuel D. Jr. for a previous version
Belnap, reading
of this paper and for an correction. Also, I wish to thank Mr. Zoran Markovic
important
for thorough discussions we have had about earlier attempts to prove the cut elimina?
tion theorem for the formulations presented here.
89
382 A. Kr?n

(3) if bkBk is a consequence of biBi and bj(Bi->B/i) by modus ponens,


then =
bk b?ubj.
The application of adjunction is permitted only if 6? =
bj.
The application of modus ponens is further restricted as follows:
if S+ =T+, then max(6?) > max(fy).
Let ... range over finite (possibly empty) sequences of pfs;
JT, Y,Z,
by x, y, z, ... we denote the unions of all prefixes used in X, Y, Z, ...,
respectively. We shall write X YaA iff there is a proof of aA in S+ from
hypotheses which are members of X.
In the sequel we shall need some properties of proofs from hypotheses
in S+.

Theorem 1.1 In we have the following derived rules:


S+
P From X, aA, bB, Y Y cG to infer X, bB, aA, Y V cG
W From X V bB to infer X, aA Y bB, for any prefix a
C From X, aA, aA Y bB to infer X, aA Y bB
TE From X YaA and Y, aA Y bB to infer X, Y Y bB
MP From X YaA and Y Y b(A->B) to infer X, Y Yaub(B), provided
that if S+ = T+, then max (a)> max (b)
CL From X,aAY cG (from X,aB Y cG) to infer X,a(A&B)YGc
CE From X YaA and X YaB to infer X Ya(A&B)
DL* From X,Y,aA,Z Y cG and X, Y, aB, Z Y cG to infer X, Y,
a(AvB),Z YcG, if
(1) anx = ac\z = x(~\z ~ 0
and max(#) < max(a),
(2) if Y is non-empty, then ais the only prefix used in Y,
(3) if Z is non-empty, then all prefixes used in Z are pairwise
and for any such d max (a) < max (d)
disjoint
DE From X YaA (from X YaB) to infer X Ya(AvB)
IL From X YaA and Y, aub(B) Y cG to infer X, Y, b(A->B) Y cG,
=
provided that if S+ T+, then max (a)> max (b)
IE From X, aA Yaub(B) to infer X Y b(A-*B), if anx =0, bs x
and max (a) > max(#).

Proof. P, W, C, TE and MP follow easily by the definition of the


proof from hypotheses in S+. Then using these rules, A5 (A6) and adjunc?
tion we prove CL and CE. DE follows using A8 (A9). IE is the deduction
theorem for S+ (cf. [3]). The proof of DL* is given in [4] as follows. Without
any loss of generality we may assume that Y is aF, that Z is dD and that
a\Jd s o. Suppose that X, aF, aA, dD Y cG and X, aF, aB, dD Y cG in
Using CL and C we obtain X, a(F&A), dD YcG and X, a(F&B),
S+.
dDYcC; hence by IE X Y c-{avd)(F&A->.D->G) and XYc-(aud)
(F&B->.D->C). :Now by CE X Y
c-(aud)((F&A-+.D->G)&(F&B->.
On the other hand, X Y (F&A->.D->G)&(F&B-*.D->G)->.F&A
D->(7)).
vF&B->.1)>G, by A10. Hence, using MP, X Y c~(a\Jd)(F&AvF&B

->.D->G). But All gives us X Y90


F?k(AvB)-~>F&AvF&B. It is now
Gentzen formulations... 383

easy to prove X Y c? {avd)(F&(AvB)~*.D->C) and eventually X,


aF, a(AvB), dD YcG. Notice that conditions (1), (2) and (3) in DL*
make possible the application of IE. Also, notice the importance of both
A10 and All.
The proof of IL is easy.
Let us consider the rule

DL From X, aA Yauc(G) and X, aB Yauc(G) to infer X,a(AvB)


Y avc(C).

Suppose that we have two different proofs in S+ of auc(G) from


hypotheses which are members of X, aA and X, aB, respectively; then,
if DL were an admissible rule of S+9 there would be in S+ a proof of auc(G)
from X, a(AvB). However, in general, there is no sequence in S+ the
existence of which is denoted by X, a(AvB) Yauc(G).
Theorem 1.2. DL is not admissible in S+.

Proof. the pf (p->qvr)&(q->s)->.p-~>rvs


Consider (31). This formula
was constructed by E. K. Meyer and it has been shown by a matrix method
that it is not provable even in jB and a fortiori it is unprovable in
S+ (this
is known to us only from correspondence with Nuel D. Bel?ap, Jr.).
However, if we admit DL, (M) becomes provable, Let anb = 0 and 0
< max (a) < max (6); then we have
(a) a(p->qvr), a(q->s), bp Y aub(qvr)
(b) aub(r), a(p~>qvr), a(q~>s), bp Y aub(r)
(c) aub(r), a(p-^qvr), a(q->s),bp Y aub(rvs)
(d) aub(q), a(p-^qvr), a(q~>s), bp Yau b(s)
(e) aub(q), a(p->qvr), a(q->s), bp Y a\Jb(rvs)
where (b) holds trivially, (a) and (d) hold by virtue of MP, and (c) and (e)
follow from (b) and (d), respectively, by DE. Now, using DL, from (c)
and (e) we obtain
(f) aub(qvr), a(p->qvr), a(q->s), bpYaub(rvs)
and hence by TE and C from (a) and (f) we derive
(g) a(p->qvr), a(q->s), bp Y aub(rvs).
Now it is easy to apply CL, C and IE to prove (M).
Let us note that in this proof we have needed DL, TE and the fact that
(b) holds even though neither a(p->qvr) nor a(q->s) is used in the
derivation of aub(r). This will give rise to different Gentzen formulations
of S+.

2. Gentzen formulations with modus ponens. We gentzenize and


T+
?+ using symbols denoting existence the of proofs from hypotheses
as
sequents (consecutions). Each of T+ and R+ will have three Gentzen
formulations with modus ponens; they are denoted by WXS+, W2S+
and where is as before. 91
Each Gentzen formulation has Y as
TRS+, S+
384 A. Kr?n

a primitive symbol. If X is non-empty and all prefixes used in X are non?

empty, then X Y xA is a sequent. If X is the empty sequence, then we write


YA for X Y xA. X is the antecedent and xA the consequent part o? the

sequent X h ?pj..
The sequents of the form aA Y aA are basic.
Our Gentzen formulations with modus ponens have the following
rules in common: P, C, MP, CL, CE, DL (not only DL*!), DE and IE.
Notice that these rules have to be restated in order to meet the condition

imposed by the definition of a sequent: the prefixes in a non-empty ante?


cedent part must be non-empty and the prefix in the consequent part
must be the union of all prefixes used in the antecedent part. This holds
for all rules to be introduced later on, too.
Now we define two rules of weakening, W1 and W2 :

Wj From X YxA to infer X, bB YxA, if b s x


W2 From X YxA to infer X, bB YxA, if b is a singleton and there is
a member bC in X.

To obtain let us adjoin W1 to the preceding rules.


Wr1/S+,
To obtain let us adjoin W2 and TE to the preceding rules.
W2S+,
To obtain TRS+, let us adjoin TE to W^.
In CL, CE, DL, DE and IE the pf d(AoB), where o is a connective
and d the corresponding prefix, is called the principal member-, the members
of X, Y or Z are the parametric members. We shall say that in an application
of IE the prefix a is discharged and that in an application of TE the pf
aA is the eliminated member.
A proof of a sequent in any of these formulations is a proof tree with
usual properties. that we have a proof of a sequent St and that
Suppose
as the number of nodes below
Sk is a node of it. The rank of Sk is defined
Sk, on the path from Sk to St. The rank of St is 0. The weight of Sk is defined
as the number of nodes above Sk, on all branches containing Sk. The weight
of a basic sequent is 0.
The the existence of a proof from hypotheses in
symbol denoting
is interpreted in these Gentzen formulations as a sequent, provided
S+
that the prefixes in non-empty antecedent parts are non-empty and the
of all prefixes in the antecedent
prefix in the consequent part is the union
On the other hand, a sequent is interpreted in S+ as a symbol denot?
part.
ing the existence of a proof from hypotheses.

Let MPS+ range over {W,S+, W2S,TRS+}.

in a proof ...
Theorem2.1 ZetbxBx, ..., bmBm Y bB be a node of axAx,
..., anAnY aA in MPS+ ; then
b -^u ...
(1) ubm,
(2) if m # 0, then for all 1 < j <92 m bj =?0.
Gentzen formulations... 385

Proof. Induction on weight.

Theorem 2.2 If X YxA is provable inMPS+, then X YxA in S+ +DL.


Proof. Left to the reader.

Theorem 2.3 YA
is provable in W?S+ iffYA in S+.

Proof. Let Y YyB he a node in a given proof of X YxA in WXS+,


and let c and d be used in Y, such that c ^ d and end ^ 0; then c and d
are used in X. For neither c nor d can be discharged by IE and neither
of them can be eliminated. It follows that ifX is empty, then all prefixes
used are discharged
in Y by IB; therefore, for any two prefixes c and d
used in Y either c = d or end = 0. It is obvious that conditions (1),
(2) and (3) of DL* are satisfied. Hence, DL is DL*. By Theorem 2.2, in
S+ +DL we have YA ; now it is obvious that hA in S+.
In order to complete the proof of the theorem, let the reader show
that if A is an axiom of S+, then I-A is provable in
WXS+.

Theorem 2.4 If axAx, ..., anAn Y aA is provable in WXS+, then for


all a[, ..., an, a[A, ..., anA Y a'A is provable in WXS+, provided that
for any 1 < i< n
=
(1) d{ ? ai and max(%) max(a?)
= then a!i =
(2) if a{ a}, a'j.
Proof. Left to the reader.

The sequents corresponding to (a)? (f ) in the proof of (M) are provable


inWXS+, but (M) is not a theorem of S+ ; this shows that TE is not admis?
sible in WXS+. If we admit IL, we can prove h (M) ;hence, IL is not admis?
sible in WXS+.
We leave to the reader the proof of

Theorem 2.5 X YxA is provable in TRS+ iff X YxA in S+ +DL.


Let us consider Although contains TB, the sequent Y (M)
W2S+. W2S+
is unprovable in for the sequent corresponding to (b) in the proof
W2S+,
of (M) in S++DL is unprovable in W2S+.
Theorem 2.6 // atAx, ..., anAn Y aA is provable in W2S+, then for
any choice of a[, ..., an the sequent ..., anAn Y a'A is provable in
a[A1,
W71S+, provided that for any
e,fe{a1,...,an\,
(i) if max(e) = max(/), then e' =f,
(ii) if max(?O > max(jf), then e'nf =0 and max(e') > max(/').
Proof. that in a proof otaxAx,
Suppose W2S+ we have ..., anAnY aA.
Let us
consider only the prefixes in this proof. we have a tree F
Thus,
the nodes of which are sequences =
of prefixes bx, ...,bm,b, where b bxu ...
...
ubm. Let us agree to write 93
b instead of bx,..., bm and sometimes to
386 A. Kr?n

omit &; let us write [Jb instead of b, ifwe find it convenient. Starting from
ST with a at the beginning, we shall construct a tree ?T' by substituting
?> ->
V for b at each node of ?^\ Let us choose a[, ..., an satisfying (i) and (ii)
-? -> -*
and let us substitute a' for a. Let b he a node of 2T; if it is not an end-point,
-* ->
then immediately-* above b there is either (I) a node c or (II) there are two
-> ->
nodes c and d. By cheeking the rules of W2S we see that in case (I) c is

either (a) a permutation of 6 or (b) c is-> 6, bm or


(c) & is c, cp where cp is
.> ,>
a singleton and a member of c or (d) c is 6.., cp, where bncp = 0 and max(^)

> max(6). Suppose that we have already substituted b' for 6; it is obvious
that in cases (a)?(c) the sequence c' is also defined and that we should

substitute c' for c. In case = and substitute


(d) let cp {max(&')+l} b',cp
for c.
_>->-> _> ->
In ease (II) we must have either (a) c and d are b or (b) c and d are
-*-> _>-> _>

sub-sequences of b, [Jb
=
UcUU^ an^ ^ ^2?+ = W2T+9 then max(^J c)
-? -* _> _> _> -> _>

(c)dis 0 , c and 6 are subsequences of b. Suppose


>max(^d)or
-> (Jcandboth
->
that 6' has been substituted for b ; it is obvious that in case (a) the sequences
-> -*
'
c and ?' are already defined, and so are in case (b). In case (c) the sequences
' '
c and e are defined. is a member c such that max
Now, there cp of (cp)
= the member of c' is already defined. Let us substitute
->
max(\Jc);
-? -> <^
-^ ,
c' for e and e', cp for dL

Thus, by induction on the rank of a node of &~ we have defined a tree


5~'. Now let us consider the sequents (the pfs) of 3T'. We shall prove that
any sequent of $" is provable in Let Sk he a node of S and let Sk he
WXS+.
the node of ZT'. If Sk is a basic sequent in so is Sk
corresponding W2S+9
in W^S^. If Sk is obtained inW28+ from $< (and fy) by an application of
either of the rules P, C, CL, CE, DL or DE, then Sk is obtained in
WlS+
from S'i (and 8j) by the same rule.

In order to consider the remaining rules, we need

Lemma 2.6.1 For any pair of members e and f of b⃗,

(1) if max(e) = max(f), then e′ = f′;

(2) if max(e) > max(f), then e′ ∩ f′ = ∅ and max(e′) > max(f′).

The proof of the lemma is by an easy induction on the rank of b⃗ in 𝒯 and by the definition of 𝒯′.
The consideration of the rules W2, MP and IE is left to the reader.
Let us assume that Sk is obtained 94
in
W2S+
from S{ and S? by TE, where

c , d and ft are prefixes in Si9 d is e, c and botli


Sj and Sk, respectively, \J
-> -> *"
c and e are subsequences of ft. By construction of ST', the prefixes of S'?,
-> -> -> -> *
and fijareC, 6', c^ and V, respectively, where c' and e' are subsequences
?j -> _>
of b', and is a member of c'. Suppose that S^ and are provable in
WXS+ ;
c^ $?
we shall show that Sk is provable in W?S+9 too.
Let S'{ and ?} be Zj h c'G and Z?, c?0 Y cpvd'(B) (remember that c'
and are Lemma 2.6.1, for any pair
cpud' IJc' and {Jdf, respectively). By
of prefixes e' and /' used in !F' we have either e' =/' or e' C\f = 0. Let
J?' be di-?!, ..., d!QDq and let the members of Z' he all pfs in such that
Z'}
for any 1 < r < q (q^ 0) max(d^) < max(^); let ^, ..., e8 be all prefixes
in such that < max(ei) < ... < let F1,...,FS
Z'j max(c^) max(^) (s > 0),
be all wffs in Z\ such that Fu is the conjunction of all wff s in Z'5with the
prefix eu, 1 < u < s, and let <7Pbe the conjunction of all wffs in
Z]
with the
prefix cp. Now using IE (in WxS+)9 from
fl^
we obtain
Zf ... ...
(a) Y(cpudf)-(e[u ue8ucp)Gp^.F1-> -+.FS->B.

Suppose that Z? is c[Gx, ...9cvCv and that Gp is Gf &...&GI&G. Using


IFj, from S'i we obtain ?^-, c'Cp, ..., o'C* Y c'G. On the other hand, since
= ...
o' 6*;u uev9 from c'Cf h o'?f, ..., c'Gpw Y c'Cp using Wx we prove

Zj,.c'Cf, ..,c'GpYc'Cf

Z'i,c'Gp,...,c'GlYc'GZ.

Applying CE w times we obtain


(b) Z'{,c'Cf,...,c'GpYc'Gi>.
By Theorem 2.4,
(c) Z[,cpGf,...,cfpGpYcfGp
is provable in Now =: ? ...
WXS+. max(c') > (e[u
max(^) max((o^ud')
= ... and hence we can use MP with and
ve'suc'p)) max(diu u^) (a) (c)
as premisses to obtain ? ... ues
Z\, Z', cpCf, ..:, cpGp Y ((cpud') (e[u
ucp))vc'(F), where F is JE7a->... ->.FS->B. Using a similar argument
s times we prove Y in But s c' and
Z?,Zj ((c'pud')?cp)uc'(B) WXS+. c^
hence we have proved #? in .
^S^
The theorem follows by induction on weight of a node in f.
Sk
There are sequents provable in W₁S⁺ but unprovable in W₂S⁺. For example, {1, 2}p, {2}q ⊢ {1, 2}p is provable in W₁S⁺, but not in W₂S⁺. On the other hand, {1, 2}(p→q∨r), {1, 2}(q→s), {2}p ⊢ {1, 2}(r∨s) is provable in W₂S⁺, but unprovable in W₁S⁺.
We summarize our investigations in this section in

Theorem 2.7 For all singletons a₁, ..., aₙ the following propositions are equivalent:

(a) a₁A₁, ..., aₙAₙ ⊢ aA in S⁺;

(b) a₁A₁, ..., aₙAₙ ⊢ aA is provable in W₁S⁺;

(c) a₁A₁, ..., aₙAₙ ⊢ aA is provable in W₂S⁺.

Proof. (b) ⇒ (c). By an easy induction we can prove that for any node b₁B₁, ..., bₘBₘ ⊢ bB in the proof of a₁A₁, ..., aₙAₙ ⊢ aA in W₁S⁺, if m ≥ 1, then bⱼ is a singleton, 1 ≤ j ≤ m. Hence, an application of W₁ is an application of W₂. Therefore, (b) ⇒ (c). The part (c) ⇒ (b) follows by Theorem 2.6.

(a) ⇒ (b) follows easily from Theorem 2.3.

3. Gentzen formulations without MP. We want to have genuine Gentzen formulations for S⁺ + DL and S⁺. Let us omit TE and let us replace MP in TRS⁺ and W₂S⁺ by IL. The resulting Gentzen formulations are denoted by G₁S⁺ and G₂S⁺, respectively. By GS⁺ we denote any of G₁S⁺ or G₂S⁺.

Theorem 3.1 Let b₁B₁, ..., bₘBₘ ⊢ bB be a node in a proof of a₁A₁, ..., aₙAₙ ⊢ aA in GS⁺, let M = {a₁, ..., aₙ} and let N be the set of all prefixes discharged in this proof; then

(1) b = b₁ ∪ ... ∪ bₘ;

(2) if m ≠ 0, then for all 1 ≤ j ≤ m, bⱼ ≠ ∅;

(3) if GS⁺ = GT⁺ and a ≠ ∅, then b ≠ ∅;

(4) if b ≠ ∅, then for any 1 ≤ j ≤ m there are c₁, ..., cₖ ∈ M ∪ N such that bⱼ = c₁ ∪ ... ∪ cₖ.

Proof. To prove (1) and (2) proceed by induction on weight, and to prove (3) and (4) proceed by induction on rank. Notice that we need (1) and (2) in order to show that the concept of a proof in GS⁺ is well-defined.

Theorem 3.2 If a₁A₁, ..., aₙAₙ ⊢ aA is provable in GS⁺, then for any choice of a′₁, ..., a′ₙ the sequent a′₁A₁, ..., a′ₙAₙ ⊢ a′A is provable in GS⁺, provided that for any f, g, f₁, ..., f_u ∈ M we have:

(1) if f ⊆ f₁ ∪ ... ∪ f_u, then f′ ⊆ f′₁ ∪ ... ∪ f′_u;

(2) if f = f₁ ∪ ... ∪ f_u, then f′ = f′₁ ∪ ... ∪ f′_u;

(3) if GS⁺ = G₂S⁺ and f is a singleton, then f′ is a singleton;

(4) if GS⁺ = GT⁺ and max(f) < max(g), then max(f′) < max(g′);

(5) if GS⁺ = GT⁺ and max(f) = max(g), then max(f′) = max(g′).
Proof. Suppose that in GS+ we have a proof of axAx,..., anAn Y aA.
Let us consider the prefixes used in this proof. We have a tree if the
only
nodes of which are sequences of prefixes. Let us use the conventions
->
in the proof of Theorem 2.6. from F with a at the be
explained Starting -> ->
we shall construct a tree ST1 by substituting V for b at each node
ginning,
of $". Let us choose a' satisfying (1) ?(5) and let us substitute a1 for a.

Let 6 be a node of F; if it is not an 96


endpoint, then immediately above b

-> -> ->


there is either (I) a node d or (II) there are two nodes d and e. By checking
-> ?>
the rules we
->
see that
->
in ease
->->
(I) d is either (a) a permutation of b or (b)
->
d is fe, &mor (c) & is $, d|p, where dp ? U$ and if GS+ = G2S+, then dp
is a singleton or = 0
(d) $ is b, dp, where bndp and
max(d!p) > max(&).

Suppose that we have already substituted V for b; it is obvious that in


->
cases (a) ?(c) the sequence d' is also defined and that we should substitute
-> -> -> ->
?2' for $. In case = and substitute
(d) let dp {max(J') +1} -*" -> b', dp for d.
_>
In case (II) we must have either (a) d and e are 6 or (b) there are bj
-* -> -*
= b has
in & and er in 6such that bjUd er. Suppose ->
that
->
been substituted
_>
for & ; it is obvious that in case (a) the sequences d' and e' are already defined

and that we should substitute a' and e' for d and 6 respectively. In case (b)
~> ->
,
the d' defined and so are the members of e'
sequence is already bj and -> _>.
= =
except 4- Let us define a? (bjVd)' Z^ud' and substitute ?' and ?'
for d and e respectively.

Thus by induction on the rank of a node of ST we have defined a tree ^'.


->
By Theorem 3.1, any non-empty prefix used in b is the union of some
elements of MuN, and so is b. Let Mb and Nb he the subsets of M and N,

respectively, such that b is the union of all elements of Mb uNb. In order


to continue the proof we need

Lemma 3.2.1 For any f, g,fx,...,fue MbuNb and h, t,hx, ...,h


e{&i, ...,&m}>

(i) if fe Mb and g e Nb, then fng = f'ng' - 0, max(/) < max(ijr)


and max(/') < max(#');
then either f = =
(ii) if f, g e Nb, g or fng 0, and
=
(ii-1) if f 9, Men f - g',
= 0 <m$ < max(</), then f'ng' = 0 a^? max(/')
(ii.2) if fng max(/)
<max(sO;

(i?) ?// ci u/u, tte? /' s y/.


U/j=/iU
(iv) iff = =
(j/;
U?l^enf
(v) ifhc:{jl thenV s (J? 5
(vi) </ ? = (j? **^ *' = U?;
=
(vii) ?/ C?8+ GT+ a^? max(?) < max(f), ?fee%
max(fe') < max(i');
and max = then max =
(viii) if GS+'= GT+ (A) max(?), (A') max(f );
=
(ix) if h a [Jh hYu ... \Jhl9 then h' c \Jh';
= i*6? h' = \Jh'i
(x) i/ ? (J?
= and h is a then h' is a
(xi) ?/ 6rS+ Cr2S+ singleton, singleton.

We shall show that (iii) and (iv) follow from (i) and (ii).
(iii) Let / cz \Jf; ->iifeNb, then by (i) and (ii) / is amember of /; hence

/' is a member of /'. If f e Mb, then / is disjoint to any member of Nb


-> _>
from (i)). Let gx, ..., gve Mb he among / such that / c
(this follows \Jg
= ... vgv;
9i u by (1) and (2) we have/' s
\J~g'.
= then / isf, by (i) and (ii) ;hence f is/. If
(iv) Let f {Jf.l?feNb,
=
/e Mb9 then fx,...,fueMb (this follows from (i)). Hence/' uf* by (2).
We shall prove (i), (ii), (v), (vi) and (xi) simultaneously, by induction
on the rank of a node of ST. However, in the proof of (v) and (vi) we shall
need (i) and (ii), in the proof of (vi) we shall need (v), and in the proof
of (xi) we shall need (vi) as well; henee, in the proof of (v) and (vi) we
shall assume that (i) and (ii) are already proved, in the proof of (vi)
we shall assume that (v) is already proved, and in the proof of (xi) we shall
assume that (vi) is already proved.

Suppose that b is the initial node a; then Nb^0, (i) and (ii) hold
vacuously, and (v), (vi) and (xi) hold by virtue of (1), (2) and (3) respectively.
Consider now the construction of ST'. In cases (I) (a), (b) and (c), as
well as in case (II) (a), all of (i), (ii), (v), (vi) and (xi) can be established
for d and e immediately; using induction hypothesis.

If /,/1,..,,/u,j,?gi =
Gase (I) (d). \Jb9 then (i), (ii), (v), (vi)
and (xi) follow from induction hypothesis. If otherwise, let us consider
(i), (ii), (v) and (vi)-in order (in (xi) we have h ? b).
= or / = =
(i) ?(ii) We must have either / ? b and g dp g dp; hence,

(i) follows by definition of dp and of d', and (ii) follows by definition of d'.

(v) ?(vi) We must have u = 1 and either h c b and fx


fx=dporh=
= by virtue of definition
dp ; hence, either (v) and (vi) hold vacuously
of d' or (v) and (vi) follow by (ii.l).

have = ds c =
Gase (II) (b). We \? b9Md Me Mb and Nd g N?
= of
Nb9- hence (i) and (ii) hold by virtue induction hypothesis.

Iif1,...,fu,h are members of b, then (v), (vi) and (xi) follow by indu?
ction hypothesis; if otherwise, let us consider (v), (vi) and (xi) in order.
h = = c Since and all members of d occur
(v) Let er bjUd (J/. b}
in b, by induction hypothesis we obtain b'jU{Jd' c [Jf. By definition
d' = of = =
of d', \Jd', and by definition
?> er, h' er bjUd' s \Jf.
Let h = = = induction and
(vi) er bjUd [Jf. Using hypothesis (v)
we obtain h' ? On the other by Theorem there are gx, ...
\Jf. hand, 3.1,

t..,gv,h1, ...,hwe MbuNb such that 98 = and d = Using (iii) and


bj (^Jg [Jh.

(iv) we obtain \Jf ?z \J~g'u \Jh'. By induction hypothesis and the defini?
tion of d'9 bj = (J? and d' = \Jh'. Therefore, \Jf ?z b?ud' = er = V.
(xi) Let GS+
=
G2S+ and let A = er =bjUd he a singleton; then

A = er = = d.
Hence, d is d and a member of ?. By induction hypothe?
bj
= = == == d' and hence
sis, d' and bj are singletons. By (vi) h' er fe^ud' &?
A' is a singleton.
Thus (i), (ii), (v), (vi) and (xi) follow by induction. Now using Theorem
?
3.1 we easily derive (vii) (x). This completes the
proof of the lemma.
Now let us consider the sequents (the pfs) of &'. Let Sk he a node of
if and let S'k he the corresponding node of F'. If Sk is a basic sequent, so
is Sk. If Sk is obtained from $? (and Sj) by an application of either of the
rules P, C, CL, CE, DL or DE, it is obvious that Sk is obtained from S'4
(and Sj) by the same rule.
Suppose that Sk is obtained from S by Wx (W2); then we have Case
(I) (c). By induction hypothesis, S'{ is provable and by (ix), (x) and (xi)
we have c d' = then is a singleton and a member
d'p [J (if GS+ G2S+9 dp
of d'). Hence, S'k is obtained from S^ by Wx (W2).
Suppose that Sk is obtained from S? by IE; then we have Case (I)
(d). By induction hypothesis, $? is provable and bj (i) and (ii) of Lemma
3.2.1 we have = 0 and > max(bf). Hence is obtained
b'ndp max(d^) Sk
from S'i by IE.
Suppose that Sk is obtained from S{ and S? by IL ; then we have Case
(II) (b). By induction hypothesis, S'{ and Sj are provable, and by (vi),
3.2.1 we obtain =
(vii) and (viii) of Lemma er bjUd'
and
max(J]) < max(d').
Hence, Sk is obtained from Sj and
Sj by IL.
This completes the proof of the theorem.
We state three corollaries deserving the name of theorems.

Theorem 3.3 If X, aA ⊢ a∪x(B) is provable in GT⁺, a ∩ x = ∅ and max(a) > max(x), then for any a′ such that max(a′) > max(x), X, a′A ⊢ a′∪x(B) is provable in GT⁺.

Theorem 3.4 If X, aA ⊢ a∪x(B) is provable in GR⁺ and a ∩ x = ∅, then for any a′, X, a′A ⊢ a′∪x(B) is provable in GR⁺.

Theorem 3.5 If a₁A₁, ..., aₙAₙ ⊢ aA is provable in GS⁺ and for any 1 ≤ i ≤ n and some b either b ⊆ aᵢ or aᵢ ∩ b = ∅ (and in case of GT⁺, max(aᵢ) > max(b)), then a₁−b(A₁), ..., aₙ−b(Aₙ) ⊢ a−b(A) is provable in GS⁺.

4. Cut elimination theorem for GT⁺. Let us start with some definitions. We say that X ⊢ xA is provable with weight w if there is a proof of X ⊢ xA where X ⊢ xA is of weight w. We say that X ⊢ xA and Y ⊢ yB are provable with combined weight w if X ⊢ xA is provable with weight w₁, Y ⊢ yB is provable with weight w₂, and w = w₁ + w₂. These definitions are analogous to those in [7], p. 113. We say that a pf aA is of degree h iff h is the number of occurrences of connectives in A.
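As an illustration of these bookkeeping notions, here is a small Python sketch (ours, not the paper's): the degree of a pf simply counts connective occurrences in its formula, and the combined weight of several proofs is the sum of their weights. Variables are assumed to be single letters such as p, q, r.

CONNECTIVES = ("->", "&", "v")     # the connectives of S+ in this sketch

def degree(formula: str) -> int:
    """Number of occurrences of connectives in the formula (its degree)."""
    return sum(formula.count(c) for c in CONNECTIVES)

def combined_weight(*weights: int) -> int:
    """Combined weight of several proofs: the sum of their weights."""
    return sum(weights)

# Example: (p -> q) & r has one occurrence of -> and one of &, so degree 2.
assert degree("(p -> q) & r") == 2
assert combined_weight(3, 5) == 8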

Theorem 4.1 If A is of degree h and X ⊢ xA and Y₁, xA, Y₂ ⊢ bB are provable in GT⁺ with combined weight w, then for all Y*, X, Y* ⊢ bB is provable in GT⁺, where Y* is obtained from Y₁, Y₂ by deleting some (possibly none, possibly all) members of the form xA.

Proof. Suppose that A is of degree h and that X YxA and Yx,xA, Y2


Y bB are provable with combined weight w. Our induction hypotheses
are:

Hyp 1 The theorem holds for any A' of any degree h' < h,
and any combined weight w

Hyp 2 The theorem holds for any A of degree h and any


combined weight wf < w.

If X Y xA is a basic sequent, then X is xA; hence Yx, X, Y2 Y bB is


provable. Using P and C we can prove X, Y* Y bB for any Y*.
If Yx,xA,Y2Y bB is a basic sequent, then Y and Y* are empty, B
is A and hence X, Y* Y bB is provable (Y is Yx, Y2).
Suppose that neither X YxA nor Y1, xA, Y2 h bB is of weight 0.
We shall distinguish two main cases:
(i) the eliminated member has no
occurrence in the consequent part of either of the premisses of X Y xA
and no occurrence in the antecedent part of either of the premisses of
Yx, xA, Y2 h bB, and (ii) otherwise.

Gase (i) A is A'-^A". We must have

X, aA' Y aux(A") Zx Y zxA' Z2, x\jzx(A") Y bB


X Yx(A'->A") Y,x(A'->A") Y bB

where Y is Z1, Z2, anx = > max(#) and ^ max(#).


0, max(a) max(^)

By Theorem 3.3 X, zxA' Y x\jzx(A") is provable and hence by Hyp 1

Zx, X' Y xuzx(A") is provable, where X' is obtained from X by deleting


some (possibly none, possibly all) members of the form z^A'. This holds
for any such X'; hence Zx, X Y xvzx(A") is provable. Again by Hyp 1,
for any Z2 obtained from Z2 by deleting some (possibly none, possibly
all) members of the
form xuzx(A"), Zx,X,Z'2YbBi% provable. Therefore,
Zx, X, Z2 is Y bB
provable; using P, we prove X, Y Y bB.
If A is either A'&A" or A'y A", the proof is left to the reader.

Suppose that xA is introduced by Wj (W2) thus:

- Y h bB
X Y xA
Y,xA YbB

If GS+ = then from Y YBb we obtain


GiS+, using Wx, X, Y Y bB.
If GS+ = then # is a singleton and the only prefix used in X;
G2S+,
on the other hand, there is a pf with the prefix x in Y; hence, using T72,
from Y h bB we prove X, Y h bB.

Gase (ii)We shall distinguish two subcases: (ii.l) xA occurs in the


consequent part of a premiss of X Y xA, and (ii.2) otherwise.

Subcase (ii.l) Let Xx YxA he a premiss of X YxA. Since Xx YxA


and Yx,xA,Y2Y bB are provable with combined weight smaller than w,
by Hyp 2, Xx, Y* h bB is provable for all Y*; hence, it is easy to prove

X, Y* h bB.

Subcase (ii.2) It is clear that xA occurs in the antecedent part of


a premiss of Yx,xA, Y2YbB. We shall distinguish two sub-subcases:
(ii.2.1) the number of members of the form xA in all premisses of Yx,xA,
Y2 Y bB equals the number of members of the same form in Yx,xA, Y2
h bB, and (ii.2.2) otherwise.
(ii.2.1) In this sub-subcase the premisses and the conclusion of
any rule have the same prefixes. Therefore we may proceed as usual.
Let us consider only W1(W2). Suppose that Y1, xA, Y2 Y bB is obtained
from Y19 xA, Z Y bB by Wx (W2), where Y2 is Z, cG. Now XYxA and

Yx,xA,Z YbB are provable with combined weight smaller than w. By


Hyp 2, X, Y?,Z*YbB is provable for all Y*,Z*. If GS+ =GXS+,
then using W, we obtain
X, Y*, Z*, cG Y bB, since c <= b. If GS+ =G2S+,
then c is a singleton, and either c # x and there is a pf in Y*, Z* with the
prefix cot c = x.Ifc =
x, then x is the only prefix used in X; hence, there
is a pf in X, Y*, Z* with the prefix x. In both eases we can use W2 and
obtain X, Y\,Z*, cG YbB.
(ii.2.2) Suppose that X Y xA is the conclusion in an application of
IE and that Yx,xA, Y2 Y bB is the conclusion
application in an of IL,
as in Case(i), where A is A'-*A", Y is Yx, xA, Y2, a n x = 0, max(a)
> max(#) and max(^) > max(#), and either (ii.2.2.1) xA is a member
of Zx and a non-member of Z2 or else (ii.2.2.2) xA is a member of Z2 and
a non-member of Z3 or else (ii.2.2.3) xA is a member of both Zx and Z2.
X Y xA and 2^ Y zxA', as well as X Y xA and Z2, xuzx(A") Y bB are pro?
vable with combined weights smaller than w.
(ii.2.2.1) By Hyp 2, X,Z\ YzxA' is provable for any Z\ and so is
X, aA' Y a\Jx(A"); by Theorem 3.3, X, zxA' Yxuzx(A") is provable.

ByHypl,X,Z*,X' Y xuzx(A") is provable for any Z\, X', where X'isob


tained from X by deleting some (possibly none, possibly all) members
of the form zxA'. C, we prove
Using P and X, Z\ Y xkjzx(A"), and using
Hyp 1 and Z?, x \jzx(A") Y bB, we prove X,Z\,Z2 Y bB, where Z2 is
obtained from Z2 by deleting some (possibly none, possibly all) members
of the form xkjzx(A"). 101 Y bB is provable for all Z\,Z2.
Hence, X,Z?\,Z%

(ii. 2.2.2)
By Hyp 2, X, Z\, x\Jzt(A") Y bB is provable. On the other
hand, X, zxAf Y xuz1(A") is provable, by Theorem 3.3; hence, by Hyp 1,
Zx, X' Y xuzx(A") is provable, where X' is obtained from X by deleting
some (possibly none, possibly all) members of the form zxA'. Hence,
zx, X Y xvz^A") is provable. By Hyp 1, Zx, X, X", (Z2)" Y bB is pro?
vable for any X",(Z*2)" obtained from X,Z\ by deleting some (possibly
none, possibly all )members of the
x\jzx(A"). Hence, form
Zl, X, X,Z\
Y bB is provable and using P and C we prove X, ZX,Z\ Y bB, for any Zx, Z2.
(ii.2.2.3) By Hyp 2, X, Z\ Yxuzx(Af) and X, Z\, xuzx(A") Y bB are
provable, for any Z* and Z*. By Theorem 3.3, X, zxA' Y xuzx(A") is
provable. Hence, X,Z\,X' Y xkjz1(A") is provable for any Z\, X', where
X' is obtained from X by deleting some (possibly none, possibly all)
members of the form zxA'. Using P and C, we prove X,Z\Y xuzx(A").
By Hyp 1, X, Z\, X", (Z*2)"Y bB is provable for any X", (Z\)" obtained
from X,Z2 by deleting some (possibly none, possibly all) members of
the form xuz1(A"). Hence, X,Z\,X,Z\YbB is provable and using
P and C we prove X, Z\, Z\ Y bB, for any Z\, Z\.
If xA is introduced either by CE and CL, or by DE and DL, the proof
is easy and left to the reader. If Yx,xA, Y2 Y bB is obtained either by
C or by Wx (W2) from Yx, xA, xA, Y2 Y bB or Yx,xA,Z Y bB, respecti?
vely, where in the second case Y2 is Z, xA, the proof is trivial.
This completes the proof of the theorem.

Corollary 4.1.1 GT+ is closed under TE.

5. GT⁺ is closed under MP. The reader has certainly noticed that so far there is no proof that GT⁺ is closed under MP, for in TE we must have x ≠ ∅. Moreover, by Theorem 3.1 (3) all prefixes used in the proof of X ⊢ xA are non-empty. Hence the closure under MP is not an immediate consequence of the closure under TE. However, GT⁺ is closed under MP. This can be shown by a method of Harrop (cf. [2] and [4]). Here we give a more direct proof.
We shall say that X₁ ⊢ x₁A₁, ..., Xₙ ⊢ xₙAₙ, n ≥ 1, are provable with combined weight w iff for any 1 ≤ i ≤ n, Xᵢ ⊢ xᵢAᵢ is provable with weight wᵢ and w = w₁ + ... + wₙ. Let X be a₁A₁, ..., aₙAₙ and let h₁, ..., hₙ be the degrees of A₁, ..., Aₙ, respectively; then we define the degree h of X: h = h₁ + ... + hₙ.

We shall write aX instead of X if X is aA₁, ..., aAₙ. Also, we say that ⊢ X is provable iff X is a₁A₁, ..., aₙAₙ and ⊢ A₁, ..., ⊢ Aₙ are provable.

For any two sequences X and Y we shall write X[Y] to indicate that all members of Y are members of X.

Let Y be b₁B₁, ..., bₘBₘ, m ≥ 0, and let the following conditions be satisfied for any 1 ≤ j ≤ m:

(1) either a ⊆ bⱼ or a ∩ bⱼ = ∅,

(2) if a ∩ bⱼ = ∅, then max(a) < max(bⱼ);

then by Y⁰ we denote the sequence obtained from b₁−a(B₁), ..., bₘ−a(Bₘ) by deleting all members with the prefix ∅.

Theorem 5.1 If aU is of degree h, ⊢ aU and Y[aU] ⊢ bB are provable with combined weight w, (1) and (2) are satisfied and all members of Y[aU] with the prefix a are members of aU, then Y⁰ ⊢ b−a(B) is provable.

Proof. Suppose that all is of degree h, that Ya? and Y[a?] Y bB


are provable with combined weight w and that Y is Zx, Z2. Our induction
hypotheses are:

Hyp 1' The theorem holds for any a? of any degree h' < h
and any combined weight w

Hyp 2' The theorem holds for any a? of degree h and any
combined weight w' < w.

If Y [a 17] Y bB is a basic sequent, then a = b and aB is the only mem?


?
ber of Y [a 17]; hence, Y? is empty and Y b a(B) is provable.
Suppose that Y[aU] Y bB is of weight > 0; we shall distinguish two
main cases: (i) no member of a? has an occurrence in the antecedent
part of either of the premisses of Y[a?] Y bB and (ii) otherwise.
Gase (i). Obviously, a? is aA. Let A he A'->A"; we must have

dA' YdA" Zx YzxA' Z2, auzx(A") Y bB


YA'-+A" Y,a(A'->A") YbB

max(^) > max (a).


By Theorem 3.3, we may put d == e; hence, by TE Zx Y z^A"
is prov?
able. By Theorem
3.5, Z\ Yz1~a(A') and Z?29z1-a(A") Yb-a(B) are
provable, is no pf in ZX,Z2
for there with the prefix a, and either a cz
bj
or = 0 and max
anbj (a) < maxf^), for any bj in Zl,Z2. Hence, by TE
Z\,Z\Yb-a(B) is provable.
If A is either A'&A" or A'vA", the proof is easy.
Suppose that aA is introduced by WX(W2) thus:

Y,aA Y bB
It is clear that
there is no member of Y with the prefix a; hence, W2
is not used. Let = for in Y either a cz or = 0
GT+ GtT+ ; any bj bj anbj
and max (a) < max(?y). By Theorem 3.5, Y0 Y b?a(B) is provable.

Gase (ii). We shall distinguish two subcases: for any member


(ii.l)
aA{ of a? the number of members of the form aA,t in all premisses of
Y[a?] Y bB equals the number of members of the form aA{ in Y[aU]
Y bB, and (ii.2) otherwise.

Subcase (ii.l). Suppose that Y[a?] 103 YbB is obtained by an applica



tion of IL. Then we have

Zx YzxG' Z2,duz1(G") YbB


Ya? Y[a?],d(C'->G") YbB
where max(d) < max(^), a ^ d,a ^ zi9 and either (ii.1.1) all members
of aU are among the members of Zx and there is no member o? a? among
the members of Z2 or else (ii.1.2) there is no member of a? among the
members of Zx and all members of a? are among the members of Z2 or
else (ii.1.3) there are members of a? among the members of both Zx and
Z2. It is obvious that h a? and Zx Y zxC'9 as well as Y a? and Z2, duzx(C")
Y bB are provable with combined weight smaller than w.
?
(ii.1.1) By Hyp 2', Z\ Y zx a(G') is provable, and by Theorem
?
3.5, Z\, (d\Jzx) a(G") Y b?a(B) is provable; hence, using IL, we prove
Z\, Z\, d-a(C'->-C") Y b-a(B).

(ii.1.2) By Hyp 2', Z\, (duz,) -a(G") Y b -a(B) is provable, and

by Theorem 3.5, so is Z\ Y zl?a(G')t9 hence, using IL, we prove Z\,Z\,


a-a(G'->G")Yb-a(B).
(ii.1.3) By Hyp 2', Z\ Yzx-a(G') and Z\, (duz,)-a(G") Yb-a(B)
?
are provable; hence, using IL, we prove Z\, Z\, d a(C'->C") Y b ?a(B).
If Y[a?7] Y bB is obtained by an application of either of the rules

IE, CL, CE, DL, DE, P or C, the proof is easy and left to the reader.
Suppose that Y [a?], cC Y bB is obtained from Y [a?] Y bB by Wx W2),
(
where a
=? c. Y a? and Y[a?] Y bB are provable with combined weight
smaller than w. By Hyp Y? is Let =
2', Yb-a(B) provable, GT+ GXT+;
c ?
since c? a ?z b?a, using Wx we obtain Y0, a(G) Y b?a(B). Let GT+
=
G2T+; there is a pf in Y[aU] with the prefix c, and c is a singleton.
Hence, we have anc = 0. By Hyp 2', Y0 Y b?a(B) is provable and there
= ? we
is a pf in Y? with the prefix c c a; hence, using W2, prove Y?, cG
Yb-a(B).
Subcase (ii.2) There is a member aA of a? such that the number
of members of the form aA in all premisses of Y[aU] Y bB is unequal
to the number of members of the same form in Y [a U] Y bB.

Suppose that A is A'->A", that Y [a?] Y bB is obtained by IL from


Zx Y zxA' and Z2, auzx(A") Y bB, and that Y[a?] is Z?Z2, a(A'->A").
(ii.2.1) There is no member of Z2 with the prefix a. By Hyp 2', Z\ Y z1
? is provable.
a(A')
(ii.2.1.1) a cz zx. Obviously, zl ?a(A') Y zx~a{A") is provable and
hence by TE we obtain Z\ Y z1?a(A"). On the other hand, Z2,zxA"
Y bB is provable, and since there is no of in Z2 with the prefix a, we have
a cz or = 0 and max for all bj in Z2. By
either bj anbj (a) < max(&j),
Theorem 3.5, Zl,zx?a(A") Y b?a(B) is provable. Hence, using TE, we
prove Z\,Z\ Yb-a(B).
(ii.2.1.2) a =-zx. YA' is provable. Since dA' Y dA" is provable,
104 and it is clear
so is Y A". On the other hand, Z2, aA" Y bB is provable,

that the sequenceof all pfs in Z2, aA" with the prefix a is of degree smaller
than the degree of a?; hence, by Hyp 1', Z2Yb?a(B) is provable.
(ii.2.2) There is no member of Zx with the prefix a; hence, a cz zx.

By Hyp 2', Z_29 zx?a{A") Y b?a(B) is provable and by Theorem 3.5 so


is Z\ Yzx?a(A'). It is obvious that zx?a(A') Y zx? a(A") is provable,
and hence using twice, we prove Z\,Z\Yb
TE ?a(B).
(ii.2.3) Both
Zx and Z2 contain members with the prefix a.
-
(ii.2.3.1) aczzx. By Hyp 2', Z\ Y zx -a(A') and Z\, zx-a(A") Yb
? are provable. ?
a(B) Obviously, zx a(A') Y zx?a(A") is provable;
hence, using TE twice, we prove Z\,Z\ Y b~a(B).
(ii.2.3.2) a = zx. YA' and Z2,aA" YbB are provable. Obviously,
dA' Y dA" is provable; hence, by Hyp 1', hA" is provable. It is clear
that the sequence of pfs with the prefix a in Z2, aA" is of degree smaller
than the degree of a?; ? is provable.
hence, by Hyp 1', Z\Yb a{B)
If Y[aU] YbB is obtained by either of the rules IE, CL, CE, DL,
DE, P or C, the proof is easy and left to the reader.
Suppose that Y[aU] Y bB is obtained by Wx (W2) thus:

Y'[a?']YbB
Ya? Y'[a?'],aA YbB

where is Y'[a?'],
Y[aU] aA and a? is a?', aA.
Obviously, Y a? and Y'[aU']YbB are provable with combined
weight smaller than w. By Hyp 2', Y0 Y b?a(B) is provable.
The theorem follows by double induction.

Theorem 5.2 Let X be a₁A₁, ..., aₙAₙ; X ⊢ x(B→C) is provable iff there is a prefix b such that b ∩ x = ∅, max(b) > max(x) and X, bB ⊢ b∪x(C) is provable.

Proof. The only if part is trivial. To prove the if part, proceed


by induction on the weight w o? X Y x(B->C). l?w = 0, then X is x(B-*G).
Moreover, there is a b such that bnx =0 and max(b) > m&x(x). Consider
now the following proof :

bBYbB aub(G)Yaub(G)
a(B->G),bB Yaub(G)
this is a proof of X,bB Yaub(G).
If X Y a(B->G) is the conclusion in an application of any of the rules,
the use of induction hypothesis is straightforward.

Theorem 5.3 GT+ is closed under MP.

Proof. Suppose that X Y xA and Y Y y(A->B) are provable, max(#)


> max(#). By Theorem 5.2, Y, aA Y auy(B) is provable for any a, anx
= 0 and max
(a) > inax(^). By Theorem105 3.3, Y,xA Y x\j y(B) is provable.

If x ^ 0, we use TE and we prove X, Y Yxuy(B); if x = 0, then we


= YB
must have y 0; hence, by Theorem 5.1, is provable.

6. Cut elimination theorem for GR+. Let Y be bxBx, ...,bmBm,


m ^21, let c be any prefix that either c ?z or ?
and such bj bjnc 0, for any
1 < j < m. By Y? we denote the sequence obtained from Y
= if c ?z bj, and 6^
(a) by substituting b? for fy, where bj (bj?c)ua
= if =
bj 6j.no 0;
(b) by deleting all members with the prefix 0.

Theorem 6.1 Suppose that the following conditions are satisfied:


(1) X is Xx,...,Xn,n^l;
(2) cU is cAx, ..., cAm and of degree h;
(3) Y[cU] is given and for all bj used in it either c ?z bj or bjnc == 0;
(4) either (I) a = c or (II) a = 0;
(5) if a
=
c, then n
= 1 ?iw?! i/ ce=0, ??ew m = n and all members

of Y[cl7] with the prefix c are in c?;


(6) Y* is obtained from Y[cU]? by deleting at least one member of
the form aAx;
(7) Xx Y aAx, ..., Xn Y aAn, Y[c?] Y bB are provable with combined

weight w;
thenX, Y* Y (b-c)ua(B) is provable for any Y*.
Proof. We
proceed by double
shall induction. Let Hyp 1" and Hyp
2" be analogous to Hyp 1' and Hyp2', respectively. Suppose that c? is
of degree h, that Xx Y aA?, ..., Xn Y aAn, Y[c?] Y bB are provable with
combined weight w and that Y is ZX,Z2.
If for some 1 < i < n X{ Y aAi is a basic sequent, then a ^ 0; hence
a = c and n = in Y[cU].
1 by (5) and (6). Obviously, X, Y* is contained
Using P and C, from Y[c?] Y bB we prove X, Y* h 6J5.
If Y[W] h bB is a basic sequent, then Y[c?] is bB,m^l, b =c
and JB is JL.
= c; we have that since Y* is empty
(I) a X, Y* h &J3 is provable
and n = 1.
(II) a =0; obviously, X and Y* are empty and hB is provable.
that neither Xx ..., XnY Y aAx,
aAn nor Y[eU] Y bB are
Suppose
with combined 0. We shall distinguish two main cases:
provable weight
(i) there is no 1 < i < n such that aAi occurs in the consequent part of
a premiss of Xi Y aA{ and cAi occurs in the antecedent part of a premiss
of Y[c?l *"bB9 and (ii) otherwise.
Gase (i) Let A? he A^->A"; then Xx Ya(A!i->A!?) is obtained from
=
X?, dA'{ Yaud(A") by IE, where and 0, and Y[cU] YbB is obtained
from ZxYzlAfi and Z2, cvz^A'/) Y bB by IL, where Z19 Z2, c(A,i->A,{)
is Y[cU]. 106

(I) a
=
c; hence n = 1. If zx ^0, we proceed as in Theorem 4.1.
If z, - 0, from hJi; and Xx,dA[ Yaud(A") by Hyp 1" we obtain Xx YaA",
and then usingZ2, aA" Y bB and H p 1" we prove XX,Z2 Y bB.
= =
(II) a =0; hence m n 1. Let ^ ^0; by Theorem 3.4, ^.Ai
h z1A" is provable. Hence, using Zx Y zxA[ and Hyp 1", we obtain ?^
YzYA". By Theorem 3.5, Z^^A" Yb? c(B) is provable since there is
no member with the prefix c in Z2. By Hyp 1", ZX,Z2Y b?c(B) is pro?
vable.
Let = 0. YA" is provable, cA" Y bB
zx By Hyp 1", Hence, using Z2,
and Hyp 1", we prove Z2Yb ?c(B).
If A is either A? &A" or Ajv A", the proof is easy and left to the reader.
Let us consider Wx (W2); obviously, n = 1. Hence we must have

Y' YbB
X YaA, Y', cAx Y bB
=
(I) a c; we proceed as in Theorem 4.1.
a = 0. It is clear that there is no member of Y with the
(II) prefix
c ; hence W2 is not Let =
used. GR+ GXR+ ; for all bj used in Y either
c c or = Theorem Y h b?c(B) is provable,
fy fync 0; by 3.5,
Oase (ii) It is clear that there is a 1< i < w such that either ol4?
occurs in the consequent part of a premiss of X{ Y aAi or cAi occurs in
the antecedent part of a premiss of Y[c?] Y bB. If aA occurs in the
consequent part of a premiss of Xi Y aAi, then a = c and n = 1. We

proceed as in the proof of Theorem 4.1, Subcase (ii.l).


IfcAi occurs in the antecedent part of a premiss of Y[cZ7] h bB, we
distinguish two subcases: (ii.l) for any member cAi o? c? the number of
members of the form cAi in the premisses of Y[oI7] Y bB equals the number
of members of the form oAi in Y[cU] Y bB, and (ii.2) otherwise.
Subcase (ii.l) Suppose that Y[o?7] YbB is obtained by an applica?
tion of IL. Then we have

Zx Yz,C' Z2,duzx(C") YbB


X YaAx,...,XnYaAn Y[cU], d(G'->G") YbB

It is obvious
that Xx Y aAx, ..., XnY aAn, Zx Y zxG' as well as Xx
Y aAx, ..., XnY Y bB are with combined
aAn, Z2, duzx(G") provable weight
smaller than w.
a = c; hence n=l.
(I) Suppose that aAx occurs either in Zx or
in Z2. Let zx ^ 0; then we proceed as in the proof of Theorem 4.1.
Let = 0 of c U are among
zx ; then all members the members of Z2.
By Hyp 2", from X, Y aAx and Z2, dC" YbB we obtain XUZ*29 dG"
Y bB and hence using YG' and IL we prove d(G'->G") Y bB.
Xl9Z\9
(II) a hence m= 107
n and there is no member with the prefix
=0;

c either in Zx or in Z2, except the members of o?. Let z, =?0; we proceed


as in the proof of Theorem 5.1, Subcase (ii.l).
Let z, =0; then all members of c? are among the members of Z2.
By Hyp 2", YAx,...,YAn
from and Z2, dC" Y bB, we obtainZ\, d-c(G")
Y b-c(B). Hence, using Y G' and IL, we proved*, d-c(C'~>G") Y b-c(B).
If Y[cU] YbB is obtained by an application of either of the rules
IE, CL, CE, DL, DE, P or C, the proof is easy and left to the reader.
Suppose that Y[c?],dG Y bB is obtained from Y[cU] Y bB by Wx
(W2).
(I) a =
c; hence n = 1, and we proceed as in Theorem 4.1 (ii.2).
(II) a =0; hence m =n and c # d. YA,, ...,YAn and Y[cU]
Y bB are provable with combined weight smaller than w. By Hyp 2",
Y* h b?c(B) is provable.
Let GR+ =
GXR+; since d?c?^b?c, using Wx we obtain Y*, d? c(C)
Yb-c(B).
Let GR+ = G2R+; there is a member of Y[cU] with the prefix d and
d is a singleton. Hence end =0 and there is a member of Y* with the
d = d? we d h b?c(B).
prefix c; using W2 prove Y*, ?c(0)

Subcase (ii.2) There is a member cAi o? c? such that the number


of members of the form cAi in the premisses of Y[c?7] Y bB is unequal
to the number of members of the form cAi in Y[cU].

Suppose that A{ is A!i->A!?', that Y[c?] Y bB is obtained by IL from

Z, Y zxA!t and Z2, cuzx(A") Y bB.


a = c and n = 1. Let as in Theorem
(I) zx =?0; we proceed 4.1 (ii.2.2).
Let zx =0. Obviously, Xl9 dA{ Y aud(A") is provable. Using YA[
and Hyp 1" we prove Xx Y aA". On the other hand, from Xx Y a(A{->A")
and Z2, aA" Y bB by Hyp 2" we obtain Xx, Z2, aA" Y bB. Now using
Xj Y aA" and Hyp 1" we prove Xl9(X19Z2J aA") YbB, where (Xx,
is obtained from aA" by deleting at least one member
Z\, A") XX,Z%,
of the form aA". Hence we obtain XX,XX,Z\ Y bB and using P and C
we prove Xx,Z%YbB.
(II) a = 0 and m = w. If ^ ^ 0? we proceed as in Theorem 5.1

(ii.2), using Hyp 1" instead of TE.


= 0. is provable; 1" YAl9 ...
Let zx Obviously, <L4? Y dA" by Hyp
..., h -A?-1, f- -A'/, YAi+l9 ..., h JLW are provable. It is clear that the

sequence of members with the prefix c in Z2,cA" YbB is of degree


smaller than the degree of c?; hence, by Hyp 1" Z* Yb?c(B)is provable.
If Y[cI7] Y bB is obtained by either of the rules IE, CL, CE, DL,
DE, P or C, the proof is easy and left to the reader.
If Y[c?] Y bB is obtained by Wx (W2), we proceed as in Theorem 5.1.
This completes the proof of the theorem.

There are two corollaries deserving the name of theorems.

Theorem 6.2 GR+ is closed under TE.

Theorem 6.3 GR+ is closed under MP.

7. Equivalences. We summarize our investigation by proving some equivalences.

Theorem 7.1 The following propositions are equivalent:

(1) X ⊢ xA in S⁺ + DL;

(2) X ⊢ xA is provable in TRS⁺;

(3) X ⊢ xA is provable in G₁S⁺.

Proof. (1) ⇔ (2) was proved in Theorem 2.5. (3) ⇒ (2) follows from the fact that TRS⁺ contains G₁S⁺, for it is closed under IL. (2) ⇒ (3) follows from the fact that G₁S⁺ is closed under TE and MP.

Theorem 7.2 The following propositions are equivalent:

(1) X ⊢ xA is provable in W₂S⁺;

(2) X ⊢ xA is provable in G₂S⁺.

Proof. (1) ⇒ (2) follows from the fact that G₂S⁺ is closed under TE and MP. (2) ⇒ (1) follows from the fact that IL is a derived rule of W₂S⁺.
Theorem 7.3 If all prefixes used in X are singletons, then the following propositions are equivalent:

(1) X ⊢ xA in S⁺;

(2) X ⊢ xA is provable in W₁S⁺;

(3) X ⊢ xA is provable in W₂S⁺;

(4) X ⊢ xA is provable in G₂S⁺.

Proof. (1) ⇔ (2) ⇔ (3) was proved in Theorem 2.7. (3) ⇔ (4) follows from Theorem 7.2.
Let us say that A is a theorem of a Gentzen formulation iff ⊢ A is provable in that formulation; then we have

Theorem 7.4 The sets of theorems of S⁺, W₁S⁺, W₂S⁺ and G₂S⁺ coincide.

8. Concluding remarks. We conclude the paper by making some brief remarks.

(1) In the present Gentzen formulations the sequents are pieces of notation denoting the existence of proofs from hypotheses in S⁺ + DL!

(2) In G₂S⁺ we need DL (instead of DL*) in order to prove Theorem 3.2, which seems to be unprovable with DL*.

(3) We need Theorem 3.2 in order to prove the cut elimination theorems.

(4) In a proof in G₂S⁺ the prefixes in the antecedent parts of sequents are changed as the rank increases; hence, there is no direct proof that the sets of theorems of S⁺ and G₂S⁺ coincide (the main obstacle is DL).

(5) Hence we need W₁S⁺, for in W₁S⁺ the prefixes in the antecedent parts of sequents remain unchanged as the rank increases, and we can prove that the sets of theorems of W₁S⁺ and S⁺ coincide. DL is here harmless.

(6) However, W₁S⁺ is not closed under TE in general, and there is no direct proof that the sets of theorems of W₁S⁺ and G₂S⁺ coincide.

(7) Hence we need W₂S⁺; it is the link between W₁S⁺ and G₂S⁺.

(8) Let us notice that the condition that ⊢ aU is provable is indispensable in the proof of Theorem 5.1. If we weaken this condition, requiring only that for some member aA of aU we have that ⊢ A is provable, then we can derive Ackermann's rule (δ).

(9) Also, let us notice that (5) in Theorem 6.1 is indispensable. If we weaken (5), allowing a = ∅ and n ≠ m, we obtain the rule: from ⊢ A and ⊢ A&B→C to infer ⊢ B→C, contrary to the nature of R⁺.

(10) We do not know whether a decision procedure can be based on G₂S⁺. Let the reader decide whether {1}(p→q)→q ⊢ {1}p→q is provable in G₂S⁺. He would easily construct an infinite branch, for the principal member of IL may occur in the antecedent part of the left premiss of IL, in the presence of C, and the hypothesis that IE was used in the proof would produce new prefixes as the rank increases. We know no remedy for that.

(11) The rule C is needed in the proof of A4 and in the proof of A11. Also, to prove A11 we need a rule of weakening. However, if we adapt IL by requiring that the union of prefixes in the left premiss be disjoint from the prefix of the principal member, and if we adapt MP in an appropriate way, then A4 is unprovable, but the resulting system is decidable (cf. [4]). The adaptation of IL consisting in the requirement that the principal member has no occurrence in the antecedent part of the left premiss does not help. In this case A4 is provable, but Theorem 3.2 is not.

(12) If we omit A11, then in the case of R⁺ (R⁺ − A11) we need no weakening and no prefixes (cf. [6]). It seems that without weakening and prefixes DL and DL* coincide. There is a decision procedure for the resulting system (cf. [5]).

References

[1] A. R. Anderson and N. D. Belnap, Jr., Entailment, Vol. I, Princeton University Press, Princeton and London 1975.

[2] R. Harrop, On disjunctions and existential statements in intuitionistic systems of logic, Mathematische Annalen 132 (1956), pp. 347-361.

[3] A. Kron, Deduction theorems for T, E and R reconsidered, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, Band 22 (1976), Heft 3, pp. 261-264.

[4] A. Kron, Decision procedure for two positive relevance logics, Reports on Mathematical Logic 10 (1978), pp. 61-79.

[5] V. M. Popov, O razrešimosti relevantnoj sistemy R̄ (On the decidability of the relevant system R̄), in Russian, in Metody logičeskogo analiza, Moscow 1977.

[6] V. A. Smirnov, Predstavlenie logičeskih sistem s sil'noj i relevantnoj implikacijami v sekvencial'noj forme (Representation of logical systems with strong and relevant implications in sequential form), in Russian, in Teorija logičeskogo vyvoda, Moscow 1974.

[7] R. M. Smullyan, First-Order Logic, Springer-Verlag, Berlin 1968.

Added in proof. I wish to thank the referee who drew my attention to the paper by G. E. Minc, Teorema ob ustranimosti sečenija dlja relevantnyh logik, in Zapiski naučnyh seminarov LOMI, Issledovanija po konstruktivnoj matematike i matematičeskoj logike, 1972, pp. 90-97, which was unknown to me.

Mathematical Institute
University of Belgrade

Received April 24, 1979.

ALEKSANDAR KRON

Gentzen Formulations of Two Positive Relevance Logics

Correction

In [1] Theorem 3.2 (p. 388) condition (3) is too strong; it should read:

(3) if GS⁺ = G₂S⁺, f is a singleton, and f ⊆ g for some g, then f′ is a singleton;

In Lemma 3.2.1 (p. 389), (xi) should read:

(xi) if GS⁺ = G₂S⁺, h is a singleton, and h ⊆ g for some g, then h′ is a singleton.

The proofs of Theorem 3.2 and Lemma 3.2.1 remain unchanged.

The present correction is necessary, for otherwise Theorem 3.2 is not applicable in the proof of the cut elimination theorem.

Reference

[1] Aleksandar Kron, Gentzen formulations of two positive relevance logics, Studia Logica XXXIX, 4 (1980), pp. 381-403.

ENTAILMENT AND QUANTUM LOGIC *

Aleksandar Kron
Zvonko Marić
Slobodan Vujošević

University of Belgrade
Yugoslavia

INTRODUCTION

Let L be an orthomodular lattice (OML). It is well known that there is no operation → definable in terms of the lattice operations such that → satisfies most of the desirable conditions for a non-modal implication [5]. Some authors have come to the conclusion that perhaps the implication in quantum logic must be treated as the relation ≤ in L.

In this paper we answer the following question: can an operation → be added to L, independently of the lattice operations, such that

(a) → has most of the desirable properties of an implication;

(b) there is a suitable interaction between → and ≤;

(c) there is a suitable interaction between → and the lattice operations;

(d) the addition of → does not destroy the properties of L?

In the sequel we shall answer this question in the affirmative by making (a) to (d) precise and by proving some relevant theorems.

* This paper has been written under a contract of the first two authors and the Republic Fund for Scientific Work of Serbia.


It turns out that there are several candidates for → to be added to an OML. All of them are extracted from relevance logic [1].

Why relevance logic? We know that the operation → in relevance logic is independent of the other connectives and that the axiom-schema of distribution is independent of the remaining ones. These two facts single out implications from relevance logics as good candidates for our operation.

There is an independent interest in relevance logics in the present context. In a recent paper [2] it has been suggested that there is a connection between the concept of relevance and that of compatibility. It has also been suggested that relevance logics of some kind could be interpreted in OML's.

We do not assume that the reader is familiar with [1]. However, an acquaintance with this work could be of importance not so much for readability of the present paper as for an understanding of various aspects of relevance logics in the context of quantum theory.

The addition of → to an OML gives rise to a new algebraic structure. In Section 1 we define the structures QT, QE, QR, QRᵗ, and QRM arising from the addition of a relevant implication to an OML. From the properties of these structures it will be clear that (a) to (c) are satisfied. In Section 2 we investigate more closely QR and QRᵗ, and in Section 3 the fragments QR→ and QRᵗ→ of QR and QRᵗ, respectively. In Section 4 we show how QRᵗ can be obtained from an OML. Finally, in Section 5 we are dealing with the corresponding and related propositional calculi and we prove (d).

1. THE ALGEBRA

Let S be a non-empty set, let ≤ be a binary relation on S, and let →, ∩, ∪, and ⊥ be operations on S, where ⊥ is unary and the remaining ones binary. We assume that the following conditions are satisfied, for all a, b, c, ... ∈ S:

(i) a → b ≤ (b → c) → (a → c)

(ii) b → c ≤ (a → b) → (a → c)

(iii) a → (a → b) ≤ a → b

(iv) a ∩ b ≤ a

(v) a ∩ b ≤ b

(vi) (a → b) ∩ (a → c) ≤ a → (b ∩ c)

(vii) a ≤ a ∪ b

(viii) b ≤ a ∪ b

(ix) (a → c) ∩ (b → c) ≤ (a ∪ b) → c

(x) a → b⊥ ≤ b → a⊥

(xi) a → a⊥ ≤ a⊥

(xii) a⊥⊥ ≤ a

(xiii) a ∩ (a⊥ ∪ (a ∩ b)) ≤ b.

Moreover,

MP′ if a ≤ b and a → b ≤ c → d, then c ≤ d

AD′ if a ≤ b and a ≤ c, then a ≤ b ∩ c

DI if a ≤ c and b ≤ c, then a ∪ b ≤ c.

Now QT = (S, ≤, →, ∩, ∪, ⊥), where (i) to (xiii), MP′, AD′ and DI hold.
hold.

Let us define:

a = b iff a ≤ b and b ≤ a

a < b iff a ≤ b and a ≠ b

□a = (a → a) → a.

In order to obtain QE, let us adjoin to QT

(xiv) (a → a) → b ≤ b

(xv) □a ∩ □b ≤ □(a ∩ b)

(ii) is redundant in QE.

QR is obtained from QE by adjoining

(xvi) a ≤ (a → b) → b

In QR (xi), (xiv), and (xv) are redundant.

Suppose that t is a constant, t ∈ S, such that

(xvii) a ≤ t → a

(xviii) t → a ≤ a;

then QRᵗ = (S, ≤, →, ∩, ∪, ⊥, t) such that (i) to (xviii) are satisfied, as well as MP′, AD′, and DI.

In order to obtain QRM, let us adjoin

(xix) a ≤ a → a

to QR.

The structures QR→ and QRᵗ→ are the (→, ⊥)-fragments of QR and QRᵗ, respectively.

Now we shall show that → in all of these structures except QRM has most of the desirable properties of an implication, and that there is an interaction between → and ≤, ∩, ∪, ⊥. This will be done by giving a list of theorems provable for these structures. Most of the proofs are omitted, for they can easily be reconstructed from proofs in the corresponding propositional calculi. Let us start with QT.

T1. ≤ is transitive.

T2. a ≤ a⊥⊥

T3. ≤ is reflexive.

T4. L = (S, ≤, ∩, ∪, ⊥) is an OML.

Proof. It is clear that L is a lattice. L is orthocomplemented, for we have a⊥⊥ = a, and if a ≤ b, then b⊥ ≤ a⊥. We shall show that there is a smallest element in S. We have:

a ∩ a⊥ ≤ a,
a ∩ a⊥ ≤ a⊥ ≤ a⊥ ∪ (a ∩ b),
a ∩ a⊥ ≤ a ∩ (a⊥ ∪ (a ∩ b)) ≤ b.

It is easy to prove a ∩ a⊥ = b ∩ b⊥. By 0 we denote the smallest and by 1 the greatest element of S. The orthomodularity is proved using (xiii). Therefore, L is an OML.
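To make T4 concrete, here is a small Python check (our own illustration, not part of the paper): in the Boolean lattice of subsets of {1, 2}, which is in particular an OML, a ∩ a⊥ is the least element and the orthomodular identity b = a ∪ (a⊥ ∩ b) holds whenever a ≤ b.

from itertools import combinations

U = frozenset({1, 2})
# All subsets of U, ordered by inclusion; meet = intersection, join = union,
# orthocomplement = set complement relative to U.
elements = [frozenset(s) for r in range(len(U) + 1) for s in combinations(sorted(U), r)]

def comp(a):                                 # orthocomplement
    return U - a

for a in elements:
    assert a & comp(a) == frozenset()        # a ∩ a⊥ is the least element 0
    for b in elements:
        if a <= b:                           # a ≤ b
            assert b == a | (comp(a) & b)    # orthomodular law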

T5. a → (b → c) ≤ (a → b) → (a → c)

T6. a → b ≤ (a → (b → c)) → (a → c)


T7. a ∩ (a → b) ≤ b

T8. b → c ≤ (a ∩ b) → c

T9. a → (b → c) ≤ (a ∩ b) → c

T10. a → b ≤ (a → b⊥) → a⊥

T11. a⊥ → a ≤ a

T12. a → b ≤ a⊥ ∪ b

Thus → in QT has many important properties: it is transitive and self-distributive and there is a modus ponens (MP′). Now we turn to → in QE.

T13. a → b ≤ ((a → b) → c) → c

T14. a → ((b → c) → d) ≤ (b → c) → (a → d)

T15. □a ≤ a

T16. a → b ≤ □a → □b

T17. □a ≤ (a → b) → □b

T18. □(a → b) ≤ □a → □b

T19. □a ≤ □□a

T20. If a ≤ b, then (a → b) → (a → b) ≤ a → b

T21. If a ≤ □b and a ≤ □c, then a ≤ □(b ∩ c)

T22. If a → b ≤ c, then a → b ≤ □c.

As expected, the implication in QE is modal; it has some of the main properties of S4-strict implication.

QR and QRᵗ will be investigated in the next section. Here we show that QRM is uninteresting.

T23. L = (S, ≤, ∩, ∪, ⊥) in QRM is a Boolean algebra.

Proof. We have a ≤ a → a and a⊥ ≤ a⊥ → a⊥; hence, a ≤ a⊥ → a⊥ and a⊥ ≤ a → a. Therefore, b ≤ a → a, (a → a) → a ≤ b → a, and a ≤ b → a. But using a ≤ b → a we can prove distributivity; hence, L is here a Boolean algebra.


T23 rules out QRM from further consideration, and it shows that we have no a priori reasons to believe that the OML extended by an operation → is still an OML.

2. QR AND QRᵗ

The richest and the most interesting structures are QR and QRᵗ.

T24. a → (b → c) = b → (a → c)

T25. a ≤ b → c iff b ≤ a → c

Let us define: a∘b = (a → b⊥)⊥.

T26. (S, ∘) is a commutative semigroup.

T27. (S, ≤, ∘) is a partially ordered semigroup.

T28. (S, ≤, ∘, ∩, ∪) is a lattice-ordered semigroup.

T29. a ≤ a∘a

T30. a∘(b ∪ c) = (a∘b) ∪ (a∘c)

T31. a∘(b ∩ c) ≤ (a∘b) ∩ (a∘c)

T32. a ∩ b ≤ a∘b

T33. a ≤ b → (a∘b)

T34. a∘(a → b) ≤ b

T35. a∘b ≤ c iff a∘c⊥ ≤ b⊥ iff c⊥∘b ≤ a⊥

T36. a∘0 ≤ a.

Proof. 0 ≤ a → a, a ≤ 0 → a, a ≤ a⊥ → 1, a⊥ ≤ a → 1, a∘0 ≤ a.

T37. a ≤ a∘1.

Proof. a → 0 = a → (a ∩ a⊥) = (a → a) ∩ (a → a⊥) ≤ a⊥, a ≤ a∘1.

T38. 0∘0 = 0∘1 = 1∘0 = 0; 1∘1 = 0 → 1 = 0 → 0 = 1 → 1 = 1; a → 1 = 0 → a = 1.

Proof. Trivial.


Now we shall define a compatibility relation; we say that a and b are R-compatible (aCRb) iff all of the following four conditions are satisfied:

a∘b = a ∩ b

a∘b⊥ = a ∩ b⊥

a⊥∘b = a⊥ ∩ b

a⊥∘b⊥ = a⊥ ∩ b⊥
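A small Python sketch (ours, not the paper's) of how this definition can be checked on a finite structure whose operations are given as lookup tables; the table and element names are purely illustrative.

def r_compatible(a, b, circ, meet, orth):
    """aCRb: the four conditions a∘b = a∩b, a∘b⊥ = a∩b⊥,
    a⊥∘b = a⊥∩b, a⊥∘b⊥ = a⊥∩b⊥.
    circ and meet are dicts keyed by pairs of elements; orth is a dict
    giving each element's orthocomplement."""
    ap, bp = orth[a], orth[b]
    return all(circ[(x, y)] == meet[(x, y)]
               for x in (a, ap) for y in (b, bp))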

T39. aCRb iff aCRb⊥ iff bCRa iff bCRa⊥ iff a⊥CRb⊥.

T40. If aCRb and aCRc, then aCR(b ∩ c) and aCR(b ∪ c).

Proof. a∘(b ∩ c) ≤ (a∘b) ∩ (a∘c) = a ∩ (b ∩ c) ≤ a∘(b ∩ c); a∘(b ∩ c)⊥ = (a∘b⊥) ∪ (a∘c⊥) = (a ∩ b⊥) ∪ (a ∩ c⊥) ≤ a ∩ (b ∩ c)⊥ ≤ a∘(b ∩ c)⊥, and similarly a⊥∘(b ∩ c) = a⊥ ∩ (b ∩ c) and a⊥∘(b ∩ c)⊥ = a⊥ ∩ (b ∩ c)⊥. Also, a∘(b ∪ c) = (a∘b) ∪ (a∘c) = (a ∩ b) ∪ (a ∩ c) ≤ a ∩ (b ∪ c) ≤ a∘(b ∪ c), and a∘(b ∪ c)⊥ ≤ (a∘b⊥) ∩ (a∘c⊥) = a ∩ (b⊥ ∩ c⊥) ≤ a∘(b ∪ c)⊥, and similarly a⊥∘(b ∪ c) = a⊥ ∩ (b ∪ c) and a⊥∘(b ∪ c)⊥ = a⊥ ∩ (b ∪ c)⊥.

T41. If aCRb, then a ∩ (a⊥ ∪ b) = a ∩ b.

Proof. If aCRb, then a∘b⊥ = a ∩ b⊥ and a → b = a⊥ ∪ b; hence, a ∩ (a⊥ ∪ b) = a ∩ (a → b) ≤ b and the theorem follows.

Thus we see that R-compatibility implies compatibility in the usual sense (if a and b are compatible in the usual sense, we write aCb).

Let 𝒞R be the center of QR, i.e. 𝒞R = {a ∈ S : for all b ∈ S, aCRb}, and let 𝒞 be the center in the usual sense; then 𝒞R ⊆ 𝒞.

T42. If 𝒞R ≠ ∅, then 𝒞R is a Boolean algebra.

Proof. Obviously, 𝒞R is an OML. By T41 it is distributive. If a ∈ 𝒞R, then aCRb for all b ∈ S. Hence, for all b ∈ S, a⊥CRb, and by T40, for all b ∈ S, (a ∩ a⊥)CRb and (a ∪ a⊥)CRb. Therefore, 0, 1 ∈ 𝒞R.
We add some theorems about QRᵗ.

T43. a = t → a

T44. a∘t = a

T45. (S, ∘, t) is a commutative monoid.


T46. (S, ≤, ∘, t) is a partially ordered monoid.

T47. (S, ≤, ∘, ∩, ∪, ⊥, t) is a lattice-ordered monoid.

It is interesting to note that if t = 1, then QRᵗ is a Boolean algebra, for in this case we have a = a∘t = a∘1 = (a → 0)⊥ and a⊥ = 1 → a⊥, 1 ≤ a → a, and a ≤ b → a. On the other hand, if t = 0, we obtain 0 = 1. For suppose t = 0; then a = a∘t = a∘0, a⊥ = (a∘0)⊥ = 0 → a⊥ = 1 and a = 0, for all a ∈ S. Thus, in order to have an interesting QRᵗ we must assume that 0 < t < 1. Also, suppose that for all b < 1, b ≤ t; then we have t → a ≤ b → a and a ≤ b → a, and we obtain a Boolean algebra again. Hence, in order to have an interesting QRᵗ, there must be some b ∈ S such that b ≰ t and b < 1. Consequently, if t ∈ 𝒞R, then QRᵗ is a Boolean algebra. For suppose that t ∈ 𝒞R; then aCRt for all a ∈ S, a∘t = a ∩ t = a and a ≤ t.

Completing this section, we notice that there is a connection between → of QR and the implications C₁(a, b), C₂(a, b), and C₃(a, b) satisfying Hardegree's minimal implicative conditions.

T48. (a → b) ∩ (a → a) ≤ a⊥ ∪ (a ∩ b) = C₁(a, b).

T49. (a → b) ∩ (b⊥ → b⊥) ≤ (a⊥ ∩ b⊥) ∪ b = C₂(a, b).

T50. (a → b) ∩ (a → a) ∩ (b → b) ≤ (a ∩ b) ∪ (a⊥ ∩ b) ∪ (a⊥ ∩ b⊥) = C₃(a, b).

3. QR→ AND QRᵗ→

These are fragments of QR and QRᵗ, respectively: QR→ = (S, ≤, →, ⊥) and QRᵗ→ = (S, ≤, →, ⊥, t), where (i) to (iii), (x) to (xii), and (xvi) are satisfied. QRᵗ→ satisfies (xvii) and (xviii) as well.

Let us define: a + b = a⊥ → b.

T51. a + b = (a⊥∘b⊥)⊥.

T52. + is associative and commutative.

T53. a + a ≤ a.

T54. b∘c ≤ a iff b ≤ a + c⊥.

T55. a∘(a⊥ + b) ≤ b.

T56. a∘(a + a⊥) = a


T57. a + (a∘a⊥) = a.

T58. a∘(a⊥ + (a∘b)) = a∘b

T59. a∘(b + c) ≤ (a∘b) + (a∘c)

T60. QR→ and QRᵗ→ are partially ordered, orthocomplemented and orthomodular.

However, QR→ and QRᵗ→ are not lattices. Although we can prove that if a ≤ b and a ≤ c, then a ≤ b∘c, as well as that if a ≤ c and b ≤ c, then a + b ≤ c, we have none of a∘b ≤ a, a∘b ≤ b, a ≤ a + b or b ≤ a + b. Neither do we have (a∘b) + (a∘c) ≤ a∘(b + c). There is no minimal element in either of these two structures.

The concept of R-compatibility can be re-defined in QR→ and QRᵗ→: aCRb iff all of the following eight conditions are satisfied:

a∘b ≤ a

a∘b ≤ b

a∘b⊥ ≤ a

a∘b⊥ ≤ b⊥

a⊥∘b ≤ a⊥

a⊥∘b ≤ b

a⊥∘b⊥ ≤ a⊥

a⊥∘b⊥ ≤ b⊥
Proof. ao(b + c) = ao(b + c) ~ aob.l. + a~c. Since aCRb and
--- .I.
aCRc, we have aob < a, aob < a and hence aob + aoc ~ a. But b£Rc;
hence boc"" < band b4 < b..L +-c. In the same way we obtain c < b + c .
.1. - .1. - .L .... - ..L
Since aob < band aoc < c, we prove aob ~ b + C and aoc ~ b + c.
Therefore, aob..L + aoc < b"" + c.
ao(b + c)..L = ao(b r + c)..L = ao(boc"") = (aob)oc..L. But aob < a
and hence (aob)oc..L < aoc..L < a. On the other hand, aob < b and hence
J. ..L -.L r
(aob)oc ~ boc = (b + c) = (b + c)..L.

In a similar way we prove the second half of the theorem.


T63. If aCRb, aCRc, and bCRc, then ao(b + c) = (aob) + (aoc).

T64. If the center~R p 0, then it is a Boolean algebra.


Proof. Let aE£R; then aCRb for any b 6 S; hence, a· CRb
for aUbe Sand aJ. • .cR.
Suppose that a, b ~ ~ R; then for any
c E S, aCRc, bCRc, and aCRb; hence, aobCRc and a + bCRc for any
c E S. This shows thatC R is a lattice. By T63 it is distributive.
For any a ~ .c R we have aCRa and aCRa·. Hence, aoaJ. < a, aoa· < a.J.,
and for any b E S ao(aob)'" < a, aJ. < a -+ (aob) = a"'-+ (aob) , -
aoaJ. < a· + (aob), aoa.J. < ao(aJ. + (aob» = aob < b. Therefore,
aoaJ."< b for all b E S, and aoa.l. = bobJ. for b E-;(R. Also, we have
b 2. a- + aJ., for all bE .cR.
Hence, 0, 1 E R• .e
As to QRt.; we have a combination of results for QRt and QR'; •

4. ORTHOMODULAR LATTICES, QR AND QRᵗ

In this section we show that there are OML's such that → of QRᵗ is definable in them. More precisely, we show that any OML L can be extended to an OML Lᵗ such that → is definable in Lᵗ.

Let L be an OML; choose two new objects t and f, t, f ∉ S, let Sᵗ = S ∪ {t, f} and extend the relation and the operations of S to Sᵗ as follows:

(1) 0 < t < 1, 0 < f < 1

(2) a ≰ t, t ≰ b, a ≰ f, f ≰ b, for any a, b ∈ S − {0, 1}

(3) t ≤ t, f ≤ f

(4) f ≰ t, t ≰ f

(5) t⊥⊥ = t, f⊥⊥ = f

(6) t⊥ = f

(7) t ∩ a = a ∩ t = f ∩ a = a ∩ f = t ∩ f = f ∩ t = 0, for any a ∈ S − {1}

(8) t ∩ 1 = 1 ∩ t = t

(9) f ∩ 1 = 1 ∩ f = f

(10) t ∪ a = a ∪ t = f ∪ a = a ∪ f = t ∪ f = f ∪ t = 1, for any a ∈ S − {0}

(11) t ∪ 0 = 0 ∪ t = t

(12) f ∪ 0 = 0 ∪ f = f.
T65. Lᵗ = (Sᵗ, ≤, ∩, ∪, ⊥) is an OML.

Proof. It is easy to check the defining conditions for an OML.

Are the conditions (1) to (12) consistent? We give an example of Lᵗ. Let L be the well-known Dilworth lattice D₁₆. When we extend L to Lᵗ according to (1) to (12) we obtain the result shown in Figure 1. Hence, (1) to (12) are consistent.

If L is any OML and Lᵗ is obtained from L according to (1) to (12), we say that Lᵗ is a minimal extension of L. Let Lᵗ be a minimal extension of an OML L; we define the operation → on Lᵗ as follows: for a, b ∈ Sᵗ

(13) a → b = 1 if either a = 0 or b = 1

(14) a → b = t if 0 ≠ a ≠ t, b ≠ 1, and a ≤ b

(15) t → b = b

(16) a → f = a⊥

(17) a → b = 0 if a ≠ t, b ≠ f, and a ≰ b.

T66. The minimal extension Lᵗ of L, with → defined by (13) to (17), is a QRᵗ structure.

Proof. It is not difficult to verify the defining conditions of a QRᵗ structure.

[Figure 1: the minimal extension Lᵗ of the Dilworth lattice D₁₆.]
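A Python sketch (ours, not the paper's) of the arrow operation (13)–(17) on a minimal extension Lᵗ, for a finite OML whose order and orthocomplement are supplied as functions; the clause for b = f follows our reading a → f = a⊥. All names are illustrative.

def make_arrow(leq, orth, zero, one, t, f):
    """Return the operation -> on a minimal extension L^t, following (13)-(17).
    leq(a, b): the order of L^t; orth(a): the orthocomplement; f = t⊥."""
    def arrow(a, b):
        if a == zero or b == one:      # (13)
            return one
        if a == t:                     # (15)  t -> b = b
            return b
        if b == f:                     # (16)  a -> f = a⊥ (our reading of the text)
            return orth(a)
        if leq(a, b):                  # (14)  nontrivial comparable pair
            return t
        return zero                    # (17)  otherwise
    return arrow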


5. THE PROPOSITIONAL CALCULI

We write p, q, r, ... for propositional variables and →, &, ∨, ¬ for connectives. We also assume that there is a propositional constant t in our alphabet (we use t both for the propositional constant and for the object t in QRᵗ; no confusion will arise therefrom). The set of formulas is defined as usual; let A, B, C, ... range over the set of formulas. A formula A is a zero-degree formula if it contains no occurrence of either → or t; it is a first-degree formula if it is of the form B → C, where B and C are zero-degree formulas. The systems T, E, and R are defined in [1], pages 340-341. The language of T, E or R has no t. Rᵗ is obtained from R by adding t and the axiom-schemata

(t → A) → A

A → (t → A).

The system Efde is defined inl, page 158.

Let TQ, EQ, RQ, RtQ, and EfdeQ be obtained from T, E, R, Rt ,


and Efde, respectively, by omitting the axiom-schema of distribution,
viz. A& (B V C) + (A&B) V C, and by adjoininf the schema
A& (A V(A&B» + B. Let TQ', EQ', RQ', and R Q' be obtained from
TQ, EQ, RQ, and RtQ, respectively, by omitting MP and AD, and by
adjoining the following rules:

MP' From A + B and A + B + .C + D to infer C + D

AD' From A + B and A + C to infer A + .B & C

DI From A + C and B + C to infer A V B. + C.

It is easy to show that EfdeQ is contained in any of the systems


defined. We have

TQ' C EQ' C RQ' c RtQ'


U
Efde Q n n n n
~ C RtQ
TQ C EQ C RQ

where C means inclusion.

In this section we prove that all systems defined (except EfdeQ)


are conservative extensions of EfdeQ.

Let~,Q~,Qc,Q~, andQ,t be the classes of all L, QT, QE, QR,


and QRt structures, respectively, let X e {EfdeQ, TQ', EQ', RQ', RtQ'} ,
and let flJx
E{~,Qr,Qt ,Qt,Qrtt} be the class that corresponds to x.

124
ENTAILMENT AND QUANTUM LOGIC 205

For any X and Y€ ~~, let Hom(X, YX) be the class of all homomorphisms
h : F + Y, where F 1S the set of all formulas of X, such that:
(1) h(p) € Sand h(t) = t. Of course, if t ~ F, then (2) is omitted.
Also, if + is not among the connectives (if X = EfdeQ) then h(A + B) =
h(A) + h(B) is omitted.

T67. If X ~A, then A is of the form B + C for some Band C.

Proof. Induction on the length of a proof of A in X.

T68. If X r A + B, then for any Y € Vx and any hE Hom(X, Y)


h(A) .::. h(B).

Proof. The theorem holds for any axiom of X, and it is easy to


see that if the theorem holds for premises of a rule of X, then it
holds for the conclusions, too.

In order to prove the converse of T68, we proceed as follows.


Let us consider X as an abstract algebra X* = (F, +, &, V, -, t) (if
either t ¢ F or + is not among the connectives, delete t or +).
The set V of propositional variables is the set of free generators of
X*. Moreover, X* is free over {X*}. Let us define the relation t
on x* : A t- B in x* iff XI- A + B and XI- B -+A. The relation t-
is a congruence relation. The replacement theorem (provable for any
X) shows that ~ is congruent with respect to any endomorphism of X*,
i.e. t- is a fully invariant congruence. Hence, by a well-known
theorem (cf. 3 , page 166, Theorem 6), the Lindenbaum algebra ~X = x*/t
is a free algebra with the set {[p] : P E V} of free generators «(p]
is the equivalence class of p under t). We know that ~X e OntJX'
the other hand, 11x is closed under the construction of sub-algebras
as well as under the construction of products of algebras. Thus,~X
is an equational class generated by ~X. By a theorem in 3 (page 165,
Theorem 5), ~X is free over '1lX' Since ~X has w generators, by
another theorem in 3 (pages 169-171, Theorem 1), [AJ = [B] in ~X iff
for any Y € 'l1x and any h E Hom(~X, Y), h([A]) = h«(B]). (Of course,
[A] and [B] are equivalence classes of A and B respectively, and
tA] = [B] iff A ~ B in X*). This amounts to:

T69. A:t B in x* iff for any Y € "1x and any h E Hom(X, Y)


h(A) = h(B). Now, we have XI- A + B and (A) < h(B) in Y iff
A :t A& B in x* and h(A) = h(A) () h(B) in Y, respectively. Therefore,
if not X I- A + B, then not [A] = [A] () [B] in ~X, and there is a
tt1
Y E x and an h E Hom (X, Y) such that h (A) { h (B). Thus, we have

TiO. X t- A + B iff for any Y E 'l1x and any h E Hom(X, Y)


h(A) .::. h(B).

T7l • RtQ" 1S a conservat1ve


. .
extens10n 0 f EfdeQ.

125
206 A. KRON ET AL.

Proof. Suppose that not EfdeQ I- A -+ B; by T70 there is an OML


L and a homohorphism h such that h(A) 1 h(B). By T66 the minimal
. t . t - t
extens10n L of L 1S a QR structure. Let F be the set of all
formulas of RtQ' and let us define a homomorphism ht : Ft -+ Lt such
that

h(p) .

By induction on the number of occurrences of connectives, we can prove


that for any zero-degree formula e, ht(e) = h(e). Therefore, there
is a QRt structure (Lt) and a homomorphism ht : Ft -+ Lt such that
ht(A) !ht(B). Hence, by T70 not RtQ'~ A -+ B.

It is clear that we have:

T72. TQ', EQ', and RQ', are conservative extensions of EfdeQ.

T73. RtQ ~ A iff RtQ'r-t -+ A

Proof. *== is trivial. Suppose that RtQ ~ A. If A is an axiom


of RtQ, then A is an axiom of RtQ'. If A is obtained in RtQ by MP
from B -+ A and B, suppose that RtQ'1- t -+ .B -+ A and RtQ' I - t -+ B.
Obviousl t , RtQ' ~ t -+ B -+ .t -+ A (since -+ is self-distributive).
Hence, R Q'\- t -+ A by MP'. I f A is B & e and if it is obtained by
AD from RtQ t- Band RtQ /-e, suppose that RtQ' I- t -+ Band RtQ I - t -+ e.
Using AD' we can prove RtQ'~ t -+ A. Therefore i the theorem follows
by induction on the length of a proof of A in R Q.

T74. RtQ I- A iff for any Y ~ Q 'at and any h t


~ Hom(R Q, Y),
h(A) ~ t.

Proof. Apply T70 and T73.

T75. RtQ I- A -+ B iff RtQ'J- A-+ B.

Proof . ~ is trivial. Suppose that Rt Q I- A -+ B; then


RtQ' I- t -+ .A -+ B (T73). But t -+ (A -+ B) -+ (A -+ B) is an axiom of
RtQ'. Hence, by MP', RtQ'\- A -+ B.

T76. RtQ is a conservative extension of EfdeQ.

Proof. Apply T71 and T75.

T77. TQ, EQ and RQ are conservative extensions of EfdeQ.

This completes our investigations.

126
ENTAILMENT AND QUANTUM LOGIC 207

REFERENCES

1. A.R. Anderson and N.D. Belnap Jr., "Entailment: the Logic of


Relevance and Necessity, Vol. I", Princeton University Press
(1975) •
2. G.N. Georgacarakos, Orthomodularity and Relevance, J. Philos.
Logic 8 (1979), pp. 415-432.
3. G. Gratzer, "Universal Algebra", Van Nostrand (1968).
4. R.J. Greechie and S.P. Gudder, Quantum logics, in "Research in
the Foundations and Philosophy of Quantum Theory", C.A. Hooker
ed., Reidel, Dordrecht (1974).
5. G.M. Hardegree, The conditional in abstract and concrete quantum
logic, in "Logico-Algebraic Approach to Quantum Mechanics,
Vol. II~ C.A. Hooker ed., Reidel, Dordrecht (1979), pp. 49-
108.

127
128
Zeitsehr. f.math. Logik und Grundlagen d . Math.
m. 31, s. 423 - 4 3 0 (1985)

A ('OSSTRCCTIVE PROOF O F d THEOREM I N RELEVANCE LOGIC

b-A l ~ K
~ ~ ~o in
s ~Belgrade
~ t (Yugoslavia)')
~ ~ ~ i ~

In this paper we investigate a propositional systeni L related t o T,-W of relevance


logic. It has been conjectured that for any formulas A and B

(A) if A .+ B and B --f A are provable in T,-W, t'lien A and B are the same
formula
(ef. [l]. p. 95). (A) is known as BELXAP'S conjecture and it was proved by E. P. MARTIN
and R. I<. J f E m R who used a suitable semantics (cf. [3] and [4]).w e give an indepen-
clent and purely constructive proof of (A).
1. T + - W and the Dwyer's systems. The' only connective in T,-W is -+ and the
set of formulas is defined as usual. A ?B , C, . . . range over the set of formulas. Instead
of (-4 -+ R ) we shall write ( A B ) and we shall use conventions about oniitting paren-
thews as in [2]. The axiom-schemes of T,-W are:
(ID) .4d,
(ASL-) -4B.BC.AC.
(XPR) BC.AB.AC.
Tlic. onlj- rule of inferencc is niodus ponens
(NP) From A B and A to infer B .
LrJt S be the system obtained from T,-W by deleting M P and by adjoining the
rules
(SU) From A B to infer BC.AC.
(PR) From BC to infer AB.AC.
(TR) From A B and BC to infer A C .
T h e o r e m 1.1. The systems T,-W a i d N are e p u i v a l e ~ ~ 2 t .
Proof. (SU), (PR),and (TR) are derived inT,-W. I n order to show t h a t N is closed
under (NP). suppose that A B and -4 are provable in N and proceed by induction on
the length of the given proof of -4B only. This proof is due t o Dr. MILAN Boi16.
Let T,-W-ID and M be the systems obt,aincd from T,-W and N, respectively. by
deleting the schrma (ID).
T l i e o y e m 1.2. The systems T,-\.Tr-ID aud M are eqzciwaZe?zt.
Proof. Similar to the preceding one.
The systems r\' and hZ are known as DWYERS'S
systems.
I) I wish to thank Dr. MILAN Boi16 for his careful1 readings of the manuscript of this paper.

129
424 A . KRON

Let us write A = B if A and B are the same formula, and A + B if otherwise. We


shall write X t A (X,f A ) iff A is provable (unprovable) in the system S under con-
sideration.
T h e o r e m 1.3. If N t A B and A + B , then M t AB.
P r o o f . We proceed by induction on the length of the given proof of A B in N. If
A + B and A B is a n axiom of N, then it is an axiom of M as well. If A B is either
CD.ED or CD.CE, obtained f r o m EC by (SU) or from DE by (PR), respectively,
then either E + C or D + E , respectively, and hence either M t EC or M t DE, by
induction hypothesis. We obtain M t A B by using either (SU) or (PR). Suppose that
A B is obtained from ,4C and CB by (TR). Obviously, either A + C or C + B. If
+
A = C and C B (if A If. C and C = B ) , then M I- A B by induction hypothesis.
If both A + C and B + C , then both M t AC and M I- CB, by induction hypothesis;
hence, M t A B by (TR).
Let us consider the propositions
(A’) If N t A B and N t BA, then A = B ,
(B) For any A , T,-W-ID ,f A A , and
(B’) For any A , M,f A A .
T h e o r e m 1.4. (A), ( A ’ ) ,( B ) and (B’) are equivalent
P r o o f . That (A) and (A’) ((B) and (B’)) are equivalent follows by 1.1 (by 1.2).
(B’) (A’). Suppose that there are formulas A and B such that A +
B and that
both N t. A B and N I- R A . By 1.3 we have M t A B , M I- BA, and M I- A A , contrary
to (B‘).
(A’) 3 (B’). Suppose that there is a formula A such that M t A A ; let BB he the
first instance of (ID) occurring in this proof; it is clear that BB must have bern ob-
tained by (TR) from BC and C B for some C such that B =+ C. Hence, there arc for-
mulas B and C such that B +
C and that both N t BC and N t CB, contrary to (A‘).
The theorems proved so far have been known as the Dwyer-Powers theoremh. but
the proofs given here seem to be new. Even if they are not, they deserve to be here,
for the use of 1.4 in the sequel is essential.
2. The system L. We shall show that no instance of (ID) is provable in a proper
extension L of M. Before we describe L, let us give some definitions.
Let X, Y , 2, . . . range over the set of finite (possibly empty) sequences of formnlas.
If X is empty, then X . B denotes B . If X is A . . . , A , , then X . B denotes the formula
A , . . . . .A,B. Notice that any formula is of the form A , . . . . .A,p, for some A . . . , A ,
and a propositional variable p . Moreover, since M is closed under uniform substitution,
it suffices to prove (B)under assumption that there is only one propositional variable p .
For any X by n ( X ) we denote the set of all permutations of X ; if X is empty, so
is z(X). By a ( X ) . B we denote any formula 2 . B for 2 EZ(X). Thus, by L I- a ( X ) . B
we understand that L I- Y.B for any Y EZ(X).
Let C.DE be a subformula of A ; suppose that B is obtained from ,4 by substitution
of D.CE for C.DE, a t one and only one occurrence of C.DE in A ; then we shall say

130
A CONSTRUCTIVE PROOF OF A THEOREM IN RELEVANCE LOGIC 425

that B is obtained from A by an application of permutation (PERM). We shall write

of PERM. We shall write X -


A N B iff B can be obtained from A by a finite (possibly zero) number of applications
Y , for any sequences X and Y , iff Y is obtained from

X , by X * we denote any sequence Y such that X Y. - -


n ( X ) by a finite number of (repeated) applications of PERM to the members of n ( X ) .
For any A , by A* we shall denote any formula B such that A B. Also, for a n y

The axioms of L are given by the following scheme:


(ASU*) n((AB)*,B p , A ) . p .
The rules of L are:
(LSU) From n ( X , A , Y ) . p to infer z ( X * , (Y*.p) p , A*).p.
(LPR) From n ( X ,B).p to infer z(X*, (AB)*,A*).p.
(LG) From n ( X , A , Y ) . p and n(2,B).p to infer z(X*, Z*, ( ( Y . p )B)*,A*).p.
The rule (LG) has to be understood as follows: if there are V EZ(X,A , Y ) and
W E n(2,B) such that L t V.p and L k W.p, then for any W' E ~ ( X *Z*,
, (( Y . p ) B)*,A*)
we have L k W'.p. I n a similar way we understand (LSU) and (LPR).
We shall assume that a proof in L can be given in the form of a tree with usual
properties. By the weight of a node A in such a tree we mean the number of nodes
above A , on all branches containing A . We shall say that A is provable in L with
weight rn iff there is a proof of A in L such that A is of weight m in this proof.
Suppose that A and B are provable with weights m and n, respectively; we shall say
that ,4 and B are provable with combined weight k if k = m + n. This definition
is due to SMULLYAN (cf. [ 5 ] , p. 113). By the degree of a formula A we mean the
number of occurrences of -+ in A . Finally, let us agree to write (a) L t A to denote
both the fact that A is provable in L and the formula A , if no confusion arises
therefrom.
T h e o r e m 3.1. If A i s provable in L with weight w, is A*.
-
SO

Proof. If A is an instance of (ASU*), then so is B, for any A B.


Let us consider (LG); suppose that ( z ( X ,D, Y).p)* and ( ~ ( E).p)* 2, are provable
in L with the same weights as z ( X , D, Y).p and z ( Z ,E).p are, respectively. It is
clear that (z(
n ( X * * ,Z**,
-
V , W).p)* TC( V , W)*.p N Z( V*, W*).p for any V and W ; hence,
(( Y*.p) E*)*,D**).p is provable in L with the same weight as
n ( X * . Z*, (( Y . p ) E)*, D*).p is. It is also clear that ((Y*.p)E*)* N ( ( Y . p )E ) * ; hence,
( n ( X * , Z * ,( ( Y . p )E)*,D*).p)* is provable in L with the same weight as
iz(X*, Z*, (( Y . p ) E)*, B*).p is.
I n a similar way we take care of (LSU) and (LPR). Hence, the theorem follows by
induction on the weight of a given proof of A .
T h e o r e m 2.2 If L k n ( X ,A , Y ) . p , then L t- n ( X * , ( ( Y . p ) (Z.p))*,-4*,Z*).p.
Proof. If 2 is empty, we use (LSU). Suppose that 2 is B , , . . ., B,, and that
L t z ( X * , (( Y.p) ( ( B z ., . . , B,,).p))*,A*, BZ, . . . , BX).p. By using (LPR) we obtain
L 1;Z(X*. (( Y . p ) (Z.p))*,Z*).p.
This shows that the theorem is proved by induction on the number of members of 2.

131
426 A.KRON

T h e o r e m 3.3. If (a) L k x ( X , A , Y).p and (11) L t n(2,( Y * . p ) ) . p , theti (c)


L t x(X*, z*, A*).p.
P r o o f . Case (I): (1)) is an instance of (ASU*). We must have (a) L k n ( X , A , Y ) . p
and (b) L t n((CD)*,Dp, C).p.
Subcase (T.1): C‘ = (Y*.p). From (a) we obtain (a,) x(S*,((Y.p) (Z.p))*,A*).Z.p
by 2 . 2 , where D = 2 . p .
Subcase (1.2):D p = (Y*.p); hence, Y is D*. From (a) me obtain (c)
L i- ,z(X*, (CD)*, A*, C*).p by (LPR).
Subcase (1.3): (LID)* = Y*.p. From (a) we obtain ( c ) L t n ( X * ,A*, D*p. C*).p by
(LSC).
If (b) is not an instance of (ASU*), we shall proceed by double induction, as usual.
Suppose that (a) and (b) are provable with coinbined weight k and that Y.p is of
dcgiee d. Our induction hypotheses are:
(Hyp. I ) The theorem holds for any Y’.p of degree d‘ < (1 and any combined weight ;
(Hgp.2) The theorem holds for any Y . p of degree d and any combined weight
k’ < k .
Case (11): (11) is o h t a i n d by (LSU) from (b,) x ( V , P , W ) . p , where n ( V * , ( W * . p )p ,
P*).p = n(2,(Y*.p)).p.
Subcase (11.1): Y*.p = P*. By 2.1 we have (b2) z(V, (Y*.p), W ) . p ; hence, from
(a) and (b2) we obtain (a,) n ( X * . A*, W*).p by (Hyp. 2 ) . Hence, by using (LSU), we
obtain (c) n;(X*, T’*, (W*.p ) p . A*. ?Y*).p.
Subcase (IT.2): Y*.p = (W*.p)p : hence, 1’*= W*.p. From (13,) and (a) we obtain
(c) x ( X * , V*. P*, A*).p by ( H J ~1 .) .
Subcase (T1.3): Y*.p is a member of V*, say T’* is J”, I’*.p, V”. From (a)
and (b,) we obtain (a,) z(X*, V’*, A*, V”*, P*, W*).p by (Hyp. 2), and hence (c)
x ( S * . V’*, A*, V”*, (W*.p)p , P*).p by (LSU).
Case (TIT): (b) is obtained by (LPR) from (b,) n;( V , P).p, where n(V*, (&P)*,&*).p =
Y*.p).p.
= n(2,
Subcase (TTI.1):Y*.p = &*. From (a) and (b,) we obtain x ( S * , V*, ((Y.p) P)*,A*).p
by (LG).
SulJcase (111.2): Y*.p = (QP)*.From (a) and (b,) we obtain (c) n(X*,V*, A*, &*).p
by (Hpp. 1).
Subcase (111.3): Y*.p is a member of V * , say V* is V’, Y*.p, V”. From (a) and
(b,) we obtain n ( X * , V’*, ,4*, V”*, P*).p. Hence, by using (LPR) we obtain ( c )
n ( X * , I”*, A”, V”*, (&P)*,
Q*).p.
Case (IV): (b) is obtained by (LG) from (b,) n(T‘.P , W ) . p and (b,) n ( V , , &).p,
where n(V*, VT, (( W.p) &)*, P*).p = ~ ( 2Y*.p).p.
,
Subcase (IV.1): Y*.p = P*. From (a) and (b,) we obtain (a,) n ( X * , V*, A*, W*).p
by (Hyp. 2). Now from (a,) and (I),) we obtain (c) %(X*.V*, V i , A*, ( ( W . p )Q)*).p
by

132
A COXSTRUCTIVE PROOF OF A THEOREM IN RELEVANCE LOGIC 427

Sulxase (IV.2): Y*.p = ((1V.p)Q)*. Let Q = V 2 . p ;hence, Y*.p = z ( ( W * . p ) V;).p ,


and Y* = x ( ( W * . p , V:). From ( b l ) and (a) we obtain ( a l ) n ( X * , V*, P*,V l ) . p by
(Hyp. 1). From ( a l ) and (b,) we obtain (c) z(X*,6'*, P*, ?'T).p by (Hyp. 1) again.
Subcase (IV.3): Y*.p is a member of V :, say V: is V ; , Y*.p, V y . From (a) and
(b,) n-c obtain ( a l ) n(X*,T';*, d*,IT:*. Q*).p by (Hyp. 2). Froin (b,) and (al) we
TI;*, A*, V;'*, (( W . p )Q)*, P*).p by (LG).
obtain (c) n ( X * , i7*%
Subcase (IV.4): Y*.p is a member of V*> say V* is V ' , Y*.p, V". From (a) and (bl)
we obtain (al) n(X*,Ti'*: A*, I"'*. P*,W * ) . p by (Hyp.3 ) . From (a1) and (b,) we
obtain (c) n ( X * , V'*, A*? V"*> V f , (( W . p ) Q)*: P * ) . p by (LG).
This completes t>heproof.
T h e o r e m 2.4. I, contains IT.
P r o o f . 2.2, (LPR),and 2.3 show that I, is closed under the rules of M. Let' us derive
(4SU) and (APR). We already have ,z(dB.Bp, - 4 ) . p as a n instance of (ASU*). Sup-
pose that n(AB,BD,A ) . D is provable in L. By using (LPR) we obtain
z ( A B , B.CD, a ) . C D . Hence, (-4SU) and (APR) are proved by induction.
It is easy to see that thcre is no theorem of M of the form p A : on the other hand,
t.here are such t,lieorems in L(take a suitable indance of (ASU*));hence, L is a proper
ext'ension of M.
T h e o r e m 2.5. There is no theorem. of L of the form A p .
Proof. Xo axiom of L is of the form A p . By inspectZionof the rules, it follows that
the conclusion of a rule cannot bc of the form -4p.
Let us define Ap" as follows: A p o = il ; L4p"+1= (AP")P.
T h e o r e m 2.6. There i s no theorem of L of the form n ( ( n ( X ,Y ) . p )p2", Y*).p, k E cu.

Proof. If I' is empty, the theorem follows by 8.5. If t,here is a formula of this form
provable in L, then
(Hyp. 3) There is a formula (a) n((n(X,Y ) . p )pZ",Y * ) . p of mininial degree provable
in L.
Let us consider how (a) could have been obtained.
Case (I):(a) is an axiom of L. Hence, n((n(X, Y ) . p )p2':, Y * ) . p = n((CD)*,Dp,C ) . p
for some C and D.Obviously, either (1.1)( n ( 9 ,Y ) . p )p2': = (CD)* and Y* = n ( D p , C)
or else (1.2)(n(X, P).p)23'' = D p and 1'* = x((CD)*,C) or else (1.3) ( ~ ( Y 9) ,. p )p 2 k = C
and 1'* = n((CD)*,B p ) . But none of (1.1)-(1.3) is possible.
Case (11): (a) is obtained froin (b) z ( U , A , V ) . p by (LSU); hence,
n ( ( n ( X ,Y ) . p )p2k> Y * ) . p = n(u*,(V*.p)p , A*).p.
Subcase (11.1): ( n ( X , Y ) . p )p 2 k is a member of U*. say U* is U ' , ( n ( X , Y ) . p )p 2 k ,U"
and Y* = z ( U ' , U", ( V * . p ) p ,A*). By using 2.1, from (b) we obtain (c)
jz((n(X, n(U', U " , ( V . p ) p , A ) j . p )p 2 k , U ' , U", A , V ) . p . On the other hand,
L 4 ( V . p ) . ( V . p ) p . A is
p an instance of ( M U * ) ; hence, by using (LPR) we obtain
(d) (n(Y,n(U ' , U", A , T7)).p).n(X, n(U', U", ( V . p )p , A ) ) . p , and by using (LSU) (if
k > 0) we obtain (e) ( n ( X ,?( U ' , U", A , V ) ) . p )P ' ~ . ( z ( Xn( , U ' , U", ( V . p )p 3A ) ) . p )p 2 k .
By using ( e )and (d)and 2.3, we obtainz((z(X:z ( U ' , V',A , V ) ) . p )p 2 k ,U', U", A , V ) . p ,
contrary t o (Hyp. 3).

133
428 A.KRON

Subcase (11.2):( n ( X ,Y ) . p )p z k = (V*.p) p and Y* = n(U*, A*).


Sub-subcase (11.2.1): k = 0 ; obviously, X and U are empty, and A = V . p . By
using 2.1, from (b) we obtain A A , contrary to (Hyp. 3).
Sub-subcase (11.2.2):k 0 ; by using 2.1, from (b) we obtain n ( ( n ( X ,U , A ) . p )p 2 k - 2 ,
U * , A*).p, contrary to (Hyp. 3).
Subcase (11.3):( z ( X , Y ) . p )p Z k = A* and Y* = n ( U * , (V*.p)p ) . By using 2.1, from
(b) we obtain (c) ( ( n ( X n, ( U , (V.p)p ) ) . p )p 2 k ,U , V).p. If both X and U are empty,
(c) contradicts (Hyp. 3).
Suppose that X is non-empty; then by using (ASU*) and (LPR) we obtain
( n ( X , n ( U ,V ) ) . p ) . n ( Xs, ( U , ( V . p )p ) ) . p . If k > 0, by using (LSU) we obtain
( n ( X ,Z( U , V ) ) . p P ) ~ ~ . ( ~z ( (UX, (,V . p ) p ) ) . p )p 2 k ; hence, by using 2.3, we obtain
~((fi(X z (,U , V ) ) . p )p Z k ,U , V).p, contrary to (Hyp. 3).
Suppose that U is non-empty; then by using (ASU*) and (LPR) we obtain
( n ( X ,n(U , V ) ) . p ) . n ( Xn, ( U , (V.p)p ) ) . p ) and we proceed as before.
Case (111): (a) is obtained from (b) z ( U , B).p by (LPR); hence,
n((n(X,Y).p)p 2 k , Y*).p = n ( U * , (AB)*,A*).p.
Subcase (111.1): ( n ( X ,Y ) . p )p Z kis a member of U * , say U* is U ' , ( n ( X ,Y ) . p )p 2 k , U"
and Y* = z ( ( A B ) * ,4*,
, U ' , U " ) . By using 2.1, from (b) we obtain
(c) z((n(X,n ( A B , A , U ' , U " ) ) . p )p Z k ,U ' , U", B).p. On the other hand, Bp.AB.Ap is
a n instance of (ASU"); hence, by using (LPR) wc obtain
( n ( X , U ' , U").p).n(X, U ' , U", A B , A).p. If k > 0, by using (LSU) we obtain
(d) ( n ( X ,n ( U ' , U", B ) ) . p )p Z k . ( n ( Xn,( U ' . U", d B ,A ) ) . p )p Z k . By using (d), (c) and
2.3, we obtain n ( ( n ( X n , (U ' , U", B ) ) . p )p z k , U ' , U", B ) . p , contrary t o (Hyp. 3 ) .
Subcase (111.2): (z(X, Y ) . p )p 2 k = (AB)*and 1'" = n ( U * , A*)
-
Sub-subcase (111.2.1): k = 0, A B A . U . X . p and B
(b) we obtain (c) n(n(X,U ) . p , U ) . p , contrary t o (Iiyp. 3).
-
U.X.p. By using 2.1, from

A -
Sub-subcase (111.2.2): k > 0; hence,
( A . 9 . U . p )p 2 k - L which
, is impossible.
.JB - ~ ~=, p ,
(A.X.U.~)PB and

Subcase (111.3): ( n ( X ,Y )p ) p 2 k = A* aiid Y'$ = n(Ul*,(MI)*),


which is impossible.
Case (IV): (a) is obtained from (b) n(L'. A , V ) . p and (c) n ( W ,B).p by (LG); hence,
z((z(X, Y ) . p )p2k, Y*).p = n(U*, W * , (( V . p ) B)*,A*).p.
Subcase ( I V . l ) :(z(X, Y ) . p )p 2 kis a member of li*, say U* is U ' , ( ~ ( Y 9) ,. p )p 2 k ,U",
and Y* = n ( U ' , U", W * , ( ( V . p )B)*,A * ) By using 2.1, from (b) we obtain (d)
n((n(X, z ( U ' , U", W ,( V . y )B, A ) ) . P ) U ~ , A , Y).p. By using (LPR), from (c) we
~ '~, U".
obtain ( e ) ( n ( X ,z ( U ' , U " , ( V . p ) ( W . p ) ,A , W ) ) . p ) . n ( X ,n ( U ' , U", ( V . p )B , ,4, W ) ) . p .
Ry (ASU) we have (A.V.p).(Vp ) ( W . p )A.W.p; hence, by using (LPR) we obtain
(f) ( n ( X ,z ( U ' , U", A , V ) ) . p ) . n ( Xa(
, U ' , C"', ( V . p )( W . p ) , -4, W ) ) . p .Now by using (f),
(e) and 2.3, we obtain (g) ( n ( Sn, ( U ' , U " . 8 ,V ) ) . p ) . n ( X n, ( U ' , U", ( V . p ) B , A , W ) ) . p .
If k > 0, we use (LSU) to obtain
(h) ( n ( X ,n(U ' , U " , A , V ) ) . p )p z k . ( n ( X n(
. U ' , C i " , ( V . p ) B , A , W ) ) . p )p 2 k . Hence, by
using (h), (d) and 2.3, we obtain z((n(S,n(U', U", A , V ) ) . p )p 2 k ,U ' , U", A , V ) . p ,
contrary to (Hyp. 3).

134
A CONSTRUCTIVE PROOF OF A THEOREM IN RELEVANCE LOGIC 429

Subcase (IV.2): ( n ( X ,Y ) . p )p z kis a member of W * , say W* is w’,( z ( X ,Y ) . p )p z k , W”


and Y* = n(U*, W’, W”, ( ( V . p )B)*,A*). By using 2.1, from (c) we obtain (d)
z ( ( n ( Xz,( U , JV’, W“, ( V . p ) B, A ) ) . p ) . p z kW
, ‘ , W”, B).p. On the other hand,
z ( B p , ( V . p )B, V . p ) . p is a n instance of (ASU*). By using (LPR), we obtain
s ( B p , ( V . p )B, U.V.p, U ) . p . We also have (b) n ( U , A , V ) . p (it is clear that either U
or V is non-empty); hence, by using 2.3, we obtain n(Bp, ( V . p )B, A , U ) . p . By using
(LPR) again, we obtain (e) ( z ( X ,a(W’, W”, B ) ) . p ) . z ( X n, ( U , W’, W”, ( V . p )B, A ) ) . p .
If k > 0, by using (LSU), we obtain
(f) (z(S,n(W’, W”, B ) ) . p )p z k . ( z ( Xa, ( U , W’, W”, ( V . p )B , A ) ) . p )p z k .Hence, by (f), (d)
and 2.3, we obtain z((a(X, n(W , W”, B)).p)p z k , W’, W”, B).p, contrary to (Hyp. 3).
Subcase (IV.3): ( z ( X ,Y ) . p )p Z k= A* and Y* = z ( U * , W*, ( ( V . p )B)*). By using
2.1, from (b) we obtain (d) z ( ( z ( Xz,( U , W , ( V . p )B)).p)p z k , U , V ) . p .
Sub-subcase (IV.3.1):X is non-empty. We can easily prove ( X .V . p ) . (V . p )B.X.B.
Hence, by using (c) and 2.3, we obtain (e) ( n ( X ,n ( U , V ) ) . p ) . n ( X n, ( U , W , ( V . p )B)).p.
If k > 0, we use (LSU) to obtain (f) ( z ( Xn,(U , V ) ) . p p) Z k . ( n ( Xn ,( U , W , ( V . p )B)).p)p Z k .
By using ( f ) , (d) and 2.3, we obtain n ( ( n ( X n, ( U , V ) ) . p )p z k , U , V ) . p , contrary to
(Hyp. 3 ) .
Sub-subcase (IV.3.2): X is empty. Suppose that U is non-empty. We can easily
prove ( U . V . p ) (. V . p )B.U.B and hence, by using (c) and 2.3, we obtain
n(U , I,’).p).n((V . p )B, U , W ) . p .If k > 0, we use (LSU) to obtain (g)
( n ( U ,I ‘ ) . p ) p Z k . ( ~ ( ( V . pU) ,BW, ) . p )p z k ; hence, by using (g), (d) and 2.3, we obtain
z((n(U , V ) . p )p Z k ,U , V).p, contrary t.0 (Hyp. 3).
Suppose that U is empty; obviously, V and W are non-empty. From (c) we
obtain ( V . p )B.( V . p ) ( W . p ) by (LPR); hence, ( V . p )(1V.p) ( W.p).(V . p )B( W.p) is
obtained by (LSU). Since W is non-empty, by using (ASU*) and (LPR), we obtain
( V . p )pp.(T“.p)( W . p ) ( W . p ) ; hence, ( V . p ) p p . z ( ( V . p )B, W).p by 2.3. If k > 0, by
using (LSU), we obtain (h) (( V . p )p p ) p z k . ( z (V( . p )B, W ) . p )p Z k .Hence, by using (h),
(d) and 2.3 we obtain ( V . p )p2k++2. V . p , contrary t o (Hyp. 3).
Subcase (IV.4): (z(X, Y ) . p )p z k = ( ( V . p )B)* and Y* = n ( U * , W*, A*).
Sub-subcase (IV.4.1):k > 0 ; hence, B = p and (8.23) = ( n ( X ,Y ) . p )p2”I. By using
W y,z ,( U , A ) ) . p )p Z k - * ,U , A ) . p , contrary t o (Hyp. 3).
2.1. from (b) we obtain (d) ( ( ~ ( ~
Sub-subcase (IV.4.2):k = 0 ; hence, n ( X , Y ) . p = ( ( V . p )B)* and Y* = n ( U * , W*,A*).
Let B = U , . p for some U , ; hence, n ( X , Y ) . p = n ( X , n ( U * , W*, A*)).p =
= ( ( V . p )( U , .PI)* = n ( ( V * . p ) ,U , ) . p .
(IV.4.2.1): (V*.p) is a member of X*, say X* is X’, V . p , X ” ; hence, UT =
= n ( U * , W*, A*, X ’ , X”).By 2.1, from (c) we obtain (e) z(n(X’,X ” , U , W).p, W).p,
contrary t o (Hyp. 3).
(IV.4.2.2): ( V * . p ) is a member of U * , say U* is U‘, V*.p, U“ and U r =
= z(X*,W*, A*, U‘, U”). By using 2.1, from (c) we obtain
(f) n(n(9,U ‘ , U“, A , W ) . p , W).p, contrary to (Hyp. 3).
(IVA.2.3): V*.p is a member of W*, say W* is W’, V*.p, kV“ and U : =
= n ( X ,U*, A*, W’, W ” ) . By using 2.1, from (c) we obhin
(g) n ( n ( S U, , W’, W”, A ) . p , W’, ( V . p ) ,W”).p. Now by using (b),(g) and 2.3, we obtain
n ( n ( Sn, ( U , W’,. W”, A ) ) . p ,U , W’, W”, A ) . p , contrary to (Hyp. 3).

135
430 A. KRON

(IV.4.2.1): V*.p = A* and UT = z(S*. U*. IV*). By using 2.1, from ( c ) we obtain
n ( z ( X , U , W ) . p , W ) . p , contrary t o (H5p. 3).
This completes tht, proof of the tlitorem.
As a consequence we have
C o r o l l a r y 2.6.1. T h e r e i s no theoreni of I, o j the forin A R .
P r o o f . Let in 2.6 S be empty.
C o r o l l a r y 2.6.2, There is 110 fileorem of L of the f o r m ABB.
Thus, no forinulas of either of the form A(AR).,4Bor of thr forin AABB are prov-
ahlc in L.
C o r o l l a r y 2.8.3. There i s tzo theorem of L of t2ither of t h forni
~ d.Al3B or of the f o r m
ABBA.

References
[l] ANDERSON, A. R., and N. D. BELNAP,JR.,Entailnlent, the Logic of Relevance and Kecessity,
Vol. I. Princeton University Press, Princeton 1975.
[2] CIIURCH,A,, Introduction t o Mathematical Logic. Princeton University Press, Princeton 1956.
[3] MARTIN,E. P., The P - W Problem. Ph. D. Thesis, Australian National University, Canberra
1978.
[4] MARTIN,E. P., and R. li. RIEYER,Solution to the P-LV problem. J. Symb. Logic 47 (1982),
869 - 887.
[ 5 ] SNIULLYAN, It. & I.,
First-Order Logic. Springer-Verlag, Heidelberg -Berlin-New To& 1968.

Aleksandar Kron (Eingegsngen a m 12. Okt,ober 1983)


l’ohorska 10
11070 Belgrade (Yugoslavia)

136
Steve giambrone Four Relevant Gentzen
AND
Systems
Aleksandar Kr?n

Abstract. This paper is a study of four Gentzen


subscripted systems GUR+,
GuTjr, GURW+ and GUTW+. [16] shows that the first three are equivalent to the se
milattice relevantlogics and and conjectures that is equiva?
UR+, UT+ URW+ GUTW+
lent to we prove
Here Cut Theorems for these systems, and then show that mo?
UTW+.
? so trivial as one normally we
dus ponens is admissible which is not expects. Finally,
decision procedures for the contractionless systems, and
give GUTW+ GURW+.

Introduction

This paper is a study of four subscripted Gentzen systems (the G


-systems), GUR+, OuT+, GURW+ and GUTW+. The claim made in the
title that they are relevant logics is based upon [1] and [2] where it is
shown that GUR+ is equivalent to UE+, the positive semilattice system
of relevant implication first studied in print in [3] and [4], and axiomati
zed in [5] and [6]. [2] also plausibly conjectures that the remaining G
-systems UT+,are URW+ and UTW+/respectively.1 Here we prove an
appropriate Cut Theorem for the Gr-systems and also show that modus
ponens is admissible in each. To these ends, the systems are formulated
with a placeholder as is usual with relevant Gentzen systems. We likewise
show that the placeholder can be done without.
As a matter of history, subscripted Gentzen systems (presumably
equivalent to those given here) were formulated in a different style in
[4]. But our systems were
developed in [1] based on [7] and [8], which
this paper should beseen as correcting and extending. (See [9] and [10].)
In what follows, we refer to GUTW+ and GURW+ as the GtF-systems,
and 'G-T-systems' and 'GH-Systems' are used in the obvious way.

1
Broader results have been obtained since the completion of this paper. [2] now
shows that and are likewise to their namesakes.
GUT+ GURW+ equivalent respective
Note that and are "contractionless" relevant
UTW+ URW+ logics. Axiomatically speak?
ing, they do not have Contraction (W), i.e., (A->.A->B)-+.A-*B as a theorem.

Semantically speaking, their model structures are "semilattices" with an identity


element but ? are
without idempotence of the operation i.e., they commutative
monoids. (But see [16].)
137
56 S. Oiambrone, A. Kron

The G-Systems

A subscripted formula (an sf) is an ordered pair, the first member of


which is a formula or the structural constant 1,2 and the second member
(the subscript) of which is a finite subset of the natural numbers ({1, 2, 3,
...,}). We use a,b,c,d,e,u,v,w,x,y,z with or without subscripts
(in the ordinary syntactic sense of 'subscript') and/or superscripts as
variables ranging over subscripts. In practice, we generally write 'AJ
for '<J., ay. Max (a) is the numerically
largest member of a, if a is not
empty; otherwise, is a (possibly
it is 0. A structure empty) sequence of

sfs, and W, X, Z,Z(with or without scripts) are variables ranging over


structures. Then a consecution or sequent is anything of the form XY Aa,
provided that 0 is not the subscript of any sf in X. Let us call X the
antecedent and Aa the
consequent of such a consecution.
We will speak of an occurrence of I, a formula, a subscript, an sf, a struc?
ture or a consecution in the normal way. We write 'ha' for CYA0\ In con?
text we often use oo(y,w) to denote the union of all subscripts occurring
in X(Y, W), and we use S with or without scripting as a variable ranging
over consecutions.
With these definitions in mind, the G-systems can be formulated as
follows :3

AXIOMS
YAa for any A and non-empty subscript a.
Aa formula
RULES
Structural Rules

X, Y, YYCC
cvX,Z,W,YYCc wv
X,W,Z, YYGC X,YYG0
X Y Gc provided (1) y ^ 0 and 0 does not
c c.
X Y Y G occur in Y; and (2) y

Logical Rules

X,AaYCc X,BaYCc XYAaXYBa


X, (A &B)a YCc X, (A &B)aYCc XY (A & B)a
X,AaYOc X,BaYOc XYAa X YBa
Vv
X, {AwB)a YCC XY (AvB)a X Y (AvB)a
Y, avby YCc ., , ,_ , ^ _
XYAa <B,
-?-LA-l?,?y??5- provided (1) b ^0;
X, Y, (A->B)b YC0 (2) for the GW_SJStems,
br\a ? 0;
(3) for the 6?T-systems,
max [a)> max(b).
2 The use of I discussed in the section below on I-ordering and vanishing-/.
3
The structural rules are named 138 the
after corresponding combinators.
Four relevant Gentzen systems 57

X,Aa Y (B, \jay


1-_^???a- provided (1) ?ma = 0;
X h (A-^B^ for the c?T-systems max (a)>
(2)
ma#(#).

_?
^X,<?,auft)hOc provided (1) a ^ 0 and 6 0;
Z'/^i?1*^ (2) &$a;
(3) for the 6?T-systems, max(a) > max(b) ;
(4) for the GW-systems, br\a =0.
?
Note that A can be the structural constant I in the statement of I h
but not in the other rules. Also, note that condition (4) guarantees condi?
tion (2) in J Y , and that when b^ a, the conclusion follows from the pre?
mise by KY.
A derivation of a sequent E is a finite tree,
branching upward, with
the usual properties. The notion of immediately above (below) is taken
as primitive. The notion of above (below) is its transitive closure. And
we say that J. is provable iff Y
A is derivable.
Where Der is a derivation and o is a particular occurrence of some
the subderivatiofb determined by derivation that o is the
sequent therein,
one would from Der all sequent occurrences except o and
get by deleting
those above it. A sequent occurrence o (immediately) precedes a sequent
occurrence o' in a derivation just in case o is (immediately) above o'; simi?
larly for (immediately) succeeds. And predecessor and successor are used
in the obvious way. Then a branch of a derivation is a sequence o19 ...9on
of sequent occurrences such that o1 has no predecessors and on has no
successors, and for all 1 > i > n, o{ immediately precedes oi+1. A branch
segment is a subsequence of branch.
The weight of a derivation say Der, is the length of a longest branchy
and the weight of a sequent occurrence o in Der is the weight of the sub
derivation determined by o. The conclusion (bottom node) of a derivation
that has weight n is said to be derivable with weight n.
Finally, the
height of a sequent occurrence, say o, in a derivation is
the length of the branch segment consisting of o and all sequent occurrences
below it.
?Tow th? following simple facts can be established by straighforward
inductions on weight.

Fact 1. Let E be a derivablein a Gsystem. Then


consecution
(1) The stmctural constant I never
in the occurs
consequent of Z.

(2) The subscript of the consequent of Z is equal to the union of the sub?
scripts occurring in the antecedent of T.
(3) The null set does not occur in the antecedent of S.

(Hereinafter, references to consecutions of a G-system are to ones satis?

fying the conditions of the above 139


fact.)
58 S. Giambrone, A. Kr?n

Fact 2. &Yis invertible, i.e., X,A&Ba, Y YGc is derivable iff X,


Aa,Ba, YYGC is.

Fact 3. Y ~> is invertible, i.e., it XY A-+Bx is derivable, so is X,


Y (B,x\j?y, for some a satisfying the appropriate proviso(s) of Y ->.
Aa

Proving Cut will require the


ability to rewrite subscripts in certain
ways. The following strong rewriting Lemma will help us prove the facts
that are needed. For the sake of convenience let us allow formula variables
to range over formulae and I except where I would obviously not be per?
mitted.
For ..., ..., let a = ..., and a'
any subscripts a1, an, a[, an {a?, an}
= a =Ua a&d a' =Ua'- ^n(l Ie* ^ an^ 7 r^nge over
{a[, ..., a'n}. Let
the various unions of the a{, i.e., over elements of {?/? ? a and ? ^ 0}.
And where ? is a^ ... for let ?'be ...
\ja?, example, a[\j va^; similarly
for y and y'. Note that in a degenerate case, ? might simply be a19 for
instance. Then:

Lemma 1. Eewriting Lemma,


any non-null subscripts For
ax, .., an,
and Z =
a[, ..., an any formulae A, Ax, ...9An9 if <A19 a{y, ..., (An, any
Y <A, ay is derivable in a particular G-system with weight n, then so is Z'
= Y for any ? and y,
(Al9 a[y, ..., (An, any <Jl, a'>-provided,
(1) if ?^y, then ?' ?y';
the GW-systems, = then = and
(2) for if ainaj 0; ainaj 0;
(3) for the GT-systems, if max(ai)'^ then max(ai)'^ (a'j).
max(aj),
Peoof. induction on weight of derivation of Z. The base step
By
is straightforward. We show only one case of the inductive step, leaving
the rest to the reader.
Gase 1. For ->
h, assume that Z =(A19 a?y, ..., <An9 any, (B?,
cy Y <D, aubuey is derivable, following from
&!>,..., <Bm, bmy,XA->G,
= YAa and ==
Zx <JLX, ?!>,..., <An,any Z2 <S1? &!>,..., <Bm,bmy,
a a \jb uc>. Then choose a[, ..., an, b[, ..., b'm, c' satisfying the
<0, uc> Y(D,
on inductive Z'
provisos. Obviously, Z[ is derivable hypothesis. Further,
follows from it and-S? =<#i> b'my, <G, a'vc'y Y <Z>, a'vb'vc'y
b[y,...,<?m,
~> Y. So it will suffice to show that Z'2 is derivable, for which it
by
will suffice to show that b19 ..., bm, (a\jc), b[, ...,b'm,(a'vc') satisfy
? ... \jan\jc
the But this is the case, since a\jc ax\j
proviso. clearly
and a'Kjc' = a[u... Kja'n\jcr, and since obviously max(a')^max(c')
and anc = 0 in the appropriate cases; so we are finished. (Note
that the case for J h is handled similarly. However, when V <=,a', use
KY rather than II-.)

This is quite a powerful rewriting lemma, which has the useful corol?
laries :

Corollary 1. For any structure X, for any formulae B and G, for

any subscript b such that bnx = 0 140and for any subscript d whatsoever.
Four relevant Gentzen systems 59

X, Bb Y <C, x\jby is derivable in GUR+(GUT+) only if X, Bd Y (G, xudy


is ?provided, in the case of GUT+, that for all c occurring in X, max(c)
^max(b) iff max(c)^max(d).
Corollary 2. For any structure X, for any formulae B and G, and
for any subscripts b and d such that br\x = 0 and dr\x = 0, X, Bb Y(G, xkj
\jby is derivable in a particular GW-system only if X,BdY (J3,XKjdy
is ?provided, of course, for GUTW+ that max(b)^max(c) iff max(d)
^max(c) for all c occurring in X.

Some particular little facts, all corollaries of the rewriting lemmas,


be useful ? the first for the Vanishing-/ Theorem to come
will and the
second for handling J Y in the proof of Out.
Fact 4.
Let a1,...,an, b be subscripts such that for all !<?<w,
either bnai = 0orb c ai. Then for any formulae A, Ax,..., An, (Ax, axy,...
in a G-system ? ...
..., (An, any Y (A, ay is derivable only if (Al9 ax by,
Y a? ? case
..., (An ,an ?by (A, by is provided in the of the GT-systems that
for all 1 < i < n, max(ai)> max(b).
... ,
FactS. Let ax, am, bl9 ..., bn, c (n^l and m>0) be sub
scripts satisfying the following conditions:

(1) forl^i,j^.n and 1 < h < m, max(b{) = max (bf) and max(ak)
^maxfa);
(2) in the case of the GW-systems, for l<i<w and l<j<m, b{r\
nc = = 0.
a^nc

(3) in the case of the GT-systems, max(bx) > max(c).

Then for all A19 ..., Am, Bx, ..., Bn, D, if <A1? axy, ..., (Am, am>, (Bx,
&!>,..., (Bn, bny Y <D, a Kjby is derivable in a particular G-sy stems, so
is (Ax, a?y, ..., (Am, amy, <fix, bx,ucy, ..., (Bn, bnucy Y <D, auiuc).

We now have sufficient control over subscripts to prove the desired


Cut Theorem. But first we will want to show some special qualities of
our structural constant and placeholder, I.

Complete /-Ordering and Vanishing-/

A few words are


concerning the structural
in order constant /. In pro?
ducing Gentzen
systems for relevant logics with the full power of truth
-functional 'and' and 'or' it is customary to insist that the antecedent
of a consecution be non-empty, and then use the sentential constant t,
as in [11] and [12], and/or its structural analogue /, as in [13], as a place?
holder. The reason for this is that when consecutions are allowed to be
empty on the left, a Cut Eule in its most general form is not admissible.
For such a rule leads from the derivable Yp-^p and q Yp-?p to
(p->p),
the gross modal Y 141 or without as the case
fallacy q p->p (with subscripts,
60 S. Giambrone, A. Kr?n

may be). Indeed, being empty on the left causes problems for a proof
of a more restricted Cut Theorem for JR-systems. (See the use of the Va?
nishing-/ Theorem in the proof of Cut below.)
However, in the contractionless (i.e., W-) systems, the use of t or /
causes problems in showing that modus ponens is admissible, and hence
in showing equivalence to a Hubert-style axiomatisation. This led to
proving a Vanishing-^ Theorem in [1] and [14], which becomes a Vanish?
ing-/ Theorem in our present conext.
But, in subscripted Gentzen systems the rule IY introduces a new prob?
lem. The difficulty is this: it is abundantly clear from an analysis of tY
in [12] and [14] that the appropriate occurrence of (A9 a\jby should be
classified as a parametric ancestor of Aa ?but Aa and (A,a\jby are

different sfs, which causes a problem in the proof of cut.4 It was shown
in [1] that thisproblem could be overcome in GUT+ and GUR+ on the
strength of the Bewriting Lemma.
However, that maneuver could not
be straightforwardly duplicated in the OFF-systems. So here we show
that the 6?-systems can be completely /-ordered, which solves the pro?
blem quite neatly for all of the systems.
So let us say that a derivation is I-ordered just in case no application of
IY is preceded by an application of any other rule. So the premise of an
application of IY in an /-ordered derivation is either an axiom or itself
the conclusion of an application of IY. (For the sake of simplicity, we
hereinafter identify sequents which differ only in the order of occurrence
of the constituents of their antecedents.5)

Fact 6. Z is derivable just in case it has an I-ordered derivation.

Proof. Note that a derivation is /-ordered iff each of its sub-deri?


vations is. Then note that a derivation
ending of Z
application with an
of IY immediately preceded by some other rule, Bu, can be transformed
into a derivation of Z ending with one or more applications of Bu imme?

diately preceded by one or more applications of IY to one or more of the

original premises of Bu. So one can transform an I-unordered derivation


into an /-ordered derivation by more or less "pushing application of IY
to the top."6

Now let us say that a derivation is completely /-ordered just in case


it is /-ordered and satisfies the following condition : for each instance of

IY, either the premise is an axiom (top node) or else has as its appropriate
instance of {A, auby an instance of (I, aui>). (Speaking loosely, comple?
teness demands that /'s be "split off from" a formula only in axioms.)

4 Cf. Conditon C2 in [13].


(shaps-alilceness of parameters)
5 That are now firesets or multisets. So in essence there
is, structures (See [15].)
is no a rule OK
longer
6 this is not correct. In pushing an application of I h up through
Strictly speaking,
some instances of -> \-, one actually
142
exchanges 11- for K\-.
Four relevant Gentzen systems 61

And let us call a consecution occurrence in a derivation an offender just


in case it is the conclusion of an instance of IY which violates the comple?
teness stipulation given above.

Lemma 2. Complete /-Ordering.


I-ordered derivation Z has an
iff it has a I-ordered derivation.
completely (Hence., by the previous fact,
Z is derivable iff it has a completely I-ordered derivation.)

Proof. Bight to left is simple. Left to right proceeds by induction


on the minimum weight of an offender. The base step is vacuous. By
/-ordering and minimality of weight, the inductive step has but two cases
which are left to the reader.

The following useful fact can now be shown by cases using Lemma 2 :

Fact If there is a derivation


7. of Z ending with an instance of some
other than IY, then there is a completely I-ordered derivation
rule, Bu, of Z
ending with the same instance of Bu as before.

Now we turn to showing that IY is unnecessary from the point of view


of provable formulae.

Lemma 3. Vanishing-/ Lemma. Let X and Y be arbitrary structures


such that, for some subscript y,
(1) I y is the only sf occurring in Y;
(2) for each a occurring in X, either = 0 or y a
subscript any a;
(3) for the GT-systems, if X is non-empty, then max (a)> max(y), for
each a in X.
If X, Y YGc is derivable, so is X~ Y (fi, c? yy, where X~. is the result of
replacing a by a ?y in each a in X.
X, for
Proof.By strong induction on weight of derivation. (Note that if
Y is empty, the lemma holds by Fact 4.) The base is vacuous. So
step
choose an arbitrary j > 1 and assume:

Inductive Hypothesis (H). For any X', Y',y' the conditions


satisfying
of the lemma and for any if X', Y' YGc is derivable with of
Gc, weight
j'<j, then X'~ Y (fi,c ? y'y is derivable with some weight ft< j\
Next choose and assume
arbitrary X, Y,y arbitrary Gc, and
Conditional Hypotheses =
(C). Z X, Y, YGc is derivable with weight j,
and X, Y and the conditions of the lemma.
y satisfy (Let DerG be the
derivation of Z.)

It will then suffice to show that Z~ = X~ ?


Y(fi,c yy is derivable
with some The inductive
weight fc<j. step proceeds by cases.
Case 1. DerG ends with an application of IY, say
=
Zx W,(A,aKjby,ZYGc
Z = 143
W,Aa,Ib,ZYGc
62 8. Giambrone, A. Kr?n

are three subcases: = = =


There (1) X W, Aa and Y Ib, Z (b y)-, (2) X
= and Y = and for the X =
W,Aa,Ib Z; (3) 6?Z?-systems only, W,Ib
and Y = J.a, J? (JL is / and a = #).
Oase 1.1. X = W,A and Y =Ib,Z (b =y and 6 $ a). Note that in
the a a or a ? ?
accordance with (2) of lemma, y yr\a= 0. So y= (a\jy)
? =
y a, and 2/c (a\jy). Further, the subscripts in 27x satisfy condi?
tion (3) for the lemma when applicable. So apply (H) to Zx to finish the
case.

Gases 1.2. and 1.3. These are relatively straightforward and left
? ?
to the reader. Note only for 1.2 that K Ymust be used when (b y) ^(a y).
Gase 2. DerG ends with an of -> Y :
application

Zx
=
WX,W2 Y (A,wx\Jw2yZx,Z2,(B, buwxKjw2y Y Gc = Z2
27= Wx,Zx(A-+B)b,W2,Z2YGc

with X = and Y = thus = Note


Wx,Zx,(A->B)b 'W2,Z2, y iv2 =z2.
that either wxny = 0 or y and = 0 or c b. It is clear by
wx, bny y
=
(C) that 27x satisfies the conditions of the lemma. So by (H), Zx Wf
I-<J., wx ? w2y is derivable with appropriate weight. It is equally clear
the required and that = 0
by (C) that Z2 satisfies conditions, (wxnw2
or w2 cz wx) and = 0 or w2 c= b). So again by (H), Z2 =
(br\w2 Zx , <?f
? ? Y c? is with
(b w2) \j(wx w2)y <<7, w2y derivable appropriate weight.
Then note that in the case for GUT+, max(w1uw2) > max(b) by proviso
? condition ?
(3) on -> h; but by (C) (3), in particular max(b) > max(w2).
max (b). Also note that for the GB-systems =
Whence, max(wx)^ (Wj?w2)r\b
= = -> h. So Z~ follows
0, since (wxuw2)nb 0 by proviso (2) of from Zx
?
and Z2 by -> Y, and hence is derivable with appropriate weight which
finishes the case.
Gase 3. If Der G ends with an of any other rule, the ar?
application
gument is straighforward and left to the reader.

Now let us say that a sequent is I-free just in case / does not occur in

it, and that a derivation is /-free just in case each sequent occurring in
it is /-free. The Vanishing-/ Lemma then makes short work of

Theorem 1. Vanishing-/ Theorem. Ia YAa is derivable iff there is


an I-free derivation of YA.

Proof. Left to right is immediate by the Vanishing-/ Lemma. Bight


to left is by induction on the weight of derivation of YA. If A is, say,
B-+G, then Bb Y Gb is derivable for some b, by Fact 3. But by Corollary
1 or 2, as the case maybe, <_B,au{max(a) + l}y Y <fi, au {max (a)+ l}y
is also derivable. Whence by IY,Ia, <B, {max(a) + l}yY (fi, au {max(a)+
is derivable as required. The
+ 1}> is derivable. So by h->, Ia Y (B-+G)a
other cases are straightforward on inductive hypothesis.
144
Four relevant Gentzen systems 63

Cut and Modus Ponens

A Cut Theorem in the style of Theorem 4.1 of [8] CGuld now be shown.
But we prefer one along the lines of [11]. So we begin with an analysis
of the rules.
First, an
inference is an ordered pair consisting of a finite (non-null)
? the ? as left
sequence of consecutions premises member and a con?
? the conclusion ? as A rule is a set of infer?
secution right member.
ences, and its members are called instances thereof. A calculus or system
a ? the axioms ? a set
is set of consecutions together with of rules.
Let o be a sf occurrence in a premise of an inference Inf. The imme?
diate descendant of o is the sf occurrence in the conclusion of Inf which
"matches" o in the sense which is obvious from the statement of the
rules of which Inf is an
instance.7 An sf occurrence in the premise of an
inference is the immediate ancestor of its immediate descendant. This

terminology is taken over in the obvious way to derivations. The relation


of ancestor is the transitive closure of immediate ancestor.
An sf occurrence in the conclusion of an
instance inference which is an
of a logical rule is a principal constituent thereof just in case it is the "newly
introduced" sf occurrence (Ib in the case of / Y). The immediate ancestor(s)
of a principal constituent are subaltern(s). All other sf occurrences in a
premise or in the conclusion of an inference are parametric constituents,
either premise parameters or conclusion parameters, as the case may be.
Note that all immediate ancestors of a conclusion parameter have the
same subscript that it has, except the obvious occurrence of <J., auby
in an instance of IY.
Now we shall say that a rule Bu is closed under parametric substi?
tution if it satisfies
following the conditions. Let Inf be an arbitrary
instance of Bu. 31 be a set containing
Let some conclusion parameters
in Inf and all of their immediate ancestors, such that each conclusion
parameter in 31 and each of their immediate ancestors has the same sub?
script, say x. And for an arbitrary structure X, the union of the subscripts
occurring in which is of course .x, let Iw/[X/3?] be the result of substi?
tuting X (in the premise(s) and conclusion of Inf) for each member of 3t.
Then Iw/[X/3?] is an instance of Bu.
This definition is an adaptation of the analogous definition of left
regularity in [11]. And, as should be expected, the following lemma
can be verified by inspection of the rules.

Lemma 3. Closure Under Parametric Substitution. The rules


of the G-systems are closed under parametric substitution.

7 or else
To be totally precise we should stop using multisets for structures give
a far more of this notion. But to do either would
complex explanation needlessly
what is
complicate quite straightforward. 145
64 S. Giambrone, A. Kr?n

Next we say that a Bule Bu is antecedent expandable if it satisfies the


Assume that for 1 < i < n, Zi isXi Y <fii, e y and = X Y Gc,
following: Zn+1
and that Y, Bc, Z YDd is a sequent. Then suppose that

<d %^A
is an instance of Bu, with the displayed occurrence of Gc in para?
Zn+1
metric. Then

*? "'**
(2)
Al + 1
YDd and for 1 < i < w 2^
is an instance.5%, where 2'n+1 is Y, X,Z
of
on whether or not
is either Y ,Xi,ZY Da or 27^ depending <Gi,ciy is
an immediate ancestor of Gc in (1). This definition is the appropriate ana

logoue of right regularity in [11].

Lemma 3. Antecedent Expandability. The rules of the G-systems


are antecedent expandable.
Now for the needed notion of rank in a derivation. Let Der be a deri?
vation of Z. Unless Z
top node of a branch
is the of Der, let Inf be the
inference is the
of which Z and let 31 be a set of consti?
(in Der) conclusion,
tuents of Z. Then define the rank of 31 in Der as follows. If 31 is empty,
its rank in Der is 0. If 31 is non-empty but either contains no parameters
this 31 is in fact a singleton!), or Z is a top node, then the rank
(in case,
of 31 in Der is 1. Otherwise let inf be

(i)
E?n^I?
with the subderivation determined by Zi for each 1< i< n, and
Derx
let a? be the set containing all and only immediate ancestors in Zi of
members of a. (Note that if all members of a were weakened in, then at
= Let Tcbe the maximum rank of any a{ in its corresponding Deri.
0.)
Then the rank in Der of a is k+1. And following [12] we talk of the con?
as the rank of a in Der when a is the contain?
sequent rank of Der singleton
ing the consequent of the conclusion of Der.

Then, where a is a set of sf occurrences in Y(Z), let Y[X?a~\ (ZlX/aJ)


be the result of X in Y(Z) for each member of a. (When a
substituting
is a singleton, = Z[X/a].)
say {o} we let Z[X[o]
We are finally for the Cut Theorem which can be stated as follows :
ready

Theorem 2. Cut Theorem. Let a be a set of occurrences of any sf


in a structure and Y Y Gc are derivable, then so is Y[X/a]
Ax Y. If XY Ax

8 Recall that no consecutions of the form X h lx are derivable, and note that
x 146 a is not empty.
=?0 in the statement of the theorem when
Four relevant. Gentzen, systems 65

Proof. The proof proceeds as in [11] by a double induction. So


choose arbitrary m > 0, j and k such that j + k > 0, and assume

Outer Inductive Hypothesis (OH). For all X, Y, Gc, Ax and a (a set of


occurrences of Ax in Y), if the complexity of A is less than m, then if
X YAx and Y YGc are derivable, so is Y\X?a\ YGc; and
Inner Inductive
Hypothesis (IH). For all X, Y,GC,AX of complexity
m, and a (a set of occurrences of Ax in Y), if X YAx is (completely /-order?
ed) derivable with consequent rank j' and there is a (completely /-ordered)
derivation of Y YGc in which the rank of a is ? and j' + k' <j + k, then
Y\_X?a] Y Gc is derivable.

Next choose arbitrary Ax with A of complexity m and arbitrary X,


Y, G and a (a set of occurrences of JL^ in Y), and assume

Conditional Hypothesis (CH). iD^f is a completely /-ordered derivation


of X YAx with consequent rank j and JSDer is a completely /-ordered
derivation of Y YGc in which the rank of a is k.

It will suffice to show that Y\_X?a\ YGc is derivable. For the sake of
notational let i-premise = X YAx, = Y Y Gc and
convenience, JS-premise
Conclusion = Y Gc.
Y[X/a]
We now proceed by cases.
Gase 1. k = a is empty. is never
0, whence (Note that j 0.) Then
Conclusion is i?-preniise and we are finished by CH.
Gase 2. k = 1. There are three subcases.
Gase 2.1. i?-premise is an axiom. Then Conclusion is ?-premise,
and we are finished by CH.
Gase 2.2. i2-premise follows from a sequent, call it Z, by KY. Then
each member of a was weakened in and is parametric, (but has no im?
mediate ancestor;) whence by closure under parametric substitution,
Conclusion follows from Z by KY.
Gase 2.3. JS-premise follows by a logical rule on the left, call it Bu.
Then a is a singleton containing a principal constituent of JS-premise.
There are two subcase^.
Gase 2.3.1. 1. Then = is either an axiom or follows
j
Z-premise
by a logical rule on the right Bu. In the first instance, Con?
"matching"
clusion is i?-premise, whence it is derivable by CH. So assume Z-premise
follows by a logical rule, call it Bu'. There are three subcases one for each
pair of matching logical rules.
The subcases for v and & are straightforward and left to the reader.
So suppose LDer ends thus:

27!= X,Aa Y<B,x\jay


= X YA->BX
i-premise
147
5? Studia L?gica 1/87
66 S. Giambrone, A. Kr?n

and that BDer ends thus:

=
272= ZYAZ Y,(B,zvxyYGc Z3
22-premise= Z, Y, A->BX Y Gc

We must show that Conclusion =


Z, Y, X Y Gc is derivable.
Now, in the GT-systems, note that by provisos (2) on h -> and (3)
on -> h, max (a) ^ max(x) and max(z) > max(x). max (a) >
Whence,
^max(b) and max(z) > max(b), for any b occurring in X. (Note that
z ^ 0.) And for the GPT-systems, by proviso (1) of Y -> and proviso (2)
of -> F, ?m# = 0n? = 0. So by Corollary 1 or 2, as the case may be,.
=
Z[ X, Az Y(B, XKjzy is derivable, since Zx is. Then, applying OH to
=
Z[ and Z2, it follows that Z[' X,Z Y(B,x\jzy is derivable. So, now
applying OH again to Z" and Zs, it follows that conclusion is derivable.
We proceed in a similar fashion in the GR-systems when z ^ 0. So
assume z = 0. Then by the Vanishing-/ Theorem and we have that
Z2,
(3) Ia YAa is derivable. Then by applying (OH) to (3) and Zx, we get
(4) X,Ia Y(B,x\j?y. From this, by the Vanishing-/ Lemma, we get (5)
X YBx. Now, since z = 0 on assumption, x = #u#. So by (OH), (5) and
273, we have X, Y Y Gc as required.
Case 2.3.2. j > 1, whence i-premise follows by a rule, call it Buf
and its consequent is parametric. First apply IH to the appropriate pre?
mise of Bu and B-premise. (If Bu is v I-, apply IH to each of its premises,
individually with J2-premise). Then use antecedent expandability (Lemma
3) to guarantee Conclusion.
Case 3. k>l. Then suppose BDer ends with the following instance
of some rule Bu other than / Y :

= ' K '
JB-premise YYC
and let Der{ be the subderivation determined by Zi. Then let a' be the
set of all conclusionparameters in a, and for 1 < i < n let at be the set
containing all immediate ancestors in Zi of members of a'. Then note
that for each such %, the rank of a{ in Deri is less than k, whence j+ (rank
of at) is less than j + k. So by IH we see that Zi [X/aJ is derivable, 1< i
< n. Whence by closure under parametric substitution, Z = Y\_X\a\ YG
is derivable Bu. the derivation of Z If a' = we are fini?
by (Call Der'.) a,
are of the form W,
shed. Otherwise Z and Conclusion Ax, Z Y Cc and
W, X,ZYCC, respectively, with the displayed occurrence of Ax being
principal in the application of Bu ending Der', and a ?a' is a singleton

containing the displayed occurrence (in Z) of Ax. Further, by Fact 7,


we may assume that Der' is completely /-ordered. Whence we advert
to Case 2.3 to complete the proof.

Finally, assume that BDer ends with an instance of Zh By complete


/-ordering, a is a singleton. If the parametric ancestor of Ax (in JB-premise)
148
Four relevant Gentzen systems 67

is Ax, then apply (IH) to X-premise and the premise of E-premise from
the result of which, by closure under parametric substitution, Conclusion
will follow by IY. Otherwise, BDer ends as follows:

(A,x\j?y Y (A,x\jay
K^a ?<A,xvay
Without loss of generality, assume that X-premise is <J51? bxy,..., <Bm,
"
- ?(fini cn} A>
*m>><fixi c?i with m > 0, n ^ 1, max(b{) < max(x) and
= So
Conclusion is ...
max(Cj) max(x). <JBX, 6X>,..., <J5m, 6m>, <(71?O,
..., <?ft,0n>, /a I- <J.,#ua>. Note that by the appropriate provisos of
Zh: for the GIF-systems #na = 0, whence = = and for
an^. ancj 0;
the GT-systems, max(x) ^ max (a), whence max max (a). So by
(c?)^
(CH) and Fact 5, <jBx*x>, ..., <Bm, 6m>, <0X, cxuay,..., (Cn, cnuay Y (A,
x\jay is derivable. Conclusion follows therefrom by n applications of IY
followed by n-1 applications of W Y. So we are finished.

It is worth noting that IY was


needed not
in this proof for the GT-sys?
tems. This is because
proviso (3) on -? Y that no premise of
guarantees
an instance of that rule will have an empty antecedent.
The reader can now use the invertibility of Y ->, the Vanishing-/ The?
orem and the Cut Theorem to show

Theorem 2. Modus ponens is admissible in the G-systems.

Before moving on, let us note that by inspection of the rules, / can
be dropped from the vocabulary without loss of and hence
provability,
without loss of the admissibility of modus ponens. So the are
G-systems
henceforth taken to be formulated without /.

G TF-Decidability.
The essence of Gentzen's original for decidability
argument in [17]
lies in getting control over the length
(complexity), and hence over the
number, of structures that in a proof
can occur search tree for a given
formula. And we will do likewise. However, as [13] puts it, relevant
logics are "hybrid", i.e., they contain not only relevant (intensional)
connectives, in this case ->, but also Boolean (extensional) connectives,
in this case & and v. The reflection of this fact in Gentzen systems makes
our task more complicated.
In non-subscripted Gentzen systems, the various intension?l and ex?
tensional relations among formula constituents of the antecedent or
consequent of a consecution are rather
straightforwardly represented with
the use of two kinds of sequences, as in [11], or of two kinds of structural
connectives, as in [13], or of a combination of sequences and structural
connectives, as in [14]. But in subscripted Gentzen these inten?
systems,
sion?l and extensional relations are only rather reflected in
obscurely
149
68 S. Giambrone, A. Kr?n

the inter-relationships of the subscripts. But with respect to showing


decidability, the problem is conceptually the same as in [14]: we must

get control over both the extensional and intensional complexity of the
structures that can occur in a derivation of a given consecution.
Let us now turn to the technical solution of the decision problem for
the GTF-systems and put further discussion in context. The overall stra?
tegy of th? proof is to define a complete and procedure effective for build?
ing proof search trees and then show that such trees are finite via K?nig's
Lemma. So, let us specify as follows a proof search procedure which pro?
duces the GRW+(GTW+) proof search tree of Z for any sequent Z:

(1) Enter Z as the bottom node;


(2) above each consecution Z'
occurring with height k (in the tree so
far constructed) (a) enter nothing, if 27 is an axiom, (b) otherwise enter
(in some assumed order) all consecutions Z" such that Z" is a premise
of some GFF-inference of which Z' is the conclusion and such that the
tree remains irredundant.
For once, we actually have to worry (a little) about the Finite Fork

Property. So let us dispense with that worry first off.


The problem is that with the rule h -> as it stands, the search for
a proof of YA-^B, for instance, is immediately infinite. For, on the face
of it, we must check Aa YBa (for every a) as a possible premise. But this
is simple to remedy. Noting Corollaries 1 and 2, we see that proviso (1)
of r- -> can be changed to insist that a = {max(x) +1}. (This is true of
all of the G-systems.) So we take the GIF-systems to be so formulated.
And let us now say that {max(x) + l} is discharched by an application of
h ->.

To prove the Finite Branch


Property, we will take the straightforward
approach of showing that at most finitely many distinct consecutions
can occur but finitely many times on a branch of a derivation. Noting
the we could next reduce structures more
difficulty presented by WY,
or less as in [17], thus restricting the number of times that an sf can occur
in a consecution. But given the simplicity of our structures, why bother?
We have already taken them to be multisets thus doing away with Oh
Given KY and WY, nothing stands in the way of going to sets simpliciter,
thus doing away with WY. (Again, this is true of all of the G-systems.)
and,
So we take the GIF-systems to be thus formulated. Structural variables
now range over sets of sfs. And for the sake of notational convenience,
let us identify singletons with their sole member and use c,' for 'u' where
(Thus, for instance, X, Aa Y Cc is Xu{Aa} Y {Gc}.) Imme?
appropriate.
diately we have:

Lemma 4. Beduction Lemma. sf occurs more


No than twice, once
in the antecedent and once in the consequent, in any consecution, derivable
or otherwise.
150
Four relevant Gentzen systems 69

If the elements of our structures were formulae^ the Subformula

Property, which all of the G-systems obviously possess, along with the
Beduction Lemma would guarantee the Finite Branch Property. But,
alas, our elements are sfs. So, although the Beduction Lemma puts a de?

finite, finite upper bound on the number of times a formula can occur
with the same subscript, it remains to be shown that there is a finite
upper bound on the number of times a formula can occur with different
subscripts.
It will be helpful at this point to think again in terms of the extensional
and intensional relations among formula occurrences in a consecution.
If we knew of some way of uniformly translating consecutions into for?
mulae (with fusion, i.e., relevant conjunction, in the language) the no?
tions of intensional and extensional complexity would be straightforward.
(Cf. [14].) But the occurrence in the antecedent of formulae with unequal
but non-disjoint subscripts renders such an undertaking highly proble?
matic ? even cf a given
in the context derivation.
However, Fact 2 clearly indicates that two formulae with the same
subscript can to be truth-functionally
be taken conjoined, i.e., extensio
nally related. So
Subformula the Property and the Beduction Lemma
give us adequate control over the extensional of the conse?
complexity
cutions that can occur in the proof search tree of a given consecution.
Further, I- -> (and ~> Y in the GIF systems) obviously indicate that
formulae with disjoint subscripts are to be taken as being fused. Hence,
we can at least calculate the minimum intentional of any
complexity
formula into which a consecution could be plausibly translated. Happily,
this will suffice for control over subscripts.
So, first define the degree of Ax (deg(Ax)) as the number of ->'s occurring
in A. Then let us say that Y is an intensional barometer of X if (1) Y ^ X
and (2) the subscripts occurring in Y are pairwise disjoint. And for any
structure Y satisfying (2), define the indicator of Y as the sum
(ind(Y))
of the degrees of its elements. And for any structure X, define as
deg(X),
the maximum of the indicators of its intensional barometers. Obviously,
if X is empty, the degree of X is 0. Finally, for any sequent let deg(Z)
Z,
be the sum of the degrees of its antecedent and consequent.

Lemma 5. DegreeLemma. The rules of the GS-systems are degree


That the the
preserving. is, degree of conclusion of an instance of any rule
is at least as great as that of any of its
premises.
Proof. By cases, only two of which are worthy of note.

Case 1. Let the following be an instance of Y ->:


arbitrary

Zx~X,AaY<B,XKjgy
Z = XY (A->B)X
Let Y be an intensional barometer of X with maximum indicator. Then
151
70 S. Giambrone, A. Kr?n

deg(Z)
=
ind(Y) + deg(A) + deg(B)+l. But since anx = 0, deg(Zx)
= So is greater than deg(Zx). (Y?> is
ind(Y) + deg(A)+deg(B). deg(Z)
degree increasing !)
Oas6 2. Let the be an instance of -> h:9
following arbitrary

= =
27x IF4^(B,an^)K)c 272
27= Z?^(l->B)?hOc

First note: (1) Every intensional barometer of X or of Z is an intensional


= 0
barometer of the antecedent of 27; and (2) anx by proviso (2) of
->h

To show deg(Z)^ deg(Zx), let Y be an intensional barometer of X


with maximum indicator. Then deg(Zx) = ind(Y)+deg(Ax). But by
(1) and (2), Y, A~>Ba is an intensional barometer of the antecedent
of Z, and its indicator is ind(Y) + deg(Ax) + deg(Ba) + l, which suffices.
Then let W be an intensional barometer of the antecedent of Z2 with
maximum indicator. Then deg(Z2) = ind(W) + deg(Cc). Now note that
or ? is an intensional barometer of the
either W (W {B, a\jx})\j{A-^Ba}
antecedent of Z. In either case, deg(Z)^ deg(Z2), to finish the case.

So we now have confirmed control over the extensional and the in?
tensional complexity of consecutions proof that can occur in the search
tree a given
for consecution. It remains to be shown that this control
guarantees the Finite Branch Property.
Given the Subformula Property, the Beduction Lemma and the
definition of the proof search procedure, it is clear that a branch ? of
a proof search tree for Z is infinite iff infinitely many distinct subscripts
occur, since only a finite number of distinct sequences can be built, using
a finite number of subscripts and the subformulae of formulae oc?
only
in Z. And by inspection of the rules there can be infinitely many
curring
distinct in iff there are instances of Y -^ in
subscripts ? infinitely many
?. But recalling that h -> is degree increasing, it follows from the Degree
Lemma that if there were infinitely many instances of Y -> in ?, then
? is absurd.
deg(Z) would be infinite which patently
our proof search trees have both the Finite Fork and Finite Branch
So,
Property, whence we conclude this paper with:

Theorem 3. The GW-systems are decidable.10

Bemark. We should note that Cut Theorems for GUR+, GUT+


and Gu RW+ are immediate from the completeness proofs of [4] and [2].

9 -> in GUT+ and is not


Note that h GUR+ degree preserving.
10 I am for criticism and from the members of the
grateful helpful suggestions

Logic Group of the Department of Philosophy, RSSS, Australian National University,


in this thanks are due to Dr. Robert K. Meyer, Steve Giambrone.
regard, special
152
Four relevant Gentzen systems 71

References

[1] S. Giambrone, Gentzen Systems and Decision Procedures for Relevant


Logics, Australian National University Doctoral Dissertation, 1983.
{2] S. Giambrone and A. Urquhart, Proof theories for semilattice logics, Zeitshrift
f?r Mathematische Logik und Grundlagen der Mathematik 33 (1987).
;[3] A. Urquhart, Semantics for relevant logics, The Journal of Symbolic Logic
37 (1972), pp. 159-69.
14]. The Semantics of Entaliment, University of Pittsburgh Doctoral Di?
ssertation, Ann Arbor (University Microfilms) 1973.
f 5] K. Pine, Completeness for the semi-lattice semantics (Abstract), The Journal of
Symbolic Logic 41 (1976), p. 560.
?[6]G. Charlwood, An axiomatic version of positive semilattice relevance Logic,
The Journal of Symbolic Logic 46 (1981), pp. 231-39.
[7] A. Kron, Decision procedures for two positive relevance Logics, Reports on
Mathematical Logic 10 (1978), pp. 61-78.
?8]. Gentzen formulations of two positive relevance Logics, Studia L?gica
39 (1980), pp. 381-403.
![9] S. Giambrone, A critique of 'Decision Procedures for Two Positive Eelevanee

Logics9, Reports on Mathematical Logics 19.


J10]. On purported Gentzen formulations of two positive relevant Logics, Studia
L?gica 44.

[11] J. M. Dunn, Consecution formulation of positive E with cotenability and t, in A.


R. Anderson and Nuel D. Belnap, Jr., Entailment: The Logic of Relevance
and Necessity, Vol. 1, Princeton (Princeton University Press) 1975, pp. 381-91.
[12] N. D. Belnap, Jr., A. Gupta and J. M. Dunn, A consecution calculus for posi?
tive relevant implication with necessity, Journal of Philosophical Logic
9 (1980), pp. 343-62.
[13] N. D. Belnap, Jr., Display Logic, Journal of Philosophical Logic 11
(1982), pp. 375-418.
(14] S. Giambrone, and TW+ are decidable, Journal of Philosophical Logic
RW+
14 (1985), pp. 235-254.
[15] R. K. Meyer and M. McRobbie, Multisets and relevant implication, Austra~
lasian Journal of Philosophy 60 (1982), pp. 107-39 and 265-81.
C16] S. Giambrone, R. K. Meyer and A. Urquhart, A contractionless semilattice
semantics, forthcoming, in Journal of Symbolic Logic.
?17] G. Gentzen, Investigations into Logical deductions, in M. Szabo (ed.), The
Collected Papers of Gerhard Gentzen, Amsterdam (North Holland) 1969.

The University op Southwestern Belgrade University


Louisiana, Lafayette, U.S.A. Belgrade, Yugoslavia

Beceived August 12, 1985

?tudia 1
Logiea XLVI, 153
EEEAT?M

For Eead

(Czd A)
and
(?=><j) V(0 s A) V(J. =3 (7)
(0 = VPA) (C =.VPJ.) V(C =>VP?) V(3P? 3 0)"
C A C=> A
Tauketi's Takeuti's
If ~\aeT* If a $ T*
*\?\ # 1\V\
I I

Studia L?gica 46 :1 (1987)

154
TEMPORAL MODALITIES AND MODAL TENSE OPERATORS

ALEKSANDAR KRON

1. INTRODUCTION

Let A be a proposition and let us consider the following phrases:


(1) It is always necessary that Ai
(2) It is always possible that Ai
(3) It is sometimes necessary that Ai
(4) It is sometimes possible that A.
Our first aim is to construct a model theory for (1)-(4).
Let us now consider

(1') It is necessary that always Ai


(2') It is possible that always Ai
(3') It is necessary that sometimes Ai
(4') It is possible that sometimes A.
Our second aim is to construct a model theory for (1')-(4').
Let us write aD, a<>, SO, and s<> for temporal modalities always necessary, always
possible, sometimes necessary, and sometimes possible. Also, let us write oa, <>a, os, and
<>s for modal tense operators necessary that always, possible that always, necessary that
sometimes, and possible that sometimes, respectively. How temporal modalities and
modal tense operators are related? In particular, is (i) equivalent to (i'), 1 ~ i ~ 4?
Our third aim is to answer this question.

2. THE LANGUAGES

Let £1 ,£2 , and £3 be the languages of temporal modalities, modal tense oper-
ators, and of both of them, respectively; and let £ E {£1' £2, Cd. £ is defined on the
language of the classical propositional calculus PC. Let A, B, C, ... range over the set
of formulas of PC. The primitive symbols of £ are: =?, 1\, V,"" (the connectives) and
parentheses. Moreover, in £1 we have aD and so , in £2 we have oa and <>a, and in £3
we have all of them.
The sets Jiat, 12at, and 13at of atomic formulas of £1, £2, and £3, respectively,
are the smallest sets such that if A is a formula of PC, then (i) aDA, soA E Jiat,
(ii) oaA, <>aA E :Fiat, and (iii) 13at = Jiat U :Pi,at. Let Fat E {Jiat,:Pi,at,13at}.
The sets of formulas of £1, £2, and £3 are denoted by Ji, 12, and 13, respec-
tively. Let F E {Ji,12,13}i then F is the smallest set such that (iv) Fat c F
and (v) if U, V E F, then (U =? V), (U 1\ V), (U V V), ...,U E F. We assume that
175

A. Pavkovic (ed.), Contemporary Yugoslav Philosophy: The Analytic Approach, 175-183.


© 1988 by Kluwer Academic Publishers.

155
176

L,M,N, ... j P,Q,R, ... j X,Y,Z, ... rangeover:Fi,:Fi" andJii, respectively. Also, we
assume that =}, /\, and {=} are definable in terms of V and -, , as in PC. Furthermore,
let us define:
s <> A o!;=}v f -,ao-,A

a <> A {=}v f -,so-,A

<>sA o!;=}Vf -,oa-,A

osA {=}v f -, <> a-,A.

3. MODELS FOR £1

By T we denote a non-empty set of moments (intervals) of time, by W a non-empty


set of possible worlds, and by P(W) the power set of W. Let f be a function from T
to P(W) such that for any mET f(m) =I- 0. If A is a variable, then either w F A or
w ~ Aj furthermore, w F -,A iff w ~ A and w ~ A V B iff w ~ A and w ~ B, for
any w E W.
The quadruple M f =< T, W, f, F> is called a temporal modal structure (tms). For
a tms Mf and a formula of Ji let us define:

Mf F aDA iff "1m E T 'Vw E f(m)w F Aj


F soA iff 3m E T 'Vw E f(m)w F Aj
Mf

M f F -,L iff Mf ~ L ( not Mf F L)j

Mf F L V M iff either Mf F L or Mf F M.
What is the intuitive meaning of a tms? The set T can be regarded as a period
of time. For any mET there is exactly one state of affairs w that takes place at m.
However, thinking of mET, we consider all possible states of affairs (possible worlds)
that can be actualized at m. Of course, what states of affairs are possible at m is
an empirical question. Since we take into account no empirical restriction, we allow
any non-empty set f( m) to be the set of states of affairs possible at m. Therefore, we
consider any function f : T ~ P(W) - {0}.
By a frame M of a tms we understand < T, W, F>. Let r be the set of all functions
f : T ~ P(W) - {0}j we say that L is valid in M (in symbols: M F L) iff for any
fEr Mf F L. If L is valid in all frames, then we say that L is valid and we write F L.
4. THE SYSTEM TM

In this section we axiomatize the set of valid formulas of :Fi. Let us define the set
T1 of theorems of TM, where TM is an axiomatic system in the language £1' First,
all instances of any tautology B of PC, obtained by substitutions of formulas of Ji for
propositional variables in B are axioms. Second, we accept further axioms given by
the following schemata:
T M1 soA =} s <> A

156
177

T M2 a0 A => s 0 A
TM3 ao(A => B) => .aoA => aoB
TM4 ao(A => B) => .soA => soB.
The set Tl is the smallest set containing all axioms and closed under modus ponens
(MP) and the following rule:
Nl if A is a PC-tautology, then aDA E T l .

We omit the proof of


THEOREM 4.1 If L E T l , then F L.
Before we state and prove the completeness theorem for TM, let us note that in
TM we have the standard deduction theorem. This is due to the fact that any L E :Fi
is modalized (i.e. every occurrence of a propositional variable in L is in the scope of
either aD or so), and that no L E :Fi contains iterated modalities (aD or so cannot occur
in L in the scope of either aD or so). The reader should be familiar with the concepts
of deducibility (f-), consistency of a set of formulas, and maximal consistent (mc) sets
of formulas. We recall that any consistent set of formulas of PC can be extended to a
mc set. This applies to consistent sets of formulas of many other propositional systems
including TM.
THEOREM 4.2 If a set He:Fi is consistent, then there is a tms Mf such that Mf FL
for any L E H.
PROOF . We shall construct a tms with the desired property. Let T' be the set of
non-negative integers and let W' be the set of all mc sets of formulas of PC. Suppose
that H is given and let G e :Fi be a mc set containing H. Since G is a mc set, either
80 A E G or ...,s 0 A E G. If""8 0 A E G, then using TM2 and the properties of G,
we infer that ""ao E G. Hence, so...,A E G and by TMls o...,A E G. Therefore, for any
formula A of PC either 80 A E G or 80 ...,A E G. Since there are countably many
formulas of PC, there are count ably many formulas of the form 80 A in G.
Let us enumerate all members of G of the form 8 0 A or 8 0 A, and let E be such
an enumeration. For any mET' we define a family 1m of sets of formulas of PC
according to the following recipe:

(1) if Lm in E is so A, then the members of 1m are

(Ll) the set Wl = {A} U {G: aoG E G};

(1.2) for any aoB E G, the set W2 = {B} U {G: aoG E G}.

(2) If Lm in E is soA, then the. members of 1m are

(2.1) for any aoB E G, the set W3 = {A,B} U {G: aoG E G}.
Now we prove that every member of 1m is consistent.
(1.1) Ifwl is inconsistent, there are Gl, ... ,Gn E Wl such that Gl, ... ,Gn f- ...,A.
Using the properties of PC, N l , TM3 , and the definition of s oA, we prove G f- ""8oA,
and thus G is inconsistent.

157
178

(1.2) If W2 is inconsistent, there are CI, ... ,Cn , B E W2 such that C I , ... ,Cn I- ,B.
As above, we prove G I- ,s 0 B. Hence, by TM2 and the properties of G, we prove
G F ,a 0 B, and thus G is inconsistent.
(2.1) If W3 is inconsistent, there are C I , ... , Cn, B E W3 such that C I , ... , Cn, A I-
,B. Using properties of PC, N I , and TM3 , we prove G I- ao(A =? ,B). Now, by
T M 4 , G I- soA =? so,B, and hence, by using M P and the definition of a 0 A, G I- , 0 B;
therefore, G is inconsistent.
This proves that every member of 1m is consistent.
For any W E 1m there is a me set containing w. For any mET', let J m ~ W' be
the smallest set such that for any w E 1m there is exactly one me set in J m containing
w. By the axiom of choice, for any mET' there is such a J m , J m i' 0. It is obvious
that we have defined a function!, : T' -+ P(W') such that J'(m) = J m .
For any mET', any w E J m, and any propositional variable p we define F':

w F' p iff pEw.


It is easy to prove that for any formula A we have A E w iff w F' A. Hence, ~, =
< T', W',!" F'> is a tms. By the construction of ~" using standard methods, we
can prove that LEG iff~, F' L, and therefore, for any L E H, ~, F' L. This
completes the proof of the theorem.
As a corollary we have
THEOREM 4.3 If F L, then L E TI ·

5. MODELS FOR £2

Let T and W be as before, and let 6. be the set of functions 9 : T -+ W; then


Nb =< T, W, 6., F> is a modal tense structure (mts). For a mts Nb and any formula
of ~ we define:

Nb F oaA iff\lg E Nlm E T g(m) FA


Nb F oaA iff:3g E 6.\lm g(m) FA
Nb F ,p iff Nb ~ P
Nb F P V Q iff Nb F P or .NG. F Q.
The intuitive interpretation of a mts is as follows: the set T is a period of time
and W is a set of possible states of affairs that can take place in the period T. Again,
at each mET exactly one state of affairs is actualized. A function 9 E 6. gives us a
sequence of states of affairs. In fact, the range of 9 is a set of states of affairs, and if
T is ordered by the binary relation "not later than", then the range of 9 is a sequence
which can be called a course of events. Hence, 6. can be viewed as a set of courses of
events possible in T. Again, what courses of events are possible in T is an empirical
question .
.NG. F oaA means that in any course of events 9 and at any moment of it A is
true in the state of affairs that takes place at that moment .
.NG. F oaA means that there is a possible course of events in T such that at any
moment m of it A is true in the state of affairs that takes place at m.

158
179

The meaning of Nh p ,p and Nh p P V Q is obvious.


In the sequel, since we take into account no empirical restriction, we allow any
course of events to be possible in T. Therefore, we take t:. to be the set of all functions
g : T ----> W. In this case we write M instead of Nh and we say that P is valid in
M=< T, W, p> iff Mp P. P is valid (p P) iff P is valid in all frames.

6. THE SYSTEM MT

In this section we axiomatize the set of valid formulas of :Fi. MT is the corre-
sponding axiomatic system in the language £2. Let us define the set T2 of theorems of
MT. First, all instances of any tautology B of PC obtained by substitution of formulas
of ~ for propositional variables in B are axioms of MT. Second, we accept further
axioms given by the following schemata:

osA =} <>sA
<> aA =} <>sA
MT3 oa(A =} B) =} .oaA =} oaB
MT4 oa(A =} B) =} .osA =} osB.
The set T2 is the smallest set containing all axioms and closed under M P and the
following rule:

N2 ifA is a PC-tautology, then oaA E T 2 .

THEOREM 6.1 If P E T 2 , then p P.


THEOREM 6.2 If a set H C ~ is consistent, then there is a mts Nh such that Nh pP
for any P E H.
PROOF . We shall construct a mts with the desired property. Let T' and W' be as in a
tms. Suppose that H is given and let G C ~ be a me set containing H. We can show
that that there are count ably many members of the form osA in G. Let us enumerate
all members of G of the form <>sA or <>aA, and let E be such an enumeration. Also, let
us enumerate all members of G of the form DsA, and let E' be such an enumeration.
For any mET' we define a family 1m of sets of formulas of PC according to the
following rules:

(1) if Pm in E is of the form <>sA, then the members of 1m are

(1.1) the set w~{A} U {C : oaC E G};

(1.2) for any DsBn in E' the set w~ = {B} U {C: oaC E G} ;

(1.3) the set w~ = {C: oaC E G};

(2) if Pm in E is of the form <>aA, then the members of 1m are

159
180

(2.1) for any osBn in E' the set w;",n = {B} U {C: oaC E G} ;

(2.2) the set w;".


Now we prove that every member of Im is consistent.
(1.1) If w;" is inconsistent, then for some C1"",C k E w;", C1, ... ,Ck I- -,A.
Using the properties of PC, N 2 , MT3 , and the definition of osA, we prove G I- -, 0 sA
and thus G is inconsistent.
(1.2) If w~ is inconsistent, then for some C 1 , ... , Ck E w~, C 1 , ... , Ck I- -,B. Using
PC, N 2 , MT3, and the definition of osA, we prove G I- -, 0 sB. Hence, by MTI and
the properties of G, G I- -,osB, and G is inconsistent.
(1.3) It is obvious that w~ is consistent.
(2.1) If w;" n is inconsistent, then for some C 1 , ... , C k E w;" n' C 1 , ... , Ck, B I- -,A.
Using PC, N2 " and MT3 we prove G I- oa(B =? -,A). By MT4 , G I- osB =? os-,A,
G I- os-,A, and by the definition of osA, G I- -,0 aA. Hence, G is inconsistent.
(2.2) As in (1.1) of this proof, we obtain G I- -, 0 sA, and by MT2 , G I- -,0 aA.
Thus G is inconsistent.
For any w' E Im there is a me set containing w'. For any mET', let J m <;;; W'
be a set of me sets of formulas of PC such that for any w' E Im there is exactly one
me set in J m containing w'; such a member of J m we denote by w', too. Now for each
mET' we define a function gm : T' ---+ J m as follows.
(1) If Pm in E is osA, then gm(l) = w~ for any 1 ::::: I < m;
(Ll) gm(m) = w;";
(1.2) if E' is infinite, then gm(m + n) = w~ for any nET';
(1.3) if E' is finite and Qn is the last member of E', then gm (m + I) = w; for all
1 ::::: I::::: n, and gm(m + I) = w~ for I > n;
(2) if Pm in E is oaA, then gm(l) = w;" for all 1 ::::: I::::: m;
(2.1) if E' is infinite, then gm(m + n) = w;" n for all nET';
(2.2) if E' is finite and Qn is the last member of E', then gm(m + I) = w;",/ for
all 1 ::::: I ::::: n, and gm (m + I) = w~ for I > n.
Let !::.' = {gm : mET'} ; it is obvious that each g' E !::.' is a function from T' to
W'. If we define 1=' as in a tms, then

MLl.' =< T', W',!::.', 1='>

is a mts. It is easy to show that PEG iff MLl.' 1=' P. Hence, for any P E H,
MLl.' 1=' P.
As a corollary we have
THEOREM 6.3 If 1= P, then P E T 2 •
There is an obvious connection between TM and MT. Let h : :Fi ---+ :F;. be a
function such that h(aoA) = oaA, h(soA) = osA, h(-,L) = -,h(L), and h(L V M)
h(L) V heM). It is trivial that h is an isomorphism and that L E Tl iff h(L) E T2 .

160
181

7. MODELS FOR £3

Let T, W, f, ~ , and F be as before; then Mj,Ll =< T, W, f,~, F> is a combined


structure (cs). Obviously, Mj =< T, W,f, F> is a tms and Nb. =< T, w,~, F> is a
mts. For a cs Mj,Ll and any X E ;;j we define

if X E :Fi at, then Mj,Ll FX iff Mj FX


if X E :0.at, then Mj,Ll FX iff Nb. F X
Mf,Ll F oX iff MfA ~ X
Mj,Ll FX VY iff either Mj,Ll F X or Mj,Ll F Y.
Let r be as in Section 3, and let ~ denote the set of all functions 9 : T -+ W. X
is r -valid in the frame M=< T, W, F> iff Mf,A F X for any fEr. X is valid iff
X is valid in any frame M.

8. THE SYSTEM TMT

Now we axiomatize the set of valid formulas of;;j. Let TMT be the corresponding
system.The axioms and rules of TMT are those of both TM and MT. Let T3 be the
set of theorems of TMT. Obviously, we have
THEOREM 8.1 If X E T 3 , then F X.
THEOREM 8.2 For any consistent He ;;j there is a cs Mj,Ll such that Mj,Ll F X, for
any X E H.
PROOF. Let G be a mc set containing H. We already know that there arc a frame
M =< T, W, F>, a function f : T -+ peW) - {0}, and a set ~ of some functions
T -+ W such that Mf F X for any X E G n:Fi and Nb. F X for any X E G n :0.. By
an inductive argument we show that for any X E;;j, Mf,Ll FX iff X E G.
THEOREM 8.3 If F X, then X En.

9. SPECIAL MODELS FOR £3

In a cs Mj,A' ~ was the set of all functions 9 : T -+ W. However, we may choose


to consider other sets of functions instead. Let
~f = {g : T -+ W/Vm E T gem) E f(m)}.
THEOREM 9.1 For any f :T -+ peW) - {0}, ~f -I- 0 .
PROOF . The theorem is just a formulation of the axiom of choice.
THEOREM 9.2 For any non-empty set ~ of functions 9 : T -+ W there is a function
f : T -+ peW) - {0} such that ~ = ~j.
PROOF. Given ~ , let us define f : T -+ peW) - {0} by f(m) = {gem) : 9 E ~}. It
is clear that ~ = ~j for f just defined. Moreover, given ~ , the function f is unique.
THEOREM 9.3 Vm E T Vw E W:Jg E ~f gem) = w.
PROOF. By 9.1, there is agE ~ f; given mET and w E W let us define g' as follows:
= g(m') for all m' -I- m and g'(m) = w. Obviously, g' E ~f'
g'(m')

161
182

THEOREM 9.4 For any scs,


(1) 'tim E 'Nw E f(m) w F= A iff 'tIg E 6. f 'tlm E T g(m) F= A
(2) 3m E 'Nw E f(m) w F= A iff 'tIg E 6.f3m E T g(m) F= A.
PROOF . The proof of (1) is omitted.
Suppose that (a) 'tIg E 6.f3m E T g(m) F= A and (b) 'tim E T3w E f(m) w ~ A.
By the axiom of choice and (b), there is a go E 6. f such that

'tim E Tgo(m) ~ A.

Obviously, this contradicts (a). The remaining "half" of the proof of (2) is trivial.

10. THE SYSTEM ST

We axiomatize the subset of.r.i of formulas valid with respect to scs. The axioms
and rules are those of MT; additional axiom-schemes are:

STl aDA ,*oaA

ST2 oaA '* aDA


ST3 soA '* osA
ST4 osA '* soA.
Let T4 be the set of theorems of ST.
THEOREM 10.1 If X E T 4 , then F= X.
PROOF . Omitted.
THEOREM 10.2 If a set H C .r.i is consistent (with respect to ST), then there is a scs
Mf,!>, such that Mf,!>, F= X for any X E H.
PROOF . Let G C T3 be a me set, H ~ G; by 4.2, there is a tms Mf such that for any
X E G n:Fi, Mf F= X iff X E G. By 9.1, 6. f is non-empty and hence there is a ses
M f ,!> , . Let us show that JVb., F= X iff X E G n F_2.
Let X = oaA; by STI - ST2 , aDA '*oaA,oaA '*
aDA E G. Hence, oaA E G iff
aDA E G iff Mf F= aDA iff JVb., F= oaA.
Let X = oaA; by ST3 - ST4 , oaA '*a 0 A, a 0 A '*
oaA E G. Hence, oaA E G
iff a 0 A E G iff Mf F= a 0 A iff JVb., F= oaA.
By an easy inductive argument one can prove that M f ,!>, F= X iff X E G.
As a corollary we have
THEOREM 10.3 If F= X (with respect to scs) then X E T4 .

11. THE ANSWER TO THE THIRD QUESTION

The answer to the third question depends on the definition of validity. It is clear
that in ST (i) and (i') are equivalent, i E {I, ... ,4} , for aDA ~ oaA, aoA ~ oaA,
soA ~ osA, and so A ~ osA are members of T 4 • Moreover, this equivalence is
equivalent to the axiom of choice, as it is seen from 9.4 and 10.3.

162
183

In TMT (i) and (if) are non-equivalent, as will be shown in the next few theorems.
THEOREM 11.1 (a) There is a M,,!>. such that M,,!>. ~ aop '*
oap, (b) there is aM,,!>.
such that M,,!>. ~ oap '*aop, (c) there is a M,,!>. such that M,,!>. ~ sop '*
osp, and
(d) there is a M,,!>. such that M,,!>. ~ osp '*
sop.
PROOF . For any variable p there are me sets WI and W2 such that p E WI and -,p E W2'
Let T = {I} , let W be the set of all me sets of formulas of PC, let f(l) = {wr} ,
g(l) = W2, and l:. = {g} . Obviously, M, 1= aop and ;VG" ~ oap. This proves (a) and
( c).
To prove (b) and (d), let T, W, WI, and W2 be as before, let f(l) = {W2} , and
g(l) = WI'
Nevertheless, in TMT we can prove the following weak equivalences.
THEOREM 11.5 For any frame M, aDA is r - valid in M iffoaA is valid in
M(i.e. ~ 1= oaA).
PROOF. Omitted.

THEOREM 11.6 From any frame M, soA is r - valid in M iffosA is valid in M


PROOF. If we assume that soA is r-valid in M and that osA is not, then it is easy
to derive a contradiction. Let us assume that osA is valid in M and that soA is not
r-valid; then
(a) Vg E l:.3m E T g(m) 1= A,
(b) 3f E rVm E T3w E f(m) W ~ A.
Let fo E r be such that
c) Vm E T3w E fo(m) W ~ A.
By the axiom of choice,
(d) 3g E l:.Vm E TV!w E fo(m) (g(m) = w 1\ w ~ A).
It is now easy to obtain a contradiction.
THEOREM 11.7 For any frame M, if a 0 A is r-valid in M, then <>aA is valid in M

PROOF. Let us assume that aoA is r-valid in Mand that oaA is not; then
(a) Vf E rVm E T3w E f(m) w 1= A,
(b) Vg E l:.3m E T g(m) ~ A.
Let fo E r be such that
(c) Vm E T3w E fo(m) w 1= A.
Using the axiom of choice as in the preceding proof, we derive a contradiction.
We omit the proof of the following
THEOREM 11.8oaA being valid in M does not imply that a <> A is r-valid in M

12. A CONCLUDING REMARK

This investigation shows that the answer to the question as to whether (i) and (if)
from Section 1 are equivalent, 1 :::; i :::; 4, depends on the validity of the axiom of choice.
Thus, answering a rather naive question about natural language may be involved in
the consideration of a rather sophisticated question in the philosophy of mathematics.

163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
PUBLICATIONS DE L'INSTITUT MATHEMATIQUE 
Nouvelle serie, tome 57 (71), 1995, 165-178
-
Duro Kurepa memorial volume

IDENTITY AND PERMUTATION

A. Kron

Abstract. TW!
A!B B!A
It is known that in the purely implicational fragment of the system if
both ( ) and ( ) are theorems, then A and B are the same formula (the Anderson-

A!A
Belnap conjecture). This property is equivalent to NOID (no identity!): if the axiom-shema
TW! TW! -ID is obtained,
A!A
( ) is omitted from and the system then there is no theorem
of the form ( ).

J
A!A !
A Gentzen-style purely implicational system is here constructed such that NOID holds
J. NOID is proved J of
!B
for to be equivalent to NOE: there no theorem of the form (( )
B ) , i.e., of the form of the characteristic axiom of the implicational system E! of entailment.
p!p J as an axiom-schema (ID), then there are theorems (A B ) and !
B!A
If ( ) is adjoined to
( ) such that A and B J the Anderson-Belnap
are distinct formulas, which shows that for
conjecture is not equivalent to NOID.

The system J+ID is equivalent to RW! of relevance logic.

Introduction

By TW! we understand the system of propositional relevance logic de ned


in the language with ! as the sole connective, by the following axiom-schemata:
ID (A ! A)
ASU ((A ! B ) ! ((B ! C ) ! (A ! C )))
APR ((B ! C ) ! ((A ! B ) ! (A ! C ))):
The only rule of TW! is modus ponens.
By TW! {ID we understand the system obtained from TW! by deleting
the schema ID.
It has been shown that the following propositions were equivalent (Dwyer-
Powers theorem):

AMS Subject Classi cation (1991): Primary 11 A 05

Research supported by the Science Fund of Serbia (grant number 0401A) through the
Mathematical Institute in Belgrade.

191
166 A. Kron

if both (A ! B ) and (B ! A) are provable in TW! , then A and B are the


same formula (Anderson-Belnap's conjecture)
For no formula A is (A ! A) provable in TW! {ID (NOID).
Anderson-Belnap's conjecture is about an interesting property. Let us write
A  B i both (A ! B ) and (B ! A) are theorems of TW! ; then the axioms
of TW! and modus ponens are suÆcient to show that (a)  is an equivalence
relation and (b) that it is a congruence with respect to !. By Anderson-Belnap's
conjecture (the antisymmetry of !) this congruence is the smallest congruence
relation i.e., equality. Thus, the identity of formulas in the language with ! as the
only connective can be characterized exclusively by logical means { by the theory
TW! of implication.

NOID (and hence the Anderson-Belnap's conjecture) has been proved true
(cf. [2], [3] and [4]).
The proof of NOID in [3] has been obtained for a proper extension L of
TW! {ID.

Let S and S' be theories of implication and let A-B and NOID be the following
claims about S and S':
A-B if both (A ! B ) and (B ! A) are provable in S, then A and B are the same
formula,
and
NOID there is no theorem of S' of the form (A ! A).
Obviously, if S = TW! and S' = TW! {ID, then A-B and NOID are equiv-
alent.
In this paper we shall develop a proper extension J of L and prove that
(1) NOID holds for J and A-B does not hold for J+ID;
(2) NOID is equivalent to the following proposition: ((A ! B ) ! B ) is a
theorem of J i so is A.
The non-equivalence of A-B and NOID for J and J+ID is due to permutation
present in J in the form of the rule PERM.
The claim (2) is interesting because it shows that NOID cannot hold in any
system containing as a theorem any form of the E! axiom
(((A ! A) ! B ) ! B ):
Also, (2) will enable us to prove that there are in J some restricted forms
of contraction: any formula ((A ! (A ! B )) ! (A ! B )) is a theorem of J i
so is A.
(3) We shall show than NOE can be extended to formulas of a certain type.

192
Identity and permutation 167

The system J

Some of the basic de nitions given below are taken from [3].
Let p, q , r, . . . stand for propositional variables. The letters A, B , C , . . .
range over the set of formulas. Instead of (A ! B ) we shall write (AB ). Also, we
omit parentheses, with the association to the left. Thus, ABC stands for (AB )C .
Let R, S , T , U , V , W , X , Y , Z , . . . range over nite (possibly empty)
sequences of formulas. If X consists of a single formula A, we shall write A for X .
If X is empty, let X:B denote B . If X = hA1 ; . . . ; An i, n  1, then X:B denotes
the formula
A1 ! (A2 !    ! (An ! B ) . . . ):
Notice that any formula is of the form W:p, for some W and a variable p.
Very often we shall write WA :p for A, for any formula A.
By  (X ) we denote any permutation of X , and by  (X ):B we denote any
formula Y:B such that Y is a permutation of X .
Let C:DE be a subformula of A; suppose that B is obtained from A by
substitution of D:CE for C:DE , at a single occurrence of C:DE in A; then we
shall say that B is obtained from A by the rule PERM. Let us write A  B i
B can be obtained from A by a nite (possibly zero) number of applications of
PERM. It is clear that  is an equivalence relation. We shall write X  Y i Y
can be obtained from a permutation Z of X by a nite (possibly zero) number of
applications of PERM to some members of Z . For any A by A we shall denote
any formula B such that A  B . Also, for any X by X  we denote any Y such
that X  Y . It is clear that ( (X ))   (X  ).
The axioms of J are given by the following schema:
ASU  ((AB ) ; Bp; A):p:
The rules of J are:
JSU From  (X; Y ):p to infer  (X  ; (Y  :p)q ):q:
JPR From  (X; B ):p to infer  (X  ; (AB ) ; A):p:
JG From  (X; Y ):p and  (Z; B ):q to infer
 (X  ; Z  ; ((Y:p)B ) ):q:
The rule JG is to be understood as follows: if there are permutations V and W
of the sequences X; Y and Z; B , respectively, such that V:p and W:q , are derivable
in J, so is W 0 :q , for any permutaion W 0 of the sequence X  ; Z  ; ((Y:p)B ) . In a
similar way we understand JSU and JPR.
We shall assume that derivations in J are given in forms of trees, with usual
properties. The weight w of a node in a derivation, derivability with weight w and
the combined weight are de ned as in [5, p. 113]. By the degree of A (of X ) we
understand the number of occurrences of ! in A (in X ).
Let us de ne Apn as follows: Ap0 = A; Apn+1 = (Apn )p.

193
168 A. Kron

J is closed under modus ponens

We start with
Theorem 1. If A is derivable in J with weight w, so is A ; if X:p is derivable
in J, so is (X ):p, for any permutation (X ) of X .
Proof. By an easy induction on the weight of A in a given derivation of A.
Theorem 1 shows that J is closed under PERM; it enables us to identify A and
A , X and X  , and X and  (X ) in derivations in J. In the sequel this identi cation
is assumed.
Theorem 2. If (a) (X; Y ):p is derivable in J, so is (b) (X; (Y:p; Z ):q:Z ):q.
Proof. By JSU we obtain (X; (Y:p)q):q from (a); hence, (b) is obtained by
using JPR.
Theorem 2 and JG show that J is closed under the following assertion rules:
ASS1 From A to infer ABB .
ASS2 From A and  (X; B ):p to infer  (X; AB ):p.
Theorem 3. (TRANSITIVITY, JTR) If (a)  (X; Y ):p and (b)  (Y:p; Z ):q are
derivable in J, so is (c) (X; Z ):q.
Proof. Proceed by double induction. Suppose that (a) and (b) are derivable
with combined weight w and that Y:p is of degree d. Our induction hypotheses are:
Hyp 1 The theorem holds for any Y 0 :p of degree d0 < d and any combined
weight w;
Hyp 2 The theorem holds for Y:p and any combined weight w0 < w.
Case I (b) is an instance of ASU; hence,  (Y:p; Z )   (AB; Bq; A) for some
A, B and q .
I.1 Y:p  AB   (A; WB ):p, and Y   (A; WB ). From (a) we obtain (c)
by using JSU.
I.2 Y:p  Bq ; hence, Y  B and p  q . From (a) we obtain (c) by using
JPR.
I.3 Y:p  A; hence, by Theorem 2 we obtain (c).
Case II (b) is obtained by JSU from (b1 )  (V; W ):r, where  (V; (W:r)q ) 
 (Y:p; Z ).
II.1 V   (V 0 ; Y:p); by (a), (b1 ) and Hyp 2  (X; V 0 ; W ):r is derivable;
hence, by using JSU we obtain (c).
II.2 (W:r)q  Y:p and Z  V ; hence, Y  W:r and p  q . By (a), (b1 ) and
Hyp 1, (c) is derived.
Case III (b) is obtained by JPR from (b1 )  (V; B ):q , where  (V; AB; A) 
 (Y:p; Z ).

194
Identity and permutation 169

III.1 V   (V 0 ; Y:p) and Z   (V 0 ; AB; A); by (a), (b1 ), and Hyp 2, we


obtain  (X; V 0 ; B ):q , and then (c) by using JPR.
III.2 AB  Y:p. We have (a)  (X; A):B ; hence, by (a), (b1 ), and Hyp 1, (c)
is derived.
III.3 A  Y:p. From (a) and (b1 ) we obtain (c) by JG.
Case IV (b) is obtained by JG from (b1 )  (U; V ):r and (b2 )  (W; A):q ,
where
 (Y:p; Z )   (U; W; (V:r)A):
IV.1 U   (U 0 ; Y:p) and Z   (U 0 ; W; (V:r)A). By (a), (b1 ) and Hyp 2,
 (X; U 0 ; V ):r is derivable. Hence (c), by using (b2 ) and JG.
IV.2 W   (W 0 ; Y:p) and Z   (U; W 0 ; (V:r)A). Now  (X; W 0 ; A):q is
derivable by (a), (b2 ) and Hyp 2; hence (c), by using (b1 ) and JG.
IV.3 (V:r)A  Y:p and Z   (U; W ). It is clear that (a) is  (X; V:r):A. By
(a), (b2 ) and Hyp 1, (a')  (X; W; V:r):q is derivable. Now by (b1 ), (a'), and Hyp 1,
(c) is derivable.
A trivial consequence of this theorem is
Theorem 4 ( MODUS PONENS, MP ). If A and AB are derivable in J, so is B .
There is a Hilbert style formulation of J. Let K be the system with MP,
PERM, ASS1 and the axiom-schema  (AB; BC; A):C .
Theorem 5. K and J are equivalent.
Proof. It is obvious that J contains K.
The rules JTR and JPR are easily derivable in K, by using the axioms, MP
and PERM. In the same way the rules JSU and JG are easily derivable provided
that X is nonempty. The rule ASS1 plays the role of JSU when X is empty. Now
by using ASS1, JPR and JTR we derive JG when X is empty (ASS2).

The system L

The system L is obtained from J by restricting JSU and JG: in JSU and JG
X must not be empty. Let LSU and LG be JSU and JG, respectively, restricted in
this way. In [3] it is assumed that L has a single propositional variable p.
The following theorems were proved in [3].
L1 If A is derivable in L with weight w, so is A .
L2 L is closed under the following transitivity rule:
from  (X; A; Y ):p and  (Z; Y  :p):p to infer  (X  ; Z  ; A ):p.
L3 L contains TW! {ID.
L4 There is no theorem of L of the form Ap.
L5 There is no theorem of L of the form  (( (X; Y ):p)p2k ; Y  ):p, k 2 ! .
L6 There is no theorem of L of the form AA.

195
170 A. Kron

L7 There is no theorem of L of the form ABB .


L8 There is no theorem of L of the form A:ABB .
L9 There is no theorem of L of the form ABBA.
L6 { L9 are consequences of L5 . We shall prove or disprove theorems about
J analogous to L1 { L9 rst.
Notice that L is not closed under MP. Let A  pp:pp:pp and B  (pp:pp)p:ppp;
AB is an instance of ASU. If L were closed under MP, applying MP to
 (AB; Bp; A):p
twice, Bpp would be obtained in L, contrary to L4 .
L is not closed under ASS1 either. Otherwise, App would be derivable, con-
trary to L4 .
That L is not closed under ASS2 can be seen as follows. Let A   (pp; pp; p):p;
by using A and ASS2, in J we derive B , B   ( (A; p):p; pp; p):p. Let us show that
B is not derivable in L.
B is not an instance of ASU.
If B is obtained by LSU from C , then C   ( (A; p):p; p):p  (A(pp))(pp),
violating thus L7 .
If B is derivable by LPR from  (X; F ):p, then
 ( (A; p):p; pp; p)   (X; EF; E )
for some X , E , and F . It is clear that E  p.
If F  Ap, then  (Ap; pp):p is derivable in L. But this is neither an axiom nor
can it be obtained by LPR or LG. If it is obtained by LSU, then App is derivable,
contrary to L4 .
If F  p, then (A(pp))(pp) is derivable, contrary to L7 .
Suppose that B is derived by LG from  (X; Y ):p and  (Z; E ):p; hence,
 ( (A; p):p; pp; p):p   (X; (Y:p)E; Z ):p:
If (Y:p)E  pp; then Y is empty, E  p and  (X; Z )   ( (A; p):p; p). Now
X is not p and Z is not empty (otherwise, pp is derivable). Hence, B is obtained
from ( (A; p):p)p and  (p; p):p, which is impossible.
If (Y:p)E   (A; p):p, then  (X; Z )   (pp; p).
Let Z be empty; then B is obtained from  (pp; p; Y ):p and Ep, contrary to
L4 .
Let Z  pp; then B is obtained from  (Y; p):p and  (E; pp):p. Obviously, Y
is not empty and E is not pp; hence, E  Ap and Y:p  p - a contradiction.
Let Z  p; then B is obtained from  (pp; Y ):p and  (E; p):p. Since Y cannot
be empty, Y:p  A and E  pp, contary to L6 .
This shows that L is not closed under ASS2.
Since JSU = LSU + ASS1 and JG = LG + ASS2, we have J = L + ASS1 +
ASS2.

196
Identity and permutation 171

J is a proper extension of L and there is no theorem about J analogous either


to L4 or L5 or L7 . However, theorems analogous to L6 , L8 , and L9 still hold true.

No instance of AA is derivable in J

Theorem 6. (X:p)p is derivable in J i X is nonempty and any member of


X is derivable in J.
Proof. Let X be (A1 ; . . . ; An ), n > 0, and let A1 ; . . . ; An be derivable in J.
By ASS1, An pp is derivable; if n > 1, by using JG in the form of ASS2, we derive
( (A1 ; . . . An ):p)p, i.e., (X:p)p.
Suppose that (X:p)p is derivable. If X is empty, then pp is derivable; however,
this is neither an axiom nor can it be obtained by any of the rules. Hence, X is
nonempty.
Let X   (A1 ; . . . ; An ) and proceed by induction on the weight of the deriva-
tion of (X:p)p.
Obviously, (X:p)p is neither an instance of ASU nor can it be obtained by
JPR. If it is obtained from (a') by JSU, then (X:p)p  (V:p)pp and (a') is V:p;
hence, X  V:p  A1 , X is nonempty and A1 is derivable in J.
If (X:p)p is obtained from (a') and (a") by JG, then X:p   (U; W; (V:p)C ), U
and W are empty, X   (A1 ; . . . ; An )   (V:p; WC ) for some A1 ; . . . ; An , and (a')
and (a") are V:p and (WC :p)p, respectively. By induction hypothesis, all members
of WC , say WC   (A1 ; . . . ; An 1 ), are derivable in J. Obviously, we can take
V:p  An .
This completes the proof of the theorem.
Since J is (as L) closed under uniform substitution, to prove the main theo-
rems of this paper it suÆces to prove them under the assumption that there is only
one variable in J, say p. Let J1 be J with just one variable p. In the sequel, if not
stated otherwise, "derivable" means "derivable in J1 ".
Theorem 7 (NOID). There is no theorem of J1 of the form ((X:p)p2k ; X ):p,
k 2 !.
Proof. If there is a theorem of J1 of this form, then
Hyp 3 there is a formula (a)  ((X:p)p2k ; X ):p of smallest degree
derivable in J1 .
Let us consider how (a) could have been obtained. We leave to the reader the
veri cation that (a) cannot be an instance of ASU.
Case I (a) is obtained from (a') by JSU; hence,
 ((X:p)p2k ; X )   (Y; (Z:p)p)
for some Y and Z .
I.1 Y   (Y 0 ; (X:p)p2k ) and X   (Y 0 ; (Z:p)p). Obviously, we have (a')

197
172 A. Kron

 ( ((Y 0 ; (Z:p)p):p)p2k ; Y 0 ; Z ):p.


If both Y 0 and Z are empty, then (a') is pppp2k p; hence, pp is derivable by
Theorem 6. This is impossible.
If Y 0 is empty and Z nonempty, then (a') is  ((Z:p)ppp2k ; Z ):p, contrary to
Hyp 3.
Let Y 0 be nonempty and Z arbitrary. By using ASU and JPR we derive (b)
 ( (Y 0 ; Z ):p; (Z:p)p; Y 0 ):p.
If k > 0, we use JSU to derive (c)  ( (Y 0 ; Z ):p; ( (Y 0 ; (Z:p)p):p)p2k 1 ):p. Hence,
by (c), (a'), and JTR we derive  ( (Y 0 ; Z ):p; Y 0 ; Z ):p, contrary to Hyp 3.
I.2 (X:p)p2k  (Z:p)p and X  Y . If k = 0, then X  Z:p and (a') is
 (Z:p; Z ):p, contrary to Hyp 3.
If k > 0, then Z:p  (X:p)p2k 1 and Z  (X:p)p2k 2 . Hence, we have (a')
 ((X:p)p2k 2 ; X ):p;
contrary to Hyp 3.
Case II (a) is obtained by JPR; hence,  ((X:p)p2k ; X )   (Y; AB; A) for
some Y , A and B .
II.1 Y   (Y 0 ; (X:p)p2k ) and X   (Y 0 ; AB; A). Obviously, we have (a')
 (( (Y 0 ; AB; A):p)p2k ; Y 0 ; B ):p.
Now  (Bp; AB; A):p is an instance of ASU; hence, by JPR we obtain
 ( (Y 0 ; B ):p; Y 0 ; AB; A):p
and then by using JSU we derive  ( (Y 0 ; B ):p;  (Y 0 ; AB; A):p)p):p. If k > 0, by
JSU we get  ( (Y 0 ; B ):p;  (Y 0 ; AB; A):p)p2k 1 ):p. Hence, using JTR and (a') we
obtain  ( (Y 0 ; B ):p; Y 0 ; B ):p, contradicting thus Hyp 3.
II.2 (X:p)p2k  AB and X   (Y; A). Hence,
(X:p)p2k  ( (Y; WA :p):p)p2k   (WA :p; WB ):p:
If k > 0, then WB is empty and we have B  p, and WA :p  ( (Y; WA :p):p)p2k 1 ;
this is impossible.
Let k = 0; then  (Y; A)   (A; WB ) and Y  WB . Thus, (a') is
 (WB :p; WB ):p, contrary to Hyp 3.
II.3 (X:p)p2k  A and X   (Y; AB ); this is impossible.
Case III (a) is obtained by JG; hence,  ((X:p)p2k ; X )   (Y; Z; (U:p)B ) and
both (a')  (Y; U ):p and (a")  (Z; B ):p are derivable.
III.1 Y   (Y 0 ; (X:p)p2k ) and X   (Y 0 ; Z; (U:p)B ); hence, (a') is
 (( (Y 0 ; Z; (U:p)B ):p)p2k ; Y 0 ; U ):p.
From (a") we obtain (b)  (U:p; Z; (U:p)B ):p by using JPR. If necessary, we apply
JPR to obtain (c)  ( (Y 0 ; U ):p; Y 0 ; Z; (U:p)B ):p. If k > 0, by using JSU we derive
(d)
 ( (Y 0 ; U ):p;  (Y 0 ; Z; (U:p)B ):p)p2k 1 ):p.

198
Identity and permutation 173

Hence, by (d), (a'), and JTR we derive  ( (Y 0 ; U ):p; Y 0 ; U ):p, contrary to Hyp 3.
III.2 Z   (Z 0 ; (X:p)p2k ) and X   (Y; Z 0 ; (U:p)B ); hence, (a") is
 (( (Y; Z 0 ; (U:p)B:p)p2k ; Z 0 ; B ):p.
On the other hand, from (a') we obtain (b)  (Y; (U:p)B ):B , by Theorem 2.
Hence, by using JSU we derive (c)  (Bp; Y; (U:p)B ):p, and if Z 0 is nonempty, we
derive (d)
 ( (Z 0 ; B ):p; Y; Z 0 ; (U:p)B ):p
by using JPR. Now if k > 0, we can use JSU to obtain (e)
 ( (Z 0 ; B ):p; ( (Y; Z 0 ; (U:p)B ):p)p2k 1 ):p:
In any case we can use (e), (a"), and JTR to obtain  ( (Z 0 ; B ):p; Z 0 ; B ):p,
contrary to Hyp 3.
III.3 (X:p)p2k  (U:p)B and X   (Y; Z ). If k > 0, then B  p, U:p 
(X:p)p2k 1 and U  (X:p)p2k 2 . Obviously, we have (a')  (( (Y; Z ):p)p2k 2 ; Y ):p
and (a")  (Z; p):p. Hence, Z is nonempty.
III.3.1 Let Y be empty; then (a') is (Z:p)p2k 1 . We derive
(b)  (p; (Z:p)p2k 1 ):p
by using (a") and JSU. Hence, by using (a'), (b), and MP we obtain pp, which is
impossible.
III.3.2 Let Y be nonempty. By using (a") and JPR we derive
(b)  (Y:p; Y; Z ):p;
hence, by applying JSU to (b) we derive (c)  (Y:p; ( (Y; Z ):p)p2k 1 ):p, and hence
 (Y:p; Y ):p is derivable by using (a'), (c), and JTR, contrary to Hyp 3.
Let k = 0 and B  V:p; then X   (U:p; V ).
III.3.3 Y   (Y 0 ; U:p) and V   (Y 0 ; Z ). We have
(a')  (Y 0 ; U:p; U ):p and (a")  ( (Y 0 ; Z ):p; Z ):p:
If Y 0 is empty, Hyp 3 is violated.
Let Y 0 be nonempty. If Z is empty, (a") becomes (Y 0 :p)p and hence
 (U:p; U ):p is obtained from (a') and (a") by JTR, contrary to Hyp 3.
Let Z be nonempty. By using JPR, from (a') we obtain
 ( ( (Z; U ):p; U; Y 0 ; Z ):p:
Hence, by using JTR and (a"), we obtain  ( (Z; U ):p; Z; U ):p, contrary to Hyp 3.
III.3.4 Z   (Z 0 ; U:p) and V   (Y; Z 0 ). We have
(a')  (Y; U ):p and (a")  (Z 0 ; U:p;  (Y; Z 0 ):p):p:
From (a') and (a") we obtain  ( (Y; Z 0 ):p; Y; Z 0 ):p, by using JTR, contrary
to Hyp 3.
This completes the proof.
Theorem 8. There is no theorem of J of the form AA.

199
174 A. Kron

Theorem 9. There is no theorem of J of the form A:ABB .


Theorem 10. There is no theorem of J of the form ABBA.
Theorems 8 { 10 are trivial consequences of NOID.

No istance of AABB is derivable in J

Theorem 11 ( NOE ).  ( (X; Y ):p; Y ):p is derivable in J1 i X is nonempty


and every member of X is derivable in J1 .
Proof. To prove the non-trivial part of the theorem, proceed by induction
on the degree of  (X; Y ):p. If Y is empty, we use Theorem 6. Let us accept the
induction hypothesis
Hyp 4 The theorem holds for any  (X 0 ; Y 0 ):p of degree smaller than the
degree of  (X; Y ):p.
Suppose that (a)  ( (X; Y ):p; Y ):p is derivable in J1 . By NOID, X is
nonempty. The veri cation that (a) is not an instance of ASU is left to the reader.
Case I (a) is obtained by JSU from (a')  (U; V ):p, where  ( (X; Y ):p; Y ) 
 (U; (V:p)p):
I.1 (V:p)p   (X; Y ):p and U  Y ; obviously, either X or Y is empty. But
X is nonempty. If Y is empty, then by Theorem 6, X   (A1 ; . . . ; An ) for some
derivable A1 ; . . . ; An .
I.2 Y   ((V:p)p; Y 0 ) and U   ( (X; (V:p)p; Y 0 ):p; Y 0 ). Obviously, (a) is
obtained from (a')  ( (X; (V:p)p; Y 0 ):p; V; Y 0 ):p. Since X is nonempty, there is a
member A of X . But as an instance of ASU we have  ( (A; V ):p; (V:p)p; A):p. By
using JPR we derive  ( (X; V; Y 0 ):p; X; (V:p)p; Y 0 ):p. Hence, by using JTR and
(a') we obtain  ( (X; V; Y 0 ):p; V; Y 0 ):p. By Hyp 4, X   (A1 ; . . . ; An ) for some
A1 ; . . . ; An and n, and A1 ; . . . ; An are derivable in J1 .
Case II (a) follows by JPR from (a')  (U; D):p, where  ( (X; Y ):p; Y ) 
 (U; CD; C ).
II.1 Y   (CD; C; Y 0 ) and U   ( (X; CD; C; Y 0 ):p; Y 0 ). But
 ( (X; D; Y 0 ):p; X; CD; C; Y 0 ):p
is easily derivable in J1 . Hence, by using JTR and (a'), so is
 ( (X; D; Y 0 ):p; D; Y 0 ):p:
Hence, by Hyp 4, X   (A1 ; . . . ; An ) for some A1 ; . . . ; An and n, and A1 ; . . . ; An
are derivable in J1 .
II.2  (X; Y ):p  CD and Y   (U; C ); hence,  (X; C; U ):p  CD. It is
clear that  (X; U )  WD . Now (a') is  ( (X; U ):p; U ):p and by Hyp 4, X 
 (A1 ; . . . ; An ) for some A1 ; . . . ; An and n, and thus A1 ; . . . ; An are derivable in J1 .
II.3  (X; Y ):p  C and Y   (CD; Y 0 ); this is impossible.

200
Identity and permutation 175

Case III (a) follows by JG from (a')  (U; V ):p and (a")  (W; D):p, where
we have
 ( (X; Y ):p; Y )   (U; W; (V:p)D):
III.1  (X; Y ):p  (V:p)D and Y   (U; W ); hence,
 (X; U; W )   (V:p; WD ):
III.1.1 X   (X 0 ; V:p), WD   (X 0 ; U; W ), and (a") is
 ( (X 0 ; U; W ):p; W ):p:
If U is empty, by Hyp 4 and (a"), X 0   (A1 ; . . . ; An 1 ) for some derivable
A1 ; . . . ; An 1 and n. On the other hand, (a') is V:p and we may take V:p  An .
If U is nonempty, from (a') we obtain  ((V:p)p; U ):p and hence
 ( (X 0 ; V:p; W ):p; X 0 ; U; W ):p
by JPR. Now by using JTR and (a"), we obtain  ( (X; W ):p; W ):p. Hence, by
Hyp 4, X   (A1 ; . . . ; An ) for some derivable A1 ; . . . ; An .
III.1.2 U   (V:p; U 0 ) and WD   (X; U 0 ; W ). Obviously, (a') and (a") are
 (V:p; U 0 ; V ):p and  (W;  (X; U 0 ; W ):p):p;
respectively. Hence, by using JPR and (a'), we easily derive
 ( (X; V; W ):p; X; U 0 ; V; W ):p:
Now by using (a") and JTR we get  ( (X; V; W ):p; V; W ):p in J1 . By Hyp 4 we
have that for some derivable A1 ; . . . ; An , X   (A1 ; . . . ; An ).
III.1.3 W   (V:p; W 0 ) and WD   (X; U; W 0 ). Obviously, (a') and (a")
are  (U; V ):p and  (V:p; W 0 ;  (X; U; W 0 ):p):p, respectively. By JTR, we derive
 ( (X; U; W 0 ):p; U; W 0 ):p. By Hyp 4, X   (A1 ; . . . ; An ) and A1 ; . . . ; An for some
derivable A1 ; . . . ; An .
III.2 U   ( (X; Y ):p; U 0 ) and Y   ((V:p)D; U 0 ; W ). Now (a') is
 (( (X; V:p)D; U 0 ; W ):p; U 0 ; V ):p:
By using (a")  (W; D):p and JPR we derive
 ( (X; U 0 ; V ):p; X; U 0 ; W; (V:p)D):p:
Hence, by using JTR and (a'), we obtain  ( (X; U 0 ; V ):p; U 0 ; V ):p. Hence, X 
 (A1 ; . . . ; An ), by Hyp 4, for some derivable A1 ; . . . ; An .
III.3 W   ( (X; Y ):p; W 0 ) and Y   ((V:p)D; U; W 0 ). Now, obviously,
(a") is
 ( (X; (V:p)D; U; W 0 ):p; W 0 ; D):p:
From (a')  (U; V ):p, we obtain  (U; (V:p)D):D by Theorem 2, and
 ((V:p)D; Dp; U ):p
by JSU. Now by repeatedly using JPR, we easily derive
 ( (X; W 0 ; D):p; X; (V:p)D; U; W 0 ):p;
and hence
 ( (X; W 0 ; D):p; W 0 ; D):p;

201
176 A. Kron

by using JTR and (a"). By Hyp 4, X   (A1 ; . . . ; An ) for some derivable


A1 ; . . . ; An .
This completes the proof of the theorem.
COROLLARY There is no theorem of J of the form AABB .
Proof. Suppose that there are A and B such that AABB is derivable in
J. Since J is closed under uniform substitution, there are A1 and B1 such that
A1 A1 B1 B1 is derivable in J1 . By NOE, A1 A1 is derivable in J1 and hence in J,
contrary to NOID.
In fact, NOE is in J equivalent to NOID. For, suppose NOE and let A be a
formula such that AA is derivable in J; then AApp is derivable, contrary to NOE.
It is known that AABB is a theorem of E! ; hence the name NOE.
A corollary of NOE concerning contraction and the Peirce law is the following
Theorem 12. (A(AB))(AB) is derivable in J iff so is A; ABAA is derivable
in J iff so is AB.
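In arrow notation (a reading aid added here, not part of the original text), (A(AB))(AB) is (A → (A → B)) → (A → B), an instance of contraction, and ABAA is ((A → B) → A) → A, an instance of Peirce's law.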

NOE can be generalized to the following theorem.


Theorem 13. (a) (((X; Y ):p)p2k ; Y ):p is derivable iff X is nonempty and
every member of X is derivable.
Proof. If k = 0, the theorem is true by NOE.
Let k > 0 and proceed by induction on k . If Y is empty, we use Theorem 6.
Suppose that (a) is derivable in J1 . By NOID, X is nonempty. The verification that (a) is not an instance of ASU is left to the reader.
Case I (a) is obtained by JSU from (a')  (U; V ):p, where
 (( (X; Y ):p)p2k ; Y )   (U; (V:p)p):
I.1 (V:p)p  ( (X; Y ):p)p2k and U  Y ; obviously, either X or Y is empty.
But X is nonempty. If Y is empty, then by Theorem 6, X   (A1 ; . . . ; An ) for
some derivable A1 ; . . . ; An .
I.2 Y   ((V:p)p; Y 0 ) and U   ( (X; (V:p)p; Y 0 ):p; Y 0 ). Obviously, (a) is
obtained from (a')  (( (X; (V:p)p; Y 0 ):p)p2k ; V; Y 0 ):p. Since X is nonempty, there
is a member A of X . But as an instance of ASU we have  ( (A; V ):p; (V:p)p; A):p.
By using JPR we derive  ( (X; V; Y 0 ):p; X; (V:p)p; Y 0 ):p, and then by using JSU
we obtain  ( (X; V; Y 0 ):p; ( (X; (V:p)p; Y 0 ):p)p2k 1 ):p. Hence, by using JTR and
(a') we obtain  ( (X; V; Y 0 ):p; V; Y 0 ):p. Now we use NOE to conclude that X 
 (A1 ; . . . ; An ) for some A1 ; . . . ; An derivable in J1 .
Case II (a) follows by JPR from (a')  (U; D):p, where  (( (X; Y ):p)p2k ; Y )
  (U; CD; C ).

II.1 Y   (CD; C; Y 0 ) and U   (( (X; CD; C; Y 0 ):p)p2k ; Y 0 ). But

 ( (X; D; Y 0 ):p; X; CD; C; Y 0 ):p


is easily derivable in J1 . By JSU we derive
 ( (X; D; Y 0 ):p; ( (X; CD; C; Y 0 ):p)p2k 1 ):p
Hence, by using JTR and (a'), we obtain  ( (X; D; Y 0 ):p; D; Y 0 ):p. Hence, X 
 (A1 ; . . . ; An ), by NOE, for some A1 ; . . . ; An and n, and A1 ; . . . ; An are derivable
in J1 .
II.2 ( (X; Y ):p)p2k  CD and Y   (U; C ); since k > 0, D  p and Y is
empty. The theorem follows by Theorem 6.
II.3 ( (X; Y ):p)p2k  C and Y   (CD; Y 0 ); this is impossible.
Case III (a) follows by JG from (a')  (U; V ):p and (a")  (W; D):p, where
we have
 (( (X; Y ):p)p2k ; Y )   (U; W; (V:p)D):
III.1 ( (X; Y ):p)p2k  (V:p)D and Y   (U; W ). Hence, D  p and V 
( (X; U; W ):p)p2k 2 . We have (a')  (U; ( (X; U; W ):p)p2k 2 ):p and (a")  (W; p):p.
By using JTR we derive
 (( (X; U; W ):p)p2k 2 ; U; W ):p:
By induction hypothesis, X   (A1 ; . . . ; An ) for some derivable A1 ; . . . ; An .
III.2 U   (( (X; Y ):p)p2k ; U 0 ) and Y   ((V:p)D; U 0 ; W ). Now (a') is
 ((( (X; V:p)D; U 0 ; W ):p)p2k ; U 0 ; V ):p:
By (a")  (W; D):p and JPR we derive  ( (X; U 0 ; V ):p; X; (V:p)D; U 0 ; W ):p,
and then we use JSU to obtain  ( (X; U 0 ; V ):p; ( (X; (V:p)D; U 0 ; W ):p)p2k 1 ):p.
Hence, by using JTR and (a'), we obtain  ( (X; U 0 ; V ):p; U 0 ; V ):p. Hence, X 
 (A1 ; . . . ; An ), by NOE, for some derivable A1 ; . . . ; An .
III.3 W   (( (X; Y ):p)p2k ; W 0 ) and Y   ((V:p)D; U; W 0 ). Now, obvious-
ly, (a") is
 (( (X; (V:p)D; U; W 0 ):p)p2k ; W 0 ; D):p:
From (a')  (U; V ):p, we obtain  (U; (V:p)D):D by Theorem 2, and
 ((V:p)D; Dp; U ):p
by JSU. Now by repeatedly using JPR and JSU, we easily derive
 ( (X; W 0 ; D):p; ( (X; (V:p)D; U; W 0 ):p)2k 1 ):p;
and hence  ( (X; W 0 ; D):p; W 0 ; D):p; by using JTR and (a"). Now we use NOE to
conclude that X   (A1 ; . . . ; An ) for some derivable A1 ; . . . ; An .
This completes the proof of the theorem.
The difference between L and J1 is now clear: by L5, there is no theorem
of L of the form  (( (X; Y ):p)p2k ; Y ):p; by Theorem 13,  (( (X; Y ):p)p2k ; Y ):p is
derivable in J1 iff X is nonempty and every member of X is derivable in J1 .

Two open problems

Let us adjoin to J the axiom-schema pp. It is easy to prove that ASU, JSU,
and JPR are then redundant. The system J+ID is equivalent to RW→, defined by MP
and the following axiom-schemata:
ID AA
ASS A:ABB
TR AB:BC:AC
(the proof is omitted). It is then easy to show that A-B is not true for J+ID.
From A:ABB , by Theorem 2 we obtain ABB (AB ):A:AB . On the other hand,
A(AB ):ABB:AB is an instance of ASU. Thus there are distinct formulas C and D
such that both CD and DC are derivable in J+ID. It is therefore natural to raise
the following two questions:
Question 1. Is there any proper extension EX of TW→ such that A-B holds
for EX?
Question 2. Is there any proper extension EX of J such that NOID holds for
EX?

REFERENCES

[1] A.R. Anderson and N.D. Belnap, Entailment, the Logic of Relevance and Necessity, Vol. I, Princeton Univ. Press, 1975, §8.11.
[2] Y. Komori, Syntactical Investigations into BI Logic and BB′I Logic, Studia Logica 53 (1994), 397–416.
[3] A. Kron, A constructive proof of a theorem in relevance logic, Z. Math. Logik Grund. Math. 31 (1985), 423–430.
[4] E. Martin and R.K. Meyer, Solution to the P–W problem, J. Symbolic Logic 47 (1982), 869–887.
[5] R. Smullyan, First-Order Logic, Springer-Verlag, Heidelberg – Berlin – New York, 1968.

University of Belgrade (Received 13 02 1995)


Faculty of Philosophy
11000 Belgrade, Yugoslavia

Aleksandar Kron

THE LAW OF ASSERTION AND THE RULE OF
RESTRICTED PERMUTATION¹

The only connective in the propositional language that we are using is →. We write (AB) for (A → B); furthermore, ABC stands for (AB)C.
For any A the set c(A) is the smallest set satisfying (1) A ∈ c(A) and
(2): let B ∈ c(A) be such that C(DE) is a subformula of B, for some C,
D and E, and let B ∗ be obtained by substitution of D(CE) for C(DE), at
a single occurrence of C(DE) in B; then B ∗ ∈ c(A).
A and B are congruent (A ∼ B) iff B ∈ c(A). A ≁ B means B ∉ c(A).
For any A by A∗ we denote any formula B such that A ∼ B.
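As an illustration only (a sketch added here, not part of Kron's text; the representation and names are ours), the congruence relation can be computed mechanically: formulas are represented as variables (strings) or pairs (A, B) standing for A → B, c(A) is generated by closing {A} under the single swap C(DE) ⇒ D(CE), and A ∼ B is tested as B ∈ c(A). The closure is finite, since a swap changes neither the size nor the variables of a formula.

from collections import deque

def swaps(f):
    """All formulas obtained from f by one substitution of D(CE) for C(DE)."""
    if isinstance(f, str):          # a variable contains no subformula C(DE)
        return
    a, b = f
    if isinstance(b, tuple):        # f itself has the form C(DE): swap at the root
        d, e = b
        yield (d, (a, e))
    for a1 in swaps(a):             # swap inside the antecedent
        yield (a1, b)
    for b1 in swaps(b):             # swap inside the consequent
        yield (a, b1)

def c(f):
    """The set c(A): closure of {A} under single swaps."""
    seen, queue = {f}, deque([f])
    while queue:
        for h in swaps(queue.popleft()):
            if h not in seen:
                seen.add(h)
                queue.append(h)
    return seen

def congruent(a, b):
    """A ~ B iff B is in c(A)."""
    return b in c(a)

# Example: A(BC) ~ B(AC), but AB is not congruent to BA.
assert congruent(("A", ("B", "C")), ("B", ("A", "C")))
assert not congruent(("A", "B"), ("B", "A"))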
The propositional system considered herein is TRW→ +RP; it is de-
fined by the following axiom-schemata and rules:
ID ` AA
ASU ` AB(BC(AC))
APR ` BC(AB(AC))
SU If ` AB, then ` BC(AC)
PR If ` BC, then ` AB(AC)
TR If ` AB and ` BC, then ` AC
RP If ` AB, then ` A∗ B ∗
Let TRW→ +RP–ID be obtained from TRW→ +RP by omitting ID.
Applying the results of [1], we established the following facts:
(a) (No identity, NOID) There is no theorem of TRW→ +RP–ID of
the form AA.
(b) (No assertion, NOASS1 ) There is no theorem of TRW→ +RP–ID
of the form A(ABB).
(c) (NOASS2 ) There is no theorem of TRW→ +RP–ID of the form
ABBA.
1 This research has been supported by the Science Fund of Serbia (grant number
0401A) through the Mathematical Institute in Belgrade.

(d) (NOE2 ) There is no theorem of TRW→ +RP–ID of the form
AABB.
In [2] it is proved that
(e) For any A and B such that A 6∼ B, ` AB in TRW→ +RP iff ` AB
in TRW→ +RP–ID.
As a consequence of (e) and NOID we have
(f) (CONGR) A ∼ B iff both ` AB and ` BA in TRW→ +RP.
Thus, the congruence relation ∼ is determined by logical means only
- by provability in TRW→ +RP.
In the presence of SU, NOASS1 and NOASS2 are equivalent, and we
write NOASS for any of them.
As a consequence of CONGR and (a) - (d), NOE holds for TRW→ +RP.
Also, we have
(g) (NOABB) There is no theorem of TRW→ +RP of the form ABB.
TRW→ +RP is not closed either under modus ponens or under the
following permutation rule: if ` A.BC then ` B.AC.
The main result of [2] is
(h) In TRW→ +RP CONGR and NOASS are equivalent.
The proof is obtained by a double induction applied to a normal form
theorem.
In this paper we give (1) a Gentzen-style formulation GTRW→ +RP
of TRW→ +RP and, as a consequence of NOASS, we prove (2) a normal
form theorem for GTRW→ +RP. Moreover, we prove that (3) NOASS and
the normal form theorem are equivalent.
(1) Let X and Y be finite (possibly empty) sequences of formulas;
by X, Y (alternatively, (X, Y )) we understand the sequence obtained by
writing down in order all members of X, and then by writing down in order
all members of Y . A formula standing alone is a one element sequence.
As in [1], if X is the sequence A1 , . . . , An , then by X.B we understand
the formula

A1 (A2 . . . (An B) . . .).


By π(X) we denote the set of all permutations of X, and by π(X).B
we denote any formula Y.B such that Y ∈ π(X).
Let X = (A1 , . . . , An ); by X ∗ we denote the sequence (A∗1 , . . . , A∗n ).
By the beginning of a nonempty sequence X, in symbols [X], we un-
derstand the first formula of the sequence X.
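Again as an illustration only (not in the original), the sequence notation can be rendered in the same pair representation used in the earlier sketch: dot builds X.B as the right-nested implication A1(A2 ... (An B) ...), and pi_dot collects the formulas π(X).B over all permutations of X.

from itertools import permutations

def dot(xs, b):
    """X.B: the right-nested implication A1(A2 ... (An B) ...) for X = (A1, ..., An)."""
    for a in reversed(xs):
        b = (a, b)
    return b

def pi_dot(xs, b):
    """All formulas Y.B with Y a permutation of the sequence X."""
    return {dot(ys, b) for ys in permutations(xs)}

# Example: for X = (A, B), X.C is A -> (B -> C), and pi(X).C also contains B -> (A -> C).
assert dot(("A", "B"), "C") == ("A", ("B", "C"))
assert ("B", ("A", "C")) in pi_dot(("A", "B"), "C")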

The axioms of GTRW→ +RP are all formulas of the form
ID pp, for any variable p
The only rule of GTRW→ +RP is
RPG If ` (X, A, Y ).p and ` (Z1 , C, Z2 ).q,
then ` π(X ∗ , Z1∗ , Z2∗ , ((Y.p)C)∗ , A∗ ).q
for any π(X ∗ , Z1∗ , Z2∗ , ((Y.p)C)∗ , A∗ ).q such that the following condition is
satisfied: the beginning of π(X ∗ , Z1∗ , Z2∗ , ((Y.p)C)∗ , A∗ ) either is [X ∗ ] or
[Z1∗ ] or ((V.r)C)∗ , as we please.
(2) (NORMAL FORM THEOREM, NMF): For any A and any node
S in any derivation of AA in GTRW→ +RP there is a formula B such that
S ∼ BB.
(3) The equivalence of NOASS (and hence CONGR) and NMF sug-
gests that CONGR and (e) are perhaps equivalent.

References
[1] A. Kron, Identity and Permutation, Publication de l’Institut Mathématique,
Nouvelle série, Tome 57 (71) (1995), pp. 165-178.
[2] A. Kron, Between TRW→ and RW→ , to appear.

Department of Philosophy
University of Belgrade
Yugoslavia

PUBLICATIONS DE L'INSTITUT MATHÉMATIQUE
Nouvelle série, tome 63 (77), 1998, 9–20

BETWEEN TW→ AND RW→

Aleksandar Kron

To the memory of V. A. Smirnov (1931–1996)


Communicated by Žarko Mijajlović

Abstract. We investigate some pure implicational systems placed between the implicational fragments TW→ and RW→ of the well-known relevance systems TW and RW. For one of them, TRW→+RP, we prove (1) and (2):
(1) if both ⊢ A → B and ⊢ B → A in TRW→+RP, then B can be obtained from A by substitution of occurrences of formulas of the form D → (C → E) for some occurrences of subformulas of A of the form C → (D → E) (CONGR);
(2) CONGR is equivalent to NOASS: for any A and B, ⊬ A → ((A → B) → B) in TRW→+RP.
CONGR is a generalization of the solution to the P–W problem, solved for TW→ in [6] (cf. also [1]–[4] for other solutions).
The equivalence of CONGR and NOASS is a generalization of the Dwyer–Powers theorem for TW→, to the effect that the P–W problem is equivalent to NOID: there is no theorem of TW→–ID of the form AA.
The proof of the equivalence of CONGR and NOASS is obtained by double induction applied jointly with a normal form theorem.

1. Introduction

The only connective in the propositional language investigated here is →. We write (AB) for (A → B); furthermore, ABC and A:BC stand for (AB)C and A(BC), respectively.
For any A the set c(A) is the smallest set satisfying (1) and (2): (1) A ∈ c(A); (2) let B ∈ c(A) be such that C:DE is a subformula of B, for some C, D and E,
AMS Subject Classification (1991): Primary 03B46
Supported by the Science Fund of Serbia, grant number 04M01A, through the Mathematical Institute in Belgrade.


and let B∗ be obtained by substitution of D:CE for C:DE, at a single occurrence of C:DE in B; then B∗ ∈ c(A).
A and B are congruent (in symbols A ∼ B) iff B ∈ c(A). A ≁ B means B ∉ c(A).
For any A, by A∗ we denote any formula B such that A ∼ B.
In the sequel several sub-systems of RW→ are investigated. If S and T are any two of them, we write (S ⊂ T), S ⊆ T, and S = T if the set of theorems of S is a (proper) subset of or equal to the set of theorems of T, respectively.
Let U be either an axiom-scheme or a rule; if U is adjoined to (deleted from) a system S, the result is denoted by S+U (S–U).
RW→ can be defined by modus ponens (MP) and the following axiom-schemata:
ID AA
ASU AB:BC:AC
APR BC:AB:AC
AP A(BC):B:AC
An equivalent formulation of RW→ is obtained by substitution of the axiom-scheme A:ABB (the axiom-scheme of assertion, ASS) for AP (the axiom-scheme of permutation). Sometimes it is important to distinguish between these two formulations; on such occasions the first will be called RW→AP and the second RW→ASS.
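Spelled out with explicit arrows (a reading aid added here, not part of the original text), these schemata are: ID is A → A, ASU is (A → B) → ((B → C) → (A → C)), APR is (B → C) → ((A → B) → (A → C)), AP is (A → (B → C)) → (B → (A → C)), and ASS is A → ((A → B) → B).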
RW→ is closed under substitution of equivalents and, hence, under the rule of permutation:
P If ⊢ A, then ⊢ A∗
Also, RW→ is closed under the rules
ASS1 If ⊢ A, then ⊢ ABB
ASS2 If ⊢ A and ⊢ B1: ... :Bk: ... :BnC, then ⊢ B1: ... :ABk: ... :BnC.
The closure under ASS1 follows by ASS and MP, and the closure under ASS2 by P, ASS1 and TR. On the other hand, when we have ID, ASS1 is obtained by ASS2.
TW→ is defined by MP and the axiom-schemata ID, ASU and APR. It has the Anderson–Belnap property (A-B): if both ⊢ AB and ⊢ BA in TW→, then A and B denote the same formula.
A-B is equivalent to the Dwyer–Powers property (D-P): for any A, ⊬ AA in TW→–ID.
TW→ and TW→–ID have alternative formulations TRW→ and TRW→–ID, respectively, obtained by deleting MP and by adjoining the following rules instead:
SU If ⊢ AB, then ⊢ BC:AC
PR If ⊢ BC, then ⊢ AB:AC
TR If ⊢ AB and ⊢ BC, then ⊢ AC


TW→+P is an equivalent formulation of RW→. On the other hand, adding P to TRW→ does not suffice to produce RW→; we must add ASS1 as well.
Theorem 1.1. TW→+P = TRW→+P+ASS1.
Proof. It was proved in [5, Theorem 5] that TW→+P–ID = TRW→+P+ASS1–ID. The inductive proof given there was to the effect that TRW→+P+ASS1–ID is closed under the following rule of Ackermann (and hence under MP):
if
(a) ⊢ Ai and (b) ⊢ A1: ... :Ai−1:Ai:Ai+1: ... :Anp,
then
(c) ⊢ A1: ... :Ai−1:Ai+1: ... :Anp.
The induction is on the weight of (b) and has to be extended here by considering ID. This is easy.
Can we substitute the rules SU, PR and TR for MP in RW→AP and RW→ASS such that the resulting systems are equivalent to the old ones and to each other?
The negative answer is a surprise. We shall show that these new systems are not closed under MP. Moreover, they are not equivalent to each other. This shows that between TW→ and RW→ there is more room for interesting intermediate systems than we believed there to be. One of them is TRW→+RP; we shall show that it enjoys CONGR, a property analogous to A-B: if both ⊢ AB and ⊢ BA in TRW→+RP, then A ∼ B. This property is not shared by all systems between TW→ and RW→; for example, ⊢ AA:AAAA and ⊢ AAAA:AA in TRW→+P; obviously, AA ≁ AAAA.
2. The intermediate systems

Let us define the rule of restricted permutation:
RP If ⊢ AB, then ⊢ A∗B∗
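For instance (an illustration added here, not in the original): since A(BC) ∼ B(AC), RP takes ⊢ (A(BC))D to ⊢ (B(AC))D; what RP does not license is permuting at the main connective, e.g. passing from ⊢ A(BC) to ⊢ B(AC), which is exactly what the unrestricted rule P allows.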
Since TRW→+AP is closed under substitution of equivalents, it is easy to prove
Theorem 2.1. TRW→+RP = TRW→+AP.
Theorem 2.2. TW→+RP = RW→.
Proof. It is clear that TW→+RP ⊆ RW→. In TW→+RP, ID and RP yield AP. Also, TW→+RP is closed under MP. Hence, RW→ = RW→AP ⊆ TW→+RP.
The main property of TRW→+RP is given in the next theorem.
Theorem 2.3. If A ≁ B, then
⊢ AB in TRW→+RP iff ⊢ AB in TRW→+RP–ID.


Proof. Suppose that A 6 B and proceed by induction on theorems of


TRW ! +RP.
If AB is an axiom of TRW! +RP, then AB is an axiom of TRW! +RP{ID.
Let AB = CD:ED, ` CD:ED by ` EC and SU; since CD 6 ED, it follows
that E 6 C ; by induction hypothesis, ` EC in TRW! +RP{ID, . Hence, by SU
` CD:ED in TRW! +RP{ID.
In the case of PR we proceed in a similar way.
If ` AB by ` AC , ` CB and TR, then either A 6 C or C 6 B . If
both A 6 C and C 6 B , then by induction hypothesis, ` AC and ` CB in
TRW! +RP{ID; hence, ` AB in TRW! +RP{ID by TR.

If A  C , by induction hypothesis ` CB in TRW! +RP{ID; we obtain


` AB by ` CB and RP. If C  B , by induction hypothesis ` AC in TRW! +RP{
ID; we obtain ` AB by ` AC and RP.
Let AB = C  D and ` C  D by ` CD and RP; since C  6 D , we have
C 6 D; by induction hypothesis, ` CD in TRW! +RP{ID, and ` C  D by RP.
Since TRW! +RP{ID  TRW! +RP, the theorem is proved.
It was shown in [4] that TRW→+P+ASS1–ID (called there K) has an equivalent Gentzen-style formulation J that contains no theorem of any of the forms AA, A:ABB, ABBA or AABB. We shall state this fact in the form of a theorem, for further reference.
Theorem 2.4. In TRW→+P+ASS1–ID:
(a) (No identity, NOID) There is no theorem of the form AA.
(b) (No assertion, NOASS1) There is no theorem of the form A:ABB.
(c) (NOASS2) There is no theorem of the form ABBA.
(d) (NOE1) ⊢ (A1:A2: ... :AnB)B iff ⊢ A1, ..., ⊢ An.
(e) (NOE2) There is no theorem of the form AABB.
NOASS1 and NOASS2 are equivalent whenever we have SU. In the sequel we write NOASS both for NOASS1 and NOASS2.
Since TRW→+RP–ID ⊆ TRW→+P–ID ⊆ TRW→+P+ASS1–ID, NOID, NOASS and NOE2 hold for TRW→+RP–ID as well.
Now we can prove CONGR.
Theorem 2.5. (CONGR) If both ⊢ AB and ⊢ BA in TRW→+RP, then
A ∼ B.
` AB and ` BA in TRW! +RP, and that
Proof. Suppose that both
A 6 B .
By Theorem 2.3, ` AB and ` BA in TRW! +RP{ID. Hence, ` AA in
TRW! +RP{ID by TR, contrary to NOID.

The Anderson{Belnap property A-B is a special case of CONGR.


Another surprise is that NOASS holds for some systems having the axiom-
scheme ID.


Theorem 2.6. For any A and any B, ⊬ A:ABB in TRW→+RP.


Proof. Suppose that there are A and B such that ` A:ABB in TRW! +RP;
by A 6 ABB and Theorem 2.3, ` A:ABB in TRW! +RP {ID, contrary to
NOASS.
Theorem 2.6 shows that TRW! +RP is closed neither under MP nor under
P; otherwise, from the axiom pp:pq:pq we would obtain pp:p:pqq by RP and then
p:pqq by pp and MP.

Theorem 2.7. In TRW! +RP CONGR implies NOASS.


Proof. Suppose that ` A:ABB in TRW! +RP, for some A and B . By SU
we obtain (a) ` ABB (CB ):A:CB for any C . Also, (b) ` C (AB ):ABB:CB by
ASU. Hence, (c) ` A(CB ):ABB:CB by RP. Now by (a), (c) and CONGR we have
A(CB )  ABB:CB , which is impossible. Hence, in TRW! +RP CONGR implies
NOASS.
Let us compare TRW→+AP and TRW→+ASS.
Theorem 2.8. TRW→+AP ⊂ TRW→+ASS.
Proof. As in the proof of Theorem 2.7, we have (a) and (b). Hence, we have
` B (AC ):A:BC in TRW! +ASS, by TR; thus TRW! +AP  TRW! +ASS. By
NOASS, TRW! +AP = 6 TRW! +ASS.
Since TRW! +ASS is closed under substitution of equivalents, it is closed
under RP. Moreover, we have
Theorem 2.9. TRW! +ASS is closed under P.
Proof. By induction on theorems of TRW! +ASS.
Suppose that ` D:EF in TRW! +ASS. If it is an instance of ID, then
` E:DF by ASS, and conversely, if ` D:EF by ASS, then ` E:DF by ID.
If ` D:EF by ASU (APR), then ` E:DF by APR (ASU).
Suppose that ` D:EF by (a) ` ED1 and SU, where D = D1 F . Now (b)
` D1 :D1 F F is an axiom; hence, ` E:DF by (a), (b) and TR.
Suppose that ` D:EF by (a) ` D1 F and PR, where D = ED1 . By (a) and
PR we have (b) ` ED1 D1 :ED1 F . On the other hand, (c) ` E:ED1 D1 by ASS.
Therefore, ` E:DF by (c), (b) and TR.
Suppose that ` D:EF by (a) ` DG, (b) ` G:EF and TR. By induction
hypothesis, (c) ` E:GF . On the other hand, (d) ` GF:DF by (a) and SU; hence,
` E:DF by (c), (d) and TR.
Therefore, if ` D:EF in TRW! +ASS, then ` E:DF in TRW! +ASS.
This, together with the closure under RP gives us the closure under P.
Corollary. TRW! +ASS = TRW! +P.


Proof. Obviously, TRW! +ASS  TRW! +P, by ID and P. TRW! +ASS


is closed under P, by Theorem 2.9; hence, TRW! +P  TRW! +ASS.
It is interesting to notice that TRW! +ASS{ID 6= TRW! +P{ID. For,
NOASS holds in TRW! +P{ID, but in TRW! +ASS{ID we have the axiom-
scheme ASS. Hence, it is easy to prove AP in TRW! +ASS{ID (look at (a), (b)
and TR in the proof of Theorem 2.7). Thus, some instances of ID are theorems of
TRW! +ASS{ID. However, we can prove neither pp nor pq:pq nor pqr:pqr nor etc.

By induction on theorems one can prove that TRW! +ASS contains no the-
orem of the form ABp; hence, it contains no theorem of the form App. This shows
that TRW! +ASS is not closed under ASS1. It follows that TRW! +ASS is not
closed under MP (otherwise, by MP and ASS, it is closed under ASS1).
It was shown in [3] that TRW→+P–ID has an equivalent Gentzen-style formulation L that enjoys the properties NOID and NOASS and contains no theorem of the form ABB. But we have more than that.
Theorem 2.10. (NOABB) For any A and any B, ⊬ ABB in TRW→+RP.
Proof. Suppose that ` ABB in TRW! +RP, for some A and B . Since
B 6 AB , by Theorem 2.3, ` ABB in TRW! +RP{ID  TRW! +P{ID, which
is impossible.
What happens when either ASS1 or MP is added to any of these systems?
TRW! +ASS1 and TRW! +ASS1{ID are closed under MP. This is proved
by induction on the weight of the major premiss of MP. In the rst of these systems
we have the axiom-scheme AABB { the characteristic axiom-scheme of EW! ; in
the latter we have NOE2 .
It is easy to show that TRW! +ASS+ASS1 and TRW! +ASS+ASS1{ID are
closed under MP. Since AP is a theorem of TRW! +ASS+ASS1{ID, the systems
TRW! +ASS+ASS1 and TRW! +ASS+ASS1{ID are closed under P. Therefore,
TRW! +ASS+ASS1 = TRW! +P+ASS1 = RW! .

However, in TRW! +P+ASS1{ID we have both NOID and NOASS, but


none of them in TRW! +ASS+ASS1{ID. Therefore, TRW! +P+ASS1{ID 
TRW! +ASS+ASS1{ID.

Adding ASS1 to TRW! +RP destroys CONGR. Since (a) ` AA(AA):AA


and (b) ` AA:AA:AA in TRW! +RP+ASS1, we see that CONGR does not
hold here. By (a) and SU we have (c) ` AA(AA):(AA:AA):AA; by (b), (c) and
TR we obtain ` AA:(AA:AA):AA { an instance of ASS. Therefore, NOASS does
not hold in TRW! +RP+ASS1. Since NOASS holds for TRW! +RP+ASS1{ID,
there are A and B such that ` AB in TRW! +RP+ASS1, A 6 B and 6` AB in
TRW! +RP+ASS1{ID.

Adding MP to TRW! +RP would collapse it to RW! . Adding MP to


TRW! +RP+ASS1{ID destroys NOABB. For example, let A = pp:pp:pp and B =
(pp:pp)p:ppp; then A and AB are instances of ASU. By RP and AB:BC:AC we get
(a) ` AB:A:BCC ; hence, by (a) and MP, ` BCC .


Let us summarize the comparison of the systems investigated here.
TW→ = TRW→ ⊂ TRW→+RP = TRW→+AP ⊂ TRW→+P = TRW→+ASS ⊂ TRW→+P+ASS1 = TRW→+ASS+ASS1 = TW→+RP = TW→+P = RW→ = RW→AP = RW→ASS.
Also, TW→–ID = TRW→–ID ⊆ TRW→+RP–ID ⊆ TRW→+P–ID ⊆ TRW→+P+ASS1–ID.

Only TRW→+RP has the property stated in CONGR; it is not shared by the stronger systems between TRW→ and RW→ investigated here.
The congruence relation ∼ defined for formulas is determined here by logical means only, by provability in TRW→+RP.
3. The equivalence of CONGR and NOASS

The final surprise is the fact that in TRW→+RP CONGR and NOASS are equivalent. To prove it we need some other facts.
In the sequel ` A means ` A in TRW! +AP.
Theorem 3.1. If either ` Ap or ` pA, then A = p; if ` B:pp, then B = pp.
Proof. By induction on theorems.
Proofs in TRW! +AP can be written in a normal form.
Theorem 3.2. For any proof of a theorem containing n applications of TR
there is a proof of the same theorem containing n applications of TR such that no
application of TR precedes an application of another rule .
Proof. If ` AC by (a) ` AB , (b) ` BC and TR, and then ` CD:AD by
SU, then (c) ` BD:AD by (a) and SU, as well as (d) ` CD:BD by (b) and SU;
hence, ` CD:AD by (c), (d) and TR.
In a similar way we take care of PR.
If the old proof contains n applications of TR, so does the new one.
In the sequel we assume that in proofs of theorems no application of TR
precedes an application of another rule.
The sequence of theorems
` AB:C1D1; ` C1D1:C2D2; . . . ; ` Cn−1Dn−1:CnDn; ` CnDn:EF

is called a transitive chain (TR-chain ) from AB to EF i TR is not applied in the


proof of any member of the chain.
Theorem 3.3. If (a) ` AB and (b) ` BC such that (a) is obtained by an
application of SU (PR) in the last step and (b) is obtained by an application of PR
(of SU) in the last step, then there is a proof of ` AC by TR such that the left
premiss (a') in this application of TR is obtained by PR (SU) in the last step and
the right premiss (b') is obtained by SU (PR) in the last step .


Proof. Suppose that (a) ` DE:F E by (a') ` F D and SU, and that (b)
` F E:F G by (b') ` EG and PR. We have (b') ` DE:DG by (b') and PR, and
` DG:F G by (a') and SU.
Suppose that (a) ` ED:EF by (a') ` DF and PR, and (b) ` EF:GF by
(b') ` GE and SU. We have ` ED:GD by (b') and SU, and ` GD:GF by (a')
and PR.
Theorem 3.4. If (a) ` AB and (b) ` BC such that (a) is obtained by an
application of SU in the last step and (b) is an instance of AP, then there is a
proof of ` AC by TR such that the left premiss (a') is an instance of AP and the
right premiss (b') is obtained by PR in the last step .
Proof. Suppose that (a) ` D(EG):F:EG by (a") ` F D and SU, and that (b)
` F (EG):E:F G by AP. We have (a') ` D(EG):E:DG by AP and (b") ` DG:F G
by (a") and SU, and then (b') ` E (DG):E:F G by (b") and PR.
Theorem 3.5. If (a) ` AB and (b) ` BC such that (a) is obtained by (a')
and PR in the last step , (b) is an instance of AP, and (a') is obtained from (a")
either by SU or by PR, then there is a proof of ` AC by TR such that the left
premiss (c) is an instance of AP and the right premiss (b') is obtained either by
SU or by PR in the last step, respectively .
Proof. Suppose that (a") ` ED, (a') ` DF:EF , (a) ` C (DF ):C:EF and
(b) ` C (EF ):E:CF . We have (c) ` C (DF ):D:CF by AP, and ` D(CF ):E:CF
by (a") and SU.
Suppose that (a") ` ED, (a') ` F E:F D, (a) ` C (F E ):C:F D and (b)
` C (F D):F:CD. We have (c) ` C (F E ):F:CE by AP, and ` F (CE ):F:CD by
(a") and PR applied twice.
Theorem 3.6. Let
` AB:D1E1; ` D1E1:D2E2; . . . ; ` Dn−1En−1:DnEn; ` DnEn:CD

be a TR-chain from AB to CD; if no member of the chain is an axiom, then either


(a) ` CA and B = D;
if all members of the chain are obtained by SU in the last step, or
(b) ` BD and A = C;
if all members of the chain are obtained by PR in the last step, or
(c) ` CA and ` BD:
otherwise .


Proof. If all members of the TR-chain from AB to CD are obtained by SU


(PR), then ` CA and B = D (A = C and ` BD); the proof is by induction on
the number of members of the chain.
Apply Theorem 3.3. Let every member in
` AB:D1E1; ` D1E1:D2E2; . . . ; ` Dk−1Ek−1:DkEk
be obtained by SU, and let every member in
` DkEk:Dk+1Ek+1; . . . ; ` Dn−1En−1:DnEn; ` DnEn:CD
be obtained by PR; then ` DkA, B = Ek, C = Dk, and ` EkD.

Corollary. Let
` AB:D1E1; ` D1E1:D2E2; . . . ; ` Dn−1En−1:DnEn; ` DnEn:CD

be a TR-chain from AB to CD; if no member of the chain is an instance of either


ASU or APR, then either ` CA and B  D (in case every member of the chain
is either an instance of AP or obtained by SU in the last step ) or else ` BD and
A  C (in case every member of the chain is either an instance of AP or obtained
by PR in the last step ) or ` CA and ` BD otherwise .
Proof. By Theorems 3.4 and 3.5, we may assume that all instances of AP in
the TR-chain from AB to CD precede the members obtained by either SU or PR.
Hence, apply Theorem 3.6.
Now we can prove that NOASS implies CONGR.
Theorem 3.7. If ` AB , ` BA, then A  B .
Proof. Assume NOASS in TRW! +AP (forgetting that it is already proved)
and ` AB and ` BA. If either A = p or B = p or A = pp or B = pp, then A = B ,
by Theorem 3.1.
By Theorem 3.2, we have the TR-chain ()

` AB1; ` B1B2; . . . ; ` Bk−1Bk; ` BkBk+1; ` Bk+1Bk+2; . . . ; ` Bm−1Bm; ` BmA

Lemma 3.8 No member of () is an instance of either ASU or APR.


Proof of the lemma. Suppose that ` DE:EF:DF is a member of ();
then we have (a) ` (EF:DF ):DE as well. But ` EF:DE:DF and (b)
` (DE:DF )(DF ):EF:DF by APR and SU; hence, ` (DE:DF )(DF ):DE by (a),
(b) and TR, contrary to NOASS.
Let ` EF:DE:DF be a member of (); then (a) ` (DE:DF ):EF as well.
Now ` DE:EF:DF and (b) ` (EF:DF )(DF ):DE:DF by ASU and SU; hence,
` (EF:DF )(DF ):EF by (a), (b) and TR, contrary to NOASS.


Lemma 3.9 No member of the chain () is obtained by PR in the last step
from either ASU or APR.
Proof of the lemma. Let (a') ` DE:EF:DF , (a) ` G(DE ):G:EF:DF by
(a') and PR, and let (a) be a member of (); then (b) ` G(EF:DF ):G:DE , for
there is a member ` HI of () such that H  I   G(EF:DF ):G:DE . Now (c)
` EF:G(DE ):G:DF by ` EF:DE:DF , ` DE (DF ):G(DE ):G:DF and TR. Hence,
(d) ` (G(DE ):G:DF )(DF ):EF:DF by (c) and SU, (e)
` G((G(DE ):G:DF )(DF )):G:EF:DF
by (d) and PR, (f) ` ((G(DE ):G:DF ):G:DF ):G:EF:DF by (e) and RP, and, even-
tually, ` ((G(DE ):G:DF ):G:DF ):G:DE by (f), (b) and TR, contrary to NOASS.
Suppose that (a') ` EF:DE:DF , (a) ` G(EF ):G:DE:DF by (a') and PR,
and that (a) is a member of (); then (b) ` G(DE:DF ):G:EF , for there is
a member ` HI of () such that H  I   G(DE:DF ):G:EF . Now (c)
` DE:G(EF ):G:DF by ` DE:EF:DF , ` EF (DF ):G(EF ):G:DF and TR, (d)
` G(EF )(G:DF )(DF ):DE:DF by (c) and SU, (e)
` G((G(EF ):G:DF )(DF )):G:DE:DF
by (d) and PR, (f) ` ((G(EF ):G:DF ):G:DF ):G:DE:DF by (e) and RP,
and, eventually, ` ((G(EF ):G:DF ):G:DF ):G:EF by (f), (b) and TR, contrary
to NOASS.
Returning to the proof of the theorem, proceed by double induction: on the
degree of A and on the length l of the TR-chain (). Suppose that the theorem
is true for any formula of degree smaller than the degree of A and any TR-chain
(Hyp 1), and for A and any TR-chain of length smaller than l (Hyp 2).
Let us analyze the TR-chain (); by theorems 3.2-6 and lemmas 3.8-9, there is
a TR-chain () from A to A such that no member of () is an instance of either ASU
or APR. Moreover, we may assume that all instances of AP precede all instances
obtained by either SU or PR in the last step.
If the member ` Bm A of () is an instance of AP, so are all members of ()
and the theorem is proved.
Suppose that
` AB1 ; ` B1 B2 ; . . . ; ` Bk 1 Bk ;
are instances of AP, and that
`B k Bk+1 ; ` B +1 B +2 ; . . . ; ` B
k k m 1 Bm ; `Bm A

are obtained either by SU or by PR in the last step. By Theorem 3.3 all applications
of SU in the last steps precede all applications of PR in the last step.
Let A = A1 :A2 A3 and let Bk = Bk1 :Bk2 Bk3 , . . . , Bm = Bm
1 :B 2 B 3 .
m m

Case I A = Bk .


I.1 Bm A is obtained by SU; hence, ` A1 Bm 1 and A A = B 2 B 3 . Also,


2 3
all members of ` Bk+1 Bk+2 ; . . . ; ` Bm 1 Bm are obtained by SU in the last step.
m m

Hence, ` Bk1+1 A1 and A2 A3 = Bk2+1 Bk3+1 . By Theorem 3.6 ` Bm 1 B 1 . We have


k+1
` A1 Bm and ` Bm A1 ; by Hyp 1 A1  Bm , A  Bm and there is a TR-chain from
1 1 1
A to A of length l 1. By Hyp 2, A  B1  . . .  Bm  A.
I.2 Bm A is obtained by PR; hence A1 = Bm 1 and ` B 2 B 3 :A A .
m m 2 3
I.2.1 ` ABk+1 by PR; hence, A1 = Bk1+1 , ` A2 A3 :Bk2+1 Bk3+1 and all mem-
bers of ` Bk+1 Bk+2 ; . . . ; ` Bm 1 Bm are obtained by PR in the last step. Hence,
` A2 A3 :Bk2+1 Bk3+1 ; . . . ` Bm2 Bm3 :A2 A3 . By Hyp 1, A2 A3  Bk2+1 Bk3+1     
Bm2 B 3 and A  B  . . .  B  A.
m 1 m

I.2.2 ` ABk+1 by SU; hence, ` Bk1+1 A1 and A2 A3 = Bk2+1 Bk3+1 . By


Theorem 3.6, ` Bm 1 B1
k+1
and ` Bk2+1 Bk3+1 :Bm 2 B 3 . We have ` A B 1
m 1 k+1 and
` Bk+1 A1 . By Hyp 1, A1  Bk+1 , A  Bk+1 and there is a TR-chain from A to A
1 1
of length l 1. By Hyp 2, A  B1  . . .  Bm  A.
Case II A2 :A1 A3 = Bk1 :Bk2 Bk3 .
II.1 Bm A is obtained by SU and so are all members of ` Bk+1 Bk+2 ; . . . ,
` Bm 1 Bm ; hence, A1 A3 = Bk2+1 Bk3+1 = . . . = Bm2 Bm3 = A2 A3 . Therefore,
A1 = A2 and we may proceed as in Case I.
II.2 Bm A is obtained by PR; hence A1 = Bm 1 and ` B 2 B 3 :A A .
m m 2 3
II.2.1 ` Bk Bk+1 by PR; hence, A2 = Bk+1 , ` A1 A3 :Bk+1 Bk3+1 and all
1 2
members of ` Bk+1 Bk+2 ; . . . ; ` Bm 1 Bm are obtained by PR in the last step.
Hence, ` A1 A3 :Bk2+1 Bk3+1 ; . . . ; ` Bm 2 B 3 :A A . By Theorem 3.6, A = A and
m 2 3 1 2
we may proceed as in Case I.
II.2.2 ` Bk Bk+1 by SU; hence, ` Bk1+1 A2 and A1 A3 = Bk2+1 Bk3+1 . By
Theorem 3.6, ` Bm 1 B1
k+1
and ` Bk2+1 Bk3+1 :Bm 2 B 3 . Hence, ` A B 1
m 1 k+1 and
` Bk+1 A2 . We get ` A1 A2 and hence ` A2 A3 :A1 A3 by SU. But we also have
1
` A1 A3 :A2 A3 . By Hyp 1, A1 A3  A2 A3 . Hence, A1  A2 and we may proceed as
in Case I.
This completes the proof of the theorem.

4. A concluding remark

It is both logically and philosophically interesting that substitution of formulas of the form B:CD for subformulas of the form C:BD in a formula A can be identified with CONGR, with the derivability of certain formulas in the weak logical system TRW→+RP.
Also, it is interesting that the same substitution can be identified with NOASS, with the non-derivability of certain formulas.


References
[1] A.R. Anderson, N.D. Belnap, and J.M. Dunn, Entailment, the Logic of Relevance and Necessity, Vol. II, Princeton Univ. Press, 1992, §66.
[2] Y. Komori, Syntactical Investigations into BI Logic and BB′I Logic, Studia Logica 53 (1994), 397–416.
[3] A. Kron, A constructive proof of a theorem in relevance logic, Z. Math. Logik Grund. Math. 31 (1985), 423–430.
[4] A. Kron, Identity and Permutation, Publ. Inst. Math. (Beograd) (N.S.) 57(71) (1995), 165–178.
[5] A. Kron, Identity, Permutation and Binary Trees, Filomat (Niš) 9:3 (1995), 765–781.
[6] E. Martin and R.K. Meyer, Solution to the P–W problem, J. Symbolic Logic 47 (1982), 869–887.

Filozofski fakultet (Received 15 10 1997)


Univerzitet u Beogradu
11000 Beograd
Yugoslavia
krons@mi.sanu.ac.yu

Algebra and Logic, Vol. 38, No. 4, 1999

A SEMANTICS FOR THE FIRST QUARTET BY T. S. ELIOT

A. Kron* UDC 510.64

We construct a semantics for tense logic based on the following central concepts: (a) state of affairs, (b) is simultaneous or earlier than (not later than), (c) is accessible, and (d) is realized, in which a bit of T. S. Eliot's lyrics is interpreted.

INTRODUCTION

The great poet of the English language Thomas Stearns Eliot (1888–1965), who obtained the Nobel prize in 1948, wrote in his poem "Burnt Norton," first published in 1936 (Collected Poems 1909–1935), and then in 1944 (Four Quartets):
Time present and time past
Are both perhaps present in time future,
And time future contained in time past.
If all time is eternally present
All time is unredeemable.

What might have been and what has been


Point to one end, which is always present.
In this article we construct a semantics in which the quoted verses are interpreted and proved true. Let
α1 = Time present and time past are both perhaps present in time future,
α2 = Time future is contained in time past,
α3 = All time is eternally present,
α4 = All time is unredeemable,
α5 = What might have been and what has been point to one end,
α6 = This end is always present.
The quoted verses are the following molecular sentences: α1 ∧ α2, α3 ⇒ α4, and α5 ∧ α6.
Take a molecular sentence φ(αi, αi+1), i ∈ {1, 3, 5}: the interpretations of αi and αi+1 are found in two nonidentical but isomorphic structures 𝔄 and 𝔄′, respectively. It is a common mathematical practice to identify isomorphic structures; if we identify 𝔄 and 𝔄′, we obtain an interpretation of φ(αi, αi+1) in a single structure. After all, Eliot's verses are poetry, not mathematics.
The rest of the article consists in constructing the structures, in proving the corresponding isomorphisms, and in interpreting and proving φ(αi, αi+1), i ∈ {1, 3, 5}.

*Supported by the Science Foundation of Serbia (grant No. 04M02A) through the Belgrade Mathematical Institute.

Translated from Algebra i Logika, Vol. 38, No. 4, pp. 383-408, July-August, 1999. Original article submitted
February 14, 1998.


1. THE INTUITIVE BACKGROUND

1. The central primitive terms of our semantics are as follows: (a) states of affairs, (b) simultaneous or earlier than (not later than), (c) accessible, and (d) realized. Term (a) denotes individuals and terms (b)–(d) denote relations (two binary ones and a unary) defined for states of affairs.
2. A state of affairs can be realized or nonrealized. If a state of affairs is realized, it is either past or present. If a state of affairs is nonrealized, then it is either one that might have been realized, either in the past or in the present, or else it is one that can be realized in the future. If a state of affairs can never be realized, it is not called possible. Hence, a possible state of affairs can be either (1) realized and past, or (2) realized and present, or (3) nonrealized and past, or (4) nonrealized and present, or (5) nonrealized and future. An example of (3) is that I might have been a citizen of Novosibirsk in my youth (I was not) and an example of (4) is that I might have been a citizen of Novosibirsk now (I am not).
3. Two states of affairs can be simultaneous or one can be earlier than the other. Notice that the relation 'is simultaneous or earlier than' ('is not later than') is defined for states of affairs rather than for moments of time. A "moment of time" is not a primitive term of our semantics. We assume that the relation 'is earlier (later) than' is dense, without end-points, and continuous.
4. The accessibility relation is understood as follows: the realization of the first state of affairs in the relation does not prevent a future realization of the second; the first being realized, the second is still possible in the future of the first. The accessibility is a one-to-many relation: the realization of the first state of affairs may render possible a future realization of several (possibly one or infinitely many) states of affairs. However, the first state of affairs being realized, only one of the possibly infinitely many states of affairs that can succeed the first can be realized. The accessibility defines a branching structure.
5. The sequences of realized states of affairs are continuous: between realized states of affairs, the accessibility relation allows no "gaps."
These are the main ideas in the construction of the semantics. The remaining ones are about how these ideas are made precise and connected to each other.

2. THE AXIOMS

Let W ≠ ∅; the elements of W are states of affairs. A binary relation E is defined on W: xEy stands for 'x is simultaneous or earlier than y.' The relation E is reflexive (E1), transitive (E2), and linear (E3). The formula xSy stands for xEy ∧ yEx ('x and y are simultaneous'). Let us write [x] for the equivalence class of x under S. Also, we define xLy ('y is later than x,' 'x is earlier than y') by xEy ∧ ¬(yEx). The relation L is without end-points (E4 and E5) and dense (E6).
The next two axioms concerning E guarantee the existence of E-suprema and E-infima of some sets of states of affairs satisfying certain conditions.
Let X ⊆ W and X ≠ ∅; we say that z is an E-upper bound of X iff (∀x ∈ X)xEz. By a least E-upper bound of X (an E-supremum of X) we mean any E-upper bound z of X such that ∀y((∀x ∈ X)xEy ⇒ zEy). In a similar way we define an E-lower bound of X and a greatest E-lower bound of X (an E-infimum of X). Notice that neither E-suprema nor E-infima of X are necessarily unique; if they exist, they are only simultaneous. We introduce the following axioms:

if X has an E-upper bound, then X has an E-supremum (E7)

and
if X has an E-lower bound, then X has an E-infimum. (E8)

By supE(X) and infE(X) we denote the sets of, respectively, E-suprema and E-infima of a nonempty set X. The accessibility relation R and L are related as follows:
xRy ⇒ xLy. (R1)
If z is later than x, then there is a state of affairs y simultaneous with x such that z is accessible from y, i.e.,
xLz ⇒ ∃y(ySx ∧ yRz). (R2)
Also, if z is later than x, then there is a state of affairs y simultaneous with z such that y is accessible from x, i.e.,
xLz ⇒ ∃y(ySz ∧ xRy). (R3)
R2 and R3 are combined in R4: if xRz ∧ xLy ∧ yLz, then there is a state of affairs y′ simultaneous with y such that xRy′ ∧ y′Rz, i.e.,
xRz ∧ xLy ∧ yLz ⇒ ∃y′(y′Sy ∧ xRy′ ∧ y′Rz). (R4)
An R-chain Ch ⊆ W is a set such that (∀x, y ∈ Ch)(xRy ∨ yRx ∨ x = y). If x, y, z ∈ Ch, xRy, and yRz, then xRz. For, we have xRz ∨ zRx ∨ x = z; by R1 and the transitivity of L we obtain xLz, which excludes zRx ∨ x = z; hence, xRz. Notice, however, that R is not transitive in general. Obviously, the relation R is acyclic: if xRy and yRz, then ¬(zRx).
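The defining condition of an R-chain is easy to experiment with on a finite fragment of an accessibility relation (a sketch added here, not from the paper; names and data are illustrative, and a finite fragment cannot satisfy the axioms themselves, which require density and the absence of end-points).

def is_R_chain(chain, R):
    """The defining condition of an R-chain: any two distinct members are R-comparable.
    R is a set of ordered pairs, a finite fragment of the accessibility relation."""
    return all(x == y or (x, y) in R or (y, x) in R
               for x in chain for y in chain)

# A small branching fragment: from a, two mutually inaccessible futures b and c branch off.
R = {("a", "b"), ("a", "c"), ("b", "d"), ("a", "d"), ("c", "e"), ("a", "e")}
assert is_R_chain({"a", "b", "d"}, R)
assert not is_R_chain({"a", "b", "c"}, R)   # b and c lie on different branches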
A trivial consequence of E4, E5, R2, and R3 is
THEOREM 1. (1) ∀x∃y xRy;
(2) ∀y∃x xRy.
As a consequence of R1, R4 and the density of L, we have
THEOREM 2. The relation R is dense.
Let Ch be any R-chain. We shall need the following two axioms:
y ∈ supE(Ch) ⇒ (∃z ∈ [y])((∀x ∈ Ch)(xRz ∨ x = z) ∧ ∀u((∀x ∈ Ch)(xRu ∨ x = u) ⇒ .zRu ∨ z = u)) (R5)
and
y ∈ infE(Ch) ⇒ (∃z ∈ [y])((∀x ∈ Ch)(zRx ∨ x = z) ∧ ∀u((∀y ∈ Ch)(uRy ∨ y = u) ⇒ .uRz ∨ u = z)). (R6)
Axioms R5 and R6 are extensions and strengthenings of R2–R4 to the effect that the relation R allows no "gaps": the transition from the state of affairs x to another state of affairs y is continuous.
In the context where there is a distinction between possible states of affairs that are realized and those that are not, the meaning of R becomes complex and can be given only after a unary relation of realization has been introduced. Some states of affairs are realized; we write θ(x) to indicate that x is realized. The unary relation of realization satisfies three conditions. The first is
θ(x) ∧ θ(y) ⇒ .xRy ∨ yRx ∨ x = y. (θ1)

We assume that all realized states of affairs are either present or past. This implies that there is a realized state of affairs x such that any realized state of affairs is either x or is earlier than x (x is the present), i.e.,
∃x(θ(x) ∧ ∀y(θ(y) ⇒ .yLx ∨ x = y)). (θ2)
As to the states of affairs that are nonrealized, if they are either present or past, they are simultaneous with a realized state of affairs, i.e.,
∀y, z(θ(z) ∧ yEz ⇒ ∃x(xSy ∧ θ(x))). (θ3)
Of course, two nonrealized states of affairs x and y may be in the relation R: for such states, xRy means that if x would have been realized, then this would not exclude the realization of y.
As a consequence of θ2, E2 and the definition of L, we have
THEOREM 3. ∃!x(θ(x) ∧ ∀y(θ(y) ⇒ .yLx ∨ x = y)).
Let us define the present state of affairs ν as follows: ν = (ιx)(θ(x) ∧ ∀y(θ(y) ⇒ .yLx ∨ x = y)). By Theorem 3, this is a correct definition.
THEOREM 4. θ(ν).
A state of affairs x is called past (future) iff xLν (νLx).
The relations L and R coincide on the set of realized states of affairs. Any realized state of affairs y is L-preceded by a realized state of affairs; moreover, there is a realized state of affairs x such that xRy.
THEOREM 5. (1) θ(x) ∧ θ(y) ⇒ .xRy ⇔ xLy;
(2) ∀y(θ(y) ⇒ ∃x(θ(x) ∧ xLy));
(3) ∀y(θ(y) ⇒ ∃x(θ(x) ∧ xRy)).
An R-chain Ch is called complete iff (∀x ∈ W)(∃y ∈ [x])y ∈ Ch.
THEOREM 6. If Ch is a complete R-chain, then (∀x ∈ W)(∃!y ∈ [x])y ∈ Ch.
THEOREM 7. Any R-chain is contained in a complete R-chain.
Proof. Let X be the set of all R-chains that contain Ch (for any Ch' E X , Ch C Chl). The set
X is n o n e m p t y and partially ordered by C__. Let Y be a chain with respect to C and consider U Y" Let
z , y E U Y ; there are Chl and Ch2 such t h a t x E Chl and y 6 Ch2. But either Chl C_ Ch2 or Ch2 C Chl,
say, Chl C Ch2. Hence, z , y E Ch2 and either zRy, or yRz, or z = y. Therefore, U Y is an R-chaln
t h a t contains Ch and U Y e x . Obviously, U Y is an upper bound of Y (with respect to C). By Zorn's
lemma, X has a maximal element Dh. Suppose t h a t Dh is not complete; then there is an z G W such t h a t
Dh rh [z] = 0 . Let D1 and 9 2 be sets {y G D h l y L z } and {y G D h l z L y } , respectively.
Let D2 = O; then Dh = D~ r O; by ET, the set Dh has an E - s u p r e m u m z. By R5, (qz' G [z])(Vy E
D h ) y R z ' V z' = y. If z' E Dh, then we can use R3; hence, there is an z ' E [z] such t h a t z ' R z ' and DhU {z'}
is an extension of Dh and Dh is not maximal. I f z r ~ Dh, then yRz I for any y G Dh, Dh U {z I} is an
extension of Dh, and Dh is not maximal. If D1 = O, we proceed similarly, using E8, R2, a n d R6.
Suppose t h a t Dx r ~ and D2 ~ O; it is clear that z is both art E - u p p e r b o u n d of Dx a n d an E-lower
bound of D2. By E7 and E8, the set D1 has an E - s u p r e m u m y and the set D~. has an E - i n f i m u m z. By R5
and l:t6, there is a y' E [y] such t h a t
(1) (Vu G D1)(uRy' v u = y'), and
(2) Vv((Vu 6 D1)(uRv =~ .y'Rv V y' = v)),
and there is a z' E [z] such t h a t
(3) (Vu G D~)(zCRu V u = z'), and

(4) v,,((v,., e D~)(,,R,, ~ .,,R~' v ~' = v)).
Since DI, D2 C Ch, it is clear that
(S) (Vu 9 Z)I)(Vv 9 D2)uRv.
Let w 6 9 1 ; then (Vu 6 D2)wRu by (5), and hence (Vu 6 D2)(wRz' V z' = w) by (4). From wLx A zEz',
it follows t h a t wLz', and so w # z'; hence, (Vu E D2)(wRu =~ wRz'). Therefore, (Vu 6 D1)uRz'. By (2),
we obtain ~ R z ' V y' = z'. If y' = z', then y' 9 [z], and by (1), Dh U {y'} is an extension of Dh, contrary
to the assumption that Dh is maximal. Let fRz'; if y' 9 [x], then by (1), Dh U {y'} is an extension of Dh
and Dh is not maximal. Similarly, if z' E [x], then by (3), Dh is not maximal. Let y'Lz A zLz'; we can use
R4 to conclude that there is an ~' E [x] such that y'Rx' and z'Rz'. Hence, Dh U {y', x', z'} is an extension
of Dh and Dh is not maximal.
By a possible history PH(x) of x we mean {y ∈ Ch | yRx}, where Ch is a complete R-chain such that x ∈ Ch. Of course, PH(x) ⊆ {y | yLx}. We define the history H(x) of a realized x as a possible history PH(x) such that (∀y ∈ PH(x))θ(y). The set H(ν) is called the history. By θ1, θ3, and Theorem 4, we have
THEOREM 8. ∀x(xLν ⇒ ∃!y ∈ [x] θ(y)).
THEOREM 9. (1) There is exactly one history;
(2) for any realized x ∈ W, there is exactly one history H(x);

P r o o f . (1) By 81, the set P(g) -- {z 9 W [ 8 ( x ) A zRv} is an R-chain; by Theorem 1(2), P ( v ) # ~.


By T h e o r e m 7, P(v) is contained in a complete R-chain Oh. Let H(Ch, u) = {x 9 ChlzRv}; we have
P(v) C_ H(Ch, v). Let z 9 H(Ch, v); obviously, z L v by R1. By Theorem 8, there is exactly one y 9 Ix]
such t h a t e(y); hence, y 6 P(v) C Ch. Since Ch is an R-chain, either zRy, or yRz, or x = y; since zSy,
we have 9 = V and 9 9 P ( . ) . Therefore, ~(~) = n ( C h , ~) = Zr(~).
(2) If ~,(z), then H(z) = {y 9 l yRz}. We use (1).
(3) Suppose that 8(Y); if z 9 H(y), then zRy A 8(z) by definition; hence, z L y A 8(z) by R1. Let
s l y A @(a:); since 8(z) A @(Y), we may apply Theorem 5(1).
The set ~r of realized states of affairs is the history together "with the present: 7r -- H(u) U {u}.
THEOREM 10. (1) a(x) r x 9 H ( v ) V x -- u;
(~) 8 ( ~ ) ~ . ~ v ~ = ~;

In the rest of this section, we study the cardinality of the set W. By E4, E5, and E6, the set W cannot
be finite. Let cx,/3.., range over the set of S-eqnivalence classes. We shall write aEly and aLfl iff xEy and
xLy, respectively, for any x 9 cz and y 9 ly. Let aLlY; by an open (closed) interval (in symbols ]cz,lY[ and
[cz,lY], respectively) we mean {7 I(xL7 A 7LlY} ({7 ]r A 7ElY}).
THEOREM 11. Every open interval contains an infinite set of Soequivalence classes.
P r o o f . Define inductively an infinite sequence 71,.-- of S-equivalence classes. Let x 9 cz and y 9 lY.
By E6, t h e r e is a z t such that zLzl A zxLy. Take any such zx and let 71 be the S-equivalence class of Zl;
obviously, c,LTx AvILlY. Suppose that 7- is defined so that c~Lqx ATILT2 A...Ag'~_ILg'n ALlY. Let z~ 97-;
by E6, there is a zn+ 1 such that z, Lzn+l A z,,+lLy; take any such z,,+l and let 9',,+1 be the S-equivalence
class of z,~+x. It is clear that for any 7,, defined in this way, %, 6]a, lY[.
THEOREM 12 (theorem on nested intervals). Let (Vn 9 w)(a,~Ea,,+l A lY,~+lE/9,,); then

Since (Vn E w)c~,Lflo, by E7, the set {a,, In ~ w} has a (unique) E-supremum c~; since
Proof.
(Vn e w)aoLfl,,, by E8, the set {fl,~ In e w} has a (unique) E-infimum ft. By the definition of an infimum
and a supremum, aE~ and ~,fl e N { [ a , , f l , ] [ n e w}.
THEOREM 13. The set W is not denumerable.
Proof. The proof that the set R of real numbers is not denumerable, given by Dieudonné in [1, 2.2.14–2.2.17], is easily adapted here.

3. FRAMES AND MODELS

By a frame (of zero order) we mean (W, E, R) (W r 0), where E and R satisfy axioms E1-E8 and R1-
R6. By a (~)-modelin a frame (W, E, R) we mean (W, E, R, ~, v), where ~ and v satisfy axioms ~1-~3. In the
sequel we investigate models in a fixed frame (W, E, R). Let M be the set of all such models and let A, B,
C . . . . range over M. Each model A is uniquely determined by the present viA ) and the history H(v(A)).
Define ~he reality 7r(A) in A as follows: 7r(A) = H(v(A)) tJ {v(A)}. A model A can be characterized by
(W, E, R, ~r(A)). The relations s and R are defined on M as follows: As and AT~B iff v(A)Ev(B) and
v(A)Rv(B), respectively. Also, we shall write A S P and A s for As163 and A s A-~(Bs respectively.
For any A and B, the relation _ (A is a ~-submodel of B) is defined in this way: A ~ B iff ~r(A) C ~r(B).
If A ~ B and A r B, we shall write A -~ B (A is a proper ~-submodel of B).
THEOREM 14. If A -~ B, then v(A)Rv(B).
T H E O R E M 15. (1) The relation ~_ is a partial ordering;
(2) A - ~ C A B _ C ~ A ~ B V B ~ A V A = B (backwardlinearity).
P r o o f . (1) The reflexivity and transitivity of _ are obvious by the definition of __. If A _ B A B _ A,
then r(A) = r(B); hence A = B by Theorem 9(1).
(2) Suppose that A ~ C A B -~ C; then (v(A)Rv(C) V v(A) = v(C)) A (v(B)Rv(C) V v(B) = v(C)) by
Theorem 14. If either v(A) = v(C) or v(B) = v(C), then either r(A) = 7r(C) or ~r(B) = ~r(C), respectively,
by Theorem 9(1); hence, B _ A or A _ B, respectively. I r A -~ C A B -~ C, then ~r(A) Ur(B) C 7r(C). Take
C; here we have either v(A)Rv(B), or v(B)Rv(A), or v(A) = v(B) by ~1. If v(A) = v(B), then A = B
by Theorem 9(1). Let v(A)Rv(B); obviously, v(A) E ~r(B), and for any z E 7r(A), we have zLv(B) A p(z).
Hence, r(A) C 7r(B) by Theorem 9(3). Also, if v(B)Rv(A), then 7r(S) C r(A).
T H E O R E M 16. Let z e W and A : (W, E, R, 7r(A)). Then:
(1) if v(A)Rz, then there is a ~-model S - (W, E, R, lr(B)) such that z = v(B) and A -~ B;
(2) if v(A)Lz, then there is a ~-model B = (W, E, R, ~r(B)) such that zSv(B) and A -~ B.
P r o o f . (1) Let v(A)Rz; the set 7r(A) tJ {z} is an R-chain, and by Theorem 7, it is contained in a
complete R-chain Ch. Let r(B) - {y E C h l y R z } and v(S) = z; then the ~-model B -- (W, E, R, 7r(B)) is
defined and A -~ B.
(2) Let v(A)Lz; by R3, there is a y such that zSy and v(A)Ry; now apply (1).
T H E O R E M 17. Let z E W and B : (W,E,R,r(B)). Then:
(1) if z ~ 7r(B), then there is a ~-model A : (W, E, R, ~r(A)) such that viA ) : z and A -< B;
(2) if zLv(B), then there is a ~-model A = (W, E, R, ~r(A)) such that v(A)Sz and A -< B.
The p r o o f is similar to the preceding one.
THEOREM 18. The relation -< is dense.

P r o o f . Let A -~ C; then u(A)R~,(C) by Theorem 14. Since R is dense, there is a y 9 W such that
y ( A ) R y R t / ( C ) . Take C; by Theorem 6, there is a unique z 9 ~r(C) such that ySz. Define a ~-model B as
follows: ~r(B) = {u 9 r ( C ) l u R z } and t/(B) --- z. It is clear that A -~ B -~ C.
Define ~he reality of A in M in this way: H(A) -- {B ~ M [ S ___A}. The set II(A) \ (A} is ~he history
of A in M. A trivial but important consequence of this definition is this.
T H E O R E M 19. (1) A ~ B iff H(A) C H(B);
(2) for any A 9 M , ~here is ezac~ly one reality of A in M;
(3) A 9 H(B) :=~ H(A) C IX(B);
(4) II(A)= U IX(B).
BeII(A)
Thus, the reality of A in M is the union of the realities of all ~-submodels of A in M.
Let B and C be two distinct ~-models such that the intersection of their realities in M is nonempty;
then there is a ~-model A such that A is the maximal ~-submodel of both B and C (i.e., A is the "branching
point" for B and C in the sense that ~r(A) is the maximal common reality of B and C in M).
THEOREM 20. Let I C M be a nonempty set of p-models such that (3B, C E I)-~(B ~_ C) A -~(C
B); then
A n(c) r g ~ 3A(A E ~ H(G)^ (VD 9 ~ H(G))D -g A).
GEl GEl GEI

P r o o f . Let n H ( e ) r X CAW be the set n ~-(G). It is clear that X r ~, that X i s a n


GEI GEI
R-chain, and that X C 7r(G) for any G 9 I.
L E M M A 20.1. For any G 9 I and z 9 X, we have r ( G ) \ (~(G)} : H(z) CAX.
To p r o v e the lemma, suppose that z E X and y 9 H(x); then y 9 7r(G) A yRz. Since x E r ( G ) for any
G 9 I, it follows that y E r(G) for any G E I, i.e., y E X.
Let Y -- {y e W l ( W e x ) ~ Z y } .
L E M M A 20.2. Y r 0.
P r o o f . By the hypothesis of the theorem, we note, there are B, C E I such that --(B _-q C) A --(C -~ B);
hence, neither lr(B) C A ~r(C) nor ~r(C) C 7r(B). Suppose that t/(B) ~ Y and t,(C) ~ Y; then there is
an z E X such that r,(B)Zz and t/(C)Ez. Since X CA 7r(B) n 7r(C), it follows t h a t (xRz/(B) v z =
t~(B)) A (xRr,(C) V z -- ~(C)). Hence, ( z L ~ ( B ) V z = t/(B)) A (zLr,(C) V z = t/(C)) by R1. Therefore,
z = t~(B) ---- r,(C). But H ( z ) in B (in C) is v ( B ) \ {~(S)} (~-(C) \ {t~(C)}). By Lemma 20.1, ~r(B) _ X
and r(C) ___X; hence ~(S) = r(C) by Theorem 9(1), a contradiction. Therefore, f r O.
Coming back to the proof of the theorem, we conclude that the set X has an E-upper bound. By
ET, X has an E-supremum y. By R5, there is a z 9 [y] such t h a t (1) (Vz E X ) ( ~ R z V x = z) and (2)
Vu((Vz E X ) ( z R u V z = u) ~ z R u V z = u). Moreover, z is unique. For, suppose that there is a z' satisfying
(1) and (2); then z R z ' V z = z'. If zRz', then zLz', contrary to z S z ' . Hence z = z'.
L E M M A 20.3. z 9 X.
P r o o f . By Theorem 7, the R-chain X U {z} is contained in a complete R-chain Ch. Since z is unique,
z E Ch ~ for any complete R-chain Ch ~ containing X. Let G E I; by Theorem 7, ~r(G) is contained
in a complete R-chain Ch(G), and since X _C 7r(G), we have z 9 Ch(G). Using (2), we conclude that
zRz/(G) V z - - - ~(G); hence, z E ~(G). Thus, z 9 r(G) for any G 9 I, i.e., z 9 X. P u t 1/(A) -- z and
r ( A ) = H ( z ) U {z} --- X; the ~-model A = ( W , E , R , ~ ( A ) ) is defined. By Lemma 20.1, A _ G for any
G ~ I ; h e n c e , A ~ n H(G). L e t D ~ n ; t h e n T ~ ( D ) C - X - - ~ ' ( A ) , a n d h e n c e D ~ A "
G~I G~I

Let #(A) = {B 9 M I A __. B}; then ~(A) \ {A} is the open future of A in M.
T H E O R E M 21. (1) A -4 B iff r C ~(A);
(2) r N #(B) # O iff either ~(A) C ~(B) or ~ ( S ) C ~(A);
(3) if v(A)Sv(B), then ~(A) C1r # O iff ~(A) = ~(B);
(4) If(A) = fl If(B);
Be~(A)

Ben(A)
By a -'<-chain in M we understand a subset CH C M such that (VA, B 6 CH)A -< B V B -4 A. A
-4-chain CH is called complete iff

(VA 9 M)(3B 9 M)(v(A)Sv(B) A B 9 CH).

T H E O R E M 22. Suppose that CH is a complete __-chain. Then:


(1) (VA 6 M)(31B E M ) A S B A B 6 CH;
(2) (VA 6 M)(A 6 CH ::~ II(A) C CH);
(3) A"< B :=~.B E CH ~ A 6 CH;
(4) C H = U H(A).
AECft
As a consequence of Theorem 7, we have
T H E O R E M 23. Any ___-chain is contained in a complete -~_-chain.
We call a zero-order frame (W, E, R) ~ree-like if[ it satisfies the following condition: there are ~, y, z 6 W
such that zRy, zRz, and neither yRz nor zRy.
T H E O R E M 24. If (W, E, R) is a tree-like frame, then there is a model A 6 M such that
(1) (~(A), ___)is a tree, and
(2) for any B, C 6 @(A), there is a glb(B, C) 6 ~2(A) (with respect to _).
P r o o f . Let (W, E, R) be a tree-like frame, with z, y, z 6 W satisfying zRy, zRz, ~(yRz), and ~(zRy).
Obviously, we may choose models A, B, and C such that v(A) "- z, v(B) = y, and v(C) = z. Then ~(A)
is a tree by Theorem 15.
(2) Let B , C 6 ~(A), 9 ( 8 -4 C) and -~(C -4 B). Since A E [1 II(G), we have II(B)AII(C) r 0, and
Ge~(A)
Theorem 20 is applicable. Hence, there is a D 6 V(A) such that D 9 II(B) n II(C), and for any E 9 @(A),
if E 9 H(B) n n ( c ) , then E -4 D. Obviously, n(B) n n ( c ) = n(D).
Let (W, E, R> be a tree-like frame. Fox A 9 M, we say that it is a branching poin~ iff there are B, C 9 M
such that -~(B -4 C), -~(C -4 B), and A - glb(B, C).
Now we shall define n(A) sets, which will play a substantial role in the sequel. Let ~(A) be a set of all
complete __.-chains containing A. It is easy to show that f~(A) • O. If A is a branching point, we have

(3CH1, CH2 9 fI(A))CH1 g ell2 ^ ell2 g CH1),

and A = glb(B, C) for any B 6 CE1 and C 6 CH2.


THEOREM 25. (1) II(A) C nl2(A); if A is a branching point, then N f~(A) C II(A);
(2) U n(A) = II(A) U ,)(A);
(3) if A is a branching point and CH 6 ll(A), then n n(A) C CH;
(4) if ASP, then n(A) n e(B) # o i~ n(A) = n(B);
(5) if A -4 B, then ll(B) C_ f~(A); if B is a branching point and N(B) C fl(A), then A _ B;

(6) fl(A) n f~(B) # ~ iff fl(A) c_ I2(B) or fl(B) _c fl(A).
Proof. (1) Let B ∈ Π(A) and CH ∈ Ω(A); since A ∈ CH, by Theorem 22(2), Π(A) ⊆ CH. Hence, B ∈ CH. Let B ∈ ∩ Ω(A); then for any CH ∈ Ω(A) we have A, B ∈ CH. If A is a branching point, then B ⪯ A; hence, B ∈ Π(A).
(2) Let B ∈ ∪ Ω(A); then there is a CH ∈ Ω(A) such that B ∈ CH and A ⪯ B ∨ B ⪯ A; hence B ∈ Π(A) ∪ Φ(A). Let B ∈ Π(A) ∪ Φ(A); then A ⪯ B ∨ B ⪯ A, and there is a CH ∈ Ω(A) such that B ∈ CH. Hence, B ∈ ∪ Ω(A).
(3) Let CH ∈ Ω(A); then A ∈ CH, and by Theorem 22(2), Π(A) ⊆ CH. If A is a branching point, then ∩ Ω(A) = Π(A) ⊆ CH by (1).
(4) To prove the nontrivial part, let ASB and Ω(A) ∩ Ω(B) ≠ ∅; this implies that there is a CH ∈ Ω(A) ∩ Ω(B). Hence, A, B ∈ CH and A ⪯ B ∨ B ⪯ A. But ASB excludes A ≺ B ∨ B ≺ A; so, A = B. Hence, Ω(A) = Ω(B).
(5) Let A ⪯ B and CH ∈ Ω(B); then Π(A) ⊆ Π(B), Π(B) ⊆ CH and Π(A) ⊆ CH. Hence, CH ∈ Ω(A). Let Ω(B) ⊆ Ω(A); since there is a CH such that A, B ∈ CH, either A ⪯ B or B ≺ A. Suppose that B ≺ A; if B is a branching point, there is a CH1 ∈ Ω(B) such that A ∉ CH1, contrary to our hypothesis.
(6) If Ω(A) ∩ Ω(B) ≠ ∅, then there is a CH such that A, B ∈ CH. Hence, either A ⪯ B or B ⪯ A. Let A ⪯ B; then Ω(B) ⊆ Ω(A) by (5), and hence Ω(A) ∩ Ω(B) ≠ ∅.
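
The Ω(A) sets and the dichotomy of Theorem 25(6) can likewise be checked mechanically. The Python fragment below is again only an illustration, on a hypothetical three-element branching order (names such as omega and complete_chains are editorial, not the paper's): it computes Ω(A) as the family of complete chains through A and verifies that any two such families are nested or disjoint.

from itertools import combinations

# A branching point: A below two incomparable, simultaneous alternatives B and C.
M = ["A", "B", "C"]
LEQ = {(x, x) for x in M} | {("A", "B"), ("A", "C")}
S_CLASSES = [{"A"}, {"B", "C"}]

def is_chain(s):
    return all((x, y) in LEQ or (y, x) in LEQ for x, y in combinations(s, 2))

def is_complete(s):
    return all(cls & set(s) for cls in S_CLASSES)

complete_chains = [frozenset(c) for r in range(1, len(M) + 1)
                   for c in combinations(M, r)
                   if is_chain(c) and is_complete(c)]

def omega(a):
    # Omega(A): the complete chains passing through A.
    return {ch for ch in complete_chains if a in ch}

for x, y in combinations(M, 2):
    ox, oy = omega(x), omega(y)
    print(x, y, ox <= oy or oy <= ox or not (ox & oy))   # Theorem 25(6): nested or disjoint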

4. HIGHER-ORDER FRAMES AND MODELS

Let (W, E, R) be a zero-order frame. Define sets 𝒲, ℰ, and ℛ, depending on (W, E, R), such that (𝒲, ℰ, ℛ) is a frame. For any x ∈ W, choose a model A_x = (W, E, R, π(A_x)). Let 𝒲 = {A_x | x ∈ W}, where A_x ∈ M is the model chosen for x. The relations ℰ and ℛ are defined on 𝒲 as on M: A_x ℰ B_y and A_x ℛ B_y iff xEy and xRy, respectively. It is clear that (𝒲, ℰ, ℛ) is isomorphic to (W, E, R). The frame (𝒲, ℰ, ℛ) is called a first-order frame.
Let A = (W, E, R, π(A)) be a zero-order model. We define a first-order model 𝒜 depending on A as follows: 𝒜 = (𝒲, ℰ, ℛ, π(𝒜)), where (𝒲, ℰ, ℛ) is a first-order frame, and we have (1) v(𝒜) = A, (2) π(𝒜) = Π(A), and (3) for any x ∈ W \ π(A), the model A_x ∈ M is such that v(A_x) = x and π(A_x) = PH(x) ∪ {x} for a possible history PH(x) of x. (By Theorem 1(2) and Theorem 7, for any x there is a PH(x), and by the axiom of choice we can choose one of them.)
The intuitive interpretation of 𝒜 is as follows: 𝒲 is the set of states of affairs and ℰ, ℛ, π(𝒜), and v(𝒜) are interpreted analogously to E, R, π(A), and v(A) in A, respectively. However, in a first-order frame possible worlds are endowed with their own "time" and "reality." The relation ⪯ is naturally interpreted thus: 𝒜 ⪯ ℬ means π(𝒜) ⊆ π(ℬ), i.e., Π(A) ⊆ Π(B); by Theorem 19(1), this means A ⪯ B; hence, either A is in a possible or in the real history of B, or A = B.
The models A and 𝒜 are isomorphic. Let f be the function f : W → 𝒲 such that f(x) = A_x; it is clear that f is a bijection. Let xEy, f(x) = A_x and f(y) = B_y; then A_x ℰ B_y by the definition of 𝒜. Similarly, if xRy, then A_x ℛ B_y. Let y ∈ π(A); there is a unique model B_y ∈ M [by Theorem 16(1) and Theorem 17(1)] such that v(f(y)) = v(B_y) = y and π(B_y) ⊆ π(A). Hence B_y ∈ Π(A) and B_y ∈ π(𝒜) by definition. Let x = v(A); then f(x) = A_x = A = v(𝒜) by definition. Hence, A and 𝒜 are isomorphic.
For a first-order model 𝒜 = (𝒲, ℰ, ℛ, π(𝒜)), depending on A = (W, E, R, π(A)), we shall define models 𝒜^Π, 𝒜^Φ, and 𝒜^Ω depending on 𝒜; they are called higher-order models. Let 𝐀 range over {𝒜^Π, 𝒜^Φ, 𝒜^Ω}, where 𝐀 = (𝐖, 𝐄, 𝐑, π(𝐀)). First, we define the corresponding frame (𝐖, 𝐄, 𝐑) for the given first-order frame (𝒲, ℰ, ℛ). Let 𝐖 = {g(A_x) | A_x ∈ 𝒲}; if 𝐀 = 𝒜^Π, then g(A_x) = Π(A_x); if 𝐀 = 𝒜^Φ, then g(A_x) = Φ(A_x); if 𝐀 = 𝒜^Ω, then g(A_x) = Ω(A_x). The relations 𝐄 and 𝐑 are defined on 𝐖 as follows: for any g(B), g(C) ∈ 𝐖, we have g(B) 𝐄 g(C) and g(B) 𝐑 g(C) iff B ℰ C and B ℛ C in (𝒲, ℰ, ℛ), respectively. Put g(B) ⪯ g(C) iff B ⪯ C in 𝒜. Let Π(g(B_y)) = {g(C_z) | g(C_z) ⪯ g(B_y)}.
THEOREM 26. ∪ Π(Π(B)) = Π(B).
Proof. Notice that C ∈ Π(B) iff Π(C) ∈ Π(Π(B)). Now ∪ Π(Π(B)) = {D | ∃C(D ⪯ C ∧ C ⪯ B)}, and since ⪯ is transitive, we have ∪ Π(Π(B)) ⊆ Π(B). Since ⪯ is dense, D ⪯ B implies ∃C(D ⪯ C ∧ C ⪯ B); hence, Π(B) ⊆ ∪ Π(Π(B)).
A trivial consequence of Theorem 21(5) is
THEOREM 27. ∩ Π(Φ(B)) = Φ(B).
THEOREM 28. ∩ Π(Ω(B)) = Ω(B).
Proof. If CH ∈ ∩ Π(Ω(B)), then for any C ⪯ B we have CH ∈ Ω(C); hence, CH ∈ Ω(B). If CH ∈ Ω(B), then B ∈ CH. Let C ⪯ B. By Theorem 22(3), C ∈ CH. Hence, CH ∈ ∩ Π(Ω(B)).
Define Φ(g(B)) by {g(C) | B ⪯ C}.
THEOREM 29. ∩ Φ(Π(B)) = Π(B).
Proof. If D ∈ ∩ Φ(Π(B)), then D ∈ Π(C) for any B ⪯ C; hence, D ∈ Π(B). If D ∈ Π(B), then D ∈ Π(C) for any B ⪯ C; hence, D ∈ ∩ Φ(Π(B)).
THEOREM 30. ∪ Φ(Π(B)) = ∪ Ω(B).
Proof. Let D ∈ Π(C) for some C, B ⪯ C; then D ⪯ C. Since B ⪯ C, either D ⪯ B or B ⪯ D; hence, D ∈ Π(B) ∪ Φ(B). Therefore, there is a complete ⪯-chain CH such that D ∈ CH and B ∈ CH, i.e., D ∈ ∪ Ω(B). Let D ∈ ∪ Ω(B); there is a complete ⪯-chain CH ∈ Ω(B) such that D ∈ CH and B ∈ CH; hence, either D ⪯ B or B ⪯ D. Since CH is complete, there is a C ∈ CH such that B ⪯ C and D ⪯ C; hence, D ∈ Π(C), B ⪯ C, and so D ∈ ∪ Φ(Π(B)).
THEOREM 31. ∪ Φ(Φ(B)) = Φ(B).
Proof. If D ∈ Φ(C) for some C, B ⪯ C, then D ∈ Φ(B) by the transitivity of ⪯. If D ∈ Φ(B), then D ∈ ∪ Φ(Φ(B)), since D ∈ Φ(D).
THEOREM 32. ∪ Φ(Ω(B)) = Ω(B).
Proof. Let CH ∈ ∪ Φ(Ω(B)) for a complete ⪯-chain CH; then there is a C, B ⪯ C, such that C ∈ CH; by Theorem 25(5), Ω(C) ⊆ Ω(B). Therefore, CH ∈ Ω(B). Let CH ∈ Ω(B); then B ∈ CH; therefore, CH ∈ ∪ Φ(Ω(B)).
Now we are ready to define the model 𝐀, extending the definition of the frame (𝐖, 𝐄, 𝐑) by adding definitions of π(𝐀) and v(𝐀). Put
if 𝐀 = 𝒜^Π, then v(𝐀) = Π(A) and π(𝐀) = Π(Π(A));
if 𝐀 = 𝒜^Φ, then v(𝐀) = Φ(A) and π(𝐀) = Π(Φ(A));
if 𝐀 = 𝒜^Ω, then v(𝐀) = Ω(A) and π(𝐀) = Π(Ω(A)).
It is easy to see that the models 𝒜 and 𝐀 are isomorphic. For example, we claim that

f(v(𝒜)) = v(𝐀) and B ∈ π(𝒜) iff f(B) ∈ π(𝐀).

We have f(v(𝒜)) = f(A) = g(A) = v(𝐀). Also, B ∈ π(𝒜) iff B ⪯ A iff g(B) ⪯ g(A) iff g(B) ∈ Π(g(A)) iff f(B) ∈ π(𝐀).
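
To see the higher-order universes in miniature, the following sketch (an editorial toy computation under the assumption of a finite order; past and future play the roles of Π and Φ, and the identification of ⪯ with inclusion of pasts mirrors the remark on Theorem 19(1) above) builds the sets that serve as the points of 𝒜^Π and 𝒜^Φ and checks finite analogues of Theorems 26 and 31.

# A five-element branching order; past/future stand in for Pi and Phi.
M = ["A", "B", "C", "D", "E"]
LEQ = {(x, x) for x in M} | {("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
                             ("B", "D"), ("C", "E")}

def past(a):     # Pi(A): everything below or equal to A
    return frozenset(x for x in M if (x, a) in LEQ)

def future(a):   # Phi(A): everything above or equal to A
    return frozenset(x for x in M if (a, x) in LEQ)

# Universes of the higher-order models: the points of A^Pi and A^Phi.
W_pi = {a: past(a) for a in M}
W_phi = {a: future(a) for a in M}

# The induced order g(B) <= g(C) iff B <= C; for the Pi-sets it is just inclusion.
assert all(((b, c) in LEQ) == (W_pi[b] <= W_pi[c]) for b in M for c in M)

# Finite analogues of Theorems 26 and 31: pasts of the past add up to the past,
# and futures of the future add up to the future.
for b in M:
    assert frozenset().union(*(past(c) for c in past(b))) == past(b)
    assert frozenset().union(*(future(c) for c in future(b))) == future(b)

print({a: sorted(W_pi[a]) for a in M})
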
The model 𝒜^Ω is interesting because the set 𝐖 (the universe of 𝒜^Ω) is a base and a sub-base of a Hausdorff space. We prove this fact. Take the ordered pair (X, 𝒪), where X = {CH | CH is a complete ⪯-chain in M} and 𝒪 is the smallest set which contains ∅, all sets Ω(A) for any A ∈ M, and is closed under arbitrary unions of elements of 𝒪. Obviously, we have
THEOREM 33. The ordered pair (X, 𝒪) is a topological space.
By Theorem 25(6), the set of all Ω(A) for any A ∈ M is a base and a sub-base of the space (X, 𝒪). In what follows we write X for (X, 𝒪).
THEOREM 34. X is a Hausdorff space.
Proof. Let CH', CH'' ∈ X and CH' ≠ CH''; then there are A, B ∈ M such that A ≠ B, ASB, A ∈ CH', and B ∈ CH''. Obviously, CH' ∈ Ω(A) and CH'' ∈ Ω(B). But Ω(A) ≠ Ω(B). Otherwise Π(A) = ∩ Ω(A) = ∩ Ω(B) = Π(B); hence A ⪯ B ∧ B ⪯ A by Theorem 15(1), and so A = B. It follows that Ω(A) ∩ Ω(B) = ∅ by Theorem 25(4). Therefore, X is a Hausdorff space.
THEOREM 35. Every base set in X is closed.
Proof. Let Ω(A) be a base set and let CH be an adherent point of Ω(A); by Theorem 22(1), there is a B ∈ CH such that ASB. The set Ω(B) is a neighborhood of CH, and hence Ω(A) ∩ Ω(B) ≠ ∅. By Theorem 25(4), Ω(A) = Ω(B), and so CH ∈ Ω(A).
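
On a finite stand-in for X the two topological claims can be verified directly. The sketch below is illustrative only (it rebuilds the complete chains and the base sets Ω(A) for a five-element branching order): it checks that distinct chains are separated by disjoint base sets and that the complement of every base set is again a union of base sets.

from itertools import combinations

M = ["A", "B", "C", "D", "E"]
LEQ = {(x, x) for x in M} | {("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
                             ("B", "D"), ("C", "E")}
S_CLASSES = [{"A"}, {"B", "C"}, {"D", "E"}]

def is_chain(s):
    return all((x, y) in LEQ or (y, x) in LEQ for x, y in combinations(s, 2))

# X: the complete chains; base: the sets Omega(A).
X = [frozenset(c) for r in range(1, len(M) + 1) for c in combinations(M, r)
     if is_chain(c) and all(cls & set(c) for cls in S_CLASSES)]
base = {frozenset(ch for ch in X if a in ch) for a in M}

# Hausdorff: distinct points of X lie in disjoint base sets.
for ch1, ch2 in combinations(X, 2):
    assert any(ch1 in b1 and ch2 in b2 and not (b1 & b2) for b1 in base for b2 in base)

# Every base set is closed: its complement is a union of base sets.
for b in base:
    complement = frozenset(X) - b
    assert complement == frozenset().union(*(b2 for b2 in base if b2 <= complement))

print("points of X:", [sorted(ch) for ch in X])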

5. THE INTERPRETATION OF THE VERSES

We shall start with the verses σ1 ∧ σ2. First we have to define "time present," "time past," and "time
future" in a model. The simplest way to do this would be to define time past as the set of equivalence
classes of realized states of affairs, except the present, with respect to the corresponding simultaneity
relation. However, we feel that this would be insufficient: time past is time filled up with events that have
been realized. Therefore, we define time past as a set of ordered pairs the first coordinate of which is a
realized state of affairs (except the present) and the second coordinate is the equivalence class to which it
belongs. Similarly, time present is defined to be a singleton of the ordered pair the first coordinate of which
is the present and the second the corresponding equivalence class.
Let 𝐀 ∈ {𝒜, 𝒜^Π, 𝒜^Φ, 𝒜^Ω}; then time past and time present [in symbols TPA(𝐀) and TPR(𝐀)] are defined as follows: TPA(𝐀) = {⟨x, [x]⟩ | x ∈ π(𝐀) \ {v(𝐀)}} and TPR(𝐀) = {⟨v(𝐀), [v(𝐀)]⟩}. As to the time future, it is the set of ordered pairs the first coordinate of which is a state of affairs that is realizable and the second the equivalence class to which it belongs. Thus, by time future TFU(𝐀) in 𝐀 we mean {⟨x, [x]⟩ | x ∈ Φ(v(𝐀))}. Let T(𝐀) = TPA(𝐀) ∪ TPR(𝐀) ∪ TFU(𝐀).
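
Written out for a concrete toy model, the three "times" look as follows. In this Python snippet all names (W, E_CLASSES, pi_A, v_A, R) are hypothetical editorial stand-ins for the ingredients of one zero-order model, and the future is taken, for simplicity, to consist of the states strictly R-after the present.

# One toy zero-order model: states, simultaneity classes, realized history, present.
W = ["a0", "b1", "c1", "d2", "e2"]
E_CLASSES = {"a0": ("a0",), "b1": ("b1", "c1"), "c1": ("b1", "c1"),
             "d2": ("d2", "e2"), "e2": ("d2", "e2")}
R = {("a0", "b1"), ("a0", "c1"), ("b1", "d2"), ("b1", "e2"), ("c1", "e2")}
pi_A = {"a0", "b1"}   # realized states of affairs, including the present
v_A = "b1"            # the present

TPA = {(x, E_CLASSES[x]) for x in pi_A - {v_A}}          # time past
TPR = {(v_A, E_CLASSES[v_A])}                            # time present
TFU = {(x, E_CLASSES[x]) for x in W if (v_A, x) in R}    # time future (strictly after the present)
T = TPA | TPR | TFU

print("past:   ", sorted(TPA))
print("present:", sorted(TPR))
print("future: ", sorted(TFU))
print("all time:", len(T), "pairs")
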
There is a sense in which time past and time present are contained in time future. Look at the model 𝒜^Π; the elements of the underlying set 𝐖 are sets of realized states of affairs. This means that in 𝒜^Π a future state of affairs is a set of states of affairs that contains a subset of realized (i.e., past or present) states of affairs (in the model 𝒜). To be more specific, we interpret the sentence "Time present and time past are both perhaps present in time future" in 𝒜^Π as follows:

∀y(⟨y, [y]⟩ ∈ TPR(𝒜^Π) ∪ TPA(𝒜^Π) ⇒ ∃z(⟨z, [z]⟩ ∈ TFU(𝒜^Π) ∧ y ⊆ z)).

Is this sentence true? Let y = Π(B) and ⟨Π(B), [Π(B)]⟩ ∈ TPR(𝒜^Π) ∪ TPA(𝒜^Π). Then Π(B) ⊆ Π(A), B ⪯ A, and Π(B) ∈ Π(Π(A)) = π(𝒜^Π). Let A = A_x; by E4, there is a z ∈ W such that xLz; by Theorem 16(2), there is a C = C_z such that A ≺ C. Hence, B ≺ C and Π(B) ⊆ Π(C). We also have v(𝒜^Π) = Π(A) and Π(C) ∈ Φ(Π(A)), i.e., ⟨Π(C), [Π(C)]⟩ ∈ TFU(𝒜^Π). Therefore, the sentence is true. Notice that "perhaps" is represented here by Φ.

We proceed to the sentence "Time future (is) contained in time past." There is a sense in which this sentence is also true. In the model 𝒜^Φ, the elements of the underlying set 𝐖 are the sets Φ(B) of future states of affairs. This means that in 𝒜^Φ a past state of affairs [an element of Π(Φ(A))] is a set of states of affairs that are future in 𝒜. More precisely, we have

∀y(⟨y, [y]⟩ ∈ TFU(𝒜^Φ) ⇒ ∃z(⟨z, [z]⟩ ∈ TPA(𝒜^Φ) ∧ y ⊆ z)).

We claim that this sentence is true. Let y = Φ(B) and ⟨Φ(B), [Φ(B)]⟩ ∈ TFU(𝒜^Φ); then Φ(B) ∈ Φ(Φ(A)) and A ≺ B. Let A = A_x; by E5, there is a z ∈ W such that zLx; by Theorem 17(2), there is a C_z = C ∈ 𝒲 such that C ≺ A. Hence, C ≺ B. By Theorem 21(2), Φ(B) ⊆ Φ(C). It is clear that ⟨Φ(C), [Φ(C)]⟩ ∈ TPA(𝒜^Φ).
Now take σ3, i.e., "All time is eternally present." By "all time" in a model 𝐀 we understand T(𝐀). Let us analyze 𝒜; we have v(𝒜) = A. For any ⟨B, [B]⟩ ∈ T(𝒜), either B ⪯ A or A ⪯ B. Hence, B ∈ ∪ Ω(A) = ∪ v(𝒜^Ω). In this sense "all time" of 𝒜 is present in the present v(𝒜^Ω) of 𝒜^Ω. Since Ω(A) ⊆ Ω(C) for any C ⪯ A, "all time" is present in the same sense in any moment of TPA(𝒜^Ω).
Is "all time" present in any moment of TFU(𝒜^Ω)? Let Ω(A) ≺ Ω(B) and let C ∈ ∪ Ω(A) = Π(A) ∪ Φ(A); if A is a branching point, then it is possible that C ∉ Π(B) ∪ Φ(B); hence, "all time" is not necessarily contained in Ω(B), where ⟨Ω(B), [Ω(B)]⟩ is contained in TFU(𝒜^Ω) in the sense in which it is contained in TPR(𝒜^Ω). This means that the sentence "All time is eternally present" is not logically valid. On the other hand, it is possible that "all time" is present in any moment of TFU(𝒜^Ω) (if there are no branching points in M); this is to say that the sentence "All time is eternally present" is consistent. We interpret it by

∀B ∀Ω(C)(⟨B, [B]⟩ ∈ T(𝒜) ∧ ⟨Ω(C), [Ω(C)]⟩ ∈ T(𝒜^Ω) ⇒ B ∈ ∪ Ω(C)).    (1)

By σ4, i.e., "All time is unredeemable," we mean "No moment of time is redeemable" rather than "It is not the case that all time is redeemable," i.e., rather than "Some time is not redeemable." What is meant by saying that a moment of time is redeemable? If ⟨x, [x]⟩ is redeemed in ⟨y, [y]⟩, then (a) ⟨x, [x]⟩ is present in ⟨y, [y]⟩ and (b) if ⟨y, [y]⟩ is realized, then there is a moment ⟨z, [z]⟩ in the future of ⟨y, [y]⟩ such that ⟨x, [x]⟩ is not present in ⟨z, [z]⟩. Moreover, x and z are in a complex relationship irrelevant in the present context.
Let RD(x, y) mean that ⟨x, [x]⟩ is redeemed in ⟨y, [y]⟩; then x is redeemable [in symbols RDLE(x)] means ∃y RD(x, y). Now we can interpret the sentence "All time is unredeemable" by

∀x(⟨x, [x]⟩ ∈ T(𝐀) ⇒ ¬RDLE(x)).

If a simultaneous interpretation in 𝒜 and in 𝒜^Ω is permitted, as before, it is possible to show that the sentence σ3 ⇒ σ4 is true.
Suppose that the contrary is the case. Then we have (1), and

∃B(⟨B, [B]⟩ ∈ T(𝒜) ∧ ∃Ω(C)(⟨Ω(C), [Ω(C)]⟩ ∈ T(𝒜^Ω) ∧ RD(B, Ω(C)))).

Let B0 ∈ 𝒲 be such that ⟨B0, [B0]⟩ ∈ T(𝒜), and let Ω(C0) ∈ 𝐖^Ω be such that ⟨Ω(C0), [Ω(C0)]⟩ ∈ T(𝒜^Ω) ∧ RD(B0, Ω(C0)), where 𝐖^Ω is the set of elements of the sort Ω(A). This means that B0 ∈ ∪ Ω(C0) and that there is an Ω(D0) ∈ 𝐖^Ω such that Ω(C0) ≺ Ω(D0) and B0 ∉ ∪ Ω(D0), where C0 is a branching point. But, by (1), we have B0 ∈ ∪ Ω(D0), a contradiction.
Below we give two necessary and sufficient conditions under which (1) holds. Take 𝒜^Ω; (1) is equivalent to (2):

there is a complete ⪯-chain CH such that for any Ω(B) ∈ 𝐖^Ω, if B ⪯ A ∨ A ⪯ B, then Ω(B) = {CH}.
We claim that (2) implies (1). Assume (2) and let ⟨B, [B]⟩ ∈ T(𝒜) and ⟨Ω(C), [Ω(C)]⟩ ∈ T(𝒜^Ω). Obviously, either B ⪯ A or A ⪯ B. Also, either C ⪯ A or A ⪯ C. Suppose that B ⪯ A. If A ⪯ C, then B ∈ Π(C) ⊆ ∪ Ω(C). If C ⪯ A, then either B ⪯ C or C ⪯ B. If B ⪯ C, then B ∈ ∪ Ω(C) as before. If C ⪯ B, then B ∈ Φ(B) ⊆ Φ(C) ⊆ ∪ Ω(C). Suppose that A ⪯ B; then Ω(B) ⊆ Ω(A). By (2), there is a complete ⪯-chain CH such that Ω(A) = Ω(C) = {CH}. Hence, Ω(B) ⊆ {CH} and thus Ω(B) = {CH}. Since B ∈ CH, we have B ∈ ∪ Ω(C).
We claim that (1) implies (2). Suppose that (1) holds, and that for any complete ⪯-chain CH there is an Ω(B) such that B ∈ CH, B ⪯ A ∨ A ⪯ B, and Ω(B) ≠ {CH}. Obviously, CH ∈ Ω(B) and there is a CH' ∈ Ω(B) such that CH ≠ CH'. Hence, there are D1 ∈ CH and D2 ∈ CH' such that D1 ≠ D2 and D1SD2. Since A ⪯ D1 and A ⪯ D2, we have ⟨D1, [D1]⟩ ∈ T(𝒜) and ⟨Ω(D2), [Ω(D2)]⟩ ∈ T(𝒜^Ω); however, D1 ∉ Π(D2) ∪ Φ(D2), for neither D1 ⪯ D2 nor D2 ⪯ D1. This contradicts (1).
Another equivalent formulation of (1) is (3):

there is a complete ⪯-chain CH such that ∪ Π(Ω(A)) ∪ ∪ Φ(Ω(A)) = {CH}.

This can be shown as follows. Assume (2) and let CH' ∈ ∪ Π(Ω(A)); then CH' ∈ Ω(B) for some B, B ⪯ A; hence CH' = CH by (2). It is clear that CH ∈ ∪ Π(Ω(A)), by (2). Hence, ∪ Π(Ω(A)) = {CH}. Let CH' ∈ ∪ Φ(Ω(A)); then CH' ∈ Ω(B) for some B, A ⪯ B; hence CH' = CH. It is clear that CH ∈ ∪ Φ(Ω(A)). Therefore, we obtain (3). Assume (3) and let CH be such that ∪ Π(Ω(A)) ∪ ∪ Φ(Ω(A)) = {CH}. Obviously, ∪ Π(Ω(A)) ⊆ {CH}. Since Ω(B) ⊆ ∪ Π(Ω(A)) = {CH}, we have Ω(B) = {CH} for any B, B ⪯ A. Also, Ω(A) = {CH}. Since Ω(B) ⊆ Ω(A), it follows that Ω(B) = {CH} for any B, A ⪯ B.
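
Condition (2) is easy to test on finite examples. The function below is a rough editorial illustration, not the author's construction: it asks whether some complete chain CH makes Ω(B) = {CH} for every B comparable with a fixed A, and it reports True for a linear order and False for a branching one, matching the discussion of branching points above.

from itertools import combinations

def complete_chains(M, LEQ, S_CLASSES):
    def is_chain(s):
        return all((x, y) in LEQ or (y, x) in LEQ for x, y in combinations(s, 2))
    return [frozenset(c) for r in range(1, len(M) + 1)
            for c in combinations(M, r)
            if is_chain(c) and all(cls & set(c) for cls in S_CLASSES)]

def satisfies_condition_2(M, LEQ, S_CLASSES, a):
    chains = complete_chains(M, LEQ, S_CLASSES)
    omega = {b: {ch for ch in chains if b in ch} for b in M}
    comparable = [b for b in M if (a, b) in LEQ or (b, a) in LEQ]
    return any(all(omega[b] == {ch} for b in comparable) for ch in chains)

# Linear time: some complete chain is the only one through every moment.
M1 = ["A", "B", "C"]
LEQ1 = {(x, x) for x in M1} | {("A", "B"), ("B", "C"), ("A", "C")}
print(satisfies_condition_2(M1, LEQ1, [{"A"}, {"B"}, {"C"}], "A"))   # True

# Branching time: condition (2) fails at the branching point A.
M2 = ["A", "B", "C"]
LEQ2 = {(x, x) for x in M2} | {("A", "B"), ("A", "C")}
print(satisfies_condition_2(M2, LEQ2, [{"A"}, {"B", "C"}], "A"))     # False
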
Thus, in the model 𝒜^Ω, claim (1) is of eleatic flavour. We see that (1) is equivalent to v(𝒜^Ω) = Ω(A) = {CH} = Ω(B) for any B ∈ CH. The set π(𝒜^Ω) ∪ Φ(𝒜^Ω) could be called "being." But (1) is equivalent to π(𝒜^Ω) ∪ Φ(𝒜^Ω) = Π(Ω(A)) ∪ Φ(Ω(A)) = {Ω(A)} = {{CH}}. Since being is a singleton, it is true that "being is one."
In topological terms, (1) is equivalent to the claim that the present is a singleton containing an isolated point CH of X.
Finally, we come to σ5 ∧ σ6. First, we must define what is meant by "what might have been and what has been." Take a zero-order model A = (W, E, R, π(A)). If an x ∈ W is "what has been" (in A), we have x ∈ H(v(A)). If x is "what might have been" in A, then x ∉ π(A) ∧ xEv(A) ∧ (∃y ∈ H(v(A)))yRx. Second, we must define what is meant by "one end" of a state of affairs that is either realized or that might have been realized. We assume that "the end" in A of x ∈ W is a zero-order model B that may be taken as an end of x being possible: in B, the state of affairs x ceases to be possible; it is being realized or it becomes "what might have been." Write END(x, A, B) to denote the fact that B is the end of x in A; then we may define END(x, A, B) iff A ⪯ B ∧ (v(B) = x ∨ v(B)Sx). By Theorem 17, for any model A ∈ M and any x ∈ W such that xEv(A), there is an end B ∈ M of x in A. It is easy to see that such an end is unique.
If we allow simultaneous interpretations in A and in 𝒜^Π, we may interpret "What might have been and what has been point to one end" in A, and "always present" in 𝒜^Π. The line quoted above can be written as follows:

x ≠ v(A) ∧ xEv(A) ∧ (((∃y ∈ H(v(A)))yRx) ∨ x ∈ π(A)) ⇒
∃!B(END(x, A, B) ∧ (∀Π(C) ∈ Φ(Π(A)))B ∈ Π(C)).

As our previous considerations show, this sentence is true.

6. INDETERMINISM

Although the quoted verses might look very deterministic, the open future in the models gives rise to
indeterminism.
Following [2], define B ⊥ C iff ¬(B ⪯ C) ∧ ¬(C ⪯ B). Also, for any M' ⊆ Φ(A), let M'^⊥ = {B | ∀C(C ∈ M' ⇒ B ⊥ C)} be the orthocomplement of M'. M' is a simple outcome iff M' = M'^⊥⊥. Let M' ⊔ M'' = (M' ∪ M'')^⊥⊥ and let I be the set of simple outcomes; then (I, ∩, ⊔, ⊥) is a Boolean algebra (cf. [2]). Furthermore, let J ⊆ Φ(A), and for any B, C ∈ Φ(A), define B ⊥_J C by B ⊥ C and glb(B, C) ∈ J. By analogy with M'^⊥, ∩, and ⊔ we define M'^⊥_J, ∩_J, and ⊔_J as follows: M'^⊥_J = {B | ∀C(C ∈ M' ⇒ B ⊥_J C)}, M' ∩_J M'' = M' ∩ M'', and M' ⊔_J M'' = (M' ∪ M'')^⊥_J⊥_J. As in [2], we can show that the ortholattice (I_J, ∩_J, ⊔_J, ⊥_J) of outcomes emerging from Φ(A) and relativized to J is orthomodular rather than distributive.
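
The outcome construction can also be experimented with directly. The sketch below is only a finite illustration under an editorial encoding of ⊥ (it is not the construction of [2] itself): it computes the orthocomplement M'^⊥ and lists the sets M' with M' = M'^⊥⊥, the simple outcomes, inside a future consisting of two incomparable branches.

from itertools import chain, combinations

# Future of A: two incomparable branches, B1 <= B2 and C1 <= C2.
PHI_A = ["B1", "B2", "C1", "C2"]
LEQ = {(x, x) for x in PHI_A} | {("B1", "B2"), ("C1", "C2")}

def orthogonal(x, y):
    # B _|_ C iff neither B <= C nor C <= B.
    return (x, y) not in LEQ and (y, x) not in LEQ

def perp(s):
    # Orthocomplement: everything orthogonal to all of s.
    return frozenset(x for x in PHI_A if all(orthogonal(x, y) for y in s))

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

simple_outcomes = sorted({frozenset(s) for s in powerset(PHI_A)
                          if frozenset(s) == perp(perp(frozenset(s)))},
                         key=sorted)
for outcome in simple_outcomes:
    print(sorted(outcome))

For this two-branch example the simple outcomes are the empty set, the two branches, and the whole future, i.e., a four-element Boolean algebra, in line with the claim that (I, ∩, ⊔, ⊥) is Boolean.
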
Acknowledgments. The first few pages of an earlier version of this article were also inspired by [3] and written jointly with Nataša Rakić in 1990. I wish to thank her for insisting upon Theorem 20, which later on proved to be a central point in the interpretation of some verses. I am grateful to Dubravka Pavlis for valuable comments on some properties of the relation R, which resulted in a new version of the article. I wish to thank Ilijas Farah for carefully checking the proofs of all theorems, some corrections, and improvements. In particular, he pointed out that an earlier version of axioms R5 and R6 was insufficient to prove Theorem 7, and that an earlier proof of Theorem 20 could be made shorter. Chris Weeks encouraged me to prove Theorem 13. I want to thank Dilys Wadman for her remark that the interpretation of the last line had to be made more precise. Finally, I thank Žarko Mijajlović for some useful comments and encouragement.

REFERENCES

1. J. Dieudonné, Foundations of Modern Analysis, Academic Press, New York (1961).
2. N. D. Belnap, "The very idea of an outcome," Sc. Review, Ser. Sc. Eng., 19-20, Belgrade (1996), pp. 15-16.
3. R. S. Woolhouse, "Tensed modalities," J. Phil. Log., 2, 393-415 (1973).

Classifications, titles translations to Serbian and Internet addresses
assigned to papers

A note on E
(Jedna primedba o sistemu E)
https://projecteuclid.org/euclid.ndjfl/1093890632
Classification: Relevance logic

Deduction theorems for relevant logics


(Teoreme dedukcije za releventne logike)
http://onlinelibrary.wiley.com/doi/10.1002/malq.19730190306/abstract
Classification: Relevance logic

Preference and choice


(Preferencije i izbor)
https://link.springer.com/article/10.1007/BF00169106
Classification: Preference, choice, von Wright

Deduction theorems for T, E and R reconsidered


(Ponovno razmatranje teorema dedukcije za sisteme T, E i R)
http://onlinelibrary.wiley.com/doi/10.1002/malq.19760220135/abstract
Classification: Relevance logic

An analysis of causality
(Analiza kauzaliteta)
https://link.springer.com/chapter/10.1007/978-94-010-1823-4_9
Classification: Causality

Logika i vreme
https://philarchive.org/archive/KROLIV
Classification: Logic and Philosophy of Logic

Gentzen formulations of two positive relevance logics


(Gencenovske formulacije dvaju sistema pozitivne relevantne logike)
https://link.springer.com/article/10.1007/BF00713549
Classification: Proof theory, relevance logic

Entailment and quantum logic


(Entailment i kvantna logika)
https://link.springer.com/chapter/10.1007/978-1-4613-3228-2_14
Classification: Quantum logic

A constructive proof of a theorem in relevance logic


(Konstruktivni dokaz jedne teoreme relevantne logike)
http://onlinelibrary.wiley.com/doi/10.1002/malq.19850312505/full
Classification: Proof theory, relevance logic

Four relevant Gentzen systems


(Četiri gencenovska sistema relevantne logike)
https://link.springer.com/article/10.1007/BF00396905
Classification: Gentzen systems, proof theory, relevance logic

Temporal modalities and modal tense operators


(Temporalne modalnosti i modalni vremenski operatori)
https://link.springer.com/chapter/10.1007/978-94-009-2821-3_10
Classification: Temporal logic, modal logic

Decidability and interpolation for a first-order relevance logic
(Odlučivost i interpolacija u relevantnoj logici prvog reda)
https://global.oup.com/academic/product/substructural-logics-9780198537779?cc=rs&lang=en&
Classification: decidability, proof theory, relevance logic

Identity, permutation and binary trees


(Identičnost, permutacija i binarna drveta)
http://elib.mi.sanu.ac.rs/files/journals/flmt/9c/flmn70p765-781.pdf
Classification: relevance logic, combinatorics

Identity and permutation


(Identičnost i permutacija)
http://elib.mi.sanu.ac.rs/files/journals/publ/77/n071p165.pdf
Classification: relevance logic, combinatorics

The law of assertion and the rule of restricted permutation


(Zakon tvrdnje i pravilo ograničene permutacije)
https://eng.iph.ras.ru/page17019644.htm
Classification: relevance logic, combinatorics

Between TW and RW


(Između sistema TW i sistema RW)
http://elib.mi.sanu.ac.rs/files/journals/publ/83/n077p009.pdf
Classification: Relevance logic

A semantics for the first quartet by T. S. Eliot


(Semantika za prvi kvartet T.S. Eliota)
https://link.springer.com/article/10.1007/BF02671726
Classification: model theory, poetry

CIP - Каталогизација у публикацији - Народна библиотека Србије,
Београд

16 Крон А.

KRON, Aleksandar, 1937-2000


Collected Works in Logic of Aleksandar Kron [Elektronski izvor] : On the
occasion of the 80th Anniversary of his birth / Miloš Arsenijević, Žarko
Mijajlović ( eds.). - Beograd : The Serbian philosophical society = Srpsko
filozofsko društvo : Mathematical Institute of SASA = Matematički institut
SANU, 2017. - [1] elektronski optički disk (CD-ROM) ; 12 cm

Nasl. s naslovnog ekrana. - Nasl. u kolofonu: Sabrani spisi iz logike


Aleksandra Krona : povodom 80 godina rodjenja. - Tekst na engl. i srp.
jeziku.

ISBN 978-86-81349-42-7
1. Dr. up. stv. nasl.
a) Крон, Александар (1937-2000) - Логика - Зборници b) Крон, Александар
(1937-2000) - Математичка логика - Зборници
COBISS.SR-ID 254291724
