
The origins of thought

A journey of thought into thought

Galton's bean machine

Anti-prologue: The grand human illusion

In the beginning, people were no more intelligent than the other creatures. At this stage a notion such as a divine entity had no meaning, and every aspect of human activity was focused on mere survival, even if some idea of a supernatural world, connected with nature and the stars, had already existed. It was then that the instinctive, unconscious brain gave way to the moral, logical brain.

What was the cause of this shift? Some would say that it had to do with purely random processes of evolution. Others would attribute it to some sort of divine intervention. I'd rather say that, according to the anthropic principle, the appearance of intelligent life at some stage in the universe was somehow programmed from the beginning. However, one way or the other, all these assumptions fall into the category of the moral-logical brain, as it is important that we realize that logic is part of ethics, and vice versa.

Why are meaning and cause so important for us (even within the strict context of survival)? Is it just because of our mortal nature that we all need an ethical-metaphysical basis to rely on? The world of miracles mainly belongs to the gods, so that cause and meaning seem to transcend the sphere of our everyday, material world. But again this is just the interpretation of our moral mind, suggesting or even imposing on nature what she should be and how she should behave. Is there another way that we may prove or, better, agree that human morality corresponds to some kind of universal ethics?

Not only morality but also the other fundamental questions of our being, such as "Where do we come from?", "How did the universe begin?", or "Is individual existence preserved after death?", may be considered only according to a generalized correspondence principle, or principle of analogy. In simple words, our thoughts, our beliefs, and our sentiments or feelings should correspond (or be analogous) to nature's respective properties. If they didn't, then we would be like castaways, cut off not only from any ethical or logical validity but, literally, seeming to live outside nature and the universe. But the fact that, one way or the other, we are part of this world forms, if not a proof by itself, at least a confirmation that what takes place in the world also takes place in ourselves.

According to this realization, a complete understanding of the world does not seem impossible, even if it may prove very difficult and effort-consuming to achieve, even if it would take an eternity to accomplish. It teaches us that if we grasp the totality of the world, if we consider ourselves as parts of a larger whole which consists of mutually related parts, we may understand the meaning of life and of the universe at the largest scale and highest level. But what about the smallest parts? Would it be enough to divide, let's say, wholeness into a dozen different, fundamental blocks of matter, and accordingly build a theory of the universe based on the interactions between these fundamental entities? I guess that this is not enough. The indivisibility of natural processes, as implied by the notion of wholeness itself, prevents us from doing so. Even if we tried to reconstruct wholeness, the resulting object would look like a reassembled broken glass, not as fabulous as the original anyway.

One way or the other, the point is that when we construct a theory about facts, objects, even about ourselves and reality in general, we have to pay attention both to the individual parts we use and to the totality of the final object. We can't do this simultaneously, at least not consciously, but from time to time we have to consult the general idea in order not to make a mess of the pieces of the puzzle to be solved.

There is a final preliminary remark I would like to make here. When we construct a theory about reality, one that is more than a vague idea about what it might look like, the formal language we use in connection with our methodological procedure is very important. We choose some particular, abstract symbols, and we use them consistently and universally as patterns or modes of the whole process. For example, all poets know that as soon as they start to write, writing itself guides them in what they intended to write.

The same goes for mathematics. Since the time of classical physics, a whole new set of rules and symbols has been invented in order to express new notions corresponding to new discoveries about nature, which according to the new rules also includes ourselves. These symbols still preserve their algebraic, arithmetical character, although now they seem to act on or project themselves onto other symbols, and this sort of interaction obeys matrix algebra more than classical addition.
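The contrast between classical addition and these newer symbolic rules can be made concrete with a minimal sketch (the particular 2x2 matrices below are a hypothetical illustration, not taken from the text): matrix multiplication, unlike addition, does not commute.

```python
# Two 2x2 matrices represented as nested lists (a hypothetical illustration).
def matmul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]

print(matmul(A, B))  # [[1, 0], [0, 0]]
print(matmul(B, A))  # [[0, 0], [0, 1]]
# A*B != B*A: the "interaction" of such symbols does not commute,
# whereas A + B = B + A always holds for classical addition.
```

The point of the sketch is only the asymmetry: the order in which the symbols "act on" one another changes the result, which never happens with ordinary sums.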

Is this new kind of formal reasoning enough to describe nature, or do we need a far more advanced, non-additive or non-commuting basic structure of a mental language in order to communicate better with a more complex reality? An intriguing aspect, which is also very comforting, is that, according to the aforementioned principle of analogy, nature evolves as much as we evolve. But is nature as we know it all that nature can be, anywhere and anytime? I believe the answer is yes, and the reason has already been mentioned above: we are parts of nature, even if not the most intelligent ones. But we are getting on, and this progress is parallel to evolution. Now, if intelligence elsewhere in the universe grew at a faster (or slower) rate than ours, this would be irrelevant, because even so there would be another, more advanced species to overtake us. So the real problem has to do not only with our thought, but also with the fact of intelligent life at a universal scale and level. And I personally feel that this remarkable aspect should be granted to everything and to everyone.

The logical-ethical world of reasoning

Whether logic includes the whole sphere of thought is a question that needs an answer. How different, for example, is logic from an instinctive mental response? It may be shown that we are far more instinctive than we think we are. Is the existence of God in our thoughts a proof that there is something divine in them, so that we are different from all other creatures? When a dog obeys its master, is this act of worship fundamentally different from a human being praying to a god? The main point here is that we should realize that, one way or the other, logic is subject to the ethics of our mental character. For example, any of our considerations about right or wrong, about our origins in the cosmos or our destination (a meaning and a cause), betrays the ethical dimension of our minds. This dimension cannot be simply explained by instinctive fears or material pain. Animals do experience fear and pain, and they may suffer from the loss of one of their own. But they don't bury their dead, nor do they have any notion of an afterlife. What makes us so special is an aspect of continuity ad infinitum, in other words the capacity to go far beyond ourselves and to gain access to a universal mind. If we were asked to mention just one thing that could be considered objective, we would realize that there isn't any. There are different gods for different religions, different theories and opinions about our origin and our cause, and even if we think about ourselves we realize that there isn't any certain, common picture of who we are. We always project our view of the world and of others, so that what we see is a personal aspect of an objective reality. We are subjects dealing with other objects, even if these objects are other beings, and we will never put ourselves in their place.

However, not only will we never really know how others feel and what they think, but we may also never find any clue that what we think or how we feel about ourselves constitutes an objective, absolutely personal reality. It seems more likely that we share a common experience that takes place in the world with other beings and people, and that we just choose or individualize facts and processes which we consider to belong to us and nobody else. Furthermore, even if we have taken possession of a part of the collective experience and begin the process of analyzing it, the logical conclusions that we come to just confirm the external origin of the fundamental assumptions we have used. This incompleteness of our logic forms a well-known theorem which will be stated further down. It reveals the self-referential nature not only of our basic logical procedures, but also of our own existence with respect to a wider process of being which reveals itself to us as infinite and primal.

We may gather here the first two conclusions about the processes and origins of our thoughts as follows:

1) No experience can be claimed to be personal.

2) No logical sentence can be proved by logical assumptions alone.

And we may see the respective consequences:

1) Personal consciousness is part of a collective information process which becomes familiarized and individualized.

2) Logic is based on truths which therefore should be of spiritual origin.

How fragmented the continuity by which we perceive and understand the external world can be is summarized in the following sentences-processes:

Process one: Mental perception of the object: its image (A) becomes perceived by consciousness (B).

Process two: Physical appreciation of the object: consciousness (B) analyzes its physical properties (C).

A = B
B = C
Therefore, A = C.

The process is cyclic and self-referential: if B is the object, both A (perception) and C (cognition) are representations, even if C corresponds to a direct perception (e.g. physical contact).
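The chain above is simply transitivity of identification. As a minimal formal sketch (the propositions A, B, C standing for image, object, and physical properties are a hypothetical modeling choice):

```lean
-- A hypothetical formalization: A (image), B (object), C (physical properties)
-- are identified pairwise; transitivity closes the chain A = C.
theorem perception_chain (A B C : Prop) (h₁ : A ↔ B) (h₂ : B ↔ C) : A ↔ C :=
  Iff.trans h₁ h₂
```

The self-referential character noted in the text lies outside this little proof: the proof only records that the two identifications compose.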

Boole's laws of thought

Clarke and Spinoza

Consciousness in some sense is a process of realizing what we perceive. In other words, we could say that it is thought squared. A mathematical approach to our mental processes was an inescapable outcome once the foundations of mathematical reasoning were established. George Boole was one of the first to realize the mathematical aspect of reasoning. But logic is what mathematics is all about. Because whatever the objective harmony in the world may be, we set our own rules of analogy according to the way we perceive the world and based on the functions of our own system of logic. Therefore, the way we think and understand the world is fundamentally logical, consisting of basic units of yes and no answers relative to some natural phenomenon, so that it is really remarkable how complex and multilateral thought emerges from this simplicity of brain function. We will now refer to George Boole's Laws of Thought. Firstly, we will mention a logical dilemma about the existence of God, keeping in mind how important the moral or divine aspect of reasoning seems to be. Secondly, we will dive deeper into the realm of our own thought and take a glimpse at its properties, which may reveal a fundamental characteristic of order. With respect to Clarke's Demonstration of the Being and Attributes of God, Boole says that it consists of a series of propositions or theorems, each of them proved by means of premises resolvable, for the most part, into two distinct classes, viz., facts of observation, such as the existence of a material world, the phenomenon of motion, &c., and hypothetical principles, the authority and universality of which are supposed to be recognized a priori. Though the trains of argument of which it consists are not in general very clearly arranged, they are almost always specimens of correct Logic, and they exhibit a subtlety of apprehension and a force of reasoning which have seldom been equaled, never perhaps surpassed. We see in them the consummation of those intellectual efforts which were awakened in the realm of metaphysical inquiry, at a period when the dominion of hypothetical principles was less questioned than it now is, and when the rigorous demonstrations of the newly risen school of mathematical physics seemed to have furnished a model for their direction.
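The binary, yes/no character of reasoning described above is exactly what Boole formalized in his fundamental "index law" x² = x, whose only numerical solutions are 0 and 1 — which is why two truth values suffice. A minimal sketch (the variable names are illustrative, not from Boole):

```python
# Boole's index law: for any class-symbol x, x*x = x.
# Among the integers, the only solutions of x*x == x are 0 and 1,
# which is why "no" and "yes" are enough as basic units of thought.
solutions = [x for x in range(-10, 11) if x * x == x]
print(solutions)  # [0, 1]

# The same law read logically: "x AND x" is just x.
for x in (False, True):
    assert (x and x) == x
```

The design point is Boole's: an algebra whose symbols satisfy x² = x is forced to be two-valued, so ordinary arithmetic on {0, 1} can carry the whole calculus of logic.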

Then Boole mentions the fundamental propositions:

Proposition I. Something has existed from eternity. For since something now is, 'tis manifest that something always was. Otherwise the things that now are must have risen out of nothing, absolutely and without cause. Which is a plain contradiction in terms. For to say a thing is produced, and yet that there is no cause at all of that production, is to say that something is effected when it is effected by nothing, that is, at the same time when it is not effected at all. Whatever exists has a cause of its existence, either in the necessity of its own nature, and thus it must have been of itself eternal: or in the will of some other being, and then that other being must, at least in the order of nature and causality, have existed before it.

Proposition II. Some one unchangeable and independent Being has existed from eternity.

As Boole comments, It may be observed, that the impossibility of infinite succession, the proof of which forms a part of Clarke's argument, has commonly been assumed as a fundamental principle of metaphysics, and extended to other questions than that of causation. Aristotle applies it to establish the necessity of first principles of demonstration; the necessity of an end (the good), in human actions, &c. There is, perhaps, no principle more frequently referred to in his writings. By the schoolmen it was similarly applied to prove the impossibility of an infinite subordination of genera and species, and hence the necessary existence of universals. Apparently the impossibility of our forming a definite and complete conception of an infinite series, i.e. of comprehending it as a whole, has been confounded with a logical inconsistency, or contradiction in the idea itself. Boole goes on enumerating Clarke's logical propositions concerning the existence of God:

Proposition III. That unchangeable and independent Being must be self-existent.

Boole goes on to comment on the previous proposition: In Dr. Samuel Clarke's observations on the above proposition occurs a remarkable argument, designed to prove that the material world is not the self-existent being above spoken of. The passage to which I refer is the following: If matter be supposed to exist necessarily, then in that necessary existence there is either included the power of gravitation, or not. If not, then in a world merely material, and in which no intelligent being presides, there never could have been any motion; because motion, as has been already shown, and is now granted in the question, is not necessary of itself. But if the power of gravitation be included in the pretended necessary existence of matter: then, it following necessarily that there must be a vacuum (as the incomparable Sir Isaac Newton has abundantly demonstrated that there must, if gravitation be an universal quality or affection of matter), it follows likewise, that matter is not a necessary being. For if a vacuum actually be, then it is plainly more than possible for matter not to be. According to Clarke's syllogisms, if there is a force of gravity then there must be a vacuum, because otherwise motion cannot take place; and it follows that matter is not a necessary being. I would say that this is a proof by contradiction that matter should exist, since otherwise gravity wouldn't exist. But it is supposed that gravity does exist, so matter must exist too. That means that an unchangeable and independent Being cannot be self-existent, unless it is not composed of matter. But even if there exists a purely spiritual state of being, the existence of matter shows the dependence between the former and the latter, so that any form of existence should depend on both. Anyhow, Boole goes on to mention the rest of Clarke's demonstration: Of the remainder of Dr. Clarke's argument I shall briefly state the substance and connexion, dwelling only on certain portions of it which are of a more complex character than the others, and afford better illustrations of the method of this work. In Prop. IV. it is shown that the substance or essence of the self-existent being is incomprehensible. The tenor of the reasoning employed is, that we are ignorant of the essential nature of all other things, much more, then, of the essence of the self-existent being. In Prop. V. it is contended that though the substance or essence of the self-existent being is itself absolutely incomprehensible to us, yet many of the essential attributes of his nature are strictly demonstrable, as well as his existence. In Prop. VI. it is argued that the self-existent being must of necessity be infinite and omnipresent; and it is contended that his infinity must be an infinity of fullness as well as of immensity. The ground upon which the demonstration proceeds is, that an absolute necessity of existence must be independent of time, place, and circumstance, free from limitation, and therefore excluding all imperfection.

In Prop. VII. it is argued that the self-existent being must of necessity be One. The order of the proof is, that the self-existent being is necessarily existent, that necessity absolute in itself is simple and uniform, and without any possible difference or variety, that all variety or difference of existence implies dependence; and hence that whatever exists necessarily is the one simple essence of the self-existent being.

In Prop. VIII. it is argued that the self-existent and original cause of all things must be an Intelligent Being. The main argument adduced in support of this proposition is, that as the cause is more excellent than the effect, the self-existent being, as the cause and original of all things, must contain in itself the perfections of all things; and that Intelligence is one of the perfections manifested in a part of the creation. In Prop. X. it is argued, that the self-existent being, the supreme cause of all things, must of necessity have infinite power. The ground of the demonstration is, that as all the powers of all things are derived from him, nothing can make any difficulty or resistance to the execution of his will. It is defined that the infinite power of the self-existent being does not extend to the making of a thing which implies a contradiction, or the doing of that which would imply imperfection (whether natural or moral) in the being to whom such power is ascribed, but that it does extend to the creation of matter, and of an immaterial, cogitative substance, endued with a power of beginning motion, and with a liberty of will or choice. Upon this doctrine of liberty it is contended that we are able to give a satisfactory answer to that ancient and great question, what is the cause and original of evil? The argument on this head I shall briefly exhibit. All that we call evil is either an evil of imperfection, as the want of certain faculties or excellencies which other creatures have; or natural evil, as pain, death, and the like; or moral evil, as all kinds of vice. The first of these is not properly an evil; for every power, faculty, or perfection, which any creature enjoys, being the free gift of God, ...
it is plain the want of any certain faculty or perfection in any kind of creatures, which never belonged to their natures, is no more an evil to them, than their never having been created or brought into being at all could properly have been called an evil. The second kind of evil, which we call natural evil, is either a necessary consequence of the former, as death to a creature on whose nature immortality was never conferred; and then it is no more properly an evil than the former. Or else it is counterpoised on the whole with as great or greater good, as the afflictions and sufferings of good men, and then also it is not properly an evil; or else, lastly, it is a punishment, and then it is a necessary consequence of the third and last kind of evil, viz., moral evil. And this arises wholly from the abuse of liberty which God gave to His creatures for other purposes, and which it was reasonable and fit to give them for the perfection and order of the whole creation. Only they, contrary to God's intention and command, have abused what was necessary to the perfection of the whole, to the corruption and depravation of themselves. And thus all sorts of evils have entered into the world without any diminution to the infinite goodness of the Creator and Governor thereof. The previous results of Boole's reasoning may be stated as follows:

Evils are either absolute evils, which are consequences of the abuse of liberty, or they are natural evils, which are consequences of imperfection.

This is why we said from the beginning that ethics is the meaningful consequence of logic. The whole conversation about the existence or non-existence of God ended up in a discussion about right and wrong. The notion of order as conceived by human thought transforms into a moral code within our souls. From this point onwards, any logical attempt at proof is based on, and biased by, truths already established.

As far as the demonstration of Spinoza about the existence of God is concerned, Boole makes the following remarks: The Ethics of Benedict Spinoza is a treatise, the object of which is to prove the identity of God and the universe, and to establish, upon this doctrine, a system of morals and of philosophy. The analysis of its main argument is extremely difficult, owing not to the complexity of the separate propositions which it involves, but to the use of vague definitions, and of axioms which, through a like defect of clearness, it is perplexing to determine whether we ought to accept or to reject. While the reasoning of Dr. Samuel Clarke is in part verbal, that of Spinoza is so in a much greater degree; and perhaps this is the reason why, to some minds, it has appeared to possess a formal cogency, to which in reality it possesses no just claim

Boole makes an algebraic analysis of Spinoza's arguments, and then goes on to wonder if there really is any chance that humans can understand God: It is not possible, I think, to rise from the perusal of the arguments of Clarke and Spinoza without a deep conviction of the futility of all endeavors to establish, entirely a priori, the existence of an Infinite Being, His attributes, and His relation to the universe. The fundamental principle of all such speculations, viz., that whatever we can clearly conceive, must exist, fails to accomplish its end, even when its truth is admitted. For how shall the finite comprehend the infinite? Yet must the possibility of such conception be granted, and in something more than the sense of a mere withdrawal of the limits of phenomenal existence, before any solid ground can be established for the knowledge, a priori, of things infinite and eternal... Were it said, that there is a tendency in the human mind to rise in contemplation from the particular towards the universal, from the finite towards the infinite, from the transient towards the eternal; and that this tendency suggests to us, with high probability, the existence of more than sense perceives or understanding comprehends...

There is, however, a class of speculations, the character of which must be explained in part by reference to other causes,- impatience of probable or limited knowledge, so often all that we can really attain to; a desire for absolute certainty where intimations sufficient to mark out before us the path of duty, but not to satisfy the demands of the speculative intellect, have alone been granted to us; perhaps, too, dissatisfaction with the present scene of things. With the undue predominance of these motives, the more sober procedure of analogy and probable induction falls into neglect. Yet the latter is, beyond all question, the course most adapted to our present condition. To infer the existence of an intelligent cause from the teeming evidences of surrounding design, to rise to the conception of a moral Governor of the world, from the study of the constitution and the moral provisions of our own nature;- these, though but the feeble steps of an understanding limited in its faculties and its materials of knowledge, are of more avail than the ambitious attempt to arrive at a certainty unattainable on the ground of natural religion. And as these were the most ancient, so are they still the most solid foundations, Revelation being set apart, of the belief that the course of this world is not abandoned to chance and inexorable fate.

Here I would agree with Boole. If God were just the product of formal logical deduction, then He would be subject to all the fallacies and expediencies of human necessity. But if we regard God as a being existing beyond our purest and most advanced thoughts, then we submit ourselves to a process of mental and moral progress. So even if the natural world is subject to a force of necessity, this force is controlled and guided by the highest considerations of mental awareness. We will always feel fear and pain, but within our thoughts this fear and pain do not exist, not physically in any sense, so that a greater mental power can guide us through. What is truly remarkable is not that our thoughts tell us whether something is right or wrong but, taking either right or wrong as granted, that we have the freedom to choose, even if a wrong choice may lead us to absolute evil.

Constitution of the intellect

Next comes Boole's analysis of the notion of a system, such as that of thought, and of what is necessary for it to function properly according to an intrinsic moral faculty: What I mean by the constitution of a system is the aggregate of those causes and tendencies which produce its observed character, when operating, without interference, under those conditions to which the system is conceived to be adapted. Our judgment of such adaptation must be founded upon a study of the circumstances in which the system attains its freest action, produces its most harmonious results, or fulfills in some other way the apparent design of its construction. There are cases in which we know distinctly the causes upon which the operation of a system depends, as well as its conditions and its end. This is the most perfect kind of knowledge relatively to the subject under consideration.

There are also cases in which we know only imperfectly or partially the causes which are at work, but are able, nevertheless, to determine to some extent the laws of their action, and, beyond this, to discover general tendencies, and to infer ulterior purpose. It has thus, I think rightly, been concluded that there is a moral faculty in our nature, not because we can understand the special instruments by which it works, but because while, in some form or other, the sentiment of moral approbation or disapprobation manifests itself in all, it tends, wherever human progress is observable, wherever society is not either stationary or hastening to decay, to attach itself to certain classes of actions, consentaneously, and after a manner indicative both of permanency and of law. Always and everywhere the manifestation of Order affords a presumption, not measurable indeed, but real, of the fulfillment of an end or purpose, and the existence of a ground of orderly causation.

Someone could say that intelligence is a product of chance. Those who disagree, of course, claim that the probability of such a coincidence is practically zero. They also tend to believe that only a supreme and primordial intelligence could have created the natural world and, thus, human intelligence. Boole, however, follows the middle path. According to him, neither can coincidence lead to intelligence, nor can a supreme entity explain the uniqueness and particularities of human thought. This is why, as he explains, the search for truth by humans, while they are overwhelmed by the experience of the external world, must also be accompanied by the study of their own internal nature and reality: The particular question of the constitution of the intellect has, it is almost needless to say, attracted the efforts of speculative ingenuity in every age. For it not only addresses itself to that desire of knowledge which the greatest masters of ancient thought believed to be innate in our species, but it adds to the ordinary strength of this motive the inducement of a human and personal interest. A genuine devotion to truth is, indeed, seldom partial in its aims, but while it prompts to expatiate over the fair fields of outward...

This way, the experimental basis of modern science is established, and the nature of scientific truth is attested. Human knowledge, according to Boole, is based on the main facts of scientific truth, and of the human intellect in general: we are able to deduce from the partial events of experience the general conclusions of science, thanks to our inherent capability to perceive order: Thus the necessity of an experimental basis for all positive knowledge, viewed in connection with the existence and the peculiar character of that system of mental laws, and principles, and operations, to which attention has been directed, tends to throw light upon some important questions by which the world of speculative thought is still in a great measure divided. How, from the particular facts which experience presents, do we arrive at the general propositions of science? What is the nature of these propositions? Are they solely the collections of experience, or does the mind supply some connecting principle of its own? In a word, what is the nature of scientific truth, and what are the grounds of that confidence with which it claims to be received?...

When from a large number of observations on the planet Mars, Kepler inferred that it revolved in an ellipse, the conclusion was larger than his premises, or indeed than any premises which mere observation could give. What other element, then, is necessary to give even a prospective validity to such generalizations as this? It is the ability inherent in our nature to appreciate Order, and the concurrent presumption, however founded, that the phenomena of Nature are connected by a principle of Order. Without these, the general truths of physical science could never have been ascertained... The security of the tenure of knowledge consists in this, that wheresoever such conclusions do truly represent the constitution of Nature, our confidence in their truth receives indefinite confirmation, and soon becomes undistinguishable from certainty...
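Kepler's inference — a general ellipse from finitely many particular observations — can be miniaturized as a least-squares fit. The sketch below is a hypothetical illustration (an axis-aligned ellipse with invented semi-axes and noise, nothing like Kepler's actual data): from a handful of noisy points we recover the general law x²/a² + y²/b² = 1 that "is larger than the premises".

```python
import math
import random

def fit_axes(points):
    """Least-squares fit of u*x^2 + v*y^2 = 1; returns semi-axes (a, b)."""
    # Normal equations for the unknowns u = 1/a^2, v = 1/b^2.
    sxx = sum(x**4 for x, y in points)
    syy = sum(y**4 for x, y in points)
    sxy = sum(x**2 * y**2 for x, y in points)
    sx = sum(x**2 for x, y in points)
    sy = sum(y**2 for x, y in points)
    det = sxx * syy - sxy * sxy
    u = (sx * syy - sy * sxy) / det
    v = (sy * sxx - sx * sxy) / det
    return 1 / math.sqrt(u), 1 / math.sqrt(v)

# "Observations" of an orbit with semi-axes a=5, b=3, lightly perturbed.
random.seed(0)
obs = [(5 * math.cos(t) + random.uniform(-0.01, 0.01),
        3 * math.sin(t) + random.uniform(-0.01, 0.01))
       for t in (i * 0.3 for i in range(21))]
a, b = fit_axes(obs)
print(round(a, 2), round(b, 2))  # close to 5 and 3
```

The presumption of Order is built into the code: the fit only works because we assume in advance that an ellipse, a general form no finite set of points can dictate by itself, connects the observations.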

Modern writers of high repute have contended, that all reasoning is from particular to particular truths. They instance, that in concluding from the possession of a property by certain members of a class, its possession by some other member, it is not necessary to establish the intermediate general conclusion which affirms its possession by all the members of the class in common. Now whether it is so or not, that principle of order or analogy upon which the reasoning is conducted must either be stated or apprehended as a general truth, to give validity to the final conclusion. In this form, at least, the necessity of general propositions as the basis of inference is confirmed,- a necessity which, however, I conceive to be involved in the very existence, and still more in the peculiar nature, of those faculties whose laws have been investigated in this work. For if the process of reasoning be carefully analyzed, it will appear that abstraction is made of all peculiarities of the individual to which the conclusion refers, and the attention confined to those properties by which its membership of the class is defined.

The fact that a conclusion can be greater than the corresponding hypotheses is analogous to the case when a sum is greater than its parts. This may be due to the fact that a sum also includes the binding energy of its constituents. However, many have doubted that the rules of inference are capable by themselves of capturing, beyond the causal relations of the parts, the idea of totality. In this case, according to this view, human thought must rely on pre-existing and everlasting forms, or archetypes, that guide our thoughts towards the realization of inescapable, eternal truths. On the other side are those who say that the so-called archetypes are partial products of human thought, which occur by deduction, and thus are doomed to be incomplete. For Boole, truth is again found somewhere in the middle. He refers to the example of geometrical shapes. The circle, as a perfect geometrical object, is not found in nature. Instead, we humans imagine the corresponding process which forms a circle, and which, somehow, becomes perfectly round within our thoughts. This way, as we approach the notion of an object through a physical process of thought which is not perfect by itself, we build the truth, and create, thanks to our thought, a notion of perfection and a form perhaps more ideal than the nature of the phenomenon which we originally wanted to grasp. In a similar way we build theories in physics. We observe natural phenomena and, based on previous remarks and experiences, we posit natural laws which hold, if not for all cases, for the greater part of similar phenomena: But besides the general propositions which are derived by induction from the collated facts of experience, there exist others belonging to the domain of what is termed necessary truth... The question concerning their nature and origin is a very ancient one, and as it is more intimately connected with the inquiry into the constitution of the intellect than any other to which allusion has been made, it will not be irrelevant to consider it here. Among the opinions which have most widely prevailed upon the subject are the following.
It has been maintained, that propositions of the class referred to exist in the mind independently of experience, and that those conceptions which are the subjects of them are the imprints of eternal archetypes. With such archetypes, conceived, however, to possess a reality of which all the objects of sense are but a faint shadow or dim suggestion, Plato furnished his ideal world.

It has, on the other hand, been variously contended, that the subjects of such propositions are copies of individual objects of experience; that they are mere names; that they are individual objects of experience themselves; and that the propositions which relate to them are, on account of the imperfection of those objects, but partially true; lastly, that they are intellectual products formed by abstraction from the sensible perceptions of individual things, but so formed as to become, what the individual things never can be, subjects of science, i.e. subjects concerning which exact and general propositions may be affirmed.

Now if the last of the views above adverted to be taken (for it is not proposed to consider either the purely ideal or the purely nominalist view) and if it be inquired what, in the sense above stated, are the proper objects of science, objects in relation to which its propositions are true without any mixture of error, it is conceived that but one answer can be given. It is, that neither do individual objects of experience, nor with all probability do the mental images which they suggest, possess any strict claim to this title.

It seems to be certain, that neither in nature nor in art do we meet with anything absolutely agreeing with the geometrical definition of a straight line, or of a triangle, or of a circle, though the deviation therefrom may be inappreciable by sense; and it may be conceived as at least doubtful, whether we can form a perfect mental image, or conception, with which the agreement shall be more exact. But it is not doubtful that such conceptions, however imperfect, do point to something beyond themselves, in the gradual approach towards which all imperfection tends to disappear. Although the perfect triangle, or square, or circle, exists not in nature, eludes all our powers of representative conception, and is presented to us in thought only, as the limit of an indefinite process of abstraction, yet, by a wonderful faculty of the understanding, it may be made the subject of propositions which are absolutely true. The domain of reason is thus revealed to us as larger than that of imagination.
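Boole's perfect circle, "presented to us in thought only, as the limit of an indefinite process of abstraction," can be given a concrete numerical face. The sketch below (plain Python; the particular polygon sizes are my own choice for illustration) computes the perimeter of regular polygons inscribed in a unit circle: no single polygon is the circle, yet the sequence of perimeters converges to the circle's circumference 2π, which exists only as the limit of the process.

```python
import math

def ngon_perimeter(n: int) -> float:
    """Perimeter of a regular n-gon inscribed in a circle of radius 1."""
    # Each of the n sides subtends an angle 2*pi/n at the center,
    # so each chord has length 2*sin(pi/n).
    return n * 2 * math.sin(math.pi / n)

# No polygon in the sequence is round, but the limit of the process is:
for n in (6, 60, 600, 6000):
    print(n, ngon_perimeter(n))
print("2*pi =", 2 * math.pi)
```

The hexagon gives exactly 6; each refinement creeps monotonically toward 2π ≈ 6.28318, without any term of the sequence ever reaching it.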

If logic, through perception, connects the external natural world with the internal intellectual one, there will be two classes of laws to face: the physical-material laws and the spiritual laws of our system of thought. If we relate the physical laws to truths, the intellectual laws should be related to some form of necessity. The interference between the absolute universal truths and the inherent necessity of human thought can then cause what is called a logical error. But while an absolute subjection to the truths of the universe would deprive us of any freedom of thought, the recognition of an independent, even if sometimes false, intellectual necessity could impose its own truths on the universal reality. Boole assumes that the restoration of this connection between nature and human thought can be done logically, philosophically, and also mathematically: Now what is remarkable in connection with these processes of the intellect is the disposition, and the corresponding ability, to ascend from the imperfect representations of sense and the diversities of individual experience, to the perception of general, and it may be of immutable truths. Wherever this disposition and this ability unite, each series of connected facts in nature may furnish the intimations of an order more exact than that which it directly manifests. For it may serve as ground and occasion for the exercise of those powers, whose office it is to apprehend the general truths which are indeed exemplified, but never with perfect fidelity, in a world of changeful phenomena…

Were, then, the laws of valid reasoning uniformly obeyed, a very close parallelism would exist between the operations of the intellect and those of external Nature. Subjection to laws mathematical in their form and expression, even the subjection of an absolute obedience, would stamp upon the two series one common character. The reign of necessity over the intellectual and the physical world would be alike complete and universal…

But while the observation of external Nature testifies with ever-strengthening evidence to the fact, that uniformity of operation and unvarying obedience to appointed laws prevail throughout her entire domain, the slightest attention to the processes of the intellectual world reveals to us another state of things. The mathematical laws of reasoning are, properly speaking, the laws of right reasoning only, and their actual transgression is a perpetually recurring phenomenon. Error, which has no place in the material system, occupies a large one here. We must accept this as one of those ultimate facts, the origin of which it lies beyond the province of science to determine. We must admit that there exist laws which even the rigor of their mathematical forms does not preserve from violation. We must ascribe to them an authority the essence of which does not consist in power, a supremacy which the analogy of the inviolable order of the natural world in no way assists us to comprehend.

What are these laws whose authority brings them into conflict with the laws of intellectual necessity, so as to lead to a logical fallacy? They may be the physical or mathematical laws of the universe, eternal and undifferentiated truths to which humans are subject, both spiritually and physically. It may therefore be a question of our struggle against the will of nature and, at the same time, against our own desires or ambitions. But the most intriguing aspect here is that these absolute laws do not come exclusively from the outside; they could equally derive from a different, secondary level of function of our own thoughts. This may therefore have to do with the moral dimensions of human thought, independent, up to a certain degree, from logic, juxtaposing their mutually exclusive consequences. This way, not only are we presented with the deepest aspect of a perfect harmony with the world and with the natural laws, but we are also faced with the task of fulfilling these laws, at an ethical level, as our own responsibility with respect to nature, which originally formed them.

Even if human reasoning has the tendency, on one side, to divide things so as to compare opposites, on the other side it seeks their unification in order to understand the world in its totality. Human syllogisms move from the part to the whole, thus composing the wholeness of the world. There may exist in parallel, however, a pre-existing aspect of wholeness, a sort of truth in the world, which humans can grasp, perhaps each in their own personal way, and build upon it any logical train of thought. Perhaps such an attempt is highly biased, and may inescapably lead us to mistakes and absurdities. Nevertheless, it reveals a possible deeper connection between humans and the world, of human intelligence with the essence of nature, so that by this principle of analogy we may move on gradually and progressively to compose the unity of the world: It may be that the progress of natural knowledge tends towards the recognition of some central Unity in Nature. Of such unity as consists in the mutual relation of the parts of a system there can be little doubt, and able men have speculated, not without grounds, on a more intimate correlation of physical forces than the mere idea of a system would lead us to conjecture. Further, it may be that in the bosom of that supposed unity are involved some general principles of division and re-union, the sources, under the Supreme Will, of much of the related variety of Nature. The instances of sex and polarity have been adduced in support of such a view. As a supposition, I will venture to add, that it is not very improbable that, in some such way as this, the constitution of things without may correspond to that of the mind within. But such correspondence, if it shall ever be proved to exist, will appear as the last induction from human knowledge, not as the first principle of scientific inquiry. The natural order of discovery is from the particular to the universal, and it may confidently be affirmed that we have not yet advanced sufficiently far on this track to enable us to determine what are the ultimate forms into which all the special differences of Nature shall merge, and from which they shall receive their explanation.

Were this correspondence between the forms of thought and the actual constitution of Nature proved to exist, whatsoever connection or relation it might be supposed to establish between the two systems, it would in no degree affect the question of their mutual independence. It would in no sense lead to the consequence that the one system is the mere product of the other. A too great addiction to metaphysical speculations seems, in some instances, to have produced a tendency toward this species of illusion. Thus, among the many attempts which have been made to explain the existence of evil, it has been sought to assign to the fact a merely relative character,- to found it upon a species of logical opposition to the equally relative element of good. It suffices to say, that the assumption is purely gratuitous…

If the study of the laws of thought avails us neither to determine the actual constitution of things, nor to explain the facts involved in that constitution which have perplexed the wise and saddened the thoughtful in all ages,- still less does it enable us to rise above the present conditions of our being, or lend its sanction to the doctrine which affirms the possibility of an intuitive knowledge of the infinite, and the unconditioned,- whether such knowledge be sought for in the realm of Nature, or above that realm. We can never be said to comprehend that which is represented to thought as the limit of an indefinite process of abstraction. A progression ad infinitum is impossible to finite powers. But though we cannot comprehend the infinite, there may be even scientific grounds for believing that human nature is constituted in some relation to the infinite. We cannot perfectly express the laws of thought, or establish in the most general sense the methods of which they form the basis, without at least the implication of elements which ordinary language expresses by the terms Universe and Eternity.

As we saw, logical deduction, as a process of the intellect, permits us to move from the partial events of everyday experience to general conclusions about scientific truth. This truth corresponds to an objective physical reality, which we also perceive as a general pattern upon which we base our thoughts. The constitution of our intellect, in other words, is relative to the truth of natural reality, and this relationship can be expressed through the principle of analogy. The world which surrounds us is not only chaotic and probabilistic; it is also characterized by lawfulness, origins and direction. These neutral natural properties we perceive and interpret as moral rules, cause, and destination, respectively. This way, while our consciousness engages in this game of contact with and understanding of nature, it is at the same time submitted to the sphere of its duties with respect to the world and to its own self: Refraining from the further prosecution of a train of thought which to some may appear to be of too speculative a character, let us briefly review the positive results to which we have been led. It has appeared that there exist in our nature faculties which enable us to ascend from the particular facts of experience to the general propositions which form the basis of Science; as well as faculties whose office it is to deduce from general propositions accepted as true the particular conclusions which they involve. It has been seen, that those faculties are subject in their operations to laws capable of precise scientific expression, but invested with an authority which, as contrasted with the authority of the laws of nature, is distinct, sui generis, and underived.
Further, there has appeared to be a manifest fitness between the intellectual procedure thus made known to us, and the conditions of that system of things by which we are surrounded,- such conditions, I mean, as the existence of species connected by general resemblances, of facts associated under general laws; together with that union of permanency with order, which while it gives stability to acquired knowledge, lays a foundation for the hope of indefinite progression.

Human nature, quite independently of its observed or manifested tendencies, is seen to be constituted in a certain relation to Truth; and this relation, considered as a subject of speculative knowledge, is as capable of being studied in its details, is, moreover, as worthy of being so studied, as are the several departments of physical science, considered in the same aspect. I would especially direct attention to that view of the constitution of the intellect which represents it as subject to laws determinate in their character, but not operating by the power of necessity; which exhibits it as redeemed from the dominion of fate, without being abandoned to the lawlessness of chance.

We cannot embrace this view without accepting at least as probable the intimations which, upon the principle of analogy, it seems to furnish respecting another and a higher aspect of our nature,- its subjection in the sphere of duty as well as in that of knowledge to fixed laws whose authority does not consist in power,- its constitution with reference to an ideal standard and a final purpose. It has been thought, indeed, that scientific pursuits foster a disposition either to overlook the specific differences between the moral and the material world, or to regard the former as in no proper sense a subject for exact knowledge. Doubtless all exclusive pursuits tend to produce partial views, and it may be, that a mind long and deeply immersed in the contemplation of scenes over which the dominion of a physical necessity is unquestioned and supreme, may admit with difficulty the possibility of another order of things. But it is because of the exclusiveness of this devotion to a particular sphere of knowledge, that the prejudice in question takes possession, if at all, of the mind. The application of scientific methods to the study of the intellectual phenomena, conducted in an impartial spirit of inquiry, and without overlooking those elements of error and disturbance which must be accepted as facts, though they cannot be regarded as laws, in the constitution of our nature, seems to furnish the materials of a juster analogy.

Finally, Boole makes us wonder what the study of the laws of thought, and of their mathematical expression in particular, offers us. Certainly, confronting the fundamental questions which concern us, such as the definition of our species, our relationship with the rest of the world and with other people, as well as the causes for the sake of which nature's functions were designed, leads us to self-awareness and to a relationship of harmony between ourselves, other people and the rest of the world. This way, our spiritual civilization is built. Mathematics comprises a language of rationalization with respect to processes of thought, and, together with language in the broader sense of communication, helps us to construct a comprehensive system of education. Mathematics forms, in a few words, the instrument by which our logic is made reasonable. However, as Boole himself wisely notes, mathematics is not enough to reveal and describe all the phenomena of the human soul. The ethical dimension of human intelligence, together with emotions and instincts, drives us to thoughts and actions which seek a wider and deeper aspect of the world, and obliges us to accept higher causes, which had never before been sought in the natural history of the world. If mathematics offers us the quantification and rationalization of our natural functions, a sort of insight, on the other hand, which co-exists within our intellectual system, asks us to extend the process of intellectual anticipation, expanding and bringing to perfection our own system of thought: If it be asked to what practical end such inquiries as the above point, it may be replied, that there exist various objects, in relation to which the courses of men's actions are mainly determined by their speculative views of human nature. Education, considered in its largest sense, is one of those objects. The ultimate ground of all inquiry into its nature and its methods must be laid in some previous theory of what man is, what are the ends for which his several faculties were designed, what are the motives which have power to influence them to sustained action, and to elicit their most perfect and most stable results. It may be doubted, whether these questions have ever been considered fully, and at the same time impartially, in the relations here suggested. The highest cultivation of taste by the study of the pure models of antiquity, the largest acquaintance with the facts and theories of modern physical science, viewed from this larger aspect of our nature, can only appear as parts of a perfect intellectual discipline…

The laws of thought, in all its processes of conception and of reasoning, in all those operations of which language is the expression or the instrument, are of the same kind as are the laws of the acknowledged processes of Mathematics. It is not contended that it is necessary for us to acquaint ourselves with those laws in order to think coherently, or, in the ordinary sense of the terms, to reason well. Men draw inferences without any consciousness of those elements upon which the entire procedure depends. Still less is it desired to exalt the reasoning faculty over the faculties of observation, of reflection, and of judgment. But upon the very ground that human thought, traced to its ultimate elements, reveals itself in mathematical forms, we have a presumption that the mathematical sciences occupy, by the constitution of our nature, a fundamental place in human knowledge, and that no system of mental culture can be complete or fundamental, which altogether neglects them.

But the very same class of considerations shows with equal force the error of those who regard the study of Mathematics, and of their applications, as a sufficient basis either of knowledge or of discipline. If the constitution of the material frame is mathematical, it is not merely so. If the mind, in its capacity of formal reasoning, obeys, whether consciously or unconsciously, mathematical laws, it claims through its other capacities of sentiment and action, through its perceptions of beauty and of moral fitness, through its deep springs of emotion and affection, to hold relation to a different order of things. There is, moreover, a breadth of intellectual vision, a power of sympathy with truth in all its forms and manifestations, which is not measured by the force and subtlety of the dialectic faculty. Even the revelation of the material universe in its boundless magnitude, and pervading order, and constancy of law, is not necessarily the most fully apprehended by him who has traced with minutest accuracy the steps of the great demonstration. And if we embrace in our survey the interests and duties of life, how little do any processes of mere ratiocination enable us to comprehend the weightier questions which they present! [1]

Gödel's incompleteness theorem

The point made previously, that the system of thought may be considered sufficient to include both deduction in the form of logic and truth in the form of ethics, can be further expanded and more rigorously expressed with Gödel's incompleteness theorem. Suppose you build a computing machine and give it the order: You will never say that this sentence is true. If the sentence is true, then the machine should say that the sentence is false. If it is false, the machine can tell the truth, that the sentence is false. So we will never know the correct answer. This is the kind of problem that Gödel introduced, showing that logic is not immune to inconsistencies. Logic is not a perfect machine of truth. Gödel even formalized his theorem, which simply says that for each (consistent, sufficiently strong) theory T there is a sentence G which states that G cannot be proved by T. If G could be proved from the axioms of T, then T would have a theorem, G, which is contradictory, so T would be inconsistent. But if T is consistent, then G cannot be proved by T; thus T is incomplete. As Solomon Feferman notes: Actually there are two incompleteness theorems, and what people have in mind when they speak of Gödel's theorem is mainly the first of these. Like Heisenberg's Uncertainty Principle, it has captured the public imagination with the idea that there are absolute limits to what can be known. More specifically, it's said that Gödel's theorem tells us there are mathematical truths that can never be proved. Among postmodernists it's used to support skepticism about objective truth; nothing can be known for sure. And in the Bibliography of Christianity and Mathematics it's asserted that theologians can be comforted in their failure to systematize revealed truth because mathematicians cannot grasp all mathematical truths in their systems either. Not only that, the incompleteness theorem is held to imply the existence of God, since only He can decide all truths. [2] Anyway, Gödel's theorem does not prove the existence of God. It proves that some truths lie beyond the realm of logic. What is more fundamental in Gödel's theorem is the property of self-reference, i.e. a sentence whose truth relies on the existence of the sentence itself. This is exactly what would happen in the case of the aforementioned computer- it would face a program with an infinite loop. But how can we accept something as true if we cannot prove it? In fact, the most fundamental questions about ourselves, such as the existence of God, life after death, and moral codes in general, are common everyday truths which we accept even though they cannot be proved by facts in the real world. Feferman also notes that, Among those who know what the incompleteness theorems actually do tell us, there are some interesting views about their wider significance for both mind and matter.
In his 1951 Gibbs Lecture, Gödel himself drew the conclusion that either mind infinitely surpasses any finite machine or there are absolutely unsolvable number theoretic problems. A lot has been written pro and con about the possible significance of Gödel's theorem for mechanical models of the mind by a number of logicians and philosophers. One of the most prominent proponents of the claim that Gödel's theorem proves that mind is not mechanical is Roger Penrose: there must be more to human thinking than can ever be achieved by a computer. However, he thinks that there must be a scientific explanation of how the mind works, albeit in its non-mechanical way, and that ultimately it must be given in physical terms, but that current physics is inadequate to do the job. But Stephen Hawking and Freeman Dyson, among others, have come to the conclusion that Gödel's theorem implies that there can't be a Theory of Everything.

If our logic in particular, or our thought in general, were not sufficient to grasp the totality of information in the universe, we wouldn't be able to realize the incompleteness theorem in the first place. So what we have here is a fundamental logical paradox about logic itself. Logic sometimes leads to a contradiction which seems to nullify logic, but at the same time this reveals its combinatory power. Consider, for example: light sometimes blinds us because of its reflection, but simultaneously it makes us see. So the whole problem is, as I have already mentioned, a question of how self-reference works.
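The self-referential trap described above has a precise computational cousin: Turing's halting problem, which is proved unsolvable by the same diagonal trick that drives Gödel's sentence G. The hedged Python sketch below (the function names are my own) builds, from any claimed "halting decider," a program that asks the decider about itself and then does the opposite, so the decider is necessarily wrong about it.

```python
def make_paradox(claims_to_halt):
    """Given any claimed halting decider, build a program it must misjudge."""
    def paradox():
        if claims_to_halt(paradox):   # self-reference: ask the decider about itself...
            while True:               # ...and if it says "halts", loop forever
                pass
        return "halted"               # if it says "loops forever", halt at once
    return paradox

# Whatever a decider answers about its own paradox program is falsified by
# running that program. Here a decider that always answers "does not halt"
# is refuted directly:
always_no = lambda program: False
p = make_paradox(always_no)
print(p())  # prints "halted", contradicting the decider
```

The mirror-image decider, one that always answers "halts," is refuted the other way: its paradox program loops forever, which is exactly why we do not run it here.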

Impossible objects
Gödel's incompleteness theorem is a problem of formal logic. However, it can be extended into any field of science or of everyday life. Many times our thought is led to paradoxes and absurdities without any obvious logical reason. We accept some facts as personal or universal truths which are self-evident, so they don't need to be proved or disproved. Furthermore, modern physics seeks the so-called theory of everything, a set of rules by which any natural phenomenon could be explained. But if Gödel's theorem is true, then any such attempt is doomed to failure.

Penrose's triangle

As an object we may define anything that can be perceived or conceived as having a form and/or a content. In this sense, objects include mountains, lakes, clouds, thoughts, feelings, logical problems, notions, properties- everything. An object doesn't need to be composite or well-defined. We ourselves, for example, could be called intelligent living objects.

Impossible triangle sculpture as an optical illusion, East Perth, Western Australia [3]

The previous picture depicts Roger Penrose's triangle, which is an impossible object. The object in all three pictures is exactly the same, although seen from different angles. What the brain does is try to perceive the object in its totality. This is why we seem to be tricked by this optical illusion. We will return later on to the unconscious properties of our mind.

Escher and non-Euclidean geometries


A painter who expanded the perspective of impossible objects is M.C. Escher. His Waterfall, depicted in the previous image, is an example of an impossible machine which carries water from the bottom to the top without any mechanical work.

Escher also occupied himself with so-called non-Euclidean geometry, as depicted in the previous figures. Impossible objects in general may be said to be unrepresentable in ordinary 3D Euclidean space. Non-Euclidean geometries are those in which Euclid's so-called 5th postulate is violated. This postulate can be stated as follows:

Euclid's 5th postulate

If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles. [4] In simpler words, two parallel lines don't meet each other. Of course this fails in the case, for example, of geodesics: all the meridians of the Earth meet both at the South and at the North Pole. The Earth's geometry is spherical, not Euclidean-flat. But we may define a non-Euclidean geometry in general, as formulated by Lobachevsky (1840):

All straight lines which in a plane go out from a point can, with reference to a given straight line in the same plane, be divided into two classes- into cutting and non-cutting. The boundary lines of the one and the other class of those lines will be called parallel to the given line.

More simply put: there exist two lines parallel to a given line through a given point not on the line. [5] What Lobachevsky says is that at point A in the previous figure there may exist more lines parallel to BC (not only EE). Logic of course says that something like this seems impossible: if such lines exist, then they shouldn't pass through point A; otherwise A is not unique. Or A could even be seen as a line which looks like a point from an extra dimension. What we face here is another impossible object. If two parallel lines meet each other, then they are not straight lines but curves. On the other hand, we have already assumed the axiomatic existence of a line and a point. When is a line straight? That question seems much more difficult to answer. If a line is a collection of infinitely many points, it may still represent a curve, or a plane, or even space itself.
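That the Earth's spherical geometry breaks Euclid's flat-plane rules can be checked with a few lines of arithmetic. In the sketch below (plain Python; the vertices are my own convenient choice), a triangle with one vertex at the North Pole and two on the equator, a quarter turn apart, has interior angles summing to 270°, an "excess" of 90° over the Euclidean 180°.

```python
import math

def angle_at(a, b, c):
    """Interior angle of the spherical triangle abc at vertex a (unit vectors)."""
    def tangent(frm, to):
        # direction of the great-circle arc frm -> to, taken tangent at frm
        d = sum(f * t for f, t in zip(frm, to))
        v = [t - d * f for f, t in zip(frm, to)]
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    u, w = tangent(a, b), tangent(a, c)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(u, w))))
    return math.degrees(math.acos(dot))

pole = (0.0, 0.0, 1.0)   # North Pole
q1 = (1.0, 0.0, 0.0)     # a point on the equator
q2 = (0.0, 1.0, 0.0)     # on the equator, a quarter turn away

angle_sum = (angle_at(pole, q1, q2) +
             angle_at(q1, pole, q2) +
             angle_at(q2, pole, q1))
print(angle_sum)  # 270.0 degrees: three right angles, not two
```

The two meridian sides meet the equator at right angles and each other at a right angle at the pole, which is exactly the meridians-meeting-at-the-pole observation made above.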

This difficulty in defining a line in contrast to a curve led Einstein to use the notion of a geodesic cosmic line instead of a straight line. Everything in the universe must then move along geodesics. So in the real cosmos, motions and the corresponding shapes of space and things are far from ideal. All things, including space and time, are subject to deformations caused by forces. In other words, geometry cannot be irrelevant to the nature of the things it wants to describe. So what goes on in our minds? Is there such a thing as an empty mind, or do all thoughts, even the fainter ones, deform, in some sense, the mental space-time of our brain? By the action of a very strong force, space-time may bend as much as to form a closed loop. Near black holes such an event can take place. Is there a black hole in our minds, a spiritual kind of singularity, which may wind up our thoughts in such a way as to give birth to what we commonly refer to as ingenuity, creativity, or inspiration? Are these loops within our minds spontaneous phenomena of thought creation, little time-machines which manage to produce the future of our thoughts even before we conceive them? And is there any connection with what we refer to as impossible objects, or does the whole thing fall into the category of a mere optical, or more generally mental, illusion?

Strange loops

The notion of an infinite loop is portrayed in a vivid way by Douglas Hofstadter in his book I Am a Strange Loop: And yet when I say strange loop, I have something else in mind- a less concrete, more elusive notion. What I mean by strange loop is- here goes a first stab, anyway- not a physical circuit but an abstract loop in which, in the series of stages that constitute the cycling-around, there is a shift from one level of abstraction (or structure) to another, which feels like an upwards movement in a hierarchy, and yet somehow the successive upward shifts turn out to give rise to a closed cycle. That is, despite one's sense of departing ever further from one's origin, one winds up, to one's shock, exactly where one had started out. In short, a strange loop is a paradoxical level-crossing feedback loop.

The Penrose stairs is a two-dimensional depiction of a staircase in which the stairs make four 90° turns as they ascend or descend, yet form a continuous loop, so that a person could climb them forever and never get any higher.
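As a toy illustration (nothing more than an analogy, with the four-flight layout taken from the figure), the stairs can be modelled with modular arithmetic: every flight taken is locally a step up, yet the global position only cycles.

```python
# Toy model of the Penrose stairs: four flights joined in a loop.
# Each flight climbed looks like a step "up", but the climber's
# position on the loop is computed modulo 4, so they never rise.

def position_after(flights_climbed):
    return flights_climbed % 4  # positions 0..3 around the loop

print([position_after(i) for i in range(9)])  # → [0, 1, 2, 3, 0, 1, 2, 3, 0]
```

However many flights are climbed, the walker keeps returning to flight 0: locally ascending, globally level.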

As Wikipedia says, a strange loop is technically known as a tangled hierarchy, and it arises when, by moving only upwards or downwards through a hierarchical system, one finds oneself back where one started. Strange loops may involve self-reference and paradox. The paradox arises when what we perceive comes into conflict with common sense. The staircase in the previous figure, for example, depicts the endless journey of someone ascending and descending forever. [6]

So the greatest paradox of thought is that it seems to be born spontaneously, like a strange loop, and then, as the mind tries to understand how this thought came into existence, the validity of such a syllogism stands as a self-evident and self-referring truth. I guess that this points towards something more than just a theorem of incompleteness. It's more like a theorem or axiom of self-consistency. We have talked before about the moral aspect of our logic. But here ethics takes on a more universal meaning. It's not just the story of right and wrong but more like an insight into the moment of creation of our mental processes. This is why absurd conclusions or impossible objects are not mere faults of logic or illusions of perception, respectively. They hide a deeper and primal aspect of the nature of our intellect, a kind of spontaneous action which is based on strange loops and which is revealed unfolding through the process of analytical reasoning.

The impossibility of thought

Bertrand Russell, by co-authoring Principia Mathematica with A. N. Whitehead, attempted to ground mathematics on logic. He believed that all propositions of mathematics could be proved. Kurt Gödel showed that there are truths in a logical system that are not provable from its own premises. So Gödel's theorem put an end to the ambition that there could be a complete logical system, such as Russell would have expected of his Principia.

Infinite regress

When we try to demonstrate a truth we use syllogisms until we find a contradiction:

P0: Logic is true.
P0 is true if P1 is true;
P1 is true if P2 is true;
P2 is true if P3 is true;
...
Pn is true if Pn+1 is true.

We can repeat this process until we find a wrong proposition. Then the whole syllogism collapses. But if our first argument is quite strong (or just axiomatic) then it could hold true forever. In this case, we assume that we are satisfied with an adequate number of repetitions which support our primary argument more and more…

This is what we call an infinite regress. If P0 is our first proposition then it is true if P1 is true, and P1 is true if P2 is true, and so on:

Pn+1 = Pn + I (where I stands for the next step in the series).

If the first proposition P0 stands as a truth (like the sentence logic is true) then we will never end with either a contradiction or an affirmation.
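The regress above can be sketched in a few lines of code, as a toy model only: each proposition defers its truth to the next one, so evaluation can terminate only by fiat, at an arbitrary cutoff depth (the function name and the cutoff are illustrative assumptions, not part of the argument).

```python
# Toy model of an infinite regress: the truth of P_n defers to P_(n+1).
# Without an axiom the chain never closes, so we must cut it off by fiat.

def truth_of(n, max_depth=50):
    """P_n is true if P_(n+1) is true; give up after max_depth deferrals."""
    if n >= max_depth:
        return "undecided"  # neither contradiction nor affirmation reached
    return truth_of(n + 1, max_depth)

print(truth_of(0))  # → undecided
```

However deep we let the recursion run, the answer never changes; the only way to decide P0 is to accept some Pn as an axiom rather than derive it.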

Infinite loops

Another name for an infinite regress is an infinite causal chain. An infinite causal chain has no beginning or end. We can use any means of logical deduction but we can never reach an end or a first cause. Infinite causal chains are logically valid but, let's say, incomplete with regard to common experience. There is a fast way to escape from a causal chain. For example, if we ask how the universe started, we can simply reply that the universe has always existed. So a previous state loses any meaning. This is an example of an infinite (causal) loop. But here we have to face a problem: if an infinite loop can explain its existence as its own cause, then logical reasoning loses any sense, because logic will have to ask: what lies outside an infinite loop?

This is why Richard Hanley has argued that causal loops are not impossible, but that the only possibly objectionable feature they all share is that coincidence is required to explain them. Therefore infinite causal loops are in fact acausal. There still remains an important question: does nature think the same way humans do? Are all human problems also problems of nature? If humans are nature then the obvious answer is yes. But this answer is still an answer of logic. Could nature have a more sophisticated way of thinking, above human logic? But again the previous question is a question of logic. And so on…

Our logic moves on using an infinite regress procedure. We can stop this procedure any time, but if we want to give a definite answer then we have to start again. We may admit that logic is insufficient for the understanding of the world, but if we abandon logic then we lose any ability of common sense. Is there a function within the limits of the mind which, if found, could lead us to a new way of reasoning and thinking? Perhaps this question is just another question of logic. But since logic and our common sense have proved capable of understanding, or at least of gaining access to, incredible things outside their realm, there is a final question left: Why do we understand? We may not be able to reply, but I guess we can keep our mind open to the possibility of miracles.

We cannot demonstrate primary truths; we just accept them. Demonstration, in other words, cannot prove the truths it starts from. As Aristotle put it: Some hold that, owing to the necessity of knowing the primary premises, there is no scientific knowledge. Others think there is, but that all truths are demonstrable. Neither doctrine is either true or a necessary deduction from the premises. The first school, assuming that there is no way of knowing other than by demonstration, maintain that an infinite regress is involved, on the ground that if behind the prior stands no primary, we could not know the posterior through the prior… The other party agrees with them as regards knowing, holding that it is only possible by demonstration, but they see no difficulty in holding that all truths are demonstrated, on the ground that demonstration may be circular and reciprocal. Our own doctrine is that not all knowledge is demonstrative: on the contrary, knowledge of the immediate premises is independent of demonstration. Such, then, is our doctrine, and in addition we maintain that besides scientific knowledge there is its originative source which enables us to recognize the definitions.

The previous statement by Aristotle beautifully summarizes the nature of human thought. On the one hand, there is reasoning, which leads us to the knowledge of the world by demonstration. On the other hand, there are truths, on which our whole reasoning is based- truths that have to do not only with the physiological properties of our brain but also with the fundamental way the process of thought evolves. In other words, inconsistency or impossibility is not necessarily a fault of our weak minds but instead a reality which our minds are powerful enough to conceive and utilize. [7]

The truth of the unconscious

We saw that truths are logically impossible objects, or elusive targets of intelligence, which seems to be amazed by the inexplicable character of its own fundamental aspects. Our thoughts seem to drift away, carried off by processes beyond our mental powers. Is our fate already written somewhere in our genes and in our minds, expressed through our instincts and predispositions? It seems that many functions of our brain lie in the unconscious, so that they pass unnoticed by analytical reasoning and uninfluenced by free will.

The facts of perception

Hermann von Helmholtz, who introduced the notion of free energy in physics, was a polymath who also wrote about the way we perceive the world. He wondered how objective the information we get by the senses and process with our brain could be in order to form the picture of the external world: The problems which that earlier period considered fundamental to all science were those of the theory of knowledge: What is true in our sense perceptions and thought? And in what way do our ideas correspond to reality? Philosophy and the natural sciences attack these questions from opposite directions, but they are the common problems of both. Philosophy, which is concerned with the mental aspect, endeavors to separate out whatever in our knowledge and ideas is due to the effects of the material world, in order to determine the nature of pure mental activity. The natural sciences, on the other hand, seek to separate out definitions, systems of symbols, patterns of representation, and hypotheses, in order to study the remainder, which pertains to the world of reality whose laws they seek, in a pure form. Both try to achieve the same separation, though each is interested in a different part of the divided field...

Shortly before the beginning of the present century, Kant expounded a theory of that which, in cognition, is prior or antecedent to all experience; that is, he developed a theory of what he called the transcendental forms of intuition and thought. These are forms into which the content of our sensory experience must necessarily be fitted if it is to be transformed into ideas. As to the qualities of sensations themselves, Locke had earlier pointed out the role which our bodily and mental structure or organization plays in determining the way things appear to us. Along this latter line, investigations of the physiology of the senses, in particular those which Johannes Müller carried out and formulated in the law of the specific energies of the senses, have brought (one can almost say, to a completely unanticipated degree) the fullest confirmation. Further, these investigations have established the nature of - and in a very decisive manner have clarified the significance of - the antecedently given subjective forms of intuition. This subject has already been discussed rather frequently, so I can begin with it at once today.

Among the various kinds of sensations, two quite different distinctions must be noted. The most fundamental is that among sensations which belong to different senses, such as the differences among blue, warm, sweet, and high-pitched. In an earlier work I referred to these as differences in the modality of the sensations. They are so fundamental as to exclude any possible transition from one to another and any relationship of greater or less similarity. For example, one cannot ask whether sweet is more like red or more like blue.

The second distinction, which is less fundamental, is that among the various sensations of the same sense. I have referred to these as differences in quality. Fichte thought of all the qualities of a single sense as constituting a circle of quality; what I have called differences of modality, he designated differences between circles of quality. Transitions and comparisons are possible only within each circle; we can cross over from blue through violet and carmine to scarlet, for example, and we can say that yellow is more like orange than like blue.

Physiological studies now teach that the more fundamental differences are completely independent of the kind of external agent by which the sensations are excited. They are determined solely and exclusively by the nerves of sense which receive the excitations. Excitations of the optic nerves produce only sensations of light, whether the nerves are excited by objective light (that is, by the vibrations in the ether), by electric currents conducted through the eye, by a blow on the eyeball, or by a strain in the nerve trunk during the eyes' rapid movements in vision. The sensations which result from the latter processes are so similar to those caused by objective light that for a long time men believed it was possible to produce light in the eye itself. It was Johannes Müller who showed that internal production of light does not take place and that the sensation of light exists only when the optic nerve is excited…

It is apparent that all these differences among the effects of light and sound are determined by the way in which the nerves of sense react. Our sensations are simply effects which are produced in our organs by objective causes; precisely how these effects manifest themselves depends principally and in essence upon the type of apparatus that reacts to the objective causes. What information, then, can the qualities of such sensations give us about the characteristics of the external causes and influences which produce them? Only this: our sensations are signs, not

images, of such characteristics. One expects an image to be similar in some respect to the object of which it is an image; in a statue one expects similarity of form, in a drawing similarity of perspective, in a painting similarity of color. A sign, however, need not be similar in any way to that of which it is a sign. The sole relationship between them is that the same object, appearing under the same conditions, must evoke the same sign; thus different signs always signify different causes or influences.

To popular opinion, which accepts on faith and trust the complete veridicality of the images which our senses apparently furnish of external objects, this relationship may seem very insignificant. In truth it is not, for with it something of the greatest importance can be accomplished: we can discover the lawful regularities in the processes of the external world. And natural laws assert that from initial conditions which are the same in some specific way, there always follow consequences which are the same in some other specific way. If the same kinds of things in the world of experience are indicated by the same signs, then the lawful succession of equal effects from equal causes will be related to a similar regular succession in the realm of our sensations. If, for example, some kind of berry in ripening forms a red pigment and sugar at the same time, we shall always find a red color and a sweet taste together in our sensations of berries of this kind.
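Helmholtz's relation between signs and their causes can be caricatured in code (the hash function below is only an illustrative stand-in, not anything Helmholtz proposed): a sign need not resemble its cause in any way; it only has to be lawful, so that the same cause always evokes the same sign.

```python
# Toy model of Helmholtz's "signs, not images": a sign bears no
# resemblance to its cause, yet the mapping is lawful -- the same
# cause always yields the same sign, so regularities among causes
# show up as regularities among signs.
import hashlib

def sign_of(cause: str) -> str:
    """An arbitrary but consistent sign for a given external cause."""
    return hashlib.sha256(cause.encode()).hexdigest()[:8]

assert sign_of("ripe berry") == sign_of("ripe berry")    # same cause, same sign
assert sign_of("ripe berry") != sign_of("unripe berry")  # different signs, different causes
```

The sign looks nothing like a berry, and that is exactly the point: lawful succession among causes is preserved as lawful succession among signs, which is all that discovering natural laws requires.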

What Helmholtz tells us is that our brain does not perceive external objects directly but instead reconstructs them using signs it receives from the senses. The image of external objects is reconstructed on the retina by spots of light which come from the objects. The example of a photo of an object may convince us that what we see is a good representation of reality. Still, the image of the object in the photo remains a representation of the image of the object in our brains. What is also intriguing is that, apart from the fact that the perception of colors and sounds is highly subjective, space-time itself may be an internal representation of an external order of things, which helps us arrange them in our minds in a helpful way, but which may have little or no physical significance: Thus, our physiological make-up incorporates a pure form of intuition, insofar as the qualities of sensation are concerned. Kant, however, went further. He claimed that, not only the qualities

of sense experience, but also space and time are determined by the nature of our faculty of intuition, since we cannot perceive anything in the external world which does not occur at some time and in some place and since temporal location is also a characteristic of all subjective experience. Kant therefore called time the a priori and necessary transcendental form of the inner, and space the corresponding form of the outer, intuition. Further, Kant considered that spatial characteristics belong no more to the world of reality (the Dinge an sich) than the colors we see belong to external objects. On the contrary, according to him, space is carried to objects by our eyes.

Even in this claim, scientific opinion can go along with Kant up to a certain point. Let us consider whether any sensible marks are present in ordinary, immediate experience to which all perception of objects in space can be related. Indeed, we find such marks in connection with the fact that our body's movement sets us in varying spatial relations to the objects we perceive, so that the impressions which these objects make upon us change as we move. The impulse to move, which we initiate through the innervation of our motor nerves, is immediately perceptible. We feel that we are doing something when we initiate such an impulse. We do not know directly, of course, all that occurs; it is only through the science of physiology that we learn how we set the motor nerves in an excited condition, how these excitations are conducted to the muscles, and how the muscles in turn contract and move the limbs. We are aware, however, without any scientific study, of the perceptible effects which follow each of the various innervations we initiate

From this point of view, space is the necessary form of outer intuition, since we consider only what we perceive as spatially determined to constitute the external world. Those things which are not perceived in any spatial relation we think of as belonging to the world of inner intuition, the world of self-consciousness. Space is an a priori form of intuition, necessarily prior to all experience, insofar as the perception of it is related to the possibility of motor volitions, the mental and physical capacity for which must be provided by our physiological make-up before we can have intuitions of space.

As far as time is concerned, Helmholtz says:

Let us try to set ourselves back to the state or condition of a man without any experience at all Let us assume that the man at first finds himself to be just one object in a region of stationary objects. As long as he initiates no motor impulses, his sensations will remain unchanged

If we call the entire group of sensation aggregates which can potentially be brought to consciousness during a certain period of time by a specific, limited group of volitions the temporary presentabilia in contrast to the present, that is, the sensation aggregate within this group which is the object of immediate awareness - then our hypothetical individual is limited at any one time to a specific circle of presentabilia, out of which, however, he can make any aggregate present at any given moment by executing the proper movement. Every individual member of this group of presentabilia, therefore, appears to him to exist at every moment of the period of time, regardless of his immediate present, for he has been able to observe any of them at any moment he wished to do so. This conclusion- that he could have observed them at any other moment of the period if he had wished- should be regarded as a kind of inductive inference, since from any moment a successful inference can easily be made to any other moment of the given period of time.

In this way the idea of the simultaneous and continuous existence of a group of different but adjacent objects may be attained… At other times the circles of presentabilia related to this same group of volitions are different. In this way circles of presentabilia, along with their individual members, come to be something given to us, that is, they come to be objects. [8]

What is important to note here is that temporal perception is accompanied by a contemporary state, or circles of presentabilia, to use Helmholtz's terminology, during which all adjacent objects are perceived simultaneously as potentialities, which conscious analysis will in turn determine to be real objects or not. At the previous stage of unconscious inference, potential objects remain in a state of suspended animation. This conclusion bears tremendous consequences with respect to the way we perceive the world, space-time and things, as well as concerning what the world could really be at a deeper level of consciousness.

Unconscious inference
Hermann von Helmholtz, who is often credited with the first study of visual perception in modern times, examined the human eye and concluded that it was optically rather poor. The poor-quality information gathered via the eye seemed to him to make vision impossible. He therefore concluded that vision could only be the result of some form of unconscious inferences: a matter of making assumptions and conclusions from incomplete data, based on previous experiences. [9]

Inference requires prior experience of the world. This means that our brain is full of pre-established assumptions concerning the properties of events we have either experienced ourselves or learned about from others. Of course this means that we are always prejudiced against other things and living beings, since we impose on them our pre-assumptions concerning what they are or what they should be. Most often this process takes place instantaneously and unconsciously. Even if we are confronted with a new event, or we are missing parts of an object so that it is only partially understood, our unconscious tends to reassemble the object or compare it with other similar objects in an automated way.
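Unconscious inference of this reassembling kind can be sketched in a few lines (the stored forms and the matching rule are purely illustrative assumptions): given incomplete sensory data, pick the remembered pattern that agrees with it best.

```python
# Toy model of unconscious inference: complete a partial percept by
# matching it against stored prior experiences, position by position.
PRIOR_FORMS = ["circle", "square", "triangle"]

def infer(partial: str) -> str:
    """Return the stored form that matches the most characters of the input."""
    def overlap(form):
        return sum(a == b for a, b in zip(form, partial))
    return max(PRIOR_FORMS, key=overlap)

print(infer("sq__re"))  # → square
```

The inference is automatic and prejudiced in exactly the sense described above: whatever the gaps contain, the system fills them with whichever prior experience fits best.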

The Hermann grid illusion: an optical illusion characterized by virtual grey blobs perceived to appear at the intersections of the white lines with the black squares. The blobs disappear when looking directly at an intersection.

The previous image depicts a graphical way to show the phenomenon of unconscious sensory reconstruction. The underlying theory stems from gestalt psychology. The German word Gestalt means the essence or shape of an entity's complete form. This implies the operational principle of gestalt psychology- that the brain is holistic, parallel and analog, with self-organizing tendencies. This principle maintains that the human eye sees objects in their entirety before perceiving their individual parts. The so-called gestalt effect is the form-generating capability of our senses, particularly with respect to the visual recognition of figures and whole forms instead of just a collection of simple lines and curves. The phrase the whole is greater than the sum of the parts is often used when summarizing gestalt theory.

Gestalt theory also allows the whole to be broken up into its elements, so that a situation can be seen for what it really is. In the same way our consciousness analyzes complete forms and shapes into their parts, in order to recognize the details and find any possible mistakes made at the previous stage of unconscious inference. Our brain imprints a virtual reality within our memory using in-built logical channels, consisting either of previous memories or of the very structures of our sensory system. So how can we be sure that the object we see inside our brain is really out there in the real world? Well, even if this is not always the case, there are a lot of experimental ways to confirm that our senses are right: we know that the child we see is our son; that the distant mountain is the one we visited last year; that the faint star we see in the night sky is the same one that we found on a map of astronomy; that the image we see in the mirror is not a ghost but us; and so on. Sometimes our estimations may be wrong, but generally speaking we have a certain idea about reality. It is the light reflected by objects that makes them visible; the eyes are our natural receptors and the brain is our logical processor. We don't know everything about the whole process, but if we had different senses and a different mind to perceive and understand the world, we wouldn't be irrational; we would just have a different reality to talk about. [10]

Backward causation

Electron (e-)- positron (e+) annihilation process depicted by a Feynman diagram. A photon () is produced, to give its place anew to an electron- positron pair.

According to the Stanford Encyclopedia of Philosophy, the principle underlying backward causation, which is sometimes also called retro-causation, is quite simple: let A and B be two events in the sense that A causes B to happen. If we have no direct knowledge of A, then we must deduce it by going backwards from B to A. This is the road of backward causation. The procedure neither implies the creation of A by B, nor a journey back in time. It helps us instead reconstruct the whole process in an abstract manner. [11] Backward causation is associated with the following paradox: let's suppose a causal chain consisting of particular events in which A causes B, B causes C, and C causes A. The problem here is that the occurrence of A presupposes the occurrence of C; in other words, the cause presupposes its effect. But how can something be required by what it itself requires? However, the answer is very simple: the event caused by C is not the original event A but a new event, because it happens at a different place and/or at a different time with respect to A. Furthermore, as far as causal loops are concerned, they don't consist of causal chains of events. They are unique events themselves. In fact, the events A, B and C in the previous example can be causal loops connected to each other, causally or not. But if these three events are included in the same causal loop then they occur simultaneously; otherwise they are regarded retrospectively. This is the key point.
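The A-B-C chain that closes on itself can be sketched as a tiny directed graph (the graph and the traversal below are illustrative assumptions only): following causes forward, we eventually revisit an event, and no member of the resulting loop can claim to be the first cause.

```python
# Toy causal graph in which A causes B, B causes C, and C causes A.
# Walking forward along causes, we detect the loop as the events that
# get revisited -- none of which is prior to the others.

def find_loop(causes, start):
    visited = []
    event = start
    while event not in visited:
        visited.append(event)
        event = causes[event]       # follow "X causes Y" one step
    return visited[visited.index(event):]

causes = {"A": "B", "B": "C", "C": "A"}
print(find_loop(causes, "A"))  # → ['A', 'B', 'C']
```

The same function also handles a chain that merely enters a loop: events before the loop drop away, and only the self-sustaining cycle remains.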

A good example of backward causation which takes place in a natural way is vision. Theories of vision fall into two basic categories: emission (or extramission) theories, and intromission theories. Extramission theories regard vision as an active process, during which rays are supposed to be emitted from the eyes towards the objects. The previous figures, as well as the following historical flashback, are from Rupert Sheldrake's article The Sense of Being Stared At: Plato adopted the idea of an outward-moving current, but he also proposed that it was combined with light to form a single homogeneous body stretching from the eye to the visible object. This extended medium was the instrument of visual power reaching out from the eye. Through it, influences from the visible object passed to the soul. Aristotle followed Plato in emphasizing the importance of an intermediate medium between the eye and the seen object, which he called the transparent. He thought of light not as a material substance, but as a state of the transparent, resulting from the presence of a luminous body. The visible object was the source or the cause of change in the transparent, through which influences were transmitted instantaneously to the soul of the observer.

Another early advocate of an extra-mission theory was Euclid. His approach was strictly mathematical and excluded practically all aspects of vision that could not be reduced to geometry. He assumed that light rays travelled in straight lines and he worked out geometrically how eyes projected the images we see outside ourselves. He explained virtual images in terms of the movement of visual rays outwards from the eyes. He also clearly stated the principles of mirror reflection, recognizing the equality of what we now call the angles of incidence and reflection.

Intromission theories, on the other hand, treated vision as a passive process that was accomplished through light rays from bright objects. Democritus, propounding the doctrine that ultimate reality consists of particles of matter in motion, proposed that material particles streamed off the surface of things in all directions, so that vision depended on these particles entering the eye. In order to account for coherent images, he supposed that the particles were joined together in thin films that travelled into the eye. Other mathematicians, most notably

Claudius Ptolemy, took Euclid's geometrical approach further. He also proposed that the visual flux coming out of the eyes consisted of ether, or quintessence, the fifth element. [12]

We can see that both theories were based on causation, either forward or backward. In order to explain vision, forward causation assumed that the eyes produced light rays, whereas backward causation supposed the existence of imaginary paths connecting the brain with the external object. After the Middle Ages, technological advances made it clear that intromission theory was basically correct, even if there might be some underlying unconscious processes. Kepler's theory of retinal images was published in 1604. Newton, in his Opticks, first published in 1704, used the same kind of theory. His very reasonable explanation of vision was that the reflected rays incident on the spectator's eyes make the same picture in the bottom of the eyes as if they had come from the real object without the interposition of the looking-glass; and all vision is made according to the place and shape of that picture. In fact Newton's theory of virtual images was first codified in Euclid's Catoptrics, and his diagrams showing the location of virtual images behind plane mirrors are essentially identical to those in modern textbooks. The main difference is the reversal of the whole process and the paradigm shift from forward to backward causation.
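Euclid's plane-mirror construction, still found in modern textbooks, can be stated in two lines of geometry (the coordinate convention is an assumption chosen for illustration): with the mirror lying in the plane x = 0, the virtual image of a point sits as far behind the mirror as the object sits in front of it.

```python
# Virtual image in a plane mirror lying in the plane x = 0:
# the image appears behind the mirror, mirrored in x, unchanged in y.

def virtual_image(x, y):
    return (-x, y)

# The reflected ray reaching the eye is indistinguishable from a ray
# coming straight from the virtual image -- Newton's picture "as if
# they had come from the real object".
print(virtual_image(2.0, 1.0))  # → (-2.0, 1.0)
```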

Extended modes of consciousness

The extended present depicted as an infinite causal loop

Hermann von Helmholtz realized that a large part of our understanding of the world takes place unconsciously. But he thought that the two worlds, the world of internal representations and the external, physical world, are independent of each other. Furthermore, he regarded space-time as an aspect of subjective reality. This is far from true. In fact objects do affect space-time. They deform it, in some sense, and they justify its existence by their motions or even by their mere physical presence. Without things space-time would be an absolute void, relative to nothing. Objects themselves are events which occupy space-time. So they have dimensionality. And as they have a back and a front or a bottom and a top in space, so they have a past and a future in time, with respect to another event that has a special place in what we call the present. But the present is an object itself, so it can't be dimensionless. It occupies and forms space-time too.

The world that we live in does not exist independently as a separate external object, but it is constantly changing because of conscious intervention. This is what the participatory principle demands, so that the properties of things always depend on free will, which determines what is going to be measured. Accordingly, when consciousness picks up some event instead of another, it simultaneously assumes the spatial and temporal character of this event. Before this conscious action of chronology determination, the event was itself conditional. Even if an event is regarded as well-recorded history, it is always considered at present, so that it gradually gets distorted by other contemporary events, such as thoughts and feelings, or new social and political trends.

When I first conceived the notion of the extended present, I was unaware of the work of Edmund Husserl on the phenomenology of temporality, where the same notion is expressed. Husserl uses the notions of retention and protention as key aspects of his theory. According to his view, as described in Wikipedia, our experience of the world is not of a series of unconnected moments. It would be impossible to have an experience of the world if we did not have a sense of temporality. That our perception brings an impression to our minds depends upon retention and protention. Retention is the process whereby a phase of a perceptual act is retained in our consciousness. It is a presentation of that which is no longer before us and is distinct from immediate experience. A simple example might be that of watching a ball being thrown. We

retain where the ball was in our minds to understand the momentum of the ball as we perceive it in the immediate present. Retention is not a representation or memory but a presentation of a temporally extended present- that is, a present that extends beyond the few short milliseconds that are registered in a moment of sense perception. Protention is our perception of the next moment- the moment that has yet to be perceived. Again, using the example of a ball, our focus shifts along the expected path the ball will take. According to Husserl, perception has three temporal aspects- retention, the immediate present and protention- and a flow through which each moment of protention becomes the retention of the next.
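The retention side of this flow lends itself to a small sketch (the data structure is an illustrative assumption, not Husserl's): as each new now arrives, the previous present sinks into retention, and each earlier retention becomes the retention of a retention, one layer deeper.

```python
# Toy model of Husserl's temporal flow: each new "now" pushes the
# earlier moments one level deeper into retention.

def flow(moments):
    retained = []
    for now in moments:
        yield {"present": now, "retentions": list(retained)}
        retained.insert(0, now)   # the old present sinks into retention

for state in flow(["t1", "t2", "t3"]):
    print(state)
# → {'present': 't1', 'retentions': []}
#   {'present': 't2', 'retentions': ['t1']}
#   {'present': 't3', 'retentions': ['t2', 't1']}
```

By the third moment, t1 is no longer a retention of the present but a retention of a retention: the layer of time between it and the present has thickened, exactly as Merleau-Ponty describes.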

Maurice Merleau-Ponty analyzes the temporal phenomenology of perception as follows: "Husserl uses the terms protentions and retentions for the intentionalities which anchor me to an environment. They do not run from a central I, but from my perceptual field itself, so to speak, which draws along in its wake its own horizon of retentions, and bites into the future with its protentions. I do not pass through a series of instances of now, the images of which I preserve and which, placed end to end, make a line. With the arrival of every moment, its predecessor undergoes a change: I still have it in hand and it is still there, but already it is sinking away below the level of presents; in order to retain it, I need to reach through a thin layer of time. It is still the preceding moment, and I have the power to rejoin it as it was just now; I am not cut off from it, but still it would not belong to the past unless something had altered, unless it were beginning to outline itself against, or project itself upon, my present, whereas a moment ago it was my present. When a third moment arrives, the second undergoes a new modification; from being a retention it becomes the retention of a retention, and the layer of time between it and me thickens." [13]

The extended present is the natural space of consciousness, where all these trends and attitudes take place, while the events are constantly rearranged and reconsidered in space and time. Let's think about it for a while: even the most indisputable historical fact is nothing more than a commonly accepted truth about some event, which may be revived but will never be reborn. As far as the future is concerned, the previous realization is easier to support. The future is conditional anyway, so there is no point in talking about its rebirth or revival. But in the framework of the extended present, the future and the past are equally conditional, and the infinite loop that encircles both events is, more or less, simultaneous. So both events are, at a primal level, interchangeable and equivalent; that is, before one throws the arrow of time towards the direction of one event or the other. This is exactly the meaning: events do not form a causal chain of points in space-time, connected to each other in a linear and inflexible way. Instead, they are extended objects spanning space-time. At a first stage, pairs of events are spontaneously created as past-like and future-like symmetrical conditions. Temporality is then involved, at the second stage, arranging events in a causal process of future-past division. Thus these conditional pairs of events consist of retained and protended partners, which will either be expressed and realized, or the infinite loop associated with them will disintegrate and return to the vacuum.

Still, the parts of this spontaneous pair of events are non-locally connected to each other, so that any causal succession of events that consciousness recognizes is arbitrary, though, in a sense, necessary. Furthermore, post-conditions and pre-conditions, which correspond to Husserl's protention and retention respectively, are not just perceptual representations of real events; instead they represent true conditions realized by consciousness. So what is fundamental in the whole process is not time, which is just a form of order imposed in second place and retrospectively, but the non-local collapse of the infinite loop and the instantaneous distribution of the events. Consciousness is certainly not an idle and stationary object at the center of its ego-universe, separated from all the other events that it regards. On the contrary, it participates in and forms spacetime by arranging things. It even gives cause and meaning to things. But at the same time, consciousness is not just a process, the path chosen each time from an infinite number of possible routes in the distribution of events. It possesses the holistic property of its non-local and symmetrical deepest nature. It also contains the holographic information of events in each part of its space-time. So we may say that consciousness is the awareness of the distribution of events itself, having the ability both to causally consider the parts, at present, and to spontaneously imagine the totality, in its extended present. (We will return later on to all these new notions, such as the participatory principle, the holographic principle, the problem of simultaneity and non-locality, as well as the problem of free will.) [14]

Archetypes as presets of brain function

[Figure legend: 1. The surface of consciousness. 2. The sphere of internal order. 3. The routes through which contents are submerged into the unconscious. 4. Archetypes and their corresponding magnetic fields, which make the contents deviate from their initial course by their attractive force. 5. The area where pure archetypal processes become invisible and where the primordial pattern is accumulated.]

We will now follow the way of thinking of Carl Jung, from his book The Archetypes and the Collective Unconscious: In addition to the purely personal unconscious hypothesized by Freud, a deeper unconscious level is felt to exist. This deeper level manifests itself in universal archaic images expressed in dreams, religious beliefs, myths, and fairytales. The archetypes, as unfiltered psychic experience, appear sometimes in their most primitive and naive forms (in dreams), sometimes in a considerably more complex form due to the operation of conscious elaboration (in myths).

Archetypal images expressed in religious dogma in particular are thoroughly elaborated into formalized structures which, while expressing the unconscious in a circuitous manner, prevent direct confrontation with it... The search into the unconscious involves confronting the shadow, man's hidden nature; the anima/animus, a hidden opposite gender in each individual; and beyond, the archetype of meaning. These are archetypes susceptible to personification; the archetypes of transformation, which express the process of individuation itself, are manifested in situations... Archetypes are likened to instinctual behavior patterns. Examples of ideas such as the concept of rebirth, which occur independently in various cultures and ages, are advanced as evidence for the collective unconscious. It is felt that there are as many archetypes as there are recurring situations in life, and that when a situation occurs that corresponds to a particular archetype, the archetype presses for completion like an instinctual drive... In a discussion of the concept of archetypes, Plato's concept of the Idea, a primordial disposition that preforms and influences thoughts, is found to be an early formulation of the archetype hypothesis. Other investigators such as Hermann Usener are also noted to have recognized the existence of universal forms of thought. Jung's contribution is considered to be the demonstration that archetypes are disseminated not only through tradition, language, or migration, but that they can arise spontaneously without outside influence. It is emphasized that an archetype is not predetermined in content; rather it is a possibility of representation which may be actualized in various ways. In this respect the archetype is likened to the instincts; both are predetermined in form only, and both are only demonstrable through their manifestations...

The formulation of the archetypes is described as an empirically derived concept, like that of the atom; it is a concept based not only on medical evidence but on observations of mythical, religious and literary phenomena. These archetypes are considered to be primordial images, spontaneous products of the psyche which do not reflect any physical process, but are reflected in them. It is noted that while the theories of materialism would explain the psyche as an epiphenomenon of chemical states in the brain, no proof has yet been found for this hypothesis; it is considered more reasonable to view psychic production as a generating rather than a generated factor. The anima is the feminine aspect of the archetypal male/female duality, whose projections in the external world can be traced through myth, philosophy and religious doctrine. This duality is often represented in mythical syzygy symbols, which are expressions of parental images; the singular power of this particular archetype is considered due to an unusually intense repression of unconscious material concerning the parental images. Archetypal images are described as preexistent, available and active from the moment of birth as possibilities of ideas which are subsequently elaborated by the individual. The anima image in particular is seen to be active in childhood, projecting superhuman qualities on the mother before sinking back into the unconscious under the influence of external reality. In a therapeutic sense, the concept of the anima is considered critical to the understanding of male psychology... A definition of the word spirit is proposed, and a description of the historical and mythical characteristics of the spirit is presented. The great number of different definitions of the term in use today is considered to make it difficult to delimit any one concept; however, these definitions in combination are considered to provide a vivid and concrete view of the phenomenon. In the psychological sense, spirit is defined as a fundamental complex which was originally felt as an invisible but dynamic living presence; this concept is seen to precede the Christian view of the spirit as superior to nature. The contrasting materialistic view, developed under anti-Christian influence, is based on the premise that the spirit is in fact determined by nature, just as the psychic functions are considered to depend on neurochemical phenomena. It is contended that while spirit and matter may eventually be revealed as identical, at present the reality of psychic contents and processes in themselves cannot be denied.

The spirit is conceived as originally external to man; now, although it has been internalized in consciousness, it is still creative rather than created, binding man and influencing him just as the external physical world does. It is seen as autonomous and therefore capable of manifesting itself spontaneously in the conscious mind.

Interpretations and implications of the psychic manifestations of the spirit in dreams are discussed. The spirit is considered to depend on the existence of an autonomous, primordial, archetypal image in the preconscious makeup of mankind. The moral character of spirits in dreams is considered impossible to establish, since the unconscious process which produces the spirit is capable of expressing both good and evil. The figure of the wise old man is observed to appear where insight is needed that the conscious is unable to supply; thus the archetype compensates for conscious spiritual deficiency. Again, this insight is considered impossible to judge morally, as it often represents an interplay of good and evil...

The picture of the spirit that appears in dreams and fairytales is distinguished from the conscious idea of spirit. Originally the spirit was conceived as a demon which came upon man from the outside; those demons have been partially transformed into voluntary acts by the expansion of consciousness, which has begun to transform formerly unconscious areas of the psyche. It is felt that the superhuman positive and negative qualities that primitive man assigned to the demons are now being ascribed to reason, but that the historical events of modern times, such as war, point to a lack of reason. It is suggested that the human spirit is unaware of the demonism that still clings to it. The advanced technology and science of modern man is described as placing mankind in danger of possession. It is felt that mankind must escape from possession by the unconscious through a better understanding of it. Although Christianity is credited with the understanding that man's inner nature is of prime importance, this understanding is not considered to have penetrated deeply enough.

Descriptions of the workings of the conscious, the unconscious and the individuation process, and their relationships to one another, are discussed. Individuation denotes the process by which a person becomes a psychological unity or whole through conflict between the two fundamental psychic aspects, the conscious and the unconscious. This process is described as corresponding to alchemical symbols, especially the unity symbol. It is explained that many persons regard consciousness as the whole psychological individual, but investigation of multiple personality has proved the existence of an unconscious area of personality in addition to the conscious area. There does not appear to be a ruling principle analogous to the ego in the unconscious, as unconscious phenomena are manifested in unsystematic ways. The conscious and unconscious may appear separate in that the conscious is unaware of the contents of the unconscious; yet cases are presented to demonstrate that it is possible for the unconscious to swamp the ego, or that under the influence of strong emotion, the ego and the unconscious may change places as the unconscious becomes autonomous. The unconscious contains not only elements of a primitive world of the past, but is directed toward the future as well. The conscious mind is easily influenced by the unconscious, as in the case of intuition, which is defined as perception via the unconscious. Elements which exist in the unconscious are described as the anima, the feminine personality hidden in a man, and the animus, the masculine personality hidden in a woman; the shadow, which personifies everything the subject does not wish to face in himself; the hero; and the wise old man. These elements are seen to exist in deep levels of the unconscious and bring into mankind's personality a strange psychic life from the remote past. The desired goal of harmony between conscious and unconscious comes about through the process of individuation, an irrational life experience also commonly expressed in symbols. The task of the analyst is defined as aiding in the interpretation of the symbols, in order to achieve a transcendent union of the opposites. The goal of psychotherapy is described as the development of the personality into a whole... The importance of the archetypes in man's relationship to the world is emphasized; they are seen to express man's highest values, which would be lost in the unconscious if not for their projection onto the external environment. An example is the mother archetype, which expresses the ideal of mother love. Although the projection of this archetype onto the actual mother, an imperfect human being, may lead to psychological complications, the alternative of rejecting the ideal is seen as even more dangerous; the destruction of this ideal and all other irrational expression is seen as a serious impoverishment of human experience. Further, archetypes relegated exclusively to the unconscious may intensify to the point of distorting perceptive and reasoning powers. The equilibrium of rational and irrational psychic forces is thus considered essential...

As archetypes penetrate consciousness, they influence the perceived experience of normal and neurotic people; a too powerful archetype may totally possess the individual and cause psychosis... The therapeutic process takes the unconscious archetypes into account in two ways: they are made as fully conscious as possible, then synthesized with the conscious by recognition and acceptance. It is observed that since modern man has a highly developed ability to dissociate, simple recognition may not be followed by appropriate action; it is thus felt that moral judgment and counsel are often required in the course of treatment... The therapeutic function of archetypes is described in terms of the patient's gradual confrontation with the self through the understanding and demystification of fantasy. The differentiation of conscious and unconscious processes through objective observation leads ideally to the synthesis of the two, and to a shift in the center of the personality from the ego to the self. [15]

The collective unconscious

We have already given some hints about the holistic aspect of the world. Here we should consider a different sort of connection between things. Events not only affect other objects, but also have a psychological effect on living beings. On the other hand, consciousness participates in physical processes in a correlative and complementary way. The idea that physical events may be connected with psychic states of the individual was put forward by the psychiatrist Carl Jung. According to him, this kind of connection is an acausal one, involving primordial patterns, called archetypes, and an eternal memory of mankind, called the collective unconscious. According to this theory, which is called synchronicity, under certain conditions physical and psychic phenomena may take place as meaningful coincidences. This sort of connection happens spontaneously, while the events may occur either simultaneously or at different times. In the second case, a present psychic state may be connected with a future physical event which will be realized in due time. With regard to the collective unconscious, Jung, in his book Synchronicity: An Acausal Connecting Principle, says telegraphically: Here I will only point out that it is the decisive factors in the unconscious psyche, the archetypes, which constitute the structure of the collective unconscious. The latter represents a psyche that is identical in all individuals. It cannot be directly perceived or represented, in contrast to the perceptible psychic phenomena, and on account of its irrepresentable nature I have called it psychoid. [16]

Jung interprets the appearance of such synchronistic phenomena in connection with a fundamental, not causally explained, knowledge, based on an order common to both the microcosm and the macrocosm and independent of our will. In such a world order, archetypes play the role of regulating factors. The meaningful coincidence of an internal image with an external event, which characterizes synchronistic phenomena, reveals both the spiritual and the physical aspect of the archetype. The archetype, moreover, with its high charge of energy (or its divine effect), induces in the person experiencing the synchronistic phenomenon an excited emotional state, or a partial abaissement du niveau mental, which is needed for synchronistic phenomena to occur.

Jung, indeed, goes as far as to suggest that the archetypes represent the eternal forms of a preexisting mental order. But what really are these archetypes, and where do they come from? If we remember the words of George Boole, or even Gödel's incompleteness theorem, we may become somewhat suspicious about the whole idea, since most, if not all, of our intellectual aspects seem to be born within the frame of our mental functions and capacities. Even the notion of meaningful coincidence loses any kind of intelligibility outside the realm of our basic mental functions and logic, since the notion of meaning is strongly connected with a causal interpretation of things and processes. So what can meaning or coincidence signify outside the basic logical structures to which they owe their own existence? Could we say that there exist some spiritual forms outside the realm of logic, which are just faintly perceived by human intelligence through properties of the mind which almost touch the world of miracles and magic, and which are expressed and interpreted with difficulty by such vague notions as archetypes, universals, moral ideas, physical constants, and so on? What could be the meaning of an acausal or non-local connection, if both of these notions occur as natural opposites of causality and locality respectively?

In an essay that Jung wrote together with the physicist Wolfgang Pauli, the notion of synchronicity is depicted in contrast to causality, as shown in the previous figure. As Jung put it in his own words: Here synchronicity is to the three other principles as the one-dimensionality of time is to the three-dimensionality of space, or as the recalcitrant Fourth in the Timaeus, which, Plato says, can only be added by force to the other three. Just as the introduction of time as the fourth dimension in modern physics postulates an irrepresentable space-time continuum, so the idea of synchronicity with its inherent quality of meaning produces a picture of the world so irrepresentable as to be completely baffling. The advantage, however, of adding this concept is that it makes possible a view which includes the psychoid factor in our description and knowledge of nature, that is, an a priori meaning or equivalence. [17]

With respect to this equivalence or symmetry, we could also represent synchronicity, which, in a specific context, has the characteristic of simultaneity, in a spaceless and timeless diagram, with synchronicity on the vertical axis and causality on the horizontal one. In this case, causality could define an event horizon regularly violated by synchronicity. Furthermore, in such a diagram synchronicity could be identified with time itself, as a parameter of order in a wider context. This context can be provided in relation to the notion of the extended present that we discussed before. If the present is regarded as a point, then a coincidence of a present event with a future one seems absurd and incomprehensible. But in the framework of the extended present both events happen here and now, while the future-like event is suspended until it is fulfilled. This view may also represent an extended notion of simultaneity, because events that happen together may span different times. But if they are connected by an infinite loop, then the odds are that this spontaneous connection will be causally expressed in due time.

This may sound like some sort of fate, or like a new form of relentless determinism. But in fact this point of view is no more deterministic than spontaneity itself: an event will happen sooner or later, even though we don't know exactly when and where. We also don't know why. However, we may find out something about its temporal and spatial characteristics as soon as the spontaneous perception of the event becomes rationalized, so that the event reveals its nature through the process of causality. So we may also attribute a meaning to the events. While we ourselves are events living in our extended present, our consciousness is an event itself, connected to all other events, confirming or rejecting them. So we have the power and the means not only to form space-time with our minds but also to designate the cause of things. Not only are we strange loops, exhibiting the same talent for instantaneity as nature does, but we are also time-machines, having the capability to set things in order with respect to space and time, thus also determining their temporal properties. We are creatures that consider our past while we regard the future. We constantly redefine pre-established assumptions while we anticipate future-like conditions to be confirmed or not. Eventually, we may say that future actions justify our past.

The psychology of attention

As I was considering what attention might be, I realized that it should be not a one-track process but a, let's say, multi-dimensional one. In other words, even if we focus on a single object, it seems that we also have to be aware of the environment against which the object is superimposed. This means that we are fundamentally unable to understand the part if, simultaneously, we haven't got an impression of the whole.

A causal agent or an emergent property?

The following question is posed by Elizabeth Styles in her book The Psychology of Attention: [19] Is attention a causal agent or an emergent property? From the way I have been talking about attention, it might sound as if it is a thing, or a causal agent that does something... Of course it might well be that attention is an emergent property; that is, it appears to be there, but plays no causal role in information processing. William James (1890) pointed out this distinction when he asked: Is attention a resultant or a force? Johnston and Dark (1986) looked at theories of selective attention and divided them into cause theories and effect theories. Cause theories differentiate between two types of processing, which Johnston and Dark call Domain A and Domain B. Domain A is a high-capacity, unconscious and passive processing system, and equates with what various theorists call automatic or pre-attentive processing. Domain B is a small-capacity, conscious, active processing system, and equates with controlled or attentive processing. In cause theories, Domain B is among other things an attentional mechanism or director, or cause of selective processing (1986). They go on to point out that this kind of explanation betrays a serious metatheoretical problem, as, if a psychological construct is to explain the intelligent and adaptive selection powers of the organism, then it cannot itself be imbued with these powers (1986).

So it seems more likely that attention is an emergent property related to the environment, or even due to the environment. It is very interesting to note here that sounds mainly invoke instinctual feelings. Our attention is drawn to sounds almost without any power to resist. We cannot physically close our ears, as we can easily do with our eyes. Optical signs, on the other hand, are more easily filtered by the brain, and our eyes are intentionally guided by will, while our ears are not. Attention is also limited to these two senses. We don't focus attention on odors or flavors, even if we move our eyes when we think about them. So it seems that attention is an emergent property of free will, which comes after we have felt or sensed something. Only then may we turn our attention to the source which caused the corresponding stimulus in the first place. Attention is a reaction to forces and a result of causes, not a force or a cause itself. This realization also makes us wonder, in a wider context, how conscious our consciousness really is, and whether it is something more than a more or less chance selection among myriads of simultaneous events: Initial research suggested that the human information-processing system was limited in its ability to perform multiple tasks. Broadbent (1958) proposed that the human brain could be likened to a single channel for processing information. The single channel selected information to be passed through a protective filter on the basis of physical characteristics. Only this selected information was passed on to be identified. Evidence of semantic processing for material presented on the unattended channel led to the suggestion that all information was pre-attentively analyzed for meaning but only the most important signals were passed on to the response stage (Deutsch & Deutsch, 1963). Treisman (1964) introduced a compromise theory in which the unattended information was attenuated, so that only the most important signals were able to break through the filter. Other theories suggested that the selective bottleneck between preattentive parallel processing and serial attentive processing could move according to different circumstances and task demands (Johnston & Heinz, 1978; Norman, 1968). New ideas, which viewed attention as a pool of processing resources, began to gain popularity.
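The contrast described above, between Broadbent's all-or-none filter and Treisman's attenuation theory, can be caricatured in a short program. This is only an illustrative sketch: the message structure, the salience values, and the attenuation and threshold parameters are assumptions made for the example, not quantities from the experimental literature.

```python
# Toy contrast between Broadbent's strict filter and Treisman's attenuation
# model. Salience values, the attenuation factor, and the threshold are
# illustrative assumptions, not experimentally measured parameters.

def broadbent_filter(messages, attended_channel):
    """All-or-none early filter: only the attended channel gets identified."""
    return [m["content"] for m in messages if m["channel"] == attended_channel]

def treisman_attenuator(messages, attended_channel,
                        attenuation=0.3, threshold=0.5):
    """Attenuation: unattended input is weakened rather than blocked, so an
    important signal (e.g. one's own name) can still break through."""
    identified = []
    for m in messages:
        strength = (m["salience"] if m["channel"] == attended_channel
                    else m["salience"] * attenuation)
        if strength >= threshold:
            identified.append(m["content"])
    return identified

messages = [
    {"channel": "left",  "content": "shopping list",  "salience": 0.6},
    {"channel": "right", "content": "weather report", "salience": 0.6},
    {"channel": "right", "content": "your own name",  "salience": 2.0},
]

print(broadbent_filter(messages, "left"))     # ['shopping list']
print(treisman_attenuator(messages, "left"))  # ['shopping list', 'your own name']
```

In the strict-filter scheme the unattended name is lost entirely; in the attenuation scheme its high salience lets it pass through the weakened channel, mirroring the breakthrough findings that motivated Treisman's compromise theory.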

Syntactic vs. semantic

But how does attention become a causal agent in order to form the awareness of events? This question has to do with the common distinction between syntactic (episodic) and semantic memory; that is, a place or procedure in our brain by which only important or meaningful facts are kept and stored: Sperling (1960) found evidence for what he believed to be a high-capacity, fast-decay visual memory, which faded within half a second. When presented with displays of brief duration, subjects can typically report only four or five items. However, early in the lifetime of the display, all items were available for report, as demonstrated by the discovery that when subjects were cued by color or location to selectively report a subset of the display, they could report any of the items. As selection on the basis of alphanumeric category seemed impossible, it was thought that categorical information was not encoded in iconic memory. However, just as in auditory experiments, evidence soon accumulated for semantic effects in visual attention tasks (Mewhort, 1967). The question arose as to why selective report on the basis of semantics was so difficult if semantic information was, in fact, available. Merikle (1980) found that selective report was enhanced by a category difference provided the category formed a perceptual group. While Sperling had been concerned to discover how much could be processed in a multi-element display, a new wave of experiments was aimed at discovering the extent to which attention could be selective within a multi-element display... Overall, this issue is unresolved, but the bulk of evidence seems to support selectivity operating after the identity, color and position have been analyzed. Many experiments show that errors in selective report are due to identity and position not being correctly combined. This evidence is consistent with the neuroanatomical organization of the brain, in which there is one system that knows what something is and another system that knows where something is (Ungerleider & Mishkin, 1982). It has been proposed that selection is usually made on the basis of physical information, but from identified stimuli which only evoke a conscious experience once integrated (Allport, 1977; Coltheart, 1980)... Recent work on the interfering effects of incompatible distractors shows that there are different effects which depend on task demand. Treisman (1993) and Lavie (1995) believe the results found depend on the perceptual load of the task. When perceptual load is high, as in selective filtering experiments, evidence for early selection is found, but when perceptual load is low, as in selective set experiments, selection can be late.
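Sperling's partial-report logic, as summarized above, can be caricatured with a fast-decaying store: whole report is a serial readout racing against decay, while an early cue lets any subset be read before the trace fades. The half-life and per-item readout time below are illustrative assumptions, chosen only so that whole report yields the familiar four-to-five items; they are not fitted values from the experiments.

```python
# A toy model of Sperling's (1960) iconic-memory results. The half-life and
# the per-item readout time are illustrative assumptions for this sketch.

def items_available(n_items, delay_ms, half_life_ms=200.0):
    """How many display items remain legible in the icon after a delay."""
    return n_items * 0.5 ** (delay_ms / half_life_ms)

def whole_report(n_items, readout_per_item_ms=100.0):
    """Serial readout races against decay: count items read before fading."""
    reported, t = 0, 0.0
    while reported < n_items and items_available(n_items, t) > reported:
        reported += 1
        t += readout_per_item_ms
    return reported

def partial_report_estimate(n_items, n_rows, cue_delay_ms):
    """Cue one row after a delay; the estimated total availability is the
    cued-row score multiplied by the number of rows (Sperling's scaling)."""
    per_row = n_items / n_rows
    cued_row_score = min(per_row, items_available(n_items, cue_delay_ms) / n_rows)
    return cued_row_score * n_rows

print(whole_report(12))                     # ~4 items despite a 12-item display
print(partial_report_estimate(12, 3, 0))    # immediate cue: all 12 available
print(partial_report_estimate(12, 3, 600))  # late cue: the icon has faded
```

The point of the sketch is the dissociation itself: the whole display is briefly available (the immediate partial-report estimate), yet serial whole report recovers only a fraction of it before the trace decays.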

Selective and divided attention

As Wikipedia says, in cognitive psychology there are at least two models which describe how visual attention operates. These models may be considered loosely as metaphors which are used to describe internal processes and to generate hypotheses that are falsifiable. Generally speaking, visual attention is thought to operate as a two-stage process. In the first stage, attention is distributed uniformly over the external visual scene and processing of information is performed in parallel. In the second stage, attention is concentrated on a specific area of the visual scene (i.e. it is focused), and processing is performed in a serial fashion. [19]

The attentional spotlight

Returning to the book of Styles: Usually, we move our eyes to an object or location in space in order to fixate what we are attending to. However, as early as 1866, Helmholtz noted that attention and fixation were not necessarily coincident. In the introduction we noted that if you fixate in one place (for example, on the asterisk here*) you are able to read nearby words without shifting fixation to the location occupied by those words. Further, if you are attending and fixating in one place, you may find your attention drawn to a familiar word, such as your name or home town, elsewhere on the page, or by a movement in peripheral vision. It is as if there is some kind of breakthrough, or interrupt mechanism caused by information outside fixation.

One of the most popular metaphors for visual attention is that it is like a spotlight that allows us to selectively attend to particular parts of the visual environment. William James (1890) described visual attention as having a focus, a margin and a fringe. We have already seen that there is disagreement over the degree of processing that stimuli receive with and without attention. To some extent the same arguments will continue, but we shall mainly be concerned with the question of whether a spotlight is a good metaphor, how it knows where to go, to what it can be directed, and what kinds of processing go on inside and outside the spotlight...

Posner (1980) showed that directing attention to a valid stimulus location facilitates visual processing, and this led him to suggest that attention can be likened to a spotlight that enhances the efficiency of the detection of events within its beam (Posner et al., 1980). It is important to note here that attention is not synonymous with looking. Even when there is no time to make a voluntary eye movement to the cued location, facilitation is found. Thus, it seems, visual attention can be covertly directed to a spatial location other than the one we are fixating. Posner (1980) proposed two ways in which attention is oriented to a stimulus. He distinguished between two attentional systems: an endogenous system, which is voluntarily controlled by the subject's intentions (for example, when an arrow cue is believed to be informative or not); and an exogenous system, which automatically shifts attention according to environmental stimuli, is outside the subject's control, and cannot be ignored.

Müller and Rabbitt (1989) did a series of experiments aimed at refining and clarifying the question of whether there was only one attentional orienting mechanism controlled in different ways, as Posner had proposed, or whether there were two distinct mechanisms, one reflexive and the other voluntary. Their experiments pitted peripheral and central cues against each other to determine the difference in their time courses and whether they were equally susceptible to interruption. Results were consistent with an automatic reflexive mechanism which is strongly resistant to competing stimuli and a second voluntary mechanism which can be interfered with by the reflexive orienting mechanism.

In their second experiment Müller and Rabbitt found that when peripheral and central cues were compatible, facilitation of cued locations was greater than when the cues were incompatible, and that the inhibitory effects of peripheral cues were lessened when they were in unlikely locations. It appeared that voluntary orienting to central cues could modify orienting in response to reflexive, peripheral cues. Müller and Rabbitt (1989) claim this pattern is consistent with the idea that the reflexive and the voluntary mechanism can be active simultaneously. The fact that automatic reflexive orienting can be modified by voluntary control processes suggests that reflexive orienting is less than truly automatic (automatic processes cannot be voluntarily controlled). However, according to the two-mechanism model of attentional orienting, this can be explained. Reflexive orienting is triggered and proceeds automatically, and if both reflexive and voluntary orienting mechanisms are pulling in the same direction they have an additive effect. However, if they are pulling in different directions, their effects are subtractive.

So it seems that all the previous conclusions confirm the initial remark that our perception of the world cannot be achieved if we just focus our attention on a form without simultaneously having an idea of the background. This distinction between form and background is also related to impossible objects, because the impossibility arises from the unconscious demand for wholeness. It is at a later, second, stage that focused concentration begins to explore the details of the observed object, while an idea about its general form has already been conceived. At this second stage, it is interesting to see how attention processes the details: Given that cues can direct attention, another question arises: how does attention move over the visual field? Is it an instantaneous shift or does it take time? Is it initially spread over the whole field, narrowing down when the cue indicates a location, or does a small spotlight travel to the cued location? Experiments by Posner and his collaborators have been taken to suggest that the spotlight takes time to move over visual space. When the cue indicates only the direction in which the target is likely to appear, rather than the precise location, it is better to have a longer time interval between cue and target when the target is distant from the currently attended point. Tsal (1983) showed that reaction time to a target gets faster as the interval between the cue and the target increases, suggesting that it takes time for the spotlight to travel to the cued location. It appeared as if there was a point of maximum selectivity moving through space, as if it were indeed a spatial spotlight.
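The travelling-spotlight idea can be caricatured in a few lines of code. This is only a sketch, not Tsal's model or data: the speed, base reaction time, and distances below are invented parameters chosen for illustration. The beam is assumed to move at a fixed speed, so a target is processed without delay only when the cue-target interval (SOA) is long enough for the beam to arrive first.

```python
# Toy model of a spotlight moving at constant speed (all parameters invented).

def reaction_time(distance_deg, soa_ms, speed_deg_per_ms=0.125, base_rt_ms=300):
    """Base RT plus whatever travel time the spotlight still needs at target onset."""
    travel_ms = distance_deg / speed_deg_per_ms   # time for the beam to cover the distance
    remaining = max(0.0, travel_ms - soa_ms)      # travel not yet completed when the target appears
    return base_rt_ms + remaining

# RT shrinks as the cue-target interval grows, then flattens once the beam has arrived.
for soa in (0, 40, 80, 120):
    print(soa, reaction_time(distance_deg=8, soa_ms=soa))
```

The qualitative pattern matches the quoted result: longer cue-target intervals help, and they help more for distant targets, exactly as if something had to travel there.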

There is evidence that the spotlight can change the width of its focus depending on the task to be performed. LaBerge (1983) used a probe to indicate which letter in a five-letter word was to be reported. Subjects' spread of attention was manipulated. In one condition they had to categorize the central letter in the row, which was expected to make them focus attention in the middle of the word. In the other condition they were to categorize the word, which was expected to encourage them to distribute attention over the whole word. LaBerge found that response to a probe was affected by whether the subject was attending to the central letter or to the whole word. When attention was focused on the center letter, responses to that letter were faster than to any other letter, but when the whole word was attended, responses to any letter position were as fast as that to the center letter in the focused condition. This result seems to show that the beam of the spotlight can be adjusted according to the task and is not of a fixed size.

Broadbent (1982) summarized the data on the selectivity in visual displays and suggested that we should think of selectivity as like a searchlight, with the option of altering the focus. When it is unclear where the beam should go, it is kept wide. When something seems to be happening, or a cue indicates one location rather than another, the beam sharpens and moves to the point of maximum importance.

It would be interesting to compare the attentional spotlight with a probability distribution, such that the probability is highest at the center of the focus and decreases away from it. Another characteristic of such a distribution is that the probability never goes to zero, so that even at the most remote regions of attention there exists some degree of perception, even if a very low one.
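This picture can be sketched with a Gaussian profile over visual space. The choice of a Gaussian, and the width and positions used, are purely illustrative assumptions, not a claim about the actual shape of the attentional field; the point is only that the weight peaks at the focus and decays without ever reaching exactly zero.

```python
import math

# Sketch: the spotlight as an (unnormalized) Gaussian over one spatial dimension.
# Perception is strongest at the focus and falls off with eccentricity,
# but the tail is strictly positive everywhere.

def attention_density(x, focus=0.0, width=1.0):
    """Perceptual weight at position x (e.g. degrees of visual angle)."""
    return math.exp(-((x - focus) ** 2) / (2 * width ** 2))

center = attention_density(0.0)      # weight at the focus
periphery = attention_density(10.0)  # weight far into the periphery: tiny but nonzero
print(center, periphery)
```

Widening the `width` parameter also gives a crude version of the zoom-lens idea discussed above: a task that demands a broad focus corresponds to a flatter, wider distribution.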

The attentional spotlight is also related to the way we perceive objects in space: There is increasing evidence that we do attend to objects rather than regions of space. Duncan (1984) showed that subjects found it easier to judge two attributes that belonged to one object than to judge the same attributes when they belonged to two different objects. The stimuli in Duncan's experiment were a rectangle with a gap in one side over which was drawn a tilted line. Both the rectangle and the line had two attributes. The rectangle was long or short with the gap either to the left or the right of center. The line was either dotted or dashed and was tilted either clockwise or anticlockwise. Duncan asked subjects to make one or two judgments on the possible four attributes. When two judgments were required- say, gap position and tilt of line- subjects were worse at making the second judgment. However, when both the judgments related to the same object- say, gap position and the length of the box- performance was good. Duncan proposed that we attend to objects, and when the judgments we make are about two objects, attention must be switched from one object to another, taking time...

Object-based attention is clearly very important. But, if you remember, Posner (1980) showed that the attentional spotlight could be summoned by spatial cues and covertly directed to locations in space. An associated effect, inhibition of return, was hypothesized to result from the tagging of spatial locations. What if you were searching for an object, found it, but then the object moved? If attention was spatially based, you would be left looking at an empty location! Tipper, Driver, and Weaver (1991) were able to show that inhibition of return is object based. They cued attention to a moving object and found that the inhibition moved with the object to its new location. Tipper et al. (1991) propose that it is objects, not space, that are inhibited and that inhibition of return ensures that previously examined objects are not searched again.

The attentional explanation for unilateral visual neglect given earlier assumed that it was space that was neglected rather than objects. However, there is an increasing body of evidence in favor of the suggestion that attention can be object based. Indeed the amount neglected by a patient will depend on what they are asked to attend to. In Bisiach and Luzzatti's (1978) experiment, the object was the Piazza del Duomo. What if the object had been the Duomo itself? Or if the patient had been asked to draw a single window? Then the patient would have neglected half of the building or half of the window. Driver and Halligan (1991) did an experiment in which they pitted environmental space against object-centered space. If a patient with visual neglect is given a picture of two objects about which to make a judgment, and that picture is set in front of the patient so that both the environmental axis and object axis are equivalent, then it is impossible to determine which of the two axes is responsible for the observed neglect. Driver and Halligan (1991) devised a task in which patients had to judge whether two nonsense shapes were the same or different. If the part of the one shape which contained the crucial difference was in neglected space when the environmental and object axes were equivalent, the patient was unable to judge same or different.

Styles summarizes the above conclusions as follows: Visual attention has been likened to a spotlight which enhances the processing under its beam. Posner (1980) experimented with central and peripheral cues and found that the attentional spotlight could be summoned by either cue, but peripheral cues could not be ignored whereas central cues could. Posner proposed two attentional systems, an endogenous system controlled voluntarily by the subject and an exogenous system, outside the subject's control. Müller and Rabbitt (1989) showed that exogenous, or in their terms automatic reflexive, orienting could sometimes be modified by voluntary control. Although a cue usually facilitates target processing, there are some circumstances in which there is a delay in target processing (Maylor, 1985). This inhibition of return has been interpreted as evidence for a spatial tagging of searched locations to help effective search. There is some debate over how many locations can be successively tagged. Inhibition of return can also be directed to moving objects (Tipper et al., 1994). Other experimenters have tried to measure the speed with which the spotlight moves (e.g. Downing & Pinker, 1985). The apparent movement of the spotlight might be more to do with the speed with which different areas of the retina can code information. Other researchers asked whether the spotlight could be divided but concluded that division was not possible. It was suggested that a zoom lens might be a better analogy than a spotlight, as it seems that the size of the spotlight depends on what is being attended (LaBerge, 1983). Lavie (1995) argued that the size to which the spotlight could close down depended on the perceptual load of the task.

Visual attention can also be cued endogenously and exogenously to change between levels of representation when either the local or global attributes of a stimulus are to be attended (Stoffer, 1993). The right cerebral hemisphere is specialized for global processing and the left for local processing. The hemispheres are also specialized for orienting (Posner & Petersen, 1990), with the right parietal area able to orient attention to either side of space, but the left parietal area able to orient only to the right. Thus right parietal lesions often give rise to visual neglect of the left side of space. Posner et al. (1984) believed that normally there are three components of visual attention: disengage, shift, and engage. According to Posner et al., patients with visual neglect have no difficulty engaging or shifting attention, but if attention is cued to the neglected side they have difficulty disengaging from the non-neglected side. Volpe et al. (1979) and Berti et al. (1992) have demonstrated that patients can make judgments about stimuli in neglected space, even when the stimuli can be judged only on a semantic property.

A mathematical model for attention?

It seems that semantic processing of information is necessary for cognition. Intentional observation of the external world is purpose-oriented even if in most cases it begins with a random stimulus. Even if there is a vague notion of space pre-established in thought, it is the causal relations between objects which give space a definite meaning. The same holds for time: Even if there exists an internal property in the mind relative to what we mean by time, it is the shift of attention which follows the motion of objects which gives time a definite physical context. If now we combine the attributes of objects and visual search, it may be possible to build a mathematical model of attention, and of consciousness in general: For objects to be formed, the attributes that make them up must be accurately combined. Treisman and Gelade (1980) put forward feature integration theory (FIT), in which they proposed that focal attention provided the glue that integrated the features of objects. When a conjunction of features is needed to select a target from distractors, search is serial using focal attention, but if a target can be selected on the basis of a single feature, search is parallel and does not need focal attention. Initially the theory suggested that all conjunctions of features needed to be integrated if selection were to be possible, but as time has passed Treisman has accommodated a variety of data by modifying the theory to include a feature hierarchy and defining features behaviorally as any attribute which allows pop-out. Thus features may include some three-dimensional properties of objects, movement, etc.
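The serial/parallel contrast at the heart of FIT can be illustrated with a toy simulation. This is only a caricature under invented timings, not Treisman and Gelade's actual model: feature search is treated as a single parallel step whose cost does not depend on display size, while conjunction search is a serial, self-terminating scan whose cost grows with the number of items inspected.

```python
import random

# Toy illustration of FIT's two search regimes (all timings invented).

def search_time(n_items, conjunction, t_parallel=50, t_per_item=30):
    """Simulated search time in ms for a display of n_items."""
    if not conjunction:
        return t_parallel                 # single feature "pops out": flat cost
    # serial self-terminating search: the target is found after inspecting
    # a random number of items (on average about half the display)
    inspected = random.randint(1, n_items)
    return t_parallel + t_per_item * inspected

random.seed(0)
print(search_time(4, conjunction=False), search_time(32, conjunction=False))  # flat
print(search_time(32, conjunction=True))                                      # grows with set size
```

Plotting mean simulated times against display size would reproduce the classic signature: a flat line for feature search and a positive slope for conjunction search.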

Duncan and Humphreys (1989, 1992) suggested that, rather than processing being serial or parallel depending on whether features need to be combined, serial or parallel search will be necessary depending on the ease with which targets and distractors can be segregated, which in turn depends on target/non-target homogeneity and the homogeneity of the distractors. Humphreys and Müller's (1993) model of visual search, SERR, is based on the rejection of perceptually segregated groups in the visual display. In this model it is objects rather than space which mediate search. FIT is more directly concerned with the binding problem than is Duncan and Humphreys' theory. The binding problem could be explained neurophysiologically by the synchronization of activity over concurrently active neurons, as suggested by Crick and Koch (1990) and Singer (1994). The idea here is that the brain knows what belongs together because of what is concurrently active, and this coherent activity could then give rise to conscious experience of the object.

Other approaches to understanding visual attention are via formal mathematical theory, such as CTVA, which is an attempt to combine both space-based and object-based visual attention within one theory. Here we shall briefly consider Logan's (1996) CODE theory of visual attention (CTVA), which integrates van Oeffelen and Vos's (1982, 1983) COntour DEtector (CODE) theory for perceptual grouping with Bundesen's (1990) theory of visual attention (TVA). Logan attempts to integrate theories of space-based attention with theories of object-based attention.

At the beginning of his paper Logan focuses on what he considers to be the five key questions that must be addressed by any theory of visual attention. The first question that any theory must consider is: how is space represented? Space-based theories such as FIT assume that space is represented by a map of locations, with objects represented as points in space. Further, the Euclidean distances between objects are important for space-based attention; for example, Eriksen and Eriksen (1974). On the other hand, object-based theories are, according to Logan, unclear about the way in which space is represented. When grouping factors counteract Euclidean distances (for example, Driver & Baylis, 1989) the theory is object based. Logan argues that as grouping factors such as proximity are very important for object-based theories, abandoning Euclidean space seems an odd thing for object-based theorists to do. Logan's next important question is: what is an object? This question has no agreed answer. However, although theorists disagree, there is some consensus that objects are hierarchical and can be decomposed into component parts.

The next question is: what determines the shape of the spotlight? Logan says that theorists are generally vague on this matter and must be explicit about what determines spotlight shape, as this leaves less work for the omnipotent homunculus to do (Logan, 1996).

The remaining two questions are: how does selection occur within the focus of attention; and how does selection between objects occur? In space-based and object-based theories of selection everything within the focus of attention is assumed to be processed. Yet the well-known Stroop effect demonstrates that selection can operate within a spatial location. The classic Stroop task requires the subject to name the color of the ink in which a color name is written. Although there is interference between the two representations of color, in that the word interferes with naming the ink color, selection is possible. So, some other intentional selective mechanism must exist which is not based on spatial representations.

About the aforementioned questions, I would like to make the following remarks: The internal representation of space is constructed by the relations between the objects found in this space. Objects are compound and coherent entities as realized by consciousness through a process of quantization. Objects themselves are not hierarchical, but their motions are subject to a hierarchical notion of time. Objects are arranged in space by time. Furthermore, objects need not be divisible. They may equivalently represent indivisible entities or fundamental constituents.

The shape of the spotlight seems like a probability distribution, widely spread in space-time, with the maximum probability being at the center or focus of attention. In our case, it seems that consciousness acts as a causal agent in order to maximize the probability here and there during the process of observation. This in physics is related to the so-called observation-selection effect. Focused attention certainly is a causal process initiated by consciousness, while the initial perception of an event is mostly unconscious.

As far as the Stroop effect is concerned, depicted in the previous image, we tend to read the words instead of naming the colors. This is rather straightforward, as visual perception can be unconscious, while semantic representation is always conscious. There is no natural meaning in nature, except what we consider to be the meaning of natural phenomena. Thus a color is meaningless, while the word for a color is a colorless definition of a frequency as perceived by the senses.

As far as the consequences of a mathematical representation of the theory of attention are concerned, Styles notes: CTVA theory is mathematically complex and we shall not go into the maths here. However, in essence CTVA incorporates CODE theory (van Oeffelen & Vos, 1982, 1983; Compton & Logan, 1993) and TVA (Bundesen, 1990). CODE provides two representations of space: an analogue representation of the locations of items and another quasi-analogue representation of objects and groups of objects. The analogue representation is computed from bottom-up processes that depend entirely on the proximity of items in the display. The representation of objects and groups is arrived at from the interaction between top-down and bottom-up processes. In CODE, locations are not points in space, but distributions. The sum of the distributions of different items produces what is called the CODE surface, and this represents the spatial array. Top-down processes can alter the threshold applied to the CODE surface. Activations above any given threshold belong to a perceptual group.
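The quoted mechanism- per-item distributions summing into a surface that a threshold carves into perceptual groups- can be sketched in one dimension. This is an illustration under assumed Gaussian item distributions and invented positions, not Logan's actual equations: lowering the threshold merges nearby items into one group, raising it splits them apart.

```python
import math

# 1-D sketch of a CODE-like surface: each item contributes a Gaussian,
# the surface is their sum, and a top-down threshold defines the groups.

def code_surface(x, item_positions, spread=1.0):
    return sum(math.exp(-((x - p) ** 2) / (2 * spread ** 2)) for p in item_positions)

def groups_above_threshold(item_positions, threshold, lo=-5.0, hi=15.0, step=0.1):
    """Count contiguous above-threshold regions of the surface (perceptual groups)."""
    groups, inside = 0, False
    x = lo
    while x <= hi:
        above = code_surface(x, item_positions) > threshold
        if above and not inside:
            groups += 1                  # entered a new above-threshold region
        inside = above
        x += step
    return groups

items = [0.0, 3.0, 10.0]                          # two nearby items and one distant item
print(groups_above_threshold(items, threshold=0.5))   # low threshold merges the near pair: 2
print(groups_above_threshold(items, threshold=0.8))   # higher threshold splits them: 3
```

This captures the hierarchical flavor of CTVA's answer to "what is an object": the same display yields different objects depending on where the top-down threshold is set.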

Logan explains the way in which CODE can account for a variety of data, including the Eriksen effect, but in order to achieve within-object or within-region selection another selective mechanism is required. This is where TVA comes in. Essentially, TVA selects between categorizations of perceptual inputs and assumes two levels of representation. At the perceptual level representations are of the features of items in the display. At the conceptual level the representation is of the categorizations of features and items. These two representations are linked by a parameter which represents the amount of evidence that a particular item belongs to a particular category. In TVA location is not special: it is just another categorizable feature of an item, like shape or color. Selection is achieved by TVA choosing a particular category or categorization for a particular item or items. There then ensues a race, and the first item or set of items to finish wins the race. At the end of the race both an item and a category have been selected simultaneously, so this theory is both early and late at the same time.
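The race can be sketched as follows. This is a loose illustration in the spirit of TVA, not Bundesen's formal model: each (item, category) pair is given an invented processing rate, finishing times are drawn from exponential distributions (expected time 1/rate), and the first pair to finish wins- so an item and a category are selected together, just as the quote describes.

```python
import random

# Toy race between categorizations of items (rates are invented for illustration).

def race(rates, seed=None):
    """rates: {(item, category): processing rate}. Returns the winning pair."""
    rng = random.Random(seed)
    # exponential finishing times: higher rate -> shorter expected time
    finish = {pair: rng.expovariate(rate) for pair, rate in rates.items()}
    return min(finish, key=finish.get)

rates = {
    ("item1", "red"): 5.0,     # strongly supported categorization
    ("item1", "round"): 1.0,
    ("item2", "red"): 0.5,
}
print(race(rates, seed=42))    # usually, but not always, ("item1", "red")
```

Biasing the rate parameters is how top-down selection enters: boosting the rates of one category makes categorizations of that kind likelier to win the race.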

Finally, Styles returns to answer the previous five fundamental questions within the context of a more rigorous formulation of the theory of attention: Does CTVA answer the questions that Logan identified as essential to any theory of visual attention? First, is there explicit detail on the representation of space? In the theory, space is represented in two ways: bottom-up on the CODE surface and top-down by the imposition of the thresholds that result in perceptual groups. Second, what is an object? According to CTVA an object is a perceptual group defined by whatever threshold is set by the top-down mechanism. Thus an object may be defined, by changing the threshold, at different hierarchical levels. Third, how is the shape of the spotlight determined? The spotlight is the above-threshold region of the CODE surface, which depends on both the perceptual input and the threshold set. Fourth, how does selection occur within the area of the spotlight or focus of attention? This is achieved by TVA biasing the categorization parameter, which makes the selection of some categories more likely than others. Lastly, how does selection between objects happen? This is controlled by top-down language processes and will be discussed further in the chapter on the intentional control of behavior. While there are some limitations of CTVA, such as its inability to group by movement or to deal with overlapping objects, theories of this kind, although extremely abstract, offer a promising look into the future of cognitive modeling.

An attentional bottleneck
The events which take place in the surrounding space and, simultaneously, inside our mind, are gathered and processed by the brain in parallel, as experiments suggest. But at the moment that we need to take action we must choose a single event against all the others, otherwise we won't be able to take any action at all. This is a bottleneck created by attention, like a wide stream of probabilities that turns into a narrow pass at the end, to let just one alternative get through. This bottleneck is associated with the so-called quantum Zeno effect, which we will discuss later on. About the psychological aspect of the bottleneck, Styles says:

One of the most obvious behavioral properties of the human information processing system is that there seems to be a fundamental limit on our ability to do a number of things at once. A classic experiment by Hick (1952) showed that choice reaction time, to a single stimulus, increases with the number of possible alternatives (Hick's law). Simply preparing to respond to signals is costly. Also, evidence of the psychological refractory period (PRP) shows that when two stimuli are presented in rapid succession, so that the first stimulus has not been responded to when the second stimulus arrives, response to the second stimulus is slowed (Welford, 1952; Fagot & Pashler, 1992). This suggests that the response to the second stimulus must wait until the response to the first stimulus has been selected and provides clear indications of such a limit. At the same time there is now clear evidence that the brain can process an enormous amount of information simultaneously in parallel over a variety of modality specific subsystems. In fact Neisser (1976) said there was no physiologically established limit on the amount of information that can be picked up at once. Here we have a paradox. The brain is apparently unlimited in its ability to process information, yet human performance is severely limited even when asked to do two very simple tasks at once.
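Hick's law, mentioned in the quote, has a simple quantitative form: choice reaction time grows with the logarithm of the number of equally likely alternatives, commonly written RT = a + b log2(n + 1). The constants below are illustrative placeholders, not fitted values from Hick's experiments.

```python
import math

# Hick's law: RT = a + b * log2(n + 1), where n is the number of equally
# likely alternatives (a and b are illustrative constants, in milliseconds).

def hick_rt(n_alternatives, a_ms=200, b_ms=150):
    return a_ms + b_ms * math.log2(n_alternatives + 1)

print(hick_rt(1))   # 200 + 150*log2(2) = 350.0 ms
print(hick_rt(3))   # 200 + 150*log2(4) = 500.0 ms
print(hick_rt(7))   # 200 + 150*log2(8) = 650.0 ms
```

Note the logarithmic growth: doubling the number of alternatives adds a constant increment to reaction time, which is why even modest choice sets make "simply preparing to respond" costly.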

For early workers (e.g. Broadbent, 1958, 1971; Treisman, 1960), this bottleneck suggested a limited capacity system and psychologists were interested to find out where the bottleneck was located. The concept of a bottleneck necessarily implies one place where processing can proceed only at a limited rate, or a limit in capacity to process information. A bottleneck implies a point where parallel processing becomes serial and was originally couched in the metaphor of likening the mind to the old digital computer, which had buffer storage and limited capacity processing components, and whose programs were written as flow charts in which different stages had to be completed before others could begin

Over the last 10 to 15 years, however, there has been an explosion in the use and development of computers which can process information in parallel over multiple processing units, pioneered by Hinton and Anderson (1981), McClelland and Rumelhart (1986), and Rumelhart and McClelland (1986). This new connectionism, otherwise known as parallel distributed processing (PDP), or the artificial neural network approach, has had a profound influence on current metaphors of mind. It would be fair to say that the new metaphor for the mind most currently in favor is that the brain is like (in fact, is) a neural network. The principal impact of PDP has been on modeling learning and memory, and such models very successfully solve all sorts of previously intractable modeling difficulties. More recently PDP has been successfully applied to show how, by damaging a normal system, a neuropsychological deficit can arise (e.g. Hinton & Shallice, 1989; Farah, 1988) and is beginning to be applied to modeling attention. So, we now know from neurophysiological studies that the brain is a massively parallel, highly interconnected and interactive computing device, with different specialized subsystems designed to respond selectively to particular perceptual events and compute specialized information processing operations (e.g. van Essen & Maunsell, 1983). These processing events do not proceed in a stage-like serial fashion but happen simultaneously in parallel. Although we may draw flowcharts of information processing, where the boxes represent theoretical computational stages or specific modules for processing specific information, we need to remember that the brain is, in fact, a simultaneous parallel processing device with many neurons and pathways. It is also important to know that there are neurons and pathways and brain regions selectively responsive to particular types of information.

The attentional bottleneck seems to be vital for free will operations. It is the moment of truth: We must wait for a while to make sure we make the right decision. Then, we may forget about all the other alternatives, at least until another problem appears. We have the impression of a myriad of other alternatives, but this impression is faint, with all the light preserved for the point where attention is focused. This is exactly what concentration is all about. Our mind creates a bottleneck, a dam in the stream of consciousness, so that a great amount of energy can be locally gathered for a definite purpose: If the brain is concurrently processing vast amounts of information in parallel, perhaps there is a problem to be solved. That problem is how to allow behavior to be controlled by the right information at the right time to the right objects in the right order. Perhaps the bottleneck, or change from parallel to serial, happens just before response. The brain processes all information as far as it can in parallel; but at the moment of response we are limited. This is certainly what the most recent evidence on the psychological refractory period suggests.

Neumann (1987) considers this problem. If all potential actions were simultaneously trying to control action, there would be behavioral chaos. In order to prevent such disorganization of behavior there must be selection, and Neumann argues it is this need for selection which produces the limit on human performance. The psychological refractory period, which arises when two successive stimuli require rapid response, suggests that the response to the second stimulus must wait until the response to the first stimulus has, at least, been selected, and may be a functional way of preventing two responses becoming available at once. However, Neumann (p. 374) suggests that there are a variety of selectional problems and consequently a variety of selectional mechanisms are needed: Hence, attention, in this view, does not denote a single type of phenomenon. Rather it should be viewed as the generic term for a number of phenomena, each of which is related to a different selection mechanism.

Neumann (1987) says that the problem of selecting the right effector at the right time, so that only one action is attempted, is rather like preventing train crashes on a busy railway network. One way to avoid crashes would be to have a central station monitoring the trains on the tracks; the other would be to have a system where the network was divided into sections, and when one train was on a track within a section it automatically set the signals to prevent other trains coming along. He argues that the brain uses the blocking method. This results in a capacity limit, as one ongoing action inhibits all other possible actions. Of course, it would be dangerous to have a blocking mechanism that could not be interrupted by a change in environmental circumstances. Orienting responses to unexpected events, which have been processed pre-attentively, will break through the block. Overall, Neumann views attention as an ensemble of mechanisms which allow the brain to cope with the problem of selecting appropriate information to control action. The apparent limitation on our abilities is not a result of a limited processing capacity but has evolved to ensure coherent behavior.

The effort of attention

The bulk of the energy concentrated at the moment of decision making may quantify a parameter of attention, the effort of attention. Effort may be seen as a measure- and at the same time as proof- of free will: Fagot and Pashler (1992) suggest a straightforward model, based on a production system framework. Anderson's (1983) ACT is a production system used by computer scientists. Fagot and Pashler (1992) propose that the model to explain the bottleneck in production system terms would have these properties:

1. Prior to the task being performed, a number of response selection rules are activated. The more rules that are activated, the less the individual activation for each rule.
2. Each rule has a condition and an action. When the condition for the action is met, the rule applies and the action is carried out. The higher the activation of the rule, the faster it will be applied.
3. Only one rule can be applied at once.
4. A rule can specify multiple motor responses in its action statement.
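The four properties above can be mocked up as a minimal production system. This is a sketch under invented rule names and stimuli, not Fagot and Pashler's (or Anderson's ACT) implementation; each numbered property is marked in a comment where the code realizes it.

```python
# Minimal production-system sketch of the four properties (all examples invented).

class Rule:
    def __init__(self, name, condition, actions):
        self.name = name
        self.condition = condition    # Property 2: predicate over the perceptual input
        self.actions = actions        # Property 4: may specify multiple motor responses
        self.activation = 0.0

def prepare(rules):
    # Property 1: activation is shared out among all prepared rules,
    # so preparing more rules leaves less activation for each.
    for r in rules:
        r.activation = 1.0 / len(rules)

def select_and_fire(rules, stimulus):
    # Property 2: a rule whose condition matches may apply; higher activation
    # means faster application, so the most active matching rule wins.
    # Property 3: only one rule is applied per cycle.
    matching = [r for r in rules if r.condition(stimulus)]
    if not matching:
        return None
    winner = max(matching, key=lambda r: r.activation)
    return winner.actions             # Property 4: possibly several responses at once

rules = [
    Rule("tone->say", lambda s: s == "tone", ["say 'A'"]),
    Rule("light->press", lambda s: s == "light", ["press left key", "press right key"]),
]
prepare(rules)
print(select_and_fire(rules, "light"))   # ['press left key', 'press right key']
```

The bottleneck discussed in the next paragraph sits naturally in `select_and_fire`: whatever the perceptual input offers, only one rule's action code is retrieved per cycle.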

In order to find the right action, given a particular condition specified by the perceptual input, a code must be generated or retrieved from memory. Fagot and Pashler say the code can be considered as a specification of where to find a description of how to make the response, which, their experiments have shown, can be multiple motor actions. They suggest that the bottleneck is at the point of generating the code, and only one response can be retrieved at a time. Later mechanisms, which look up response specifications and translate them into action, are not limited. Overall, Fagot and Pashler believe the evidence is consistent with a bottleneck in processing at the stage where action codes are retrieved and generated. They do, however, point out some problems for the model. In Chapter 3 we looked at the question of early and late selection in visual attention and found evidence in the experiment by Eriksen and Eriksen (1974) for irrelevant letters which flank a target causing interference. This interference was interpreted as evidence for response activation from the distractors conflicting with the response to the target letter. This effect should not happen if only one response can be retrieved from memory at a time, as the model above has just suggested. Fagot and Pashler (1992) suggest a way round this paradox. If the system is incapable of implementing two rules at a time because the neural mechanisms which implement the rules cannot settle into two different patterns of activity at the same time, then, although two responses could not be made at once, the pattern of activity from redundant inputs could still interfere with the process of settling into one pattern, hence slowing the response and producing the Eriksen effect.

Kahnemans theory of attention and effort

Kahneman (1973) put forward a theory that likens attention to a limited resource which can be flexibly allocated as the human operator changes their allocation policy from moment to moment. Attention can be focused on one particular activity, or can be divided between a number of activities. When tasks are more difficult, more attention is needed. Unlike Broadbent's flowchart of information through a structural system, Kahneman's model is a model of the mind. It includes enduring dispositions, momentary intentions, and an evaluative decision process that determines the current demand on capacity. Attention here is rather like a limited power supply: if you turn on the rings of a gas cooker and the central heating boiler fires up, the height of the gas jets in the cooker rings goes down. There is only a limited supply of gas to these two appliances, and the demand from the boiler reduces the amount of fuel available to the cooker. However, in Kahneman's theory, if we put more effort into a task we can do better; for example, during increased demand the gas company might raise the gas pressure in the main supply. So the amount of attentional capacity can vary according to motivation.

The amount of effort available is also related to overall arousal level: as arousal increases or decreases, so does attentional capacity. Whilst there are some attractive properties in this model, such as the move away from structural limitations to processing limitations, there are some serious problems with the theory. Firstly, it is known that at low levels of arousal performance is poor: according to Kahneman this would be because attentional capacity is low when arousal is low. As arousal increases, so does performance, up to an optimum level, beyond which further increases in arousal, rather than improving performance, produce decrements. This is known as the Yerkes-Dodson law (Yerkes & Dodson, 1908).

We have probably all experienced situations where, for example, a little background noise helps to keep us alert and improves performance, but if the noise becomes extremely loud, we find it impossible to do anything else. If attentional effort were directly related to the arousing effect of the noise, task performance should improve monotonically with the increase in noise. Secondly, defining arousal is very problematic (Revelle, 1993). Thirdly, and possibly this is the most serious problem: how can task difficulty be measured independently (Allport, 1980)? Kahneman put forward the idea that task difficulty could be determined by the amount of interference on a concurrent task. However, if task difficulty is measured by interference, and interference is taken as an index of difficulty, we have no independent measure: the definition is circular.
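The inverted-U relation described by the Yerkes-Dodson law can be sketched with a simple function. The Gaussian shape and the parameter values are illustrative choices, since the law itself specifies no equation:

```python
import math

# An illustrative inverted-U for the Yerkes-Dodson law: performance
# peaks at an intermediate, task-dependent optimum of arousal and
# falls off on either side. Shape and parameters are assumptions.

def performance(arousal, optimum=0.5, width=0.2):
    """Performance in [0, 1] as a bell curve over arousal in [0, 1]."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

low, mid, high = performance(0.1), performance(0.5), performance(0.9)
assert mid > low and mid > high  # a monotonic capacity model would fail here
```

The final assertion is the crux of the objection to Kahneman: if effort tracked arousal monotonically, `high` would have to exceed `mid`, which the law denies.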

Control of action
If effort is a measure of attention, then a set of goals may determine the direction of attention. Styles analyzes how we may gain control of action through a goal-directed view of attention: While it is well appreciated that complex behavior requires some kind of control process to coordinate and organize it, there is to date no clear idea of exactly how this is achieved. However, if we ask an experimental subject to do one task rather than another, to respond to one aspect of a stimulus and ignore all others, the subject is able to do it. Somehow the cognitive system can be configured to do one task at one time and another task at another time on the basis of intentions. Thus a major question psychologists have to address is: how is behavior controlled by internal intentional states (endogenously) rather than by external perceptual states (exogenously)? Until recently little experimentation had been done on the internal control of tasks, but this work is beginning and we shall examine some of it.

Duncan (1986, 1993) stresses the importance of goals in the selection of inputs to the information processing system and in directing behavior. When we discussed Broadbent's (1958) filter theory, a question left unanswered was: who sets the filter? In his 1993 paper, Duncan considers this question, proposing that the filter is controlled by current goals. That is, the filter will select information relevant to ongoing behavior. He suggests that both experimental and neurophysiological evidence support the idea that control of the selective filter is achieved by a process of matching inputs against an attentional template which specifies what information is currently needed. This idea is similar to that of Broadbent (1971), who had, in his refinement of filter theory, proposed two mechanisms, pigeon-holing and categorization, which were able to bias central mechanisms toward one outcome rather than another. Duncan (1986) argued that in normal activities people set up a list of task requirements. He called this a goal list. In everyday life goal lists originate from the environment and from needs, whereas in the laboratory they may originate from the experimenter's instructions. Goal lists are used to create action structures, which are the actions needed to achieve the goals. Duncan says that, to produce the necessary action structure from a goal list, people use means-end analysis, a common heuristic useful in problem solving. Basically, means-end analysis computes the difference between the current state and the desired end state and makes moves or actions that reduce the difference between where you are now (the present state) and where you would like to be (the goal state). Duncan's overall theory involves three components. First, there must be a store of actions and their consequences; these he sees as similar to a memory of productions, as in ACT discussed earlier. Secondly, there is a process by which goals are selected to control behavior. This proceeds by means-end analysis, whereby an action is selected to minimize the difference between the current state and the goal state, and this process will continue until the mismatch between the states is minimal or nil. In order to keep behavior coherent it is important that the goal list inhibits other potential actions and allows relevant actions to continue. Duncan's emphasis on the importance of the setting and maintenance of goals in normal behavior seems well justified and provides a parsimonious account of a variety of apparently inconsistent symptoms found in patients who have suffered frontal damage.
For example, the fact that they can exhibit both perseveration and the inability to initiate spontaneous actions is easily explained by the difficulty they have with using goal structures.
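Means-end analysis of the kind Duncan appeals to can be sketched as a loop that always picks the action reducing the mismatch most. The set-based states and the sandwich-making actions are invented for illustration, and preconditions between actions are ignored:

```python
# A bare-bones means-end analysis: repeatedly choose the action that
# most reduces the mismatch between the current state and the goal
# state. States are sets of achieved conditions; the domain is invented.

def mismatch(state, goal):
    """Number of goal conditions not yet achieved."""
    return len(goal - state)

def means_end(state, goal, actions, max_steps=10):
    """actions: dict mapping an action name to the conditions it adds."""
    plan = []
    for _ in range(max_steps):
        if mismatch(state, goal) == 0:
            break
        # pick the action that leaves the smallest remaining mismatch
        best = min(actions, key=lambda a: mismatch(state | actions[a], goal))
        state = state | actions[best]
        plan.append(best)
    return plan, state

actions = {"get bread": {"bread"}, "get butter": {"butter"},
           "spread": {"sandwich"}}
goal = {"bread", "butter", "sandwich"}
plan, final = means_end(set(), goal, actions)
```

The loop terminates when the mismatch is nil, which mirrors Duncan's claim that the process continues until the difference between current and goal states is minimal.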

Is intentional control an illusion?

So our will to act seems to be related to a set of goals, a goal list, which our brain consults each time in order to take the appropriate action. What is fundamental as a conclusion here, however, is the fact that this list of actions, since it is so important for survival, must be deeply imprinted somewhere in our brains, most probably in unconscious territory, if we want action to be taken as quickly as possible. But this again brings forward the question of how conscious our actions are, as well as of our free will regarding the actions taken: SOAR is another cognitive theory based on a production system architecture, developed by Laird, Newell, and Rosenbloom (1987) and Newell, Rosenbloom, and Laird (1988). Like ACT it is a symbolic, artificial intelligence architecture. In SOAR there is a single long-term memory, which is a production system, used for both procedural and declarative knowledge. There is also a working memory which holds perceptual information, preferences about what should be done, a goal hierarchy, and motor commands. This cognitive system uses a problem-solving principle to select the most appropriate course of action, given the current situation. When a decision is difficult, due to incomplete or inconsistent knowledge, the architecture automatically creates a new sub-goal and problem solving proceeds to resolve the impasse. This process of creating new sub-goals produces new goal hierarchies. In this way new productions are continuously being produced as a result of SOAR's experience in goal-based problem solving, a process called chunking.
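SOAR's impasse-driven sub-goaling and chunking can be caricatured in a few lines. The dictionary memory and the trivial tie-breaking rule stand in for SOAR's far richer machinery and are assumptions of the sketch:

```python
# A schematic of impasse-driven sub-goaling and chunking: when no single
# action is clearly preferred, a sub-goal is created to resolve the
# impasse, and the result is cached as a new production ("chunk") so the
# same decision is immediate next time. Entirely illustrative.

long_term_memory = {}  # situation -> action: the production memory

def resolve_impasse(situation, candidates):
    """Sub-goal: deliberate to break the tie (here, just pick alphabetically)."""
    return sorted(candidates)[0]

def decide(situation, candidates):
    if situation in long_term_memory:        # a chunk already exists
        return long_term_memory[situation], "chunk"
    if len(candidates) == 1:
        return candidates[0], "direct"
    action = resolve_impasse(situation, candidates)  # impasse -> sub-goal
    long_term_memory[situation] = action     # chunking: cache the result
    return action, "sub-goal"

first = decide("hungry", ["make sandwich", "order pizza"])
second = decide("hungry", ["make sandwich", "order pizza"])
```

On the first call the tie forces a sub-goal; on the second, the stored chunk answers directly, which is the sense in which SOAR's deliberation turns into automatic retrieval.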

It could be that, although it appears as if a person sets goals internally and intentionally (endogenous control), these goals are in fact activated by environmental stimuli, such as an instruction from the experimenter, or by internal needs and desires that arise out of fundamental biological processes (exogenous control). So, the need for food may activate the goal "Make a sandwich". What appears to be free will and goal-directed behavior may simply be a complex behavior pattern that emerges from a whole conspiracy of internal needs and external stimulation. Kelley and Jacoby (1993) argue that we cannot distinguish between what they call conscious control and automatic control simply by asking people whether they intended to do something or not, because intention is an attribution which may follow behavior as well as direct it. When we feel the intention to stand up, for example, this feeling of intention may be following the beginning of the action rather than preceding it. That is to say, we may attribute our action to an intention when in fact this was not so. Interpreting our actions in terms of intentions gives us the feeling of having rational, meaningful behavior. Thus it may be dangerous to assume that the subjective experience of free will is evidence for its existence; and if indeed this is the case, the distinction between automatic and controlled processing, which relies on the subject applying strategic control, is immediately blurred.

The problems of consciousness

One of the major differences between automatic and controlled processing is that controlled processing is, by definition, said to be open to strategic, conscious control, whereas automatic processing takes place outside consciousness. Although we may become aware of the outcome of automatic processing, we are unable to consciously inspect the processing leading up to that outcome. By this account it sounds as if the difference between conscious and unconscious processing corresponds very closely to the distinction made between controlled and automatic processing. Some theorists have indeed tried to equate attentional processes with consciousness or awareness. To a large extent this is what Norman and Shallice (1986) have done in their model. But beware: there is more than one meaning, or interpretation, of "conscious" or "consciousness". Despite the problems associated with deciding what we actually mean by conscious and unconscious processing, there is a large literature on the fate of unattended information, where experimenters usually take the term "unattended" to mean unaware, or without conscious identification.

In our discussion of the early-late debate, we saw that the ability of unattended information to bias responses given to attended information was taken as evidence for extensive pre-attentive (automatic, unconscious) processing. That is, prior to the selective, attentional stage, where information became consciously available, unconscious information processing was producing subliminal semantic priming effects. These results were taken as evidence for late selection. Over the years there has been a long-standing debate about the validity of experiments said to provide evidence for semantic activation without conscious identification (SAWCI). There is argument about the best methodology to use, about which criteria should be chosen to determine consciousness or awareness in the subject, and about the right kind of thresholding techniques.

If there really is semantic activation from stimuli that we are unable to report, then we should be able to look at the effect of that activation on a subsequent task. There have been a number of experiments that have attempted to use the semantic activation from unreportable words to prime subsequent stimuli. In these experiments the first (prime) stimulus is presented very rapidly, usually in a tachistoscope, and immediately followed by a mask. The speed with which the mask follows the stimulus can be set such that the subject is not even able to determine whether a word was presented at all, let alone what the word was. Subsequent presentation of another word (the probe) at a supraliminal level is usually used to test for any effects of the first word on the second. This priming paradigm has produced some of the most controversial experiments in the SAWCI literature. Under these conditions there seems to be little possibility that the subject could pay any conscious attention to the first, priming stimulus, even if they tried, so we can be more certain that any effects are due to unconscious processing. Of course, there is always the problem of determining what exactly we mean by unconscious, together with the difficulty of setting the prime-mask duration so that we can be sure that the subject really was unconscious. We shall discuss these problems in more detail after we have looked at some examples of visual masking experiments said to demonstrate SAWCI.

Marcel (1980, 1983) has provided some of the most controversial data on high level information processing below the level of conscious awareness. Using an associative priming paradigm based on that of Meyer and Schvaneveldt (1971), Marcel presented his subjects with a masked prime and then measured how long it took for the subjects to make a lexical decision. Lexical decision tasks involve asking subjects to say as quickly as possible whether the letter string they see is a word or not. In normal, supraliminal conditions, a prime such as BREAD will facilitate the lexical decision for an associated word like BUTTER, but will not facilitate an unassociated word such as NURSE. Marcel's priming stimuli were masked so severely that the subjects could not detect their presence on more than 60% of trials. Would the same results be found in this subliminal condition? When the primes were masked by a pattern mask, there was evidence of facilitation (i.e. BREAD primed BUTTER) just as in supraliminal experiments. However, when the mask was a random noise mask, there was no priming. This is taken as evidence for two different kinds of masking (see Turvey, 1973, for a review): one produced by the noise mask, which degrades the stimulus input early in processing, and another produced by the pattern mask. Marcel proposed that the pattern mask does not prevent the automatic, unconscious access to stored semantic knowledge, but does prevent perceptual integration, and hence access to consciousness. This argument is similar to the one made by Allport (1977) and Coltheart (1980)...

Evidence from neuropsychology

While there are numerous difficulties in determining whether or not normal subjects are aware or conscious of a stimulus at the time of presentation, patients with specific forms of neurological damage are never able to report certain stimuli, no matter how hard they try or how long the stimulus duration is. Studies on neuropsychological patients provide more evidence for the importance of consciousness in normal behavior, as well as evidence that stimuli which cannot be overtly recognized are, in fact, processed outside consciousness. In the literature there are a number of striking examples of the way in which attention and consciousness can break down following brain damage. Cognitive neuropsychologists study the behavior of these patients in order to try to understand not only the damaged system, but also the normal cognitive system. Apart from throwing light on the processes underlying normal information processing, studies of patients demonstrate selective impairments of different varieties of consciousness.

One of the most important assumptions made by cognitive neuropsychologists is that the human brain is modular. This assumption stems from the very influential ideas of Marr (1976) and Fodor (1983). In a modular system, large and complicated computations are achieved by many modules. These modules perform particular processing operations on particular, domain-specific kinds of information. Together they form the whole system, but each module acts as an independent processor for its own particular purpose. Fodor (1983) argues that modules are innately specified, hard-wired, and autonomous, in that the functioning of a module is not under conscious control. In a modular system the failure of one module does not prevent the remaining modules from working. Such a system would seem advisable in terms of survival: we would be in deep trouble if damage to one small part of the brain resulted in all the rest of the undamaged brain ceasing to work. Not only is a modular system a sensible design, but there is good evidence that, when patients suffer local damage to particular brain regions, only certain computational functions are lost. If we assume attention and consciousness are important cognitive processes or states, then it seems likely that cognitive neuropsychology may throw light on them. Further, if there are varieties of attention and consciousness, we might expect to find patients who show deficits in just one or other variety.

Farah (1994) reviews disorders of perception and awareness following brain damage. She considers the relation between conscious awareness and other brain mechanisms and classifies the theoretical position occupied by consciousness in the view of a number of other authors. According to Farah, some psychologists give consciousness a privileged role. For example, Schacter, McAndrews, and Moscovitch (1988) propose that the conscious awareness system is separate from the modules which process other, domain specific information in the brain. According to this view consciousness may almost be considered another module which can become dissociated from the rest of the modules responsible for perception, cognition, and action. Schacter et al. (1988) call their model DICE (dissociated interactions and conscious experience).

Another view which Farah considers as giving consciousness a privileged role is that proposed by Gazzaniga (1988), who suggests that the conscious/non-conscious distinction is related to which cerebral hemisphere is responsible for the information processing of particular tasks. The left hemisphere has language and is conscious, whereas the right hemisphere does not have language and is unconscious. Unconscious processing occurs when perceptual representations fail to access the language areas of the left hemisphere. Again, rather as in the DICE model, consciousness can become disconnected from other processing. Recall that in Chapter 9 Logan (1995) proposed that language could be an important factor in conscious control.

Farah (1994) groups together another set of theories about consciousness because they put forward the view that consciousness is a state of integration amongst distinct brain systems. Kinsbourne's (1988) integrated field theory sees consciousness as a state of the brain which arises when all the concurrently active, modality-specific information is mutually consistent. Normally these systems will produce an integrated conscious output, but brain damage may result in a situation where processes become disconnected and do not form an integrated consciousness. In this state there can be a dissociation between processes and consciousness. Without the integrated state there can be processing, but no conscious experience of that processing.

Similar views were put forward by Crick and Koch (1990), who consider that consciousness of visual stimuli arises from the binding together of different, separately represented visual properties of a stimulus. Damasio (1990) has also theorized that binding gives rise to conscious awareness. In her review, Farah (1994) points out that in all the cases above consciousness must be all or nothing: it is disconnected or not; domains are integrated or not. However, she argues that there is evidence for consciousness being a graded property.

But what is consciousness?

Shallice (1988) says that the existence of consciousness is one of the greatest, if not the greatest, of the unsolved problems of science. So far we have talked about conscious and unconscious processing as if we knew what this distinction meant. At the subjective threshold a neurologically normal subject reports phenomenal awareness of a stimulus and can act upon it with confidence. A patient with blindsight has no awareness of stimuli which they can be shown to have knowledge of. Prosopagnosics and amnesics have no "conscious" representation or phenomenal awareness of stimuli which can be shown to affect their judgments. But what is this "phenomenal awareness"? Does it have a function and how can we determine if someone else is or was phenomenally aware? Could we make a machine that is conscious? Is consciousness of only one kind, or does it come in a variety of forms?

Over the past few years consciousness has come back into the field of psychological enquiry, and two recent books, Marcel and Bisiach (1988) and Davies and Humphreys (1993), draw together the currently most influential thinking on the subject. Both are collections of essays by psychologists and philosophers, and the fact that both disciplines have important contributions to make emphasizes that psychology has its roots in philosophy and that consciousness was one of the most important issues for early psychologists like William James and Sigmund Freud. As it became increasingly clear that consciousness was difficult to define and study, it was temporarily banned by the behaviorists. However, as psychologists rejected behaviorism, consciousness began to creep back into psychology, both as an explanatory term (albeit undefined) and as the basis for a subject's experimental reports.

In the past 20 years more and more psychologists have begun to try to account for some kinds of consciousness in information processing terms, including Shallice (1972) and Norman and Shallice (1986) whose model we looked at in the last chapter. In their model, consciousness was hypothesized as being involved in intentional control. Other theorists, like Allport (1977), Coltheart (1980) and Marcel (1983) have proposed that consciousness is the outcome of some kind of perceptual integration or stabilization. This early idea fits well with more recent suggestions by Crick and Koch (1990), who advocate a neurophysiological approach to consciousness. Their suggestion is that what consciousness does is make available the results of underlying neuronal computations which have become linked together by synchronous neural activity. As different parts of the brain are specialized for the processing of different information, there is the problem of combining the different sources of information together: for example, the semantics of a word with its perceptual properties. One way of solving the binding problem would be by synchronizing activity over the groups of neurons which are related to the same object. In Chapter 5 we considered one theory put forward by Singer (1994), who proposed a neurobiological explanation of binding, attention, and consciousness.
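The binding-by-synchrony proposal can be illustrated with a toy readout that groups features whose firing phases nearly coincide. The phase values and the tolerance threshold are invented for the sketch; the actual proposal concerns synchronized oscillatory activity across neuronal groups, not a list comparison:

```python
# A toy illustration of binding by synchrony: features whose "neurons"
# fire in phase are read out as belonging to one object; features with
# unrelated phases are read out as separate objects.

def bind_by_synchrony(features, tolerance=0.05):
    """features: dict mapping feature name -> firing phase.
    Returns lists of features grouped by near-coincident phase."""
    groups = []
    for feature, phase in sorted(features.items(), key=lambda kv: kv[1]):
        if groups and abs(phase - groups[-1][-1][1]) < tolerance:
            groups[-1].append((feature, phase))  # in phase: same object
        else:
            groups.append([(feature, phase)])    # out of phase: new object
    return [[f for f, _ in g] for g in groups]

# "red" and "circle" fire together, as do "blue" and "square"
features = {"red": 0.10, "circle": 0.12, "blue": 0.50, "square": 0.52}
objects = bind_by_synchrony(features)
```

The sketch shows how a common timing signal can solve the combination problem without any single place where the bound object is represented.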

There is not the space here to give an exhaustive review of current thinking on consciousness: the interested reader should refer to the reading list at the end of the chapter for more ideas. Here we shall look at a selection of views to give a flavor of the area.

Umilta (1988) discusses the proposition with which we started this chapter, that the conscious/unconscious distinction corresponds to the controlled/automatic distinction, together with four other propositions about the disputed nature of consciousness. Let's examine his arguments. First, he discusses the notion that consciousness is equivalent to our phenomenal experience of what is going on in the limited capacity central processor: the supervisory attentional system (SAS) proposed by Norman and Shallice (1986), or Baddeley's (1986) central executive. Remember, this central processor is said to be in control of attention allocation and contention scheduling of other, unconscious processes. As we have said before, this idea is virtually the same as the homunculus and does not get us very far with respect to clearer understanding.

Second, Umilta discusses the proposition that while controlled processing is under the control of the central processor, automatic processing proceeds without central processor control. However, there is evidence that the central processor does influence automatic processes, in that these can run off as a consequence of consciously activated goal states.

Third, Umilta discusses whether attention and consciousness are synonymous. He says that, although the properties of attention and consciousness appear similar in that they are both said, amongst other things, to be limited capacity, slow, serial processes and active in working memory, they are in fact conceptually different. Crucially, consciousness uses attention to control lower order cognitive processes (Umilta, 1988). We are able to have the intention to attend to something: thus, as intention is the precursor of allocating attention, they cannot be the same thing. Lastly, Umilta considers what self-awareness is. He says that this kind of consciousness gives us a feeling of being in control of our mind.

Johnson-Laird (1983; 1988) points out that this ability for self-awareness is crucial for the formation of intentions. Intentions are based on models of what the world would be like if you did so-and-so. Without some awareness of the possible outcomes, planning actions and making decisions would be severely impaired. Self-awareness also allows us to know what we know; this is called meta-cognition. If I ask you the name of the fifth king of Norway, you will probably know immediately that you do not have this knowledge. On the other hand if I ask you for the fifth king of England, you might think that it is possible that you know the answer and begin a memory search. Naming the fifth day of the week is trivial; you know immediately that you have that knowledge. Knowing what we know depends on having access to the system's capabilities.

In his computational analysis of consciousness, Johnson-Laird (1988) argues that one way of solving the problem of what consciousness might be is to consider what would be necessary to produce a computer that had a high-level model, or awareness, of its own operations. First, he assumes that consciousness is a computational matter that depends on how the brain carries out certain computations, not on its physical constitution. As the physical constitution is irrelevant to the explanation, any being endowed with consciousness might be explained this way. In terms of Marr's (1982) levels of explanation, we are concerned here only with the computational level; that is, with describing what needs to be computed, not the physical hardware that actually does the computing. According to Johnson-Laird, there are four problems which any theory of consciousness must solve. First, there is the problem of awareness: any theory must account for the difference between information that can and information that cannot be made available to awareness (i.e. the difference between the conscious and the unconscious). The second problem is that of control; in Johnson-Laird's conception this is equivalent to willpower, and differs between individuals. The third and fourth problems are ones discussed earlier: self-awareness and intention. Self-awareness, meta-cognition, and intentions all depend on the same computational mechanism. The computational system that Johnson-Laird proposes is like the brain in that it is hierarchical and parallel. At the highest level in the hierarchy is the operating system, or working memory, which is relatively autonomous but does not have complete control over all the other processes. The contents of the operating system (working memory) are conscious, but all other levels in the hierarchy are not. The operating system needs to be conscious so that it can construct a mental model of itself and of how it is performing. Johnson-Laird takes the example of visual perception. The visual system sends data about the locations and identities of objects; the operating system then uses other procedures to construct a model of itself perceiving the world. Now the working memory has a model embedded within a model. This embedding of models could in principle continue ad infinitum: you can be aware that you are aware that you are aware, and so on.
Once a computational system can represent itself and what it knows, it can display self-awareness, or be conscious, make plans, and exhibit intentional behavior. While all this seems promising, we have no idea what a machine which had a high-level model of itself would be like. The operating system still sounds rather like a homunculus, but with a clearer description of what it needs to do. Norman (1986) gives thoughtful consideration to the problem of control in parallel distributed processing (PDP) computer networks.
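Johnson-Laird's embedding of models within models can be mimicked with a recursive data structure. This is a deliberately whimsical illustration of the regress of self-models, not a model of consciousness; the dictionary keys and the perception function are invented:

```python
# A sketch of embedded self-models: an operating system builds a model
# of itself perceiving, then a model of that model, and so on.

def perceive(world):
    """Stand-in for the visual system delivering a percept."""
    return {"percept": world}

def self_model(state, depth):
    """Embed a model of the system holding `state`, `depth` levels deep."""
    for _ in range(depth):
        state = {"I am aware of": state}
    return state

model = self_model(perceive("red circle"), depth=2)
# model is now a percept wrapped in two layers of self-awareness
```

Nothing stops `depth` from growing without bound, which is the "aware that you are aware that you are aware" regress in data-structure form.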

Phaf, Mul, and Wolters (1994) consider what kind of system could create conscious experience out of unconscious activation and suggest that conscious processing should be added to the general capabilities of PDP models. Some connectionist models of attention were described at the end of Chapter 5, where we considered how information concerning different attributes of an object is combined. Phaf et al. propose that, for conscious experience to arise, there must be an explicit construction process which is based on the process responsible for sequential recursive reasoning, and for temporarily joining together active representations in working memory. They suggest that the articulatory loop would be a suitable candidate for this. Working memory is not generally mentioned in PDP models; long-term memory is considered to be the slowly changing weights within the network, and short-term memory the currently decaying activation (Grossberg, 1988).

Phaf et al. (1994) describe an extension to their CALM model which has a sequentially recurrent network (SeRN), or external rehearsal loop, which feeds back single localized activations, or chunks, to unconnected nodes in the input module for CALM, so that the chunks do not interfere with each other. This model simulated serial position effects in short-term memory, including primacy and recency effects. In addition, they claim to show that all the requirements of consciousness can be met within connectionist models, although, of course, you could never determine whether their model was conscious or not! The external rehearsal loop in SeRN is just one module in their model, and activation in other modules must be transformed if it is to enter the loop. Activations which cannot, or do not, reach the recursive loop are unable to be part of the construction process which Phaf et al. (1994) propose is involved in conscious experience. A dissociable module for conscious experience can explain how processing in one part of the system can proceed without conscious awareness. It seems unlikely, however, that the recursive loop can be the sole explanation for conscious experience, especially if it is equated with the articulatory loop component of working memory. When the articulatory loop is fully occupied with, for example, a digit span task, subjects are still able to perform logical reasoning (Hitch & Baddeley, 1976) and are conscious of doing so.
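The serial position curve that the SeRN rehearsal loop reproduces can be imitated with two opposing factors: activation decay, favoring late items, and extra rehearsal, favoring early ones. The formula and parameters below are invented for the sketch and are not Phaf et al.'s model:

```python
# A minimal activation account of serial position effects: decay gives
# recency (late items have decayed least), a rehearsal bonus gives
# primacy (early items have been rehearsed most). Parameters invented.

def serial_position(n_items, decay=0.8, bonus=0.5):
    """Recall strength for each list position, 0 = first presented."""
    return [decay ** (n_items - 1 - pos)   # recency: less decay for late items
            + bonus / (pos + 1)            # primacy: more rehearsal for early items
            for pos in range(n_items)]

curve = serial_position(7)
middle = min(curve)
assert curve[0] > middle and curve[-1] > middle  # the familiar U shape
```

Plotting `curve` gives the bowed shape seen in free recall: both ends of the list are recalled better than the middle.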

So, from the preceding discussion it is evident that there may be many varieties of consciousness. We must, however, be alert to the problem of using consciousness, in any form of its meaning, to explain another phenomenon, unless we can explain the phenomenon of consciousness itself. This pitfall, and the problems associated with determining criteria for different uses of the term "conscious", are eloquently discussed by Allport (1988). We have seen that the trouble with experimenting on normal subjects is that we need some criterion for establishing whether or not the subject was consciously aware of the stimulus that was presented. What could we use for a criterion? Allport (1988) considers three possible options, all of which he finds to be seriously flawed.

First, he considers the criterion of potential action. With much qualification of his arguments, Allport suggests that if a person was "aware" of an event, they should be able, in principle, to respond to or act upon that event. Of course, if the subject chooses not to respond overtly to the event, then by this definition we have no way of knowing whether they were conscious or unconscious of it. Allport discusses other possible behavioral indicators of awareness that might be useful for determining a person's state of awareness. Some of these are involuntary- for example, pupil dilation, an autonomic response. Such involuntary indicators often tell us something different from what the person is telling us verbally; for example, when someone is lying, involuntary indicators may give them away. So, Allport argues, the proposal that awareness can be indexed by voluntary actions immediately runs into another problem. He concludes that there may be no behavioral indicators which can be reliably used to determine awareness.

The next criterion for "conscious awareness" which Allport examines is whether the subject can remember an event. When a person can recall an event, it may be possible to say that the person was aware of that event. However, what if they are unable to recall an event? They may have been aware at the time, but have forgotten by the time you question them. There are further problems with the memory criterion in that we often exhibit absent-mindedness. We perform actions, presumably in response to the environment or internal goals, but do not remember doing them: does this mean we are not aware of these actions or the events that triggered them? How about the confidence criterion, proposed by Merikle and Cheesman (1985) and discussed earlier with respect to SAWCI experiments? The problem here is how much confidence is required for the acknowledgement of an event.

Third, Allport suggests, consciousness might be related to selection for action, such that objects selected for action are likely to form an episodic memory, which can be recovered explicitly. Objects that are not directly selected for action are only in some sense conscious. This idea, however, does not seem to explain how objects which are selected for action, acted upon, and integrated into a coherent routine (for example, lifting the sugar spoon and adding sugar to tea) may be acted on twice, or not at all. We may have been "conscious" in one sense, but do not have a retrievable episodic memory of our action which we can subsequently report.

Overall, it seems that there are a variety of indicators, which suggest that there is no unique form of consciousness but rather a variety of forms which may be indicated in different ways. We shall see this most clearly when we review neurological patients in the next section.

Despite the difficulty of defining consciousness and ascertaining its presence or absence, there are psychologists who believe that psychology cannot ignore phenomenal awareness. Marcel (1983, 1988) believes that consciousness is central to mental life; and, as psychology is the science of mental life, to ignore it would reduce psychology to cybernetics or biology. In their experiments psychologists generally ask people to perform tasks which rely on a report based on a conscious percept: "Press a button as soon as you see a red light"; "Do you hear a high or a low pitched tone?" and so forth. Thus, Marcel argues, the data derived in experiments are based on phenomenal experience. Unless the subject has a conscious experience of the stimulus, they are unwilling to make a response. Here again, we see how important it is that the subject has confidence in their experience if they are to make a voluntary action.

Shallice (1988) agrees that consciousness is important because we rely on the phenomenal experience of our subjects in psychology experiments and because these experiments also depend on the subjects understanding the task instructions. As we treat subjects as if they were responsible conscious agents, we are acknowledging something about what it is to be conscious. He suggests that a useful way of approaching the problem might be to try to make a link between information-processing and experiential accounts of the same events. This was what was attempted by Shallice (1972) and Norman and Shallice (1986). We have already discussed Norman and Shallice's model of willed and automatic behavior in the previous chapter, in which the supervisory attentional system (SAS) can bias schemata in order to allow intentional, willed behavior. Shallice's (1988) version of the flow of information between control systems included two additional modules, the language system and an episodic memory.

However, within this model the problem arises as to what exactly corresponds to consciousness. Shallice identifies five levels within the model that might be candidates: input to the language system; the processing carried out by the SAS; the selection of schemata; the operation of some particular component of the system; or the episodic memory module. Shallice argues that it is not easy to decide which part of the system might correspond to consciousness: first, because a definition of consciousness has not yet been worked out (Shallice lists fourteen possible varieties in his paper); second, because the information-processing models are too loosely specified; and last, because, as information processing involves so many subsystems, it is difficult to know which ones are critical for producing awareness. Shallice suggests that it would be misguided to attempt to find a one-to-one correspondence between any component of the information-processing system and consciousness. Rather, control could be shared between subsystems, and as the control structures would be operating on information from the same schemata, there would be a coherent pattern of control over all other subsystems, which is shared between those control systems that are active. Might not this shared control be the basis for consciousness? We met the idea that coherence between subsystems might be important for conscious experience at the beginning of our discussion of consciousness. As patterns of coherence might differ, so might conscious experience.

Finally, Elizabeth Styles summarizes her discussion as follows: When I quoted William James (1890) at the beginning of the first chapter, I gave only a part of what he said: "Everyone knows what attention is." However, James continued: "It is the taking possession by the mind, in clear and vivid form, of one out of what would seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others."

In this statement, James refers to the selectivity of attention and its apparently limited nature, and he brings consciousness into the explanation. James reflected carefully on attention and consciousness, but as long as we have no agreed definition for either attention or consciousness, or for any of their varieties, we are in danger of trying to explain something we do not properly understand in terms of something else that we do not properly understand.

Despite the lack of agreed definitions and the confusion of terms, progress has been made. Forty years ago the abilities of the human operator were discussed in terms of information theory and a single-channel, limited-capacity, general-purpose processor. The first theories of attention were general theories designed to account for general attentional phenomena. However, early theorists were alert to the problems of definition. If you look in the subject index of Decision and Stress (Broadbent, 1971), there is no entry for attention or consciousness, despite the book being considered by everyone else to be on attention. Even a decade later, Broadbent's (1982) paper was entitled "Task combination and the selective intake of information". Although he put forward a theory of attention, he was himself wary of calling it that.

In the beginning, the prospect of considering psychological theories in terms of whole brain states was not on the horizon; the metaphor of mind was a communication channel. Forty years ago psychologists realized that far more information impinged on the senses than could be overtly responded to, and this is still the case. The original solution was to allow only a small amount of task-relevant information to gain access to higher levels (Broadbent, 1958). Developments over the following 30 years made it increasingly evident that not only were the physical properties of task-relevant objects concurrently available within the processing system, but so too were their higher-level representations of conceptual and semantic properties. Further, task-irrelevant information showed evidence of high levels of processing. As early as 1967, Fitts and Posner pointed out that the concept of channel capacity as employed in information theory should not be confused with concepts regarding man's capacities and limitations: "Man does have a limited capacity for many tasks.... However, there is not a single human channel capacity for all tasks and codes." Fitts and Posner were not yet talking about brain states, but as we have seen throughout this book, as time has passed it has become increasingly evident that the brain codes information using many different special-purpose processing systems. This specialization has been demonstrated experimentally in laboratory experiments with neurologically normal subjects, by neurophysiological methods, and by the analysis of the breakdown of behavior following brain damage. While each specialized processing system might have its own limitations, there is no evidence for a general overall limit on the processing capacity of the human brain. There may be limits within each specialized subsystem, and there is good evidence from studies on the psychological refractory period for a limit at the level of response retrieval. This may be functional, in that it might maintain coherence of behavior.

Once it was agreed that the apparent limit on performance might not be a result of a limit on overall processing capacity, the problem of "attention" could be redefined. The problem then became: given the amount of information concurrently available in separate codes in different parts of the brain, how is it all combined and controlled? How can one set of stimuli control one voluntary action in one circumstance and a different action in another circumstance?

Today, our improved understanding of the underlying neurophysiology of the brain, together with the powerful computer metaphor of mind- connectionism- allows a vision of information processing and decision making that was previously impossible. It is beginning to look as if our subjective feelings of attending, or of being conscious in any of the senses of these words, must be the outcome of a multiplicity of brain processes which cooperate and/or compete until a resolution is reached. The examples we have met are Crick and Koch (1990) and Singer (1994). The brain of a patient with damage to a particular process may not achieve an integrated brain state (Farah, 1994), rendering the patient unconscious or inattentive to information which, because it affects behavior, must have been encoded. Therefore, although consciousness or attention might have the subjective property of being limited, the brain's computational capability is vast. Only a very small proportion of its computations, or their outcomes, are available for us to know about. Just because we know about only a little of what is going on below the level of conscious awareness, this does not mean that nothing else is being processed. It is here that our subjective capacity is most limited.

From what I have just outlined, it is evident that contributions to our understanding of attention and consciousness come from quite diverse disciplines. There is the neurophysiology of the brain; there are computer models, mathematical theories, data from experimental subjects and from neuropsychological patients. While evidence from all these sources must ultimately be important, and should constrain psychological theory, it is difficult to incorporate all the evidence from all the sources into a single theory. On a small scale, there is a good number of theories which account well for a part of the evidence. On the larger scale, the choice of agreed terms and levels of explanation is difficult and probably impossible. At the very least, it must be clear that a single term for attention or consciousness is almost certainly inappropriate. It is possible that there are as many varieties of "attention" and "consciousness" as there are experiments to investigate them. If different tasks recruit different subsets of specialized brain areas, then every task will impose a different demand on the neural substrate. Furthermore, if this is the case, we must rule out the possibility of formulating a unified theory of either attention or consciousness. I am sorry if this disappoints the reader, but to try to provide a unified theory of attention and/or consciousness would, at present, be misleading.

In fact, the whole previous discussion brings us back to the general and fundamental question: What is consciousness? How does it arise, and how deliberate is it? If consciousness is related to a property or process of individualization against a conglomerate of other possible entities, then the remaining question concerns the freedom of consciousness after it has arisen from the fundamentally spontaneous and unconscious processes. If we now consider the fact that consciousness, at the next level of free action, also takes into account patterns of behavior already set since the primordial times of our species, then there is little left of free will. Our consciousness seems like the tip of an iceberg, whose main part is submerged in the ocean of the unconscious.

The problem of free will

This is a problem connected with philosophy since ancient times. Are humans deliberate agents, masters of their own fate? Or are they just automata, reacting to external signs in an instinctive way? The answer is likely trickier than it appears, because many actions which we believe are conscious stem, at least at an initial stage, from a world within our brain which is basically unconscious.

The philosophy of free will

Leaving aside some scientific or political aspects of free will, in relation to determinism/indeterminism or totalitarianism/libertarianism respectively, here we will consider its philosophical aspects, according to the Stanford Encyclopedia of Philosophy, and make some remarks at the end.

Firstly, the Stanford entry gives a definition of free will: On a minimalist account, free will is the ability to select a course of action as a means of fulfilling some desire. David Hume, for example, defines liberty as "a power of acting or of not acting, according to the determination of the will." And we find in Jonathan Edwards (1754) a similar account of free willings as those which proceed from one's own desires...

A natural suggestion, then, is to modify the minimalist thesis by taking account of (what may be) distinctively human capacities and self-conception. And indeed, philosophers since Plato have commonly distinguished the animal and rational parts of our nature, with the latter implying a great deal more psychological complexity. Our rational nature includes our ability to judge some ends as good or worth pursuing and to value them even though satisfying them may result in considerable unpleasantness for ourselves. We might say that we act with free will when we act upon our considered judgments/valuings about what is good for us, whether or not our doing so conflicts with an animal desire. But this would seem unduly restrictive, since we clearly hold many people responsible for actions proceeding from animal desires that conflict with their own assessment of what would be best in the circumstances. More plausible is the suggestion that one acts with free will when one's deliberation is sensitive to one's own judgments concerning what is best in the circumstances, whether or not one acts upon such a judgment.

So what we have here is the human versus the animal nature, the world of logic versus the world of instincts. By this distinction, however, free will could be limited to the suppression of instincts. But suppression can arise from logic too, which may deny creative instincts together with destructive ones. Instincts are not always bad, while logic is not always good. So free will seems to arise from a combination of both logical and instinctive processes, suitable under a certain occasion. As far as the pre-assumption of good is concerned, we have already seen in Boole's theory of logic that good is treated like a fundamental and objective quality, which one finds within oneself, and which serves as a guide for free action: There are two general worries about theories of free will that principally rely on the capacity to deliberate about possible actions in the light of one's conception of the good. First, there are agents who deliberately choose to act as they do but who are motivated to do so by a compulsive, controlling sort of desire. Such agents are not willing freely. Secondly, we can imagine a person's psychology being externally manipulated by another agent (via neurophysiological implant, say), such that the agent is caused to deliberate and come to desire strongly a particular action which he previously was not disposed to choose. The deliberative process could be perfectly normal, reflective, and rational, but seemingly not freely made. The agent's freedom seems undermined or at least greatly diminished by such psychological tampering (Mele 1995).

Here again we go back to the ethics vs. necessity dilemma. We may do something ethically wrong because we need to, and we can do something good although we don't have the necessity. When may we say that we are justified and that we act freely? Stanford says that, "On Frankfurt's analysis, I act freely when the desire on which I act is one that I desire to be effective. This second-order desire is one with which I identify: it reflects my true self." Still, the problem remains how someone can control his justified desires in order to act freely: John Martin Fischer (1994) distinguishes two sorts of control over one's actions: guidance and regulative. A person exerts guidance control over his own actions insofar as they proceed from a weakly reasons-responsive (deliberative) mechanism. This obtains just in case there is some possible scenario where the agent is presented with a sufficient reason to do otherwise, and the mechanism that led to the actual choice is operative, and it issues in a different choice, one appropriate to the imagined reason. In Fischer and Ravizza (1998), the account is elaborated and refined. They require, more strongly, that the mechanism be the person's own mechanism (ruling out external manipulation) and that it be moderately responsive to reasons: one that is regularly receptive to reasons, some of which are moral reasons, and at least weakly reactive to reasons. Receptivity is evinced through an understandable pattern of reasons recognition- beliefs of the agent about what would constitute a sufficient reason for undertaking various actions.

It seems that free will has more to do with regulation, with resistance to instinctive impulses, than with free action. So a better term corresponding to what we are used to calling free will would be freedom of confrontation. But to what degree does this regulatory principle deprive us of true action? Do we finally have, to some extent, some sort of free will, or is our free will restricted to confronting events and processes that we cannot change? A recent trend is to suppose that agent-causation accounts capture, as well as possible, our prereflective idea of responsible, free action. But the failure of philosophers to work the account out in a fully satisfactory and intelligible form reveals that the very idea of free will (and so of responsibility) is incoherent (Strawson 1986) or at least inconsistent with a world very much like our own (Pereboom 2001). Smilansky (2000) takes a more complicated position, on which there are two levels on which we may assess freedom, compatibilist and ultimate. On the ultimate level of evaluation, free will is indeed incoherent.

The will has also recently become a target of empirical study in neuroscience and cognitive psychology. Benjamin Libet (2002) conducted experiments designed to determine the timing of conscious willings or decisions to act in relation to brain activity associated with the physical initiation of behavior. Interpretation of the results is highly controversial. Libet himself concludes that the studies provide strong evidence that actions are already underway shortly before the agent wills to act. As a result, we do not consciously initiate our actions, though he suggests that we might nonetheless retain the ability to veto actions that are initiated by unconscious psychological structures. Wegner (2002) amasses a range of studies (including those of Libet) to argue that the notion that human actions are ever initiated by their own conscious willings is simply a deeply entrenched illusion, and proceeds to offer a hypothesis concerning the reason this illusion is generated within our cognitive systems. Mele (2009) and O'Connor (2009b) argue that the data adduced by Libet, Wegner, and others wholly fail to support their revisionary conclusions.

If free will is an illusion, an impulse that stems from unconscious demands, becoming more familiar with the unconscious would certainly help us to recognize, unveil and analyze our deepest instincts and desires which secretly motivate our behavior and actions. This is the role of psychology. However, if there exist some deeply rooted, fundamental and primordial properties within our unconscious mind, some sort of archetypes as they have been called, one could argue that the instincts, or, under another interpretation, the ethics which drive human behavior could manifest within ourselves the aspect of God: A large portion of Western philosophical writing on free will was and is written within an overarching theological framework, according to which God is the ultimate source and sustainer of all else. Some of these thinkers draw the conclusion that God must be a sufficient, wholly determining cause for everything that happens; all suppose that every creaturely act necessarily depends on the explanatorily prior, cooperative activity of God. It is also presumed that human beings are free and responsible (on pain of attributing evil in the world to God alone, and so impugning His perfect goodness). Hence, those who believe that God is omni-determining typically are compatibilists with respect to freedom and (in this case) theological determinism. Edwards (1754) is a good example. But those who suppose that God's sustaining activity (and special activity of conferring grace) is only a necessary condition on the outcome of human free choices need to tell a more subtle story, on which an omnipotent God's cooperative activity can be (explanatorily) prior to a human choice and yet the outcome of that choice be settled only by the choice itself.

Another issue concerns the impact on human freedom of knowledge of God, the ultimate Good. Many philosophers, especially the medieval Aristotelians, were drawn to the idea that human beings cannot but will that which they take to be an unqualified good. Hence, in the afterlife, when humans see God face to face, they will inevitably be drawn to Him. Murray (1993, 2002) argues that a good God would choose to make His existence and character less than certain for human beings, for the sake of their freedom. (He will do so, the argument goes, at least for a period of time in which human beings participate in their own character formation.) If it is a good for human beings that they freely choose to respond in love to God and to act in obedience to His will, then God must maintain an epistemic distance from them lest they be overwhelmed by His goodness and respond out of necessity, rather than freedom.

Finally, there is the question of the freedom of God Himself. Perfect goodness is an essential, not acquired, attribute of God. God cannot lie or be in any way immoral in His dealings with His creatures. Unless we take the minority position on which this is a trivial claim, since whatever God does definitionally counts as good, this appears to be a significant, inner constraint on God's freedom. Did we not contemplate immediately above that human freedom would be curtailed by our having an unmistakable awareness of what is in fact the Good? And yet is it not passing strange to suppose that God should be less than perfectly free?

One suggested solution to this puzzle begins by reconsidering the relationship of two strands in (much) thinking about freedom of will: being able to do otherwise and being the ultimate source of one's will. Contemporary discussions of free will often emphasize the importance of being able to do otherwise. Yet it is plausible (Kane 1996) that the core metaphysical feature of freedom is being the ultimate source, or originator, of one's choices, and that being able to do otherwise is closely connected to this feature. For human beings or any created persons who owe their existence to factors outside themselves, the only way their acts of will could find their ultimate origin in themselves is for such acts not to be determined by their character and circumstances. For if all my willings were wholly determined, then if we were to trace my causal history back far enough, we would ultimately arrive at external factors that gave rise to me, with my particular genetic dispositions. My motives at the time would not be the ultimate source of my willings, only the most proximate ones. Only by there being less than deterministic connections between external influences and choices, then, is it possible for me to be an ultimate source of my activity, concerning which I may truly say, "the buck stops here."

As is generally the case, things are different on this point in the case of God. Even if God's character absolutely precludes His performing certain actions in certain contexts, this will not imply that some external factor is in any way a partial origin of His willings and refrainings from willing. Indeed, this would not be so even if He were determined by character to will everything which He wills. For God's nature owes its existence to nothing. So God would be the sole and ultimate source of His will even if He couldn't will otherwise.

Well, then, might God have willed otherwise in any respect? The majority view in the history of philosophical theology is that He indeed could have. He might have chosen not to create anything at all. And given that He did create, He might have created any number of alternatives to what we observe. But there have been noteworthy thinkers who argued the contrary position, along with others who clearly felt the pull of the contrary position even while resisting it. The most famous such thinker is Leibniz (1710), who argued that God, being both perfectly good and perfectly powerful, cannot fail to will the best possible world. Leibniz insisted that this is consistent with saying that God is able to will otherwise, although his defense of this last claim is notoriously difficult to make out satisfactorily. Many read Leibniz, malgré lui, as one whose basic commitments imply that God could not have willed other than He does in any respect. One might challenge Leibniz's reasoning on this point by questioning the assumption that there is a uniquely best possible Creation (an option noted by Adams 1987, though he challenges instead Leibniz's conclusion based on it). One way this could be is if there is no well-ordering of worlds: some worlds are sufficiently different in kind that they are incommensurate with each other (neither is better than the other, nor are they equal). Another way this could be is if there is no upper limit on the goodness of worlds: for every possible world God might have created, there are others (infinitely many, in fact) which are better. If such is the case, one might argue, it is reasonable for God to choose arbitrarily which world to create from among those worlds exceeding some threshold value of overall goodness.

However, William Rowe (2004) has countered that the thesis that there is no upper limit on the goodness of worlds has a very different consequence: it shows that there could not be a morally perfect Creator! For suppose our world has an on-balance moral value of n and that God chose to create it despite being aware of possibilities having values higher than n that He was able to create. It seems we can now imagine a morally better Creator: one having the same options who chooses to create a better world. Finally, Norman Kretzmann (1997, 220-25) has argued, in the context of Aquinas's theological system, that there is strong pressure to say that God must have created something or other, though it may well have been open to Him to create any of a number of contingent orders. The reason is that there is no plausible account of how an absolutely perfect God might have a resistible motivation- one consideration among other, competing considerations- for creating something rather than nothing. (It obviously cannot have to do with any sort of utility, for example.) The best general understanding of God's being motivated to create at all- one which in places Aquinas himself comes very close to endorsing- is to see it as reflecting the fact that God's very being, which is goodness, necessarily diffuses itself. Perfect goodness will naturally communicate itself outwardly; God, who is perfect goodness, will naturally create, generating a dependent reality that imperfectly reflects that goodness. (Wainwright (1996) is a careful discussion of a somewhat similar line of thought in Jonathan Edwards. See also Rowe 2004.)

Here we just return to the (logical-)ethical aspect of the nature of our thought. But the problem of free will, although assumed rationally, is not fundamentally a problem of reason. It has more to do with the recognition, expression and confrontation of impulses, which at an initial stage come from a completely unconscious level. If free will were dominant, then it would be able not only to deal with needs and desires but also to create them. But since this is not the case, we had better limit the power of free will to the context of impulse regulation. And this, until someday we may become really free spiritual beings!

As far as the problem of absolute good and, consequently, of the existence of God is concerned, I would like to say this: If free will exists, it is thanks to imperfection and relativity! Because, simply put, you cannot act otherwise if you are faced with an absolute reality. In order for an alternative condition to exist, we must have the freedom of choice in the first place. Of course this should not necessarily be seen as an argument against God. But if He wants to be able to choose freely, He should give up something of His own omnipotence. And this is why God could really be a wise, great and living Being, instead of an absolute, cold and non-existent Thing!

The mind-body problem

According to Wikipedia, the mind-body problem in philosophy examines the relationship between mind and matter, and in particular the relationship between consciousness and the brain.

The problem was famously addressed by René Descartes in the 17th century, resulting in Cartesian dualism, as well as by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian traditions. A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one kind of stuff, and that mind and matter are both aspects of it.

The rejection of the mind-body dichotomy is found in French Structuralism, and is a position that generally characterized post-war French philosophy. The absence of an empirically identifiable meeting point between the non-physical mind and its physical extension has proven problematic to dualism, and many modern philosophers of mind maintain that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.

Philosophers David Robb and John Heil introduce mental causation in terms of the mind-body problem of interaction: Mind-body interaction has a central place in our pretheoretic conception of agency... Indeed, mental causation often figures explicitly in formulations of the mind-body problem... Some philosophers... insist that the very notion of psychological explanation turns on the intelligibility of mental causation. If your mind and its states, such as your beliefs and desires, were causally isolated from your bodily behavior, then what goes on in your mind could not explain what you do... If psychological explanation goes, so do the closely related notions of agency and moral responsibility... Clearly, a good deal rides on a satisfactory solution to the problem of mental causation [and] there is more than one way in which puzzles about the mind's causal relevance to behavior (and to the physical world more generally) can arise.

In neuroscience much has been learned about correlations between brain activity and subjective, conscious experiences. Many suggest that neuroscience will ultimately explain consciousness: ...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells... However, this view has been criticized because consciousness has yet to be shown to be a process and the hard problem of relating consciousness directly to brain activity remains elusive: Cognitive science today gets increasingly interested in the embodiment of human perception, thinking, and action. Abstract information processing models are no longer accepted as satisfactory accounts of the human mind. Interest has shifted to interactions between the material human body and its surroundings and to the way in which such interactions shape the mind. Proponents of this approach have expressed the hope that it will ultimately dissolve the Cartesian divide between the immaterial mind and the material existence of human beings (Damasio, 1994; Gallagher, 2005). A topic that seems particularly promising for providing a bridge across the mind-body cleavage is the study of bodily actions, which are neither reflexive reactions to external stimuli nor indications of mental states, which have only arbitrary relationships to the motor features of the action (e.g., pressing a button for making a choice response). The shape, timing, and effects of such actions are inseparable from their meaning. One might say that they are loaded with mental content, which cannot be appreciated other than by studying their material features. Imitation, communicative gesturing, and tool use are examples of these kinds of actions

We could say that the mind-body distinction is an artificial one. The mind cannot create the body, in the same sense that we cannot make, let's say, a cup of coffee by just thinking about it. On the other hand, the body cannot produce intelligence unless there are some properties within matter which lead to intelligence. So we see that information is the key word, while the mind-body dichotomy has mainly to do with different levels of information. Ice, liquid water and steam are just different levels, or phases, of the same substance, while what we call spirit is the awareness, that which is informed, of the processes and the state of this substance.

Aristotle put this in a unifying and clarifying way: For Aristotle mind is a faculty of the soul. Regarding the soul, he said: It is not necessary to ask whether soul and body are one, just as it is not necessary to ask whether the wax and its shape are one, nor generally whether the matter of each thing and that of which it is the matter are one. For even if one and being are spoken of in several ways, what is properly so spoken of is the actuality.

In sum, Aristotle saw the relation between soul and body as uncomplicated, in the same way that it is uncomplicated that a cubical shape is a property of a toy building block. The soul is a property exhibited by the body, one among many. Moreover, Aristotle proposed that when the body perishes, so does the soul, just as the shape of a building block disappears with destruction of the block. [21]

Mind-Body interactions
The effort of attention may be considered a good example of free will. When we concentrate on what we observe or on what we think about, this is a purposeful process despite the fact that attention could have been involuntarily driven by a circumstantial stimulus. Furthermore, the ability to concentrate on our own thoughts reveals a high degree of self-awareness, well beyond the world of instincts and of mechanical responses.

Here, we will regard the problem of free will in relation to a study by Jeffrey M. Schwartz, Henry P. Stapp and Mario Beauregard, Quantum physics in neuroscience and psychology: a neurophysical model of mind-brain interaction. As this study explains, there are altogether three quantum processes involved:

First, there is the purely mechanical process, called process 2. This process, as it applies to the (material) brain, involves important dynamic units that are represented by complex patterns of brain activity that are facilitated (i.e. strengthened) by use, and are such that each unit tends to be activated as a whole by the activation of several of its parts. The activation of various of these complex patterns by cross referencing- that is, by activation of several of its parts- coupled to feedback loops that strengthen or weaken the activities of appropriate processing centers, appears to account for the essential features of the mechanical part of the dynamics in a way that is not significantly different from what a classic model can support, except for the existence of a host of parallel possibilities that, according to the classic concepts, cannot exist simultaneously. The second process, von Neumann's process 1, is needed in order to pick out from a chaotic continuum of overlapping parallel possibilities some particular discrete possibility and its complement. The third process is nature's choice between Yes and No. Nature's choice conforms to a statistical rule, but the agent's choice is, within contemporary quantum theory, a free choice that can be and is consistently treated as an input variable of the empirical protocol.
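The interplay of these three processes can be sketched in a minimal two-level toy model. Everything below is illustrative only: the rotation rate, time step and the particular Yes/No question are hypothetical choices, not parameters taken from the study.

```python
import math
import random

# State of a two-level "brain" system: real amplitudes (a, b) over the
# basis {Yes template, No template}.

def process_2(state, omega, dt):
    """Mechanical Schrodinger-like evolution (process 2): a continuous
    rotation that spreads the state over both possibilities."""
    a, b = state
    theta = omega * dt
    return (a * math.cos(theta) - b * math.sin(theta),
            a * math.sin(theta) + b * math.cos(theta))

def process_1(state):
    """The probing action (process 1): pose the discrete question
    'Yes or No?', splitting the state into a possibility and its
    complement, with Born-rule weights."""
    a, b = state
    return {"Yes": a * a, "No": b * b}

def natures_choice(probs, rng):
    """Nature's answer to the posed question, conforming to the
    statistical rule."""
    return "Yes" if rng.random() < probs["Yes"] else "No"

rng = random.Random(0)
state = (1.0, 0.0)                                   # start in the Yes template
state = process_2(state, omega=math.pi / 2, dt=0.5)  # mechanical drift
probs = process_1(state)                             # the agent's free probing choice
answer = natures_choice(probs, rng)                  # nature's Yes/No
state = (1.0, 0.0) if answer == "Yes" else (0.0, 1.0)  # state reduction
print(round(probs["Yes"], 3), answer)                # -> 0.5 No
```

The point of the sketch is the division of labor: process 2 is deterministic, the timing and content of the process 1 question are inputs not fixed by any law in the model, and only nature's answer is statistical.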

Process 1 has itself two modes. The first is passive, and can produce temporally isolated events. The second is active and involves mental effort. Active process 1 intervention has, according to the quantum model described here, a distinctive form. It consists of a sequence of intentional purposeful actions, the rapidity of which can be increased with effort. Such an increase in attention density, defined as an increase in the number of observations per unit time, can bring into play the QZE (quantum Zeno effect), which tends to hold in place both those aspects of the state of the brain that are fixed by the sequence of intentional actions and also the felt intentional focus of these actions. Attention density is not controlled by any physical rule of orthodox contemporary quantum theory, but is taken, both in orthodox theory and in our model, to be subject to subjective volitional control. This application in this way of the basic principles of physics to neuroscience constitutes our model of the mind-brain connection.

After these clarifications, we may proceed with this study:

The introduction into neuroscience and neuropsychology of the extensive use of functional brain imaging technology has revealed, at the empirical level, an important causal role of directed attention in cerebral functioning... It is becoming increasingly clear, however, that there is at least one type of information processing and manipulation that does not readily lend itself to explanations that assume that all final causes are subsumed within brain, or more generally, central nervous system mechanisms. The cases in question are those in which the conscious act of willfully altering the mode by which experiential information is processed itself changes, in systematic ways, the cerebral mechanisms used. There is a growing recognition of the theoretical importance of applying experimental paradigms that use directed mental effort to produce systematic and predictable changes in brain function (e.g. Beauregard et al. 2001; Ochsner et al. 2002). These willfully induced brain changes are generally accomplished through training in, and the applied use of, cognitive reattribution and the attentional re-contextualization of conscious experience. Furthermore, an accelerating number of studies in the neuroimaging literature significantly support the thesis that, again, with appropriate training and effort, people can systematically alter neural circuitry associated with a variety of mental and physical states that are frankly pathological (Schwartz et al. 1996; Schwartz 1998; Musso et al. 1999; Paquette et al. 2003). A recent review of this and the related neurological literature has coined the term self-directed neuroplasticity to serve as a general description of the principle that focused training and effort can systematically alter cerebral function in a predictable and potentially therapeutic manner (Schwartz & Begley 2002).

It is interesting that the effects of free will and of the effort of attention can be experimentally observed, up to a certain degree. Examples from the same study follow: There is already a wealth of data arguing against this view. For instance, work in the 1990s on patients with obsessive compulsive disorder demonstrated significant changes in caudate nucleus metabolism and the functional relationships of the orbitofrontal cortex- striatum- thalamus circuitry in patients who responded to a psychological treatment using cognitive reframing and attentional refocusing as key aspects of the therapeutic intervention. More recently, work by Beauregard and colleagues (Paquette et al. 2003) has demonstrated systematic changes in the dorsolateral prefrontal cortex and parahippocampal gyrus after cognitive-behavioral therapy for phobia of spiders, with brain changes significantly related to both objective measurements and subjective reports of fear and aversion. There are now numerous reports on the effects of self-directed regulation of emotional response, via cognitive reframing and attentional recontextualization mechanisms, on cerebral function.

This sort of therapeutic effect of thought on the body was already considered in ancient Buddhist practice: Bare Attention is the clear and single-minded awareness of what actually happens to us and in us at the successive moments of perception. It is called Bare because it attends just to the bare facts of a perception as presented either through the five physical senses or through the mind without reacting to them (Nyanaponika 1973).

The aforementioned view belongs to the German Siegmund Feniger, who became a Buddhist and settled in Sri Lanka, changing his name to Nyanaponika Thera.

Modern quantum mechanics gave a more active role to the observer than classical physics. Von Neumann in fact went on to develop an interpretation of quantum mechanics which includes the observer (and thus his consciousness too) as another parameter in the description of a system: The key philosophical and scientific achievement of the founders of quantum theory was to forge a rationally coherent and practicable linkage between the two kinds of description that jointly comprise the foundation of science. Descriptions of the first kind are accounts of psychologically experienced empirical findings, expressed in a language that allows us to communicate to our colleagues what we have done and what we have learned. Descriptions of the second kind are specifications of physical properties, which are expressed by assigning mathematical properties to space-time points and formulating laws that determine how these properties evolve over the course of time. Bohr, Heisenberg, Pauli and the other inventors of quantum theory discovered a useful way to connect these two kinds of description by causal laws. Their seminal discovery was extended by John von Neumann from the domain of atomic science to the realm of neuroscience and, in particular, to the problem of understanding and describing the causal connections between the minds and the brains of human beings.

The participation of the agent continues to be important even when the only features of the physically described world being observed are large-scale properties of measuring devices. The sensitivity of the behavior of the devices to the behavior of some tiny atomic-scale particles propagates first to the devices and then to the observers in such a way that the choice made by an observer about what sort of knowledge to seek can profoundly affect the knowledge that can ever be received either by that observer himself or by any other observer with whom he can communicate. Thus the choice made by the observer about how he or she will act at a macroscopic level has, at the practical level, a profound effect on the physical system being acted upon.

Thus the property of focused attention may prove the existence of free will: The second key point is this: the agent's choices are free choices, in the specific sense specified below... This freedom of choice stems from the fact that in the original Copenhagen formulation of quantum theory the human experimenter is considered to stand outside the system to which the quantum laws are applied. Those quantum laws are the only precise laws of nature recognized by that theory. Thus, according to the Copenhagen philosophy, there are no presently known laws that govern the choices made by the agent/experimenter/observer about how the observed system is to be probed. This choice is thus, in this very specific sense, a free choice.

Then the study connects quantum theory with neuroscience by noting the relationship between the spiritual world of the mind and the material world of reality: To study quantum effects in brains within an orthodox (i.e. Copenhagen or von Neumann) quantum theory one must use the von Neumann formulation. This is because Copenhagen quantum theory is formulated in a way that leaves out the quantum dynamics of the human observer's body and brain. But von Neumann quantum theory takes the physical system S upon which the crucial process 1 acts to be precisely the brain of the agent, or some part of it. Thus process 1 describes here an interaction between a person's stream of consciousness, described in mentalistic terms, and an activity in their brain, described in physical terms.

A key question is the quantitative magnitude of quantum effects in the brain. They must be large in order for deviations from classic physics to play any significant role. To examine this quantitative question we consider the quantum dynamics of nerve terminals. Nerve terminals are essential connecting links between nerve cells. The general way they work is reasonably well understood. When an action potential travelling along a nerve fiber reaches a nerve terminal, a host of ion channels open. Calcium ions enter through these channels into the interior of the terminal. These ions migrate from the channel exits to release sites on vesicles containing neurotransmitter molecules. A triggering effect of the calcium ions causes these contents to be dumped into the synaptic cleft that separates this terminal from a neighboring neuron, and these neurotransmitter molecules influence the tendencies of that neighboring neuron to fire.

At their narrowest points, calcium ion channels are less than a nanometer in diameter (Cataldi et al. 2002). This extreme smallness of the opening in the calcium ion channels has profound quantum mechanical implications. The narrowness of the channel restricts the lateral spatial dimension. Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region where the ion will be absorbed as a whole, or not absorbed at all, on some small triggering site.
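A rough back-of-the-envelope version of this argument can be checked numerically. The channel diameter comes from the text; the travel distance to the release site and the temperature below are illustrative assumptions, and only the order of magnitude matters.

```python
import math

hbar = 1.054571817e-34       # reduced Planck constant, J*s
k_B = 1.380649e-23           # Boltzmann constant, J/K
m_Ca = 40 * 1.66053907e-27   # mass of a calcium ion (~40 u), kg

dx = 1e-9         # lateral confinement: channel diameter ~1 nm (from the text)
d_travel = 50e-9  # assumed distance from channel exit to release site
T = 310.0         # assumed body temperature, K

# Heisenberg uncertainty: confining the ion laterally to dx forces a
# minimum spread in its lateral velocity.
dv = hbar / (2 * m_Ca * dx)             # ~0.8 m/s

# The thermal speed sets the time of flight to the release site.
v_thermal = math.sqrt(k_B * T / m_Ca)   # ~250 m/s
t_travel = d_travel / v_thermal

# While in flight, the wave packet fans out by roughly dv * t.
spread = dv * t_travel                  # ~0.16 nm

print(f"min lateral velocity spread: {dv:.2f} m/s")
print(f"wave-packet spread at release site: {spread * 1e9:.2f} nm")
```

On these assumed numbers the packet spreads to a sizable fraction of a nanometer, comparable to the size of a triggering site, so whether the ion is absorbed there becomes a genuinely quantum whole-or-not-at-all alternative rather than a classical trajectory.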

This spreading of the ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released. Consequently, the quantum state of the brain has a part in which the neurotransmitter is released and a part in which the neurotransmitter is not released. This quantum splitting occurs at every one of the trillions of nerve terminals. This means that the quantum state of the brain splits into a vast host of classically conceived possibilities, one for each possible combination of the release-or-no-release options at each of the nerve terminals. In fact, because of uncertainties on timings and locations, what is generated by the physical processes in the brain will be not a single discrete set of non-overlapping physical possibilities but rather a huge smear of classically conceived possibilities. Once the physical state of the brain has evolved into this huge smear of possibilities one must appeal to the quantum rules, and in particular to the effects of process 1, in order to connect the physically described world to the streams of consciousness of the observer/participants.

This focus on the motions of calcium ions in nerve terminals is not meant to suggest that this particular effect is the only place where quantum effects enter into the brain process, or that the quantum process 1 acts locally at these sites. What is needed here is only the existence of some large quantum effect. The focus upon these calcium ions stems from the facts that (i) in this case the various sizes (dimensions) needed to estimate the magnitude of the quantum effects are empirically known, and (ii) that the release of neurotransmitter into synaptic clefts is known to play a significant role in brain dynamics.

The brain matter is warm and wet and is continually interacting intensely with its environment. It might be thought that the strong quantum decoherence effects associated with these conditions would wash out all quantum effects, beyond localized chemical processes that can be conceived to be imbedded in an essentially classic world. Strong decoherence effects are certainly present, but they are automatically taken into account in the von Neumann formulation employed here. These effects merely convert the state S of the brain into what is called a statistical mixture of nearly classically describable states, each of which develops in time (in the absence of process 1 events), in an almost classically describable way.

Accordingly, an explanation is given about the origin of these processes of free will: It has been repeatedly emphasized here that the choices by which process 1 actions actually occur are free choices in the sense that they are not specified by the currently known laws of physics. On the other hand, a person's intentions are surely related in some way to their historical past. This means that the laws of contemporary orthodox quantum theory, although restrictive and important, do not provide a complete picture. In spite of this, orthodox quantum theory, while making no claim to ontological completeness, is able to achieve a certain kind of pragmatic completeness. It does so by treating the process 1 free choices as the input variables of experimental protocols, rather than mechanically determined consequences of brain action. In quantum physics the free choices made by human subjects are regarded as subjectively controllable input variables. Bohr emphasized that the mathematical structure of the quantum mechanical formalism offers the appropriate latitude for these free choices. But the need for this strategic move goes deeper than the mere fact that contemporary quantum theory fails to specify how these choices are made. For if in the von Neumann formulation one does seek to determine the cause of the free choice within the representation of the physical brain of the chooser, one finds that one is systematically blocked from determining the cause of the choice by the Heisenberg uncertainty principle, which asserts that the locations and velocities of, say, the calcium ions, are simultaneously unknowable to the precision needed to determine what the choice will be. Thus, one is not only faced with merely a practical unknowability of the causal origin of the free choices, but with an unknowability in principle that stems from the uncertainty principle itself, which lies at the base of quantum mechanics. There is thus a deep root in quantum theory for the idea that the origin of the free choices does not lie in the physical description alone and also for the consequent policy of treating these free choices as empirical inputs that are selected by agents and enter into the causal structure via process 1.

Moreover, the role of the effort of attention is analyzed, in relation to the so-called QZE (quantum Zeno effect): To minimize the input of consciousness, and in order to achieve testability, we propose to allow mental effort to do nothing but control attention density, which is the rapidity of the process 1 events. This allows effort to have only a very limited kind of influence on brain activities, which are largely controlled by physical properties of the brain itself. The notion that only the attention density is controlled by conscious effort arose from an investigation into what sort of conscious control over process 1 action was sufficient to accommodate the most blatant empirical facts. Imposing this strong restriction on the allowed effects of consciousness produces a theory with correspondingly strong predictive power. In this model all significant effects of consciousness upon brain activity arise exclusively from a well-known and well-verified strictly quantum effect known as the quantum Zeno effect (QZE).

This effect is named for the Greek philosopher Zeno of Elea, and was brought into prominence in 1977 by the physicists Misra & Sudarshan (1977). It gives a name to the fact that repeated and closely spaced observational acts can effectively hold the Yes feedback in place for an extended time-interval that depends upon the rapidity at which the process 1 actions are happening. According to our model, this rapidity is controlled by the amount of effort being applied... This holding effect can override very strong mechanical forces arising from process 2. The Yes states are assumed to be conditioned by training and learning to contain the template for action which if held in place for an extended period will tend to produce the intended experiential feedback. Thus, the model allows intentional mental efforts to tend to bring intended experiences into being. Systems that have the capacity to exploit this feature of natural law, as it is represented in quantum theory, would apparently enjoy a tremendous survival advantage over systems that do not or cannot exploit it.
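The holding effect itself is easy to reproduce numerically. For a two-level state that the mechanical evolution would rotate from Yes to No, N equally spaced projective observations during the interval give a survival probability of [cos(wT/2N)]^(2N), which approaches 1 as the observation rate grows. The rotation frequency and interval below are illustrative choices, not values from the study.

```python
import math

def survival(omega, T, N):
    """Probability that N equally spaced projective observations all
    answer 'Yes', for a two-level state that starts in 'Yes' and
    rotates toward 'No' at angular frequency omega."""
    return math.cos(omega * T / (2 * N)) ** (2 * N)

# Chosen so that, undisturbed, the state would flip completely to 'No'.
omega, T = math.pi, 1.0

for N in (1, 10, 100, 1000):
    print(N, round(survival(omega, T, N), 3))
```

Faster observation (higher attention density, in the study's terms) pins the state ever more firmly in place; this is the mechanism the model assigns to effortful attention overriding the mechanical process 2 forces.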

Now, from the field of quantum physics we pass to psychology, in relation to these interactions between thought and brain, or, more generally, between thought and body: Does this quantum-physics-based concept of the origin of the causal efficacy of will accord with the findings of psychology? Consider some passages from Psychology: the briefer course, written by William James. In the final section of the chapter on attention, James (1892) writes:

I have spoken as if our attention were wholly determined by neural conditions. I believe that the array of things we can attend to is so determined. No object can catch our attention except by the neural machinery. But the amount of the attention which an object receives after it has caught our attention is another question. It often takes effort to keep the mind upon it. We feel that we can make more or less of the effort as we choose. If this feeling be not deceptive, if our effort be a spiritual force, and an indeterminate one, then of course it contributes coequally with the cerebral conditions to the result. Though it introduces no new idea, it will deepen and prolong the stay in consciousness of innumerable ideas which else would fade more quickly away.

In the chapter on will, in the section entitled Volitional effort is effort of attention, James (1892) writes:

Thus we find that we reach the heart of our inquiry into volition when we ask by what process is it that the thought of any given action comes to prevail stably in the mind.

And, later: The essential achievement of the will, in short, when it is most voluntary, is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will.

Still later, James says: Consent to the idea's undivided presence, this is effort's sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away.

This description of the effect of will on the course of mental-cerebral processes is remarkably in line with what had been proposed independently from purely theoretical considerations of the quantum physics of this process. The connections specified by James are explained on the basis of the same dynamic principles that had been introduced by physicists to explain atomic phenomena. Thus the whole range of science, from atomic physics to mind-brain dynamics, has the possibility of being brought together into a single rationally coherent theory of an evolving cosmos that is constituted not of matter but of actions by agents. In this conceptualization of nature, agents could naturally evolve in accordance with the principles of natural selection, owing to the fact that their efforts have physical consequences. The outline of a possible rationally coherent understanding of the connection between mind and matter begins to emerge...

A huge amount of empirical work on attention has been done since the nineteenth century writings of William James. Much of it is summarized and analyzed in Harold Pashler's (1998) book The psychology of attention. Pashler organizes his discussion by separating perceptual processing from post-perceptual processing. The former type covers processing that, first of all, identifies such basic physical properties of stimuli as location, color, loudness and pitch and, secondly, identifies stimuli in terms of categories of meaning. The post-perceptual process covers the tasks of producing motor actions and cognitive action beyond mere categorical identification. Pashler emphasizes that the empirical findings of attention studies argue for a distinction between perceptual attentional limitations and more central limitations involved in thought and the planning of action (p. 33). The existence of these two different processes with different characteristics is a principal theme of Pashler's book.

A striking difference that emerges from the analysis of the many sophisticated experiments is that the perceptual processes proceed essentially in parallel, whereas the post-perceptual processes of planning and executing actions form a single queue... This is in line with the distinction between passive and active processes. The former are essentially a passive stream of essentially isolated process 1 events, whereas the active processes involve effort-induced rapid sequences of process 1 events that can saturate a given capacity. This idea of a limited capacity for serial processing of effort-based inputs is the main conclusion of Pashler's book. It is in accord with the quantum based model, supplemented by the condition that there is a limit to how many effortful process 1 events per second a person can produce during a particular stage of their development. Examination of Pashler's book shows that this quantum model accommodates naturally all of the complex structural features of the empirical data that he describes. Of key importance is his chapter 6, in which he emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This kind of bottleneck is what the quantum-physics-based theory predicts: the bottleneck is precisely the single linear sequence of mind-brain quantum events that von Neumann quantum theory describes.

The queuing effect is illustrated in a nineteenth century result described by Pashler: mental exertion reduces the amount of physical force that a person can apply. He notes that This puzzling phenomenon remains unexplained. However, it is an automatic consequence of the physics-based theory: creating physical force by muscle contraction requires an effort that opposes the physical tendencies generated by the Schrödinger equation (process 2). This opposing tendency is produced by the QZE and is roughly proportional to the number of bits per second of central processing capacity that is devoted to the task. So, if part of this processing capacity is directed to another task, then the applied force will diminish.

The important point here is that there is in principle, in the quantum model, an essential dynamic difference between the unconscious processing done by the Schrödinger evolution, which generates by a local process an expanding collection of classically conceivable experiential possibilities, and the process associated with the sequence of conscious events that constitute the willful selection of action. The former are not limited by the queuing effect, because process 2 simply develops all of the possibilities in parallel. Nor is the stream of essentially isolated passive process 1 events thus limited. It is the closely packed active process 1 events that can, in the von Neumann formulation, be limited by the queuing effect.

The previous conclusions can be further expanded into the field of neuropsychology: Quantum physics works better in neuropsychology than its classic approximation precisely because it inserts knowable choices made by human agents into the dynamics in place of unknowable-in-principle microscopic variables. To illustrate this point we apply the quantum approach to the experiment of Ochsner et al. (2002). Reduced to its essence, this experiment consists first of a training phase in which the subject is taught how to distinguish, and respond differently to, two instructions given while viewing emotionally disturbing visual images: attend (meaning passively be aware of, but not try to alter, any feelings elicited by) or reappraise (meaning actively reinterpret the content so that it no longer elicits a negative response). Second, the subjects perform these mental actions during brain data acquisition. The visual stimuli, when passively attended to, activate limbic brain areas, and when actively reappraised, activate prefrontal cerebral regions.

Quantum theory was designed to deal with cases in which the conscious action of an agent, to perform some particular probing action, enters into the dynamics in an essential way. Within the context of the experiment by Ochsner et al. (2002), quantum theory provides, via the process 1 mechanism, an explicit means whereby the successful effort to rethink feelings actually causes, by catching and actively holding in place, the prefrontal activations critical to the experimentally observed deactivation of the amygdala and orbitofrontal cortex. The resulting intention-induced modulation of limbic mechanisms that putatively generate the frightening aversive feelings associated with passively attending to the target stimuli is the key factor necessary for the achievement of the emotional self-regulation seen in the active cognitive reappraisal condition. Thus, within the quantum framework, the causal relationship between the mental work of mindfully reappraising and the observed brain changes presumed to be necessary for emotional self-regulation is dynamically accounted for. Furthermore, and crucially, it is accounted for in ways that fully allow for communicating to others the means used by living human experimental subjects to attain the desired outcome. The classic materialist approach to these data, as detailed earlier in this article, by no means allows for such effective communication. Analogous quantum mechanical reasoning can of course be used mutatis mutandis to explain the data of Beauregard et al. (2001) and related studies of self-directed neuroplasticity (see Schwartz & Begley 2002).

This study is based on the so-called Copenhagen interpretation of quantum mechanics (the orthodox interpretation). However, the study also mentions alternative approaches, such as those of Roger Penrose, Hugh Everett and David Bohm: All three of the alternative approaches accept von Neumann's move of treating the entire physical world quantum mechanically. In particular, the bodies and brains of the agents are treated as conglomerations of such things as quantum mechanically described electrons, ions and photons.

Penrose (1994) accepts the need for consciousness-related process 1 events, and wants to explain when they occur. He proposes an explanation that is tied to another quantum mystery, that of quantum gravity.

Suppose the quantum state of a brain develops two components corresponding to the 'Yes' and 'No' answers to some query. Penrose proposes a rule, based on the gravitational interaction between these two parts, that specifies approximately how long it will be before a collapse occurs to one branch or to the other. In this way the question of when the answer 'Yes' or 'No' occurs is given a physical explanation. Penrose and his collaborator Hameroff (1996) calculate estimates of this typical time-interval on the basis of some detailed assumptions about the brain. The result is a time of the order of one-tenth of a second. They argue that the rough agreement of this number with time-intervals normally associated with consciousness lends strong support to their theory. The Penrose-Hameroff model requires that the quantum state of the brain has a property called macroscopic quantum coherence, which needs to be maintained for around a tenth of a second. But, according to calculations made by Max Tegmark (2000), this property ought not to hold for more than about 10⁻¹³ s. Hameroff and co-workers (Hagan et al. 2002) have advanced reasons why this number should actually be of the order of a tenth of a second. But 12 orders of magnitude is a very big difference to explain away, and serious doubts remain about whether the Penrose-Hameroff theory is technically viable.
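The size of the disagreement between the two estimates can be checked directly. A minimal sketch, using only the two time scales quoted in the text, computes the gap in orders of magnitude:

```python
import math

# Penrose-Hameroff predicted collapse time, roughly matching the
# time scale of conscious moments (from the text: ~0.1 s)
tau_collapse = 0.1  # seconds

# Tegmark's (2000) estimate of how long macroscopic quantum
# coherence could survive in the warm, wet brain
tau_decoherence = 1e-13  # seconds

# The gap the Penrose-Hameroff model must explain away
gap = math.log10(tau_collapse / tau_decoherence)
print(f"discrepancy: {gap:.0f} orders of magnitude")  # prints: discrepancy: 12 orders of magnitude
```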

Everett (1957) proposed another way to deal with the problem of how the quantum formulae are tied to our conscious experiences. It is called the many-worlds or many-minds approach. The basic idea is that nature makes no choices between the 'Yes' and 'No' possibilities: both options actually do occur. But, owing to certain features of quantum mechanics, the two streams of consciousness in which these two alternative answers appear are dynamically independent: neither one has any effect on the other. Hence the two incompatible streams exist in parallel epistemological worlds, although within one single ontological or physical quantum world.

This many-minds approach is plausible within the framework provided by quantum mathematics. It evades the need for any real choices between the 'Yes' and 'No' answers to the question posed by the process 1 action. However, von Neumann never even mentions any real choice between 'Yes' and 'No', and the founders of quantum theory likewise focus attention on the crucial choice of which question shall be posed. It is this choice, which is in the hands of the agent, that the present paper has focused upon. The subsequent choice between 'Yes' and 'No' is normally deemed to be made by nature. But it is enough that the latter choice merely seems to be made in accordance with the quantum probability rules. The real problem with the many-minds approach is that its proponents have not yet adequately explained how one can evade the process 1 choices. This difficulty is discussed in detail in Stapp (2002). David Bohm's pilot-wave model (Bohm 1952) seems at first to be another way of evading the problem of how to tie the formulae of quantum mechanics to human experiences. Yet in David Bohm's book with Basil Hiley (Bohm & Hiley 1993) the last two chapters go far beyond the reasonably well-defined pilot-wave model and attempt to deal with the problem dealt with in the works of Stapp (1990) and of Gell-Mann & Hartle (1989). This leads Bohm into a discussion of his concept of the implicate order, which is far less mathematically well-defined than his pilot-wave model.

Bohm saw a need to deal with consciousness and wrote a detailed paper on it (Bohm 1986, 1990). His proposal goes far beyond the simple well-defined pilot-wave model. It involves an infinite tower of pilot waves, each controlling the level below. The engaging simplicity of the original pilot-wave model is lost in this infinite tower.

The sum of all this is that the structure of quantum theory indicates the need for a nonmechanistic consciousness-related process, but that the approaches to quantum theory that go beyond the pragmatic Copenhagen-von Neumann approach have serious problems that have yet to be resolved. We, in this paper, have chosen to stay on the safer ground of orthodox pragmatic quantum theory and to explore what can be said within that framework. [22]

I would just like to add the following thought here. Even if the effort of attention, by which our thoughts concentrate on an object or subject, is a very important property, even a quantification parameter, of free will, there is another aspect of intelligence which may prove even more powerful or flexible than the power of concentration. I am talking about the capability of setting our minds free from what we pay attention to. Because many times the focus of our attention is not the result of strong determination, but instead a mere attachment of consciousness to something that caused an involuntary response, or that consciousness perhaps regarded as peculiar enough. So the ability to purposefully withdraw attention from an object by which consciousness was more or less unwillingly trapped may prove to be the greatest aspect of free will.

Genes and memes

Many obscure notions about the origin of human thought and behavior were clarified with the discovery of genes. Genes are biological pools of information containing not only personal traits but also traits of a whole species. But genes cannot explain acquired behavior, culture, and so on. This is where Richard Dawkins introduced the notion of memes. Memes are patterns which behave like genes but are not biological. They form complexes and they can be transmitted from person to person through speech, mimicry, written language, mass media, the internet, and so on. Memes, we could say, are the successors of archetypes. But memes, unlike archetypes, are not primordial, unchanging quantities existing since the beginning of the universe or since the dawn of our species; instead they can change, they can be born and they may disappear. Dawkins, who introduced the word meme in his book The Selfish Gene, points out the insufficiency of genes to explain the whole spectrum of human behavior: As an enthusiastic Darwinian, I have been dissatisfied with explanations that my fellow enthusiasts have offered for human behavior. They have tried to look for biological advantages in various attributes of human civilization. For instance, tribal religion has been seen as a mechanism for solidifying group identity, valuable for a pack-hunting species whose individuals rely on cooperation to catch large and fast prey. Frequently the evolutionary preconception in terms of which such theories are framed is implicitly group-selectionist, but it is possible to rephrase the theories in terms of orthodox gene selection. Man may well have spent large portions of the last several million years living in small kin groups. Kin selection and selection in favor of reciprocal altruism may have acted on human genes to produce many of our basic psychological attributes and tendencies. These ideas are plausible as far as they go, but I find that
they do not begin to square up to the formidable challenge of explaining culture, cultural evolution, and the immense differences between human cultures around the world… I think we have got to start again and go right back to first principles. The argument I shall advance, surprising as it may seem coming from the author of the earlier chapters, is that, for an understanding of the evolution of modern man, we must begin by throwing out the gene as the sole basis of our ideas on evolution. I am an enthusiastic Darwinian, but I think Darwinism is too big a theory to be confined to the narrow context of the gene. The gene will enter my thesis as an analogy, nothing more. Memes can be seen as carriers of cultural information, in a sense analogous to that in which genes transmit genetic information. Memes reproduce differently from genes, using written language, music, the internet, radio and television, and so on. In turn, they are subject to the laws of natural selection: Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. If a scientist hears, or reads about, a good idea, he passes it on to his colleagues and students. He mentions it in his articles and his lectures. If the idea catches on, it can be said to propagate itself, spreading from brain to brain. As my colleague N. K. Humphrey neatly summed up an earlier draft of this chapter: ...memes should be regarded as living structures, not just metaphorically but technically. When you plant a fertile meme in my mind you literally parasitize my brain, turning it into a vehicle for the meme's propagation in just the way that a virus may parasitize the genetic mechanism of a host cell.
And this isn't just a way of talking: the meme for, say, belief in life after death is actually realized physically, millions of times over, as a structure in the nervous systems of individual men the world over.

Dawkins mentions another interesting example of a meme: Most of what is unusual about man can be summed up in one word: culture… Cultural transmission is analogous to genetic transmission in that, although basically conservative, it can
give rise to a form of evolution. Geoffrey Chaucer could not hold a conversation with a modern Englishman, even though they are linked to each other by an unbroken chain of some twenty generations of Englishmen, each of whom could speak to his immediate neighbors in the chain as a son speaks to his father. Language seems to evolve by non-genetic means, and at a rate which is orders of magnitude faster than genetic evolution.

Cultural transmission is not unique to man. The best non-human example that I know has recently been described by P. Jenkins in the song of a bird called the saddleback, which lives on islands off New Zealand. On the island where he worked there was a total repertoire of about nine distinct songs. Any given male sang only one or a few of these songs. The males could be classified into dialect groups. For example, one group of eight males with neighboring territories sang a particular song called the CC song. Other dialect groups sang different songs. Sometimes the members of a dialect group shared more than one distinct song. By comparing the songs of fathers and sons, Jenkins showed that song patterns were not inherited genetically. Each young male was likely to adopt songs from his territorial neighbors by imitation, in an analogous way to human language. During most of the time Jenkins was there, there was a fixed number of songs on the island, a kind of song pool from which each young male drew his own small repertoire. But occasionally Jenkins was privileged to witness the invention of a new song, which occurred by a mistake in the imitation of an old one. He writes: "New song forms have been shown to arise variously by change of pitch of a note, repetition of a note, the elision of notes and the combination of parts of other existing songs… The appearance of the new form was an abrupt event and the product was quite stable over a period of years. Further, in a number of cases the variant was transmitted accurately in its new form to younger recruits so that a recognizably coherent group of like singers developed." Jenkins refers to the origins of new songs as "cultural mutations".
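Jenkins's observations describe a simple copying process: young males imitate songs from a shared pool, and rare copying errors create new, stable song forms. A minimal toy simulation of that process might look like this (all parameters are invented for illustration, not taken from Jenkins's data):

```python
import random

def simulate_song_pool(generations=50, males_per_generation=30,
                       mutation_rate=0.01, seed=1):
    """Toy model of cultural transmission in saddleback song: each
    young male copies a song from the current pool; a rare copying
    mistake introduces a new song form (a 'cultural mutation').
    Songs that nobody sings drop out of the pool."""
    rng = random.Random(seed)
    pool = [f"song-{i}" for i in range(9)]  # roughly nine distinct songs
    next_id = 9
    for _ in range(generations):
        sung = []
        for _ in range(males_per_generation):
            song = rng.choice(pool)           # imitate a neighbor
            if rng.random() < mutation_rate:  # imperfect copy
                song = f"song-{next_id}"
                next_id += 1
            sung.append(song)
        pool = sorted(set(sung))  # the next generation's song pool
    return pool

pool = simulate_song_pool()
print(len(pool), "song forms after 50 generations")
```

Run repeatedly with different seeds, the model shows both sides of the passage: most of the time the pool is stable, but occasionally a "mutant" song appears and persists.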

The properties of memes look much like those of genes. This is justified because memes are the offspring of genes at a non-biological level. Apart from their ability to survive and to multiply, they also exhibit all the relevant mechanisms of natural selection. Dawkins says:

Fundamentally, the reason why it is good policy for us to try to explain biological phenomena in terms of gene advantage is that genes are replicators. As soon as the primeval soup provided conditions in which molecules could make copies of themselves, the replicators themselves took over. For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over, and start a new kind of evolution of their own. Once this new evolution begins, it will in no necessary sense be subservient to the old. The old gene-selected evolution, by making brains, provided the 'soup' in which the first memes arose. Once self-copying memes had arisen, their own, much faster, kind of evolution took off…

Imitation, in the broad sense, is how memes can replicate. But just as not all genes that can replicate do so successfully, so some memes are more successful in the meme-pool than others. This is the analogue of natural selection. I have mentioned particular examples of qualities that make for high survival value among memes. But in general they must be the same as those discussed for the replicators of Chapter 2: longevity, fecundity, and copying-fidelity. The longevity of any one copy of a meme is probably relatively unimportant, as it is for any one copy of a gene… As in the case of genes, fecundity is much more important than longevity of particular copies… Some memes, like some genes, achieve brilliant short-term success in spreading rapidly, but do not last long in the meme pool… This brings me to the third general quality of successful replicators: copying-fidelity… At first sight it looks as if memes are not high-fidelity replicators at all… The memes are being passed on to you in altered form. This looks quite unlike the particulate, all-or-none quality of gene transmission. It looks as though meme transmission is subject to continuous mutation, and also to blending… Among the examples that Dawkins uses, he does not forget to mention the meme of God: Consider the idea of God. We do not know how it arose in the meme pool. Probably it originated many times by independent mutation. In any case, it is very old indeed. How does it replicate itself? By the spoken and written word, aided by great music and great art. Why does it
have such high survival value? Remember that survival value here does not mean value for a gene in a gene pool, but value for a meme in a meme pool. The question really means: What is it about the idea of a god that gives it its stability and penetrance in the cultural environment? The survival value of the god meme in the meme pool results from its great psychological appeal. It provides a superficially plausible answer to deep and troubling questions about existence. It suggests that injustices in this world may be rectified in the next. The everlasting arms hold out a cushion against our own inadequacies which, like a doctor's placebo, is none the less effective for being imaginary. These are some of the reasons why the idea of God is copied so readily by successive generations of individual brains. God exists, if only in the form of a meme with high survival value, or infective power, in the environment provided by human culture.
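Dawkins's three replicator qualities (longevity, fecundity, copying-fidelity) can be illustrated with a toy selection model. The meme types and their numbers below are pure invention, chosen only to show how a high-fecundity, reasonably faithful meme comes to dominate a meme pool of fixed size:

```python
import random

# Invented meme types: fecundity = copies attempted per generation,
# fidelity = probability a copy is faithful enough to survive.
MEMES = {
    "catchy-tune":    {"fecundity": 3, "fidelity": 0.90},
    "dense-treatise": {"fecundity": 1, "fidelity": 0.99},
}

def evolve(pool, generations=10, capacity=1000, seed=0):
    """Each meme makes imperfect copies; the pool is then culled
    back to a fixed carrying capacity by random sampling."""
    rng = random.Random(seed)
    for _ in range(generations):
        offspring = []
        for meme in pool:
            for _ in range(MEMES[meme]["fecundity"]):
                if rng.random() < MEMES[meme]["fidelity"]:
                    offspring.append(meme)  # faithful copy survives
        pool = rng.sample(offspring, min(capacity, len(offspring)))
    return pool

final = evolve(["catchy-tune"] * 10 + ["dense-treatise"] * 10)
share = final.count("catchy-tune") / len(final)
print(f"catchy-tune share after 10 generations: {share:.2f}")
```

The slightly less faithful but far more fecund meme takes over the pool, matching Dawkins's remark that fecundity matters much more than the longevity of particular copies.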

Dawkins also refers to groups of memes which form complexes like those formed by genes (such as chromosomes, etc.). He particularly mentions the notions of hell-fire and of faith, as memes that accompany and apparently reinforce the power of the meme of God. With respect to faith, he says: Another member of the religious meme complex is called faith. It means blind trust, in the absence of evidence, even in the teeth of evidence. The story of Doubting Thomas is told, not so that we shall admire Thomas, but so that we can admire the other apostles in comparison. Thomas demanded evidence. Nothing is more lethal for certain kinds of meme than a tendency to look for evidence… The meme for blind faith secures its own perpetuation by the simple unconscious expedient of discouraging rational inquiry. Blind faith can justify anything… Memes for blind faith have their own ruthless ways of propagating themselves.

The question of how far memes can diversify beyond their initial function, compared with genes, is an interesting one. Furthermore, it is very doubtful whether memes can exist without the corresponding genes which created them. However, if we consider the unlimited capabilities of artificial intelligence, memes may prove not only real but also threatening. Against this negative perspective, Dawkins seems to be reassuring:

I have been a bit negative about memes, but they have their cheerful side as well. When we die there are two things we can leave behind us: genes and memes. We were built as gene machines, created to pass on our genes. But that aspect of us will be forgotten in three generations. Your child, even your grandchild, may bear a resemblance to you, perhaps in facial features, in a talent for music, in the color of her hair. But as each generation passes, the contribution of your genes is halved. It does not take long to reach negligible proportions. Our genes may be immortal but the collection of genes that is any one of us is bound to crumble away… We should not seek immortality in reproduction. But if you contribute to the world's culture, if you have a good idea, compose a tune, invent a sparking plug, write a poem, it may live on, intact, long after your genes have dissolved in the common pool. Socrates may or may not have a gene or two alive in the world today, as G. C. Williams has remarked, but who cares? The meme-complexes of Socrates, Leonardo, Copernicus and Marconi are still going strong. [23]

The altruistic meme

Here I remembered Plato and his world of Ideas. According to Dawkins, all these Platonic forms would be memes, together with Jung's archetypes. I don't know if such a linguistic trespass, or offence, is worth it, using a ridiculous new word like meme in place of words as powerful as forms or archetypes. However, it is a fact that memes possess a new, modern potential, as well as all the modern tools that help them understand themselves better and spread in the world. In addition, memes have the same lust for immortality as human genes have. However, memes need humans as much as we need our ideas in order to go on living. In other words, just as a virus will try not to kill its host for the sake of its own survival, so memes need humans as their own hosts.
This, fundamentally, creates a kind of balance between the egoistic and altruistic tendencies of memes themselves, as well as of the humans who carry these memes. As Dawkins puts it: It is possible that yet another unique quality of man is a capacity for genuine, disinterested, true altruism. I hope so, but I am not going to argue the case one way or the other, nor to speculate over its possible memic evolution. The point I am making now is that, even if we look on the dark side and assume that individual man is fundamentally selfish, our conscious foresight, our
capacity to simulate the future in imagination, could save us from the worst selfish excesses of the blind replicators. We have at least the mental equipment to foster our long-term selfish interests rather than merely our short-term selfish interests. We can see the long-term benefits of participating in a conspiracy of doves, and we can sit down together to discuss ways of making the conspiracy work. We have the power to defy the selfish genes of our birth and, if necessary, the selfish memes of our indoctrination. We can even discuss ways of deliberately cultivating and nurturing pure, disinterested altruism, something that has no place in nature, something that has never existed before in the whole history of the world. We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators.

The aspect of altruism with respect to the theory of memes is explained by Susan Blackmore, in her book The Meme Machine: Altruism is defined as behavior that benefits another creature at the expense of the one carrying it out. In other words, altruism means doing something that costs time, effort, or resources, for the sake of someone else. This might mean providing food for another animal, giving a warning signal to protect others while putting yourself at risk, or fighting an enemy to save another animal from harm. Examples abound in nature, from the social insects whose lives revolve around the good of their community, to rabbits that thump warnings of approaching footsteps, and vampire bats that share meals of blood. Humans are uniquely cooperative and spend a great deal of their time doing things that benefit others as well as themselves: what psychologists sometimes refer to as prosocial behaviour. They have moral sensibilities and a strong sense of right and wrong. They are altruists.

Altruism is a problem for many social psychologists and economists who assume that humans rationally pursue their own interests. It is also a problem for Darwinism, although it was not always seen that way. The problem varies according to the level at which you think selection takes place, or, putting it another way, what you think evolution is for. If you believe, as many early Darwinians did, that evolution ultimately proceeds for the good of the individual, then why should any individual behave in such a way as to incur serious costs to itself while benefiting
someone else? All individuals ought to be out for themselves alone, and nature ought truly to be red in tooth and claw. Yet clearly it is not. Many animals live social and cooperative lives, parents lavish devotion on their offspring, and many mammals spend hours of every day grooming their friends and neighbors. Why do they do it?...

The answer that has so successfully transformed the problem of altruism is selfish gene theory. If you put the replicator at the heart of evolution and see selection as acting to the advantage of some genes rather than others, then many forms of altruism make perfect sense. Take parental care, for example. Your own children inherit half of your genes. Your children are the only direct way your genes can be carried on into future generations, and so parental care is obviously needed; but this same principle can be applied to many other kinds of altruism. William Hamilton's paper, The genetical evolution of social behaviour (1964), became a classic. He put numbers to Haldane's suggestion and developed what has come to be known as the theory of kin selection. He imagined a gene G that tends to cause some kind of altruistic behavior, and explained that "Despite the principle of survival of the fittest the ultimate criterion which determines whether G will spread is not whether the behavior is to the benefit of the behaver but whether it is to the benefit of the gene G" (Hamilton 1963). This means that altruistic behavior can spread in a population if animals are altruistic towards their own kin.
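Hamilton's "putting numbers to" the suggestion is usually summarized as Hamilton's rule: a gene for altruism can spread when r·b > c, where r is the genetic relatedness between actor and recipient, b the benefit to the recipient, and c the cost to the altruist. A one-function sketch (the benefit and cost figures are arbitrary illustrations):

```python
def altruism_can_spread(r, b, c):
    """Hamilton's rule: altruism is favored when the benefit to the
    recipient, discounted by relatedness, exceeds the actor's cost."""
    return r * b > c

# Full siblings share, on average, half their genes (r = 1/2);
# first cousins share one-eighth (r = 1/8).
print(altruism_can_spread(r=0.5, b=3.0, c=1.0))    # sibling: prints True
print(altruism_can_spread(r=0.125, b=3.0, c=1.0))  # cousin: prints False
```

The same act, at the same cost, is favored towards a sibling but not towards a cousin, which is exactly the sense in which the criterion is the benefit to the gene G rather than to the behaver.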

Another success for biology has been reciprocal altruism. Darwin (1871) speculated that if a man aided his fellow-men he might expect to get aid in return. A hundred years later Robert Trivers (1971) turned this speculation into the theory of reciprocal altruism, explaining how selection might favor animals who reciprocated friendship, for example, by sharing surplus resources in good times in the hope of help in bad times Gratitude, friendship, sympathy, trust, indignation, and feelings of guilt and revenge have all been attributed to reciprocal altruism, as has moralistic aggression, or our tendency to get upset over unfairness. If we have evolved to share resources with other humans, but to make sure our genes benefit, then our feelings are the way evolution has equipped us to do it. On this theory not only moral sentiments, but ideas of justice and legal systems can be traced to the evolution of reciprocal altruism (Matt Ridley 1996; Wagstaff 1998; Wright 1994).

It seems that altruism cannot be fully understood as a mere advantage of the selfish gene. Instead, Blackmore proposes that its source is in fact the altruistic meme: Until now there have been only two major choices in accounting for altruism. The first is to say that all apparent altruism actually (even if remotely) comes back to advantage to the genes. On this view there is no true altruism at all; or rather, what looks like true altruism is just the mistakes that natural selection has not managed to eradicate. That is the sociobiological explanation. The second has been to try to rescue true altruism and propose some kind of extra something in human beings: a true morality, an independent moral conscience, a spiritual essence or a religious nature that somehow overcomes selfishness and the dictates of our genes; a view that finds little favor with most scientists who want to understand how human behavior works without invoking magic. Neither choice appears satisfactory to me.

Memetics provides a third possibility. With a second replicator acting on human minds and brains the possibilities are expanded. We should expect to find behavior that is in the interests of the memes, as well as behavior serving the genes. Magic is no longer required to see why humans should differ from all other animals, nor why they should show far more cooperative and altruistic behavior

The essential memetic point is this: if people are altruistic they become popular; because they are popular they are copied; and because they are copied their memes spread more widely than the memes of not-so-altruistic people, including the altruistic memes themselves. This provides a mechanism for spreading altruistic behavior. I am going to speculate about the origins of such behavior in our evolutionary past… People are nice to each other to get kindness in return, and their emotions are designed appropriately; that is, people want to be generous to those who might repay them, and they want to be liked. Now, add the capacity to imitate, and the strategy copy-the-altruist, and two consequences follow. First, kind and generous behaviors will spread by imitation. Second, behaviors that look like kind and generous ones, or are prevalent in kind and generous people, will also spread by imitation…

This strategy is, at first, of benefit to the genes, but because it involves the second replicator the genes cannot keep it under control. Copy-the-altruist starts as a strategy for biological gain, and ends up as a strategy for spreading memes, including (but not restricted to) memes for altruism itself…

Imagine two early hunters who go out with bows and arrows, leather quivers, and skin clothing, and both come back with meat. One, let us call him Kev, shares his meat widely with surrounding people. He does this because kin selection and reciprocal altruism have given him genes for at least some altruistic behavior. Meanwhile Gav keeps his meat to himself and his own family, because his genes have made him somewhat less generous. Which behaviors are more likely to get copied? Kev's, of course. He sees more people, these people like him, and they tend to copy him. So his style of quiver, his kind of clothing and his ways of behaving are more likely to be passed on than Gav's, including the altruistic behavior itself. In this way Kev is the early equivalent of the meme-fountain, and he spreads memes because of his altruistic behavior.

Note that there are two different things going on here. First, the altruistic behavior serves to spread copies of itself. Second, it spreads copies of other memes from the altruistic person. This second possibility could produce odd results. As with biological evolution, accidents of history can have profound effects. So, if it just happened that in one particular group of our ancestors the generous people happened to have made specially natty blue-feathered arrows, then blue-feathered arrows would spread more widely than brown-feathered ones, and so on. Whatever the kind of memes we are talking about, they may be driven to increase by the altruism of their bearers…

I have already argued that the best imitators, or the possessors of the best memes, will have a survival advantage, as will the people who mate with them. So the strategy mate-with-the-best-imitator spreads. In practice, this means mating with those people who have the most fashionable (and not just the most useful) memes, and we can now see that altruism is one of the factors that determine which memes come to be fashionable.

So, which is more powerful, genes or memes? It would seem that we are only kind for the sake of our selfish genes. But sometimes we show altruistic behavior without expecting anything in return. So there are unselfish deeds and actions which, as soon as they appear, tend to prevail and perpetuate themselves, because those who behave this way become popular. In conclusion, memes seem to be able to prevail, no matter how selfish our genes may be: Any act of meme-driven altruism potentially lowers the actor's genetic fitness. In other words, the arena of human altruism can be seen as a competition between memes and genes. Kev's behavior will make him friends but it may reduce his chances of survival, or the chances of his children's survival, by reducing their share of the meat. His genes only care about his generosity if it serves in the long run to pass them on, and they have equipped him with feelings and behaviors that generally serve their interests. But his memes do not care about his genes at all. If they can get copied they will. And they will, because people copy people they like. Thus we can imagine a human society in which meme-driven altruistic behavior could spread, even if it put a heavy burden on individuals. In other words, once people start to copy the altruists, the genes will not necessarily be able to stop them.
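The copy-the-altruist mechanism described above can be sketched as a toy imitation model: altruists are more visible and better liked, so each generation people preferentially copy them, and the altruistic behavior itself spreads. The visibility weight and population numbers below are invented for illustration:

```python
import random

def copy_the_altruist(pop_size=100, initial_altruists=10,
                      generations=30, visibility=3, seed=2):
    """Each generation, everyone picks a role model to imitate;
    altruists are `visibility` times as likely to be chosen.
    Returns the final share of altruists in the population."""
    rng = random.Random(seed)
    population = ([True] * initial_altruists +
                  [False] * (pop_size - initial_altruists))
    for _ in range(generations):
        weights = [visibility if altruist else 1 for altruist in population]
        population = rng.choices(population, weights=weights, k=pop_size)
    return sum(population) / len(population)

print(f"altruist share after 30 generations: {copy_the_altruist():.2f}")
```

Starting from a small minority, the altruists come to dominate purely through biased imitation, with no genetic payoff anywhere in the model, which is the memetic point of the passage.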

But can altruism become so exaggerated as to put someone in danger? Styles mentions the practice of potlatch. Perhaps this is an example where memes take control of our behavior. Suicidal tendencies in general can be of such a nature. This shows how powerful memes may be, as well as that altruistic behavior may lead to personal damage or general hysteria. But the key point is that memes tend to cooperate with each other. It is under this friendly spirit that pure altruism may evolve and flourish: Could memetic altruism get completely out of hand- and stretch the leash to breaking point? Sometimes people do give more than they can really afford. They vie with each other to be the most generous, or give the most ostentatious of gifts. As Matt Ridley (1996) points out, gifts can become bargains, bribes, and weapons. Most extraordinary is the practice of potlatch. The term comes from the Chinook language and potlatch is best known from American Indian groups, but it also occurs in New Guinea and other places. A potlatch is a special event in which opposing groups try to impress their rivals by giving away, or destroying, extravagant gifts. They may give each other canoes and animal skins, beads and copper plates, blankets and food. They may even burn their most valuable possessions, kill their slaves, and pour precious oil onto a huge fire.

Note that this wasteful tradition is not like ordinary reciprocal altruism. In most forms of reciprocal altruism, both parties benefit from cooperating, but in a potlatch everyone loses (at least in purely material terms). Note also that potlatch depends upon imitation. Such a tradition could only spread by one person copying it from another until it becomes the norm for a whole society. It is imitation that makes such peculiar behavior possible, and once the genes have given us imitation they cannot take it back. We could see the potlatch behavior as like a parasite that may, or may not, kill its host, while most of our altruistic behavior is symbiotic or even beneficial.

Once again we can see that it is our capacity to imitate that makes humans so different from other species. There is some observational evidence that human infants show a tendency to share (as well as to be selfish of course) at a young age, while infants of other primate species do not, suggesting an innate basis. Certainly, humans have a far more cooperative society than any other species, apart from the social insects such as ants and bees that operate by kin selection. This theory of memetic altruism could provide the explanation. It might also help explain why the relationship between memes and genes is apparently so successful, even though the two replicators are so often at odds. Perhaps memes are more like a symbiont and less like a parasite precisely because they encourage people to cooperate with each other.

If there were many other species with memes, comparisons would be easy; but there are not. Many birds imitate each other's songs and so perhaps we should expect these birds to show more altruism to each other than closely related non-imitators. Dolphins are among the very few other species capable of imitation, and they are renowned for stories of heroic rescues. Dolphins have been reported to push a drowning human up to the surface of the sea, and even to push someone onto land- a very strange thing for another species to do. But this is only anecdote; much research would be needed to find out whether the idea is valid or not. Other research to find out whether memetic driving of altruism has ever occurred would be difficult, as is all research on behavior in our distant past.

The prospects for research are much brighter when it comes to modern humans and their behavior, and I want therefore to leave speculation about Kev and Gav and return to their modern counterparts. We shall see that being kind, generous, and friendly plays an important role in spreading memes in today's complicated society. [24]

Game theory and free will

Elizabeth Styles also mentions game theory: Game theory has made it possible to explore how and why various strategies might evolve. Trivers used a game called the Prisoner's Dilemma in which two people are kept apart and told they are accused of a crime with a penalty of, say, ten years in prison. If both stay silent they can be convicted only on a lesser charge and both get a shorter sentence, say three years, but if one gives evidence against the other the defector gets off free. What should they do? Obviously the best outcome all round is for both to stay silent- but there is a strong temptation to defect- and what if the other one is tempted?- you might as well be tempted too. There are many other versions using points, money, or other resources. The important point is that a perfectly rational and selfish person will always gain by defecting. So how does cooperative behavior ever come about?

The answer is that in a one-off game it never should, but life is not a one-off game. We meet people again, and form judgements about their trustworthiness. The answer to the Prisoner's Dilemma lies in repetition. In iterated Prisoner's Dilemmas people assess the other's likely behavior and then both can gain by cooperating. Players who have not met before often copy each other, cooperating with cooperators and not with defectors. Persistent defectors are shunned, and so lose their chance of exploiting others.

Games like this are also used by economists, mathematicians and computer modellers. In 1979, the American political scientist Robert Axelrod set up a tournament and asked computer programmers to submit strategies for playing the game. The fourteen entries each played 200 times against all the others, themselves, and a random program. To many people's surprise, the winning program Tit-for-tat was both simple and nice. Tit-for-tat began by cooperating and then simply copied what the other player did. If the other player cooperated then both continued to cooperate and both did well; if the other player defected, Tit-for-tat retaliated and so did not lose out too badly against defectors. In a second tournament over sixty programs tried to beat Tit-for-tat but failed.

Subsequent research has used more complex situations, with many players, and has been used to simulate evolutionary processes. It turns out that unless Tit-for-tat begins against overwhelming numbers of defecting strategies, it will spread in a population and come to dominate it. It is what is known as an evolutionarily stable strategy. However, the real world is more complex, and Tit-for-tat does not do so well when mistakes are made, or when there are more players and more uncertainty. Nevertheless, this approach shows how group advantage can emerge out of purely individual strategies without the need to appeal to evolution for the greater good.
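The round-robin tournament described above is easy to sketch in code. The following is only an illustration, not Axelrod's actual tournament: the payoff values are the standard textbook ones, and the three-strategy field is my own choice.

```python
# Payoffs for one round: (my move, opponent's move) -> my score.
# C = cooperate, D = defect. Values are the conventional textbook ones.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then simply copy the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play(strat_a, strat_b, rounds=200):
    """Play an iterated game and return both totals."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = {"Tit-for-tat": tit_for_tat,
              "Always defect": always_defect,
              "Always cooperate": always_cooperate}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for strat_b in strategies.values():
        totals[name_a] += play(strat_a, strat_b)[0]

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

Note that in this tiny field, where a third of the players always defect, Tit-for-tat is actually edged out by Always defect, which illustrates the caveat above: Tit-for-tat spreads only when it does not begin against overwhelming numbers of defecting strategies.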

The tit-for-tat tactic, whether as a winning program or as a possible function of human intelligence, is an interesting aspect of game theory and of free will. However, altruism is still something more than an eye-for-an-eye strategy.

As far as game theory is concerned, Wikipedia says: Game theory is a study of strategic decision making. More formally, it is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. An alternative term suggested as a more descriptive name for the discipline is interactive decision theory. Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, such that one person's gains exactly equal net losses of the other participants. Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of science, to include both humans and non-humans, like computers.

Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.

This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. Eight game-theorists have won the Nobel Memorial Prize in Economic Sciences, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology.

Wikipedia also refers to altruism within the context of game theory, as well as to applications of game theory in logic and in computer science: Maynard Smith, in the preface to Evolution and the Theory of Games, writes, paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behavior for which it was originally designed. Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.

One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to Vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.

Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.

Separately, game theory has played a role in online algorithms. In particular, the k-server problem has in the past been referred to as games with moving costs and request-answer games (Ben David, Borodin & Karp et al. 1994). The emergence of the internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory. [25]

Newcomb's paradox

A potentially interesting example of game theory is the so-called Newcomb's paradox, named after its inventor, William Newcomb. The narration about the paradox that follows comes from the site of Franz Kiekeben: Newcomb's paradox, named after its creator, physicist William Newcomb, is one of the most widely debated paradoxes of recent times. It was first made popular by Harvard philosopher Robert Nozick. The following is based on Martin Gardner's and Robert Nozick's Scientific American papers on the subject, both of which can be found in Gardner's book Knotted Doughnuts. The paradox goes like this:

A highly superior being from another part of the galaxy presents you with two boxes, one open and one closed. In the open box there is a thousand-dollar bill. In the closed box there is either one million dollars or there is nothing. You are to choose between taking both boxes or taking the closed box only. But there's a catch.

The being claims that he is able to predict what any human being will decide to do. If he predicted you would take only the closed box, then he placed a million dollars in it. But if he predicted you would take both boxes, he left the closed box empty. Furthermore, he has run this experiment with 999 people before, and has been right every time.

What do you do? On the one hand, the evidence is fairly obvious that if you choose to take only the closed box you will get one million dollars, whereas if you take both boxes you get only a measly thousand. You'd be stupid to take both boxes. On the other hand, at the time you make your decision, the closed box already is empty or else contains a million dollars. Either way, if you take both boxes you get a thousand dollars more than if you take the closed box only.

As Nozick points out, there are two accepted principles of decision theory in conflict here. The expected-utility principle (based on the probability of each outcome) argues that you should take the closed box only. The dominance principle, however, says that if one strategy is always better, no matter what the circumstances, then you should pick it. And no matter what the closed box contains, you are $1000 richer if you take both boxes than if you take the closed one only.

Martin Gardner seems to be of the opinion that the situation is logically impossible. Basically he argues that, since the paradox presents us with two equally valid but inconsistent solutions, the situation can never occur. And he implies that the reason it is a logical impossibility has to do with paradoxes that can arise when predictions causally interact with the predicted event. For instance, if the being tells you that he has predicted you will have eggs for breakfast, why couldn't you decide to have cereal instead? And if you do have cereal, then did the being really predict correctly? He may very well have predicted correctly in the sense that, had he not told you about it, he would have been correct. But by giving you the information, he added something to the equation that was not there when he made his prediction, thereby nullifying it.

Let's turn now to Nozick's analysis. One possible solution that Nozick considers is the following: The dominance principle is not valid in Newcomb's paradox because the states (1 million is placed in the box and nothing is placed in the box) are not probabilistically independent of the actions (take both and take only closed box). The dominance principle is acceptable only if the states are probabilistically independent of the actions.

Nozick disregards this solution by means of a counter-example: Suppose there is a hypochondriac who knows that a certain gene he may have inherited will probably cause him to die early and will also make him more likely to be an academic than an athlete. He is trying to decide whether to go to graduate school or to become a professional basketball player. Would it be reasonable for him to decide to become a basketball player because he fears that, if his decision were to go to graduate school, that means he probably has the gene and will therefore die? We certainly would not think that is reasonable. Whether he has the gene or not is already determined. His decision to become a basketball player will not alter that fact. And yet, here too, the probabilities of the states (has the gene and does not have the gene) are not probabilistically independent of the actions (decides to go to graduate school and decides to become a basketball player).

Both Gardner and Nozick conclude (though for different reasons) that, if they were faced with the situation presented by the paradox, they would take both boxes.

Franz Kiekeben gives his own solution to the paradox: A paradox occurs when there are apparently conclusive reasons supporting inconsistent propositions. In the case of Newcomb's paradox, we have two arguments (both of which seem equally strong) for making opposite choices. The question is whether the paradox succeeds in making the opposing arguments equally strong. If it doesn't, then there actually is no paradox (or, to put it another way, the paradox will have been resolved). I don't think the two arguments are in fact equally strong, for, given the setup of the paradox, one can find a reasonable explanation for the alien being's ability to predict correctly. And in that case, the argument for taking the closed box is stronger than that for taking both boxes.

In order to explain why taking only the closed box is the more reasonable decision, let's first consider what it means to predict something. Prediction can mean at least one of two things. There's scientific prediction, where someone has observed similar conditions many times and predicts the outcome of a situation based on this experience (and on the assumption of some principle of uniformity in nature). This is how you predict that if you let go of your pencil it will fall, and how the weatherman predicts (though usually less successfully) what tomorrow will be like. And then there's prescience, where someone supposedly senses the future. Nostradamus isn't supposed to have been just a really good weatherman, he is supposed to have foreseen the future. This second kind of prediction is the equivalent of information traveling back in time. Now, whether the prediction is scientific or prescient, the solution to the paradox is essentially the same. But since I don't accept prescience, and because I don't think that that is how the paradox is usually understood, I will limit my explanation to scientific prediction only.

If the being predicts in the manner of a scientist, that means that there is a certain state of affairs, A, which holds at some point in time prior to your decision and the prediction, and which causes both. This connection between the prediction and the decision is what prevents your actions from being probabilistically independent of the states of the box. And it is realizing this that makes it rational to take the closed box only (i.e., it is what invalidates the dominance principle). Nozick and Gardner's choice to take both boxes, on the other hand, makes them much less likely to make a million. [26]

While Newcomb's paradox seems to be a greatly debated issue, I personally think that it has more to do with logic than game theory. There are two boxes (one open and one closed). One certainly contains $1,000 (the open one) while the other (the closed one) contains either $1,000,000 or nothing. First of all, the fact that the first box is open doesn't matter, because it doesn't contain any information about the contents of the second box. So if someone chooses both boxes (which is by itself an absurd option, because in a game we choose one option against another, not all options) he will win either $1,000 or $1,001,000.

Now, if someone is able to predict the contents of the boxes and/or which box(es) will be chosen by the player, this is something that has to do with prophecy, not probabilities or game theory. It resembles Schrödinger's cat, but in the latter case what makes the difference is exactly the choice of the observer, which is in fact an action of free will. Therefore, what Newcomb's paradox misses is falsifiability, which is the aspect that separates logic from magic.

However, an interesting case arises if the two events (to choose the closed box or to choose both boxes) are somehow correlated. Let's say, for example, that each time someone chooses both boxes, the probability that the closed box will contain 1 million becomes less than 50%, whereas it remains at least 50% if he chooses only the closed box. In this case, it would be interesting to calculate the probabilities of winning 1 million. This is reminiscent of Bell's inequalities in relation to the EPR paradox and quantum entanglement (which we will refer to later on). But if the two events, naturally, remain independent of each other, there doesn't seem to exist any reason why one might choose one box instead of both.
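The calculation suggested above is straightforward once we parametrize the correlation. In the following sketch, p stands for an assumed probability that the predictor is correct; p and the function names are my own additions, not part of the original paradox, which leaves the prediction mechanism unspecified.

```python
# Expected winnings in Newcomb's problem, given an assumed
# predictor accuracy p (a parameter introduced for illustration).

def expected_one_box(p, prize=1_000_000):
    # Predictor correct -> it foresaw one-boxing, so the prize is in the box.
    return p * prize

def expected_two_box(p, prize=1_000_000, visible=1_000):
    # Predictor wrong -> it expected one-boxing, so the prize is still there.
    return visible + (1 - p) * prize

for p in (0.5, 0.5005, 0.999):
    print(p, expected_one_box(p), expected_two_box(p))
```

With p = 0.5 (choices and contents independent) two-boxing is better by exactly the visible $1,000, as the text argues; the break-even point is p = 0.5005, and for any higher accuracy one-boxing has the larger expected value.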

Prisoner's dilemma

Another interesting and more famous paradox concerning game theory is the prisoner's dilemma. We saw a brief introduction by Elizabeth Styles on the subject previously. Here we will follow Wikipedia's description of the subject: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail.
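The sentences just described (one year each if both stay silent, zero versus three years if only one testifies, two years each if both testify) can be tabulated, and the dominance of betrayal checked mechanically. A small sketch, with invented names:

```python
# Years in prison for (my choice, partner's choice), from the setup above.
# "silent" = cooperate with the partner; "testify" = defect.
YEARS = {
    ("silent", "silent"): 1,
    ("silent", "testify"): 3,
    ("testify", "silent"): 0,
    ("testify", "testify"): 2,
}

# Whatever the partner does, find which of my choices costs fewer years.
for partner in ("silent", "testify"):
    best = min(("silent", "testify"), key=lambda me: YEARS[(me, partner)])
    print(f"partner {partner}: best reply is {best}")
```

Testifying is the best reply in both cases, so two purely rational prisoners both testify and serve two years each, even though mutual silence would have cost them only one year each. That gap is the dilemma.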

In fact, both prisoners cannot testify against each other simultaneously. So they will either both get, let's say, 1 year, or one will be set free and the other will be sentenced to three years. But the point is that the prisoner who snitches on his partner is exposed to the danger of retaliation and of discredit by the other fellow outlaws. Even if he gets away with it, he would be faced with the stigma and remorse for the rest of his life. So both prisoners may finally be forced to cooperate: In this classic version of the game, collaboration is dominated by betrayal; if the other prisoner chooses to stay silent, then betraying them gives a better reward (no sentence instead of one year), and if the other prisoner chooses to betray then betraying them also gives a better reward (two years instead of three). Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other. The interesting part of this result is that pursuing individual reward logically leads the prisoners to both betray, but they would get a better reward if they both cooperated. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. Robert Axelrod analyzed the iterated prisoner's dilemma in his book The Evolution of Cooperation. He found some basic conditions necessary for a successful strategy:

Nice
The most important condition is that the strategy must be nice, that is, it will not defect before its opponent does (this is sometimes referred to as an optimistic algorithm). Almost all of the top-scoring strategies were nice; therefore, a purely selfish strategy will not cheat on its opponent, for purely self-interested reasons first.

Retaliating
However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as nasty strategies will ruthlessly exploit such players.


Forgiving
Successful strategies must also be forgiving. Though players will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.

Non-envious
The last quality is being non-envious, that is, not striving to score more than the opponent (note that a nice strategy can never score more than the opponent).

It is also thought that a new strategy may be more powerful than the cooperative tit-for-tat tactic: Although tit for tat is considered to be the most robust basic strategy, a team from Southampton University in England introduced a new strategy at the 20th-anniversary iterated prisoner's dilemma competition, which proved to be more successful than tit for tat. This strategy relied on cooperation between programs to achieve the highest number of points for a single program. The university submitted 60 programs to the competition, which were designed to recognize each other through a series of five to ten moves at the start. Once this recognition was made, one program would always cooperate and the other would always defect, assuring the maximum number of points for the defector. If the program realized that it was playing a non-Southampton player, it would continuously defect in an attempt to minimize the score of the competing program. As a result, this strategy ended up taking the top three positions in the competition, as well as a number of positions towards the bottom.

This strategy takes advantage of the fact that multiple entries were allowed in this particular competition and that the performance of a team was measured by that of the highest-scoring player (meaning that the use of self-sacrificing players was a form of minmaxing). In a competition where one has control of only a single player, tit for tat is certainly a better strategy. Because of this new rule, this competition also has little theoretical significance when analyzing single agent strategies as compared to Axelrod's seminal tournament. However, it provided the framework for analyzing how to achieve cooperative strategies in multi-agent frameworks, especially in the presence of noise. In fact, long before this new-rules tournament was played, Richard Dawkins in his book The Selfish Gene pointed out the possibility of such strategies

winning if multiple entries were allowed, but he remarked that most probably Axelrod would not have allowed them if they had been submitted. It also relies on circumventing rules about the prisoner's dilemma in that there is no communication allowed between the two players. When the Southampton programs engage in an opening ten-move dance to recognize one another, this only reinforces just how valuable communication can be in shifting the balance of the game. [27]
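The published accounts do not include the Southampton code, but the recognition trick can be caricatured as follows. Everything here is invented for illustration: the signature sequence, the function name, and the master/slave roles are my own labels for the colluding pair.

```python
# A caricature of a Southampton-style colluding strategy (not their code).
# SIGNATURE is an invented opening sequence used for mutual recognition.
SIGNATURE = ["C", "D", "D", "C", "D"]

def colluder(role, my_history, their_history):
    """role is 'master' (collects points) or 'slave' (sacrifices them)."""
    turn = len(my_history)
    if turn < len(SIGNATURE):               # still performing the handshake
        return SIGNATURE[turn]
    if their_history[:len(SIGNATURE)] == SIGNATURE:
        return "D" if role == "master" else "C"   # slave feeds the master
    return "D"                              # stranger: minimize their score

# After the handshake, a master facing a fellow colluder defects forever,
# while the slave cooperates forever, handing the master maximum points.
print(colluder("master", list(SIGNATURE), list(SIGNATURE)))
print(colluder("slave", list(SIGNATURE), list(SIGNATURE)))
```

The sacrifice of the slaves explains why the Southampton entries occupied both the top three positions and a number of positions towards the bottom.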

The aforementioned strategy looks like that of a team operating against all other teams. In this case, it is the extension of competition between single individuals to that between groups of people. But the rules which apply to individuals also apply to groups of people. Whatever happens to individuals also happens to groups of people when they get familiar with each other. Therefore it seems that even this case reduces into a generalized form of the tit-for-tat game.

The spirit of cooperation seems to become dominant in all cases- individuals, groups of people, states, nations, and so on. In real life, someone seldom breaks his oath against his fellow team members. It is a matter of honor and of the negative consequences which he will face from the others. In the case of groups, the more groups enter the same alliance, the more tit-for-tat becomes the most popular strategy. Either in the singular or in the plural case, it is again a matter of altruism! As Elizabeth concludes, the important point is that life is not a one-off game. And in real life the altruistic meme (even for its own sake) always wins!

Memetics, Game Theory, and Free Will

I found an article on the internet which combines Memetics, Game Theory, Free Will, and also Transfinite Math.

What Memetics's Got to Do with It

I believe humanity's unique adaptability, growth, power, and promise all stem from the fact that humanity is defined not just by the usual replicator- genes- but just as much by a second replicator- memes. Memes are units of human language meaningful to a community of human beings. Memes can meet all the same criteria for replicators that genes do, but with a serious catch: memes are not absolutely fixed in form. In any given context, yes, a meme is as concrete and non-negotiable a unit as a gene, with a particular shape, a particular medium, a particular size, a particular function or range of possible functions. And it is thanks to this solidity, this particularity, that a meme qualifies as a replicator, as a building block of the sort of game we call life. But a meme can do something that a gene cannot: it can transcend. For example, the word water can be translated into the word agua. Or the spoken word water can be represented by the written word water. Or the handwritten ink-on-paper symbol water can be copied into typewritten form, or typed in a word processor, so that it exists as unlit pixels on a screen and as binary code on a hard drive. The font of a word can be changed, as well as the size; an alphabetic word like agua can be translated into a hieroglyph or into an ideogram. A word can be carved into stone, or turned into radio waves, or turned into electric pulses or fiber optic pulses. All without losing its meaning, its ability to serve a similar function in multiple hosts or carriers or vessels or minds. This is what I mean by saying a meme can transcend. Elsewhere I define spirit as translingual and transmaterial content. By that definition, one could well say memes are more spiritual replicators than are genes.

Because memes can transcend, or are more spiritual, they are indeed more indefinite. Genes have built-in limitations on their endurance, because they only exist in one form, one language- DNA- and this form inevitably breaks down in the presence of radiation; inevitably breaks apart at temperatures slightly higher than in our biosphere and inevitably stops moving and working in temperatures much below the freezing point of water; inevitably, therefore, cannot possibly endure for more than a few billion or trillion more years, depending on whether the universe collapses in a big crunch or expands into a frozen emptiness. Memes, by contrast, are real-world entities with no built-in limits on their endurance, based on what we've seen of their mutability, their spirituality, their lingual and material transcendence.

Because memes and genes differ in this regard, they represent two distinctly different forms of life. But forms is vague; I don't mean they have different shapes, or exist in different media; what I really mean is that they are two distinctly different games of life.

What Game Theory's Got to Do with It

Game theory seeks to formalize or mathematize goal-oriented, purposeful, teleological activity in order to arrive at general truths about such activity; it is the science of goal-oriented activities or games. Each game is defined by the presence of at least one goal, one boundary or limitation or rule, and one player. The game of climbing Mt. Everest, for example, requires reaching the summit of Mt. Everest (the goal), getting there alive (a rule) and using your own muscle power to do so (another rule), and it also requires that someone be the climber (the player). A second example: The game of splitting a cake evenly between two people so that both deem the split an even one (a classic game theory example) requires, what else, splitting a cake evenly so that both recipients deem the split an even one (the goal), not deceiving either player about how much cake there is to start with or how big the split pieces are (a rule), and it also requires that at least one person do some cake cutting and that two people do some cake-slice fairness-deeming (the players).

Game theory is relevant to the memetic and genetic forms of life because both these forms of life fit game theory's definition of a game- they are defined by a goal, rules, and players. The goal of both forms of life is perpetuation. Both forms of life are likewise bound by rules or limits, such as the incompatibility of certain memes or genes, or the need for memes and genes to be expressed in particular media. And both forms of life are defined by the presence of players: genes and organisms in genetic life; memes and persons in memetic life.

What Free Will's Got to Do with It

Note my distinction between genetic life's organisms and memetic life's persons. Genetic life is goal-oriented enough to fit the definition of a game, but as most biologists will be quick to assert, genetic life's players are not purposeful, are not mindfully pursuing a goal of perpetuation. We speak of what the amoeba wants to do, but we don't imagine for a moment that the amoeba has a mind at all. But we're often haunted by our inability to explain exactly where on the spectrum of biological complexity this mind or purposefulness appears.

I submit that in genetic life, as in any game, there must be at least one real player, one agent whose function is to weigh multiple branching possibilities in order to determine which ones lead closer to the game's goal. The catch is: in genetic life, the organisms in large part serve not to weigh such possibilities, but to be the possibilities weighed. Genetic life's method, natural selection, actualizes every possibility in order to weigh its viability. Every variant individual organism that genetic life brings into existence represents one possible route leading closer to, or further from, the game's goal of perpetuation. Hence the blindness of natural selection to individual organisms' suffering and death: suffering and death turn the wheels of genetic life's calculations. The calculating mind behind genetic life is a gestalt of the entire planetary history of natural selection. We might, in keeping with my definition of spirit elsewhere, call genetic life's one real player the spirit of genetic life. If this concept seems foreign to our basic understanding of games and how they work, it's because we are used to thinking of ourselves as game players the way we're players in the game of memetic life. That is, we're used to thinking of ourselves as real players, agents who serve to weigh multiple branching possibilities in order to determine which ones lead closer to a game's goal. Memetic beings model, within themselves, multiple futures; determine where in these virtual, imagined causal chains they can, by their actions, make some futures more likely than others; and then act in such a way as to make more likely the futures that lead closer to their goals. Note that this process is synonymous with choosing, or having free will. Note also that this process presupposes a power of imagination great enough to model more than one future and great enough to see how these futures could be affected by one's own actions.
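
The claim above, that natural selection actualizes every possibility in order to weigh its viability, can be made concrete with a toy simulation. This is my own illustration, not the author's model: the genome length, mutation rate, and "target" genome are arbitrary assumptions. Each generation, the variant genomes are the possibilities, and the weighing is done simply by which variants survive to reproduce.

```python
# A toy illustration (my own, not the author's model) of selection as
# "weighing by actualizing": variants are created, and survival does
# the calculating. Parameters are arbitrary assumptions.
import random

random.seed(42)
TARGET = [1] * 20                      # an arbitrary "perfectly adapted" genome

def fitness(genome):
    """Closeness to the target stands in for viability."""
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]        # the deaths of the rest are the "weighing"
    population = [[g if random.random() > 0.05 else 1 - g   # 5% per-bit mutation
                   for g in random.choice(survivors)]
                  for _ in range(50)]

print(fitness(max(population, key=fitness)))  # climbs toward the maximum of 20
```

No individual genome ever deliberates; the population-level process as a whole is what "computes" which routes lead closer to perpetuation, which is the sense in which the one real player is a gestalt.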
The ability to model multiple future worlds with such facility and such power of prediction requires that the models be made of something extremely dense or condensible, extremely flexible, and yet extremely reliable. This something is the meme. The ability to relate these modeled futures to one's own modeled self, to determine how one's actions can lead to one future or another, is synonymous with our prized virtue of self-awareness, sentience, consciousness, ego. Note that without the ability to imagine more than one future, or the ability to imagine accurately one's self, one would not be capable of free will or choice. And note that it's the meme that enables these imaginative feats of accurate modeling. If you doubt it, consider that our ability to model complex dynamic systems such as hurricanes with any accuracy has advanced in direct proportion with our ability to build such systems out of language, in this case mathematical language and computer code. And consider the following statement from Helen Keller, in support of the idea that accurate self-models are built of memes: When I learned the meaning of I and me and found that I was something, I began to think. Then consciousness first existed for me. If you require testimony from a different sort of authority, consider Darwin's assertion from his Descent of Man: If it be maintained that self-consciousness, abstraction etc. are peculiar to man, it may well be that these are incidental results of other highly-advanced intellectual faculties; and these again are mainly the result of the continued use of a highly-developed language.

What Transfinite Math's Got to Do with It

It's all well and good to say, as an armchair theorist, that genetic and memetic life can be defined and analyzed using game theory. However, attaching actual math to these particular games can be tricky, for one or several reasons, depending on how you look at the tricky part.

For starters, even with the simpler game of genetic life, we find that the goal, perpetuation, is open-ended. One way or another, this means genetic life's goal is infinite. And infinity is a tricky quantity to incorporate into mathematical formulations. Memes don't work quite the same way. Memes somehow accomplish evolution, and act as a form of life, without having a certain mass, an inherent vulnerability to radiation, a certain volume. Or rather, they do have such limitations, but only in small contexts. That is, memes always take on conventions, but are beholden to none. So memes do, in each particular manifestation, struggle against limitations, and do exhibit the process we call evolution or life. But they are bound by no particular limitations, and in fact seem to make possible the collapse of any or most possible game limitations. The upshot: the memetic form of life promises to offer its players a better game than the inevitable death and inescapable competition of genetic life. Still, unless memetic life can offer its players individual immortality, it's really not qualitatively different from, or better than, genetic life. The goals are the same in both games: for the replicator vessels, a small finite victory of replicative success relative to one's neighbors, and for the spirit of the game, perpetuation of life and suffering and death. But it's one thing to say memetic life seems more unbounded, more indefinite, than genetic life; it's another to assert that it could actually offer to be an infinite game for its individual players. After all, we've never seen infinity, and even talking about infinities in the mathematical terms game theory strives for leads to absurdities galore. Not so fast, though. We're forgetting about a little branch of math called transfinite math. Developed by a guy named Cantor, transfinite math allows us to distinguish between different sizes of infinity, and therefore to analyze infinities with formal language that doesn't always result in answers of infinity and 1/infinity. In other words, Cantor's transfinite math means we could, through game theory, study the possibilities of infinite games without having to take out a mortgage in cloud cuckoo land. The real problem with proposing the possibility of an infinite game isn't that people will think you're crazy. The real problem is that when we confront the possibility of infinity, we experience existential nausea, or vertigo: we stare into the abyss and it stares back into us. By making infinities relative and manipulable, Cantor's transfinite numbers allow us to imagine how an infinite game could have structure and purpose and interest.
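
Cantor's distinction between sizes of infinity, mentioned above, rests on his diagonal argument, which can be sketched concretely. The finite list below is of course only a miniature stand-in for an infinite enumeration, but the mechanism is exactly Cantor's: flip the diagonal, and the result evades every row of the list.

```python
# Cantor's diagonal argument in miniature: given any listing of 0/1
# sequences, flipping the i-th digit of the i-th sequence produces a
# sequence that differs from every listed one. This is how Cantor shows
# the set of all infinite 0/1 sequences cannot be counted off 1, 2, 3, ...

def diagonal_escape(sequences):
    """Return a sequence that differs from sequences[i] at position i."""
    return [1 - sequences[i][i] for i in range(len(sequences))]

listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
escape = diagonal_escape(listed)
print(escape)             # → [1, 0, 1, 0]
print(escape in listed)   # → False: it evades the whole list
```

Because this works against any proposed enumeration, the infinity of such sequences is strictly larger than the countable infinity of the naturals, and that is the relativization of infinities the essay is appealing to.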

The deadliness of the genetic game means organisms and genes cannot afford to break free of self-bias. And so long as this bias is present, pure rationality and love are not possible. With the meme comes the possibility of a truly better life, centered on the love described in my previous posts. But it's no mean feat to overcome the billions of years of genetic hardwiring in each of us that predisposes us to believe only in the genetic type of replicator game, and to adhere only to its assumptions and rules. If memetics, game theory, an understanding of the nature of free will, and transfinite math can help us, it's by giving our better hopes the backing of science and the relative confidence science can give us.

One might object that science can have nothing to do with spiritualistic conjecture. However, if memes are able to transcend material and form while still carrying the same information and still behaving as replicators, then they are, according to my definition of spirit, a more spiritual replicator, i.e., their content is transmaterial and translingual. Everything else described above simply follows. Well, not simply. [28]

Concerning memes, game theory and free will, we have already said some things. So we are left here to deal with the problem of infinity. Genes are not infinite by themselves, and the same holds for memes. But both are indirectly immortal through replication. We may compare them with mass and energy respectively. Mass and energy cannot be destroyed because they transform into other forms of mass and energy. Maybe we should quit using this bipolarity and use instead the term information. Both genes and memes, mass and energy, are information. Is information infinite? Cantor would say that a string could be infinite but countable, just like the natural numbers (though not the real numbers, which he showed to be uncountable). In our case a string of information would consist of an infinite number of ones and zeros. The string itself is not infinite, and neither is a certain configuration of its bits. Even all possible combinations of bits are countable if the string is finite. But again infinity seems to be hidden within these bits of information. Infinity itself could be a word represented by the string. This is what free will is all about. Although it is bounded by rules and by the nature of its physical carrier, the human brain, it is infinite in its separate properties, and immortal by its heroic nature. Even if we consider life and the way we confront problems as a game with its own rules, sooner or later this game will become so real that winning or losing will finally lose any meaning. As the game progresses, what becomes important is the game itself.
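
The countability claim above can be made concrete: every finite bit string can be assigned a unique position in a single list, so the set of all finite strings is countable. The enumeration order below (by length, then numerically) is one standard choice of mine, not something specified in the text.

```python
# Enumerating all finite bit strings in one list demonstrates their
# countability: each string gets a unique natural-number index.
from itertools import count, islice

def all_bit_strings():
    """Yield '', '0', '1', '00', '01', '10', '11', '000', ... in order."""
    for length in count(0):
        for i in range(2 ** length):
            # Zero-padded binary rendering of i at the current length.
            yield format(i, f"0{length}b") if length else ""

first = list(islice(all_bit_strings(), 7))
print(first)  # → ['', '0', '1', '00', '01', '10', '11']
```

Contrast this with the diagonal argument earlier in the section: finite strings can be counted off one by one, while the infinite strings cannot, which is exactly the gap between countable and uncountable infinities.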

Is game theory a zero-sum game? According to Wikipedia, in game theory and economic theory, a zero-sum game is a mathematical representation of a situation in which a participant's gain (or loss) of utility is exactly balanced by the losses (or gains) of the utility of the other participant(s). If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus cutting a cake, where taking a larger piece reduces the amount of cake available for others, is a zero-sum game if all participants value each unit of cake equally. Zero-sum games are most often solved with the minimax theorem, which is closely related to linear programming duality, or with Nash equilibrium.
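
The minimax solution mentioned above can be checked numerically for a tiny example. The matrix below is matching pennies, an assumed illustration rather than one from the text: the row player wins 1 on a match and loses 1 otherwise, so every outcome sums to zero across the two players.

```python
# A quick numerical check of the minimax idea for a 2x2 zero-sum game
# (matching pennies, used here as an assumed example).

A = [[1, -1],
     [-1, 1]]      # row player's payoffs; the column player gets the negation

def row_payoff(p, q):
    """Expected row payoff when row plays move 0 with probability p
    and column plays move 0 with probability q."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

grid = [i / 100 for i in range(101)]
# Maximin: the row player picks p to maximize her worst case over all q.
p_star = max(grid, key=lambda p: min(row_payoff(p, q) for q in grid))
value = min(row_payoff(p_star, q) for q in grid)
print(p_star, round(value, 6))  # equilibrium mix 1/2 and a game value of ~0
```

The equilibrium strategy is a fair coin and the value of the game is zero: neither player can expect to gain, which is the "balanced gains and losses" of the definition in miniature.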

In common parlance, the expression zero-sum game is often used in a pejorative sense. It is especially used to point out cases where both sides may gain by an interaction (such as a business transaction), contrary to an assumption that the event must be competitive (zero-sum).

Many economic situations are not zero-sum, since valuable goods and services can be created, destroyed, or badly allocated in a number of ways, and any of these will create a net gain or loss of utility to numerous stakeholders. Specifically, all trade is by definition positive sum, because when two parties agree to an exchange each party must consider the goods it is receiving to be more valuable than the goods it is delivering. In fact, all economic exchanges must benefit both parties to the point that each party can overcome its transaction costs, or the transaction would simply not take place.

The most common or simple example from the subfield of social psychology is the concept of social traps. In some cases we can enhance our collective well-being by pursuing our personal interests, or parties can pursue mutually destructive behavior as they choose their own ends. Robert Wright theorized, in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. As former US President Bill Clinton states:

The more complex societies get, and the more complex the networks of interdependence within and beyond community and national borders get, the more people are forced in their own interests to find non-zero-sum solutions. That is, win-win solutions instead of win-lose solutions. Because we find, as our interdependence increases, that, on the whole, we do better when other people do better as well, we have to find ways that we can all win; we have to accommodate each other. [29]

However, we may say that all games, and more generally all kinds of processes, are and must be zero-sum. Here is why: if we suppose a closed system, then whenever one participant wins, another must lose, because in a closed system resources are limited. More generally, this is what a conservation law demands: the input equals the output. Otherwise energy could be created out of nothing.

Conservation laws hold even in open systems. In fact open systems are collections of closed ones, with all possible processes taken into account.

Beyond the world of economics or of mathematics, and above all, there is the world of logic. Logic is certainly a closed system, no matter how far we may expand its boundaries. Let's remember Gödel's incompleteness theorem, for example: there will always be sentences that cannot be proved from the premises.

So in game theory there will always be a factor of uncertainty, which will lead to fair play. Even if some people win more often than others, they don't want to win all the time, because in such a case the distinction between winning and losing would be lost. Consequently all sorts of games will tend to reach equilibrium, even in the strictest, thermodynamic sense of the word, as all natural processes tend to do. But any process destined to reach equilibrium certainly is a zero-sum game. At the end of the game, we are left with neither winners nor losers.

Delayed choice experiments

Newcomb's paradox reminded me of another case from physics concerning delayed choice experiments: the possibility of going back in time to alter some initial conditions so that the future result will be met or explained. One such example is Wheeler's delayed choice experiment, proposed in 1978; the results Wheeler predicted are believed to have since been confirmed by actual experiment. As Wikipedia says, it is a variation on the famous double-slit experiment. In Wheeler's version, the method of detection used in the experiment can be changed after a photon passes the double slit, so as to delay the choice of whether to detect the path of the particle, or to detect its interference with itself. Since the measurement itself seems to determine how the particle passes through the double slits, and thus its state as a wave or particle, Wheeler's experiment has been useful in trying to understand certain strange properties of quantum particles. Several implementations of the experiment between 1984 and 2007 showed that the act of observation ultimately determines whether the photon will behave as a particle or wave, verifying the unintuitive results of the thought experiment.

Wheeler's experiment consisted of a standard double-slit experiment, except that the detector screen could be removed at the last moment, thereby directing light into two more remote telescopes, each one focused on one of the slits. This allowed a delayed choice by the observer, i.e. a choice made after the presumed photon would have cleared the midstream barrier containing the two parallel slits. The two telescopes, behind the (removed) screen, could presumably see a flash of light from one of the slits, and would detect which path the photon traveled.

According to the results of the double-slit experiment, if experimenters do something to learn which slit the photon goes through, they change the outcome of the experiment and the behavior of the photon. If the experimenters know which slit it goes through, the photon will behave as a particle. If they do not know which slit it goes through, the photon will behave as if it were a wave when it is given an opportunity to interfere with itself. The double-slit experiment is meant to observe phenomena that indicate whether light has a particle nature or a wave nature. The fundamental lesson of Wheeler's delayed choice experiment is that the result depends on whether the experiment is set up to detect waves or particles. [30]

This ability of the experimenter to determine the nature of the photon, either wave or particle, is certainly a good example of free will. Another case of delayed choice, or, more generally, of backward causation, is the Wheeler-Feynman absorber theory of radiation. It is an approach to electrodynamics introduced in 1945 by the physicists John Archibald Wheeler and Richard Feynman, which proposes a time-symmetric boundary condition for electromagnetic waves such that the waves form a half-amplitude retarded wave and a half-amplitude advanced wave in an emission-absorption process.

Minkowski diagram of a type I emitter-absorber transaction. The emitter produces a retarded half-amplitude wave Re and an advanced half-amplitude wave Ae. The absorber produces a half-amplitude retarded wave Ra which cancels Re in region 3. It also produces a half-amplitude advanced wave Aa which reinforces Re in region 2 and cancels Ae in region 1. An observer sees only a full-amplitude retarded wave (Re + Aa) in region 2 passing from emitter to absorber. (Dashed lines indicate a 180° phase shift.)

The previous figure, where an emitter-absorber event is illustrated, is from the site of John G. Cramer. According to his description, the time-symmetric boundary conditions postulated by Wheeler and Feynman may be restated as follows: (1) The process of emission produces an electromagnetic wave consisting of a half-amplitude retarded wave and a half-amplitude advanced wave with opposite time directions. (2) The process of absorption is identical to that of emission and occurs in such a way that the wave produced by the absorber is 180° out of phase with the wave incident on it from the emitter. (3) An advanced wave may be reinterpreted as a retarded wave by reversing the signs of the energy and momentum (and therefore the time direction) of the wave, and likewise a retarded wave may be reinterpreted as an advanced wave. Thus in the Wheeler-Feynman scheme, emission and absorption will correspond to time-symmetric combinations.

According to the previous figure, the absorber, following rule (2) above, can be considered to perform the absorption by producing a canceling retarded wave which is exactly 180° out of phase with the incident radiation, so that the incident wave stops at the absorber. But the Wheeler-Feynman time-symmetric boundary condition tells us that the production of this canceling wave will be accompanied by the production of an advanced wave, which will carry negative energy in the reverse time direction and travel back, both in space and in time, to the point and the instant of emission. This advanced wave, according to rule (3) above, may be reinterpreted as a retarded wave traveling in the opposite direction and will reinforce the initial retarded wave, raising it from half to full amplitude. When the new advanced wave passes the point (and instant) of emission, it will be superimposed on the initial half-amplitude advanced wave and, because of the 180° phase difference imposed by the absorber, it will cancel this wave completely. Thus, an observer viewing this process will perceive no advanced radiation, but will describe the event as the emission of a full-amplitude retarded wave by the emitter, followed by the absorption of this retarded wave by the absorber at some later time. [31]

The Wheeler-Feynman absorber theory was proposed in order to explain the energy loss (damping factor) of an oscillating (i.e. accelerating) particle radiating some form of energy. The advanced waves coming from another interacting particle were used, instead of a self-interaction of the radiating particle with itself, in order to explain the damping. However, this interpretation raised the problem of causality, since the advanced wave from the absorber seemed to travel back in time to the emitter. This example could raise serious questions about the nature and the origin of thought and free will. Is our thought a sort of delayed response to an advanced mode of consciousness coming from somewhere out there? Could this be related to the unconscious? And is free will, according to this interpretation, an advanced player who sets the rules of the game, or perhaps the illusion of a retarded one who always joins the game too late?

The paradox of a journey in time

Maybe the most dramatic consequence of delayed choices, within the context of free will, is the aspect of time travel.

The lower light cone is characteristic of light cones in flat space; all space-time coordinates included in the light cone have later times. The upper light cone not only includes other spatial locations at the same time, but it also includes earlier times.

The Stanford Encyclopedia of Philosophy says that the idea of backward causation doesn't necessarily imply time travel. This might be obvious, because when we use backward causation we do it in the present, and this process of thought has nothing to do with time travel. It is also supposed that these two notions are related to the extent that both agree that it is possible to causally affect the past. However, time travel involves a causal loop whereas backward causation does not. In fact, as we have previously seen, backward causation is one half of a causal loop, the other half being forward causation. Causal loops, for their part, can only occur in a universe in which one has closed time-like curves. In contrast, backward causation may take place in a world where there are no such closed time-like curves. So neither backward causation nor time travel logically entails the other, and time travel is distinct from back-in-time travel.

On the other hand, time travel is not necessarily impossible if we give space and time a different meaning. An infinite causal loop, for example, has the properties of a closed time-like curve. Such a curve is a world-line in space-time that is circular, so that a traveler could return to the point where he started, both in space and time. This means that somebody traveling on a closed time-like curve could return to his own past. This possibility was first raised by Kurt Gödel. According to Wikipedia, he discovered closed time-like curves in 1949 as a solution to the equations of Einstein's general relativity, and since then other similar solutions have been found. Closed time-like curves could occur near strong gravitational fields or could be caused by great accelerations. [32]

A light-cone

More generally, we can represent events that take place in space-time with the above Minkowski diagram. A light-cone is the path that a flash of light, emanating from a single event and traveling in all directions, would take through space-time. The previous figure is from Bernd Schneider's site. The light-cone is described by the yellow lines, which stand for the world-line, or cosmic path, of light. All communication for a stationary observer must take place within his light-cone, otherwise causality would be violated. Let us consider the four marked events, which could be star explosions (supernovae), for instance. Event A is below the x-axis and within the light cone. It is possible for the resting observer at O (here and now) to see or to learn about the event in the past, since a -45° light beam would reach the t-axis at about one and a half years prior to t=0. Therefore this event belongs to O's past. Event B is also below the x-axis, but outside the light cone. The event has no effect on O in the present, since the light would need almost another year to reach him. Strictly speaking, B is not in O's past. Similar considerations are possible for the term future. Since his signal wouldn't be able to reach event C, outside the light cone, the observer is not able to influence it. It's not in his future. Event D, on the other hand, is inside the light cone and may therefore be caused or influenced by the observer. [33]
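
The light-cone bookkeeping just described reduces to a one-line inequality, sketched below in units where c = 1 (light-years and years) with one space dimension and the observer at the origin. The coordinates given for events A through D are my own illustrative choices matching the figure's description, not values stated in the text.

```python
# Classifying events relative to an observer at (t=0, x=0), with c = 1.
# An event is causally connected to the observer only if it lies inside
# (or on) the light-cone, i.e. |x| <= c*|t|; otherwise it is "elsewhere".

def classify(t, x, c=1.0):
    """Return 'past', 'future', 'here and now', or 'elsewhere'."""
    if abs(x) <= c * abs(t):   # inside or on the light-cone
        if t < 0:
            return "past"
        return "future" if t > 0 else "here and now"
    return "elsewhere"         # spacelike separated: causally unrelated

# Illustrative coordinates (t in years, x in light-years), assumed by me:
events = {"A": (-1.5, 1.0),    # below the x-axis, inside the cone
          "B": (-0.5, 2.0),    # below the x-axis, outside the cone
          "C": (0.5, 2.0),     # above the x-axis, outside the cone
          "D": (1.5, 1.0)}     # above the x-axis, inside the cone
for name, (t, x) in events.items():
    print(name, classify(t, x))  # → A past, B elsewhere, C elsewhere, D future
```

Events B and C both land in "elsewhere", which is the formal counterpart of the essay's point that they are causally unrelated, rather than causally violated, with respect to the observer.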

A well-known paradox concerning a journey in time is the grandfather paradox. According to Wikipedia, it was first conceived by René Barjavel in his 1943 book Le Voyageur Imprudent. The paradox is described as follows: the time traveler goes back to a time before his grandfather had married. At that time, the time traveler kills his grandfather, and therefore he is never born when he was meant to be. The paradox's namesake example is merely the most commonly thought of when one considers the whole range of possible actions. Another example is to invent a time machine, then go back in time to impede its invention. An equivalent paradox is known as auto-infanticide: going back in time and killing oneself as a baby. [34]

The extended present represented as a circle extending beyond the light-cone, into the forbidden area, where events C and B are found.

Despite the name, the grandfather paradox does not exclusively regard the impossibility of one's own birth. Rather, it regards any action that makes impossible the ability to travel back in time in the first place. Indeed there is a fundamental difference between causally violated and causally unrelated events. Events B and C, previously described in the light-cone figure, are causally unrelated to the observer because he cannot have knowledge of them, at least not by any local means of communication. If he could, causality would be violated, because there would be a faster-than-light transmission of information. In the same sense, if event B or C represented the death of the observer's grandfather, then the observer wouldn't have any sort of immediate access to these events in order to cause his grandfather's death.

But let's consider that here and now is not a single point in space-time but that it has dimensionality. Then the observer's present extends over an area, as depicted in the previous figure, for example forming a circle, which includes parts of elsewhere where events B and C take place. The shape of a circle was arbitrarily chosen for reasons of convenience, and the circle itself may be disproportionately large with respect to the light-cone, for the sake of comparison. Nevertheless, events B and C are now connected, even if not causally related, to the observer. Event B may be regarded as past-like, and event C as future-like. Thus, even if he does not have any direct access to them, he has some indirect knowledge of them. This non-local connection between the observer and the events may acquire a notion of space-time at the later stage of causal relation. At the moment, I am not aware of any act of consciousness that could deform the light-cone in order to causally relate events B and C as its own real past or future, respectively. I don't even know if the previous thought has any meaning. In any case, the process according to which consciousness attributes time to events does not imply a physical travel in time. Instead it represents an event in the extended present, one that is either going to take place in due time or to remain conditional forever.

Zeno's Paradox and the Problem of Free Will

I've found an interesting article in Skeptic magazine about Zeno's paradox and the problem of free will. What is the connection? Zeno's paradoxes argue for the impossibility of motion, in order to support the superiority of the spirit (against matter, which is subject to all sorts of motion). Zeno's paradox, however, ends up as a paradox of logic itself. But if motion, as logic concludes, is impossible, then, within a completely deterministic universe, free will is impossible too. Zeno's paradox, which in fact was solved by Archimedes, has great consequences in all fields of science. It has to do with the unresolved issue of determinism versus a more probabilistic approach to nature and ourselves (which doesn't necessarily mean that probabilities are not deterministic). However, we could acknowledge, let's say, a strong versus a weak principle concerning the problem of free will (analogous to the strong and weak anthropic principles). In simple words, it could be stated as follows: there are things which depend on us and which we can change, while other things don't, and they are found beyond the powers of our free will. But let's see what the article says about Zeno's paradox in relation to the problem of free will: The arguments for and against free will have circulated through the intellectual world for millennia, with minor variations. We may sympathize with André Gide, who once mused that all the arguments about free will have already been made, but we must continue repeating them because nobody listens. Indeed, although some of the terms used in the debate may vary, the basic arguments continue to center on the likelihood of uncaused causes and the possibility of autonomy from natural laws. Whether the movements of atoms or the influence of genes and cultural conditioning control us, we are not the ultimate cause of our actions, and cannot truly be free. The existence of free will seems to depend on a logically impossible reconciliation of incompatible concepts. As Martin Gardner quips: A free will act cannot be fully predetermined. Nor can it be the outcome of pure chance. Somehow it is both. Somehow it is neither. My own view, which is Kant's, is that there is no way to go between the thorns. The best we can do (we who are not gods) is, Kant wrote, comprehend its incomprehensibility.

But are we really looking at the problem correctly? Perhaps we need to escape from the cycle of rehashed arguments and take a new look at our approaches to the problem. One way to do this may be to look for analogous dilemmas encountered during the long history of philosophy, and see if we can gain insights relevant to discussions of free will.

To my mind, some important aspects of the free will debate invite comparison to another celebrated philosophical puzzle: the motion paradoxes of the Greek philosopher Zeno. Living in the fifth century BCE, Zeno was a disciple of the great thinker Parmenides, who famously argued that change in the physical world is impossible. We cannot speak of what is not, Parmenides said, since that would involve the contradiction of speaking of things that don't exist. Change is therefore impossible because it involves something becoming what it is not, which plainly involves an impassable contradiction.

Zeno defended the rather paradoxical conclusions of his mentor by developing a number of paradoxes of his own. In one of his most famous examples, Zeno describes a race between the swift runner Achilles and a tortoise. Since Achilles runs much faster than the tortoise, we give the tortoise a fair head start. Everyone knows Achilles will outrun the slow, heavy tortoise, right? Don't be so sure, Zeno answers. Suppose the tortoise has a ten-meter head start. Achilles catches up that distance, but in that time the tortoise has moved a small distance ahead. Achilles must now catch up the new distance, but meanwhile the tortoise has made further small progress. It turns out that Achilles can never overtake his slower opponent, because each time he moves, the tortoise has trudged another tiny increment ahead.

Figure 1: This illustration depicts Zeno's famous paradox of the race between Achilles and the tortoise. Achilles cannot win the race because each time he tries to catch up, the tortoise has moved another small distance ahead. Redrawn from The Philosopher's Magazine.

This paradox also implies that Achilles not only fails to outrun the tortoise, but that neither Achilles nor anything else can truly move at all. Before I can move from one side of the room to the opposite side, I first have to travel half the total distance. But before I can reach the center of the room, I have to travel half of that distance, or one-quarter of the total. We can still divide the quarter distance by two again, and continue this operation until we have added an infinite number of small distances to our journey. This same argument applies if, instead of wishing to cross the room, we merely want to move a tiny fraction of an inch forward. Any potential distance, no matter how great or small, is divisible into an infinite number of smaller distances. We can never move anywhere, because we would have to travel across an infinitely large number of distances just to move a vanishingly small distance. Zeno's paradox is one of those philosophical arguments that is obviously wrong, but resists attempts to find the error. His argument baffled generations of philosophers who struggled unsuccessfully to locate the fallacies in his thinking. We needed new developments in mathematics to clearly understand where Zeno's reasoning goes astray. The mathematical concept of series convergence allows us to see that an infinite series of small increments can comprise a finite sum. An infinite number of increments does not necessarily produce an infinite number, but may converge on a finite number. This can be expressed mathematically as

1/2 + 1/4 + 1/8 + 1/16 + ... = Σ (n = 1 to ∞) 1/2^n = 1

For instance, suppose we sum an infinite series beginning with 1/2, where each new term is exactly 1/2 of the previous term. We are adding an infinite number of terms, but do not obtain an infinite sum, because our series converges to one.

Thus, the mathematics of convergence shows that an infinite series does not necessarily imply an infinite sum. The seeming paradox in Zenos argument arises because of our mistaken t endency to see the concepts of infinite and finite as mutually exclusive. Zenos rigid rationalism convinces us that an infinite series of small increments prevents a finite increase in distance, but the narrow focus and hidden assumptions in Zenos argument have tricked us into believing a fallacy. We can cross the room after all, and Achilles really does outrun the tortoise.
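The convergence of such a series is easy to watch numerically. A minimal sketch in Python (my own illustration; the function name partial_sum is just a label for this example):

```python
# Partial sums of the series 1/2 + 1/4 + 1/8 + ... :
# each term is half the previous one, and the sums approach 1.
def partial_sum(n_terms):
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 2, 5, 10, 50):
    print(n, partial_sum(n))
# The sums 0.5, 0.75, 0.96875, ... creep toward 1: an infinite
# number of increments, but a finite total distance.
```

No matter how many terms we add, the total never exceeds 1, which is precisely Zeno's whole distance.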

After this analysis of Zeno's paradox, the article makes the connection with free will: I would like to hypothesize that free will arguments contain common misunderstandings of the concepts of cause and will, and these misunderstandings are analogous to Zeno's erroneous assumptions about the concepts of infinite and finite. Just as Zeno agonized about infinite numbers of small distances and convinced himself that all movement was impossible, most participants in the free will debate devote so much attention to the causes affecting us that they feel compelled to deny free will. Indeed, many philosophers believe the case against free will to be rock solid. Every effect has a cause, and humans cannot be the causes of their own consciousness, so we may as well just admit that free will is illusory. A few of these philosophers even smugly claim that anyone can see the logical impossibility of free will by reflecting on the relevant arguments from the comfort of his own couch.

However, Zeno also thought he used flawless logic in his demonstration of the impossibility of motion. Just as modern determinists intimidate us by speaking of infinite chains of causes precluding our freedom, Zeno intimidated his audience by showing how infinite numbers of small increments rendered motion impossible. What if, just as in Zeno's paradox, there is nothing truly paradoxical going on in the realm of free will after all? What if our actions could remain genuine acts of will and outcomes of a complex chain of causality, just as we could have an infinite series of small increments converge on a finite sum?

These possibilities are similar in many ways to other counterintuitive conclusions rendered understandable through careful mathematical reasoning. For instance, we tend to think that the concepts of randomness and symmetry are at odds with each other. A symmetrical pattern seems to be the very antithesis of randomness. But as physicist Taner Edis shows in his remarkable book The Ghost in the Universe, order and chance are closely linked. A long series of fair coin tosses likely results in random sequences of heads and tails, but the resulting randomness follows directly from the symmetry in the probabilities of obtaining the two possible outcomes for each coin flip. Similarly, we observe magnets to be rotationally symmetric at high temperatures, meaning that they align themselves in every possible direction. The overall magnetization of this system is zero, because the magnets do not favor any particular direction and cancel each other out. Ironically, if the equations describing magnetism were not symmetrical, the directions of these magnets could not be random, because non-symmetrical equations would result in a non-zero net magnetization of the system, dictating that the magnets align themselves along a single direction. Symmetry and randomness are not antagonists. They are inseparable elements of a universe in which mathematically elegant laws create opportunities for contingencies.

Free will certainly poses vexing philosophical problems, but many of these problems appear to result from conceptual confusions. When we talk about free will and determinism, we immediately confront a series of conflicts between seemingly contradictory terms. When we ask if a deterministic universe implies the absence of freedom, we seem to encounter a conflict between the concepts of cause and choice. We stumble upon another impasse when we ask if quantum indeterminacy somehow enables us to have free will, because we see randomness and rational choice as complete antagonists. But we've fooled ourselves as much by our framing of our questions as Zeno fooled himself, and many others, by the framing of his paradoxes. We do not have to choose between complete determinacy and complete chance, or believe that free choice necessitates complete isolation from the world of causes and effects. Instead, we can explore the ways that chance and order combine in physical laws to allow free will to exist. The first thing we need to do is clarify what free will really means. It clearly cannot imply total freedom to do whatever we want, because few people worry about their inability to suddenly become lighter than air. Most people willingly accept that the nature of our human bodies imposes limits on our actions. To claim we have free will, then, is merely to claim that we have some range of possible choices. The mere presence of limits on our choice does not negate our freedom as long as real choices still exist.

To see why, consider an example provided by philosopher Daniel Dennett in his interesting book Elbow Room. If we see an animal at the zoo in a tiny cage restricting even the smallest movement, we deplore the poor beast's condition, because he seems to lack freedom to do anything at all. But now imagine seeing the same animal in a spacious zoo habitat. The animal still faces limits on his freedom, but he can roam around in his quarters and choose to be in one place rather than another. This is the kind of freedom we would probably consider sufficient. We are mostly comfortable with the idea of limits on our choices, as long as we truly have a variety of options.

If the tradeoff between freedom and limitations is not all or nothing, neither is the tradeoff between freedom and deterministic predictability. Recall that traditional determinists argue that an omniscient being seeing all of the causes affecting us would be able to perfectly predict our actions. This hypothetical being would be able to see that we do not actually act freely, but act under the compulsion of countless causes undetected by our limited mortal senses. This is a nice argument, but it suffers from the serious deficiency that no one knows whether such a being exists, and certainly no one knows how such a being would perceive reality. It may very well be the case that a being capable of seeing all of the causes acting on us would have more difficulty predicting our behavior. After all, the simpler of two competing scientific models often allows us to make the most accurate predictions. Predictive accuracy often decreases, not increases, with the number of parameters we include in our model! An all-knowing being may very well wind up with all-powerful headaches.

One reason that determinism does not imply an absence of alternatives is the role of emergent phenomena in complex systems. An emergent phenomenon is neither a property of any individual component of a system, nor simply the result of summing the properties of all components. Emergent phenomena are novel, and unpredicted by our knowledge of the system. There are many examples of emergent phenomena at all levels. For instance, the atomic properties of hydrogen and oxygen do not convey all possible information about the properties of water, which is simply a molecule made from the combination of the two elements. Water has distinct properties, such as its surface tension and heat capacity, that belong neither to hydrogen nor oxygen and do not arise from simply combining the known properties of each. Some evolutionary biologists think that stable reproductive species in evolutionary history are emergent phenomena, which changed the whole course of natural selection. This illustrates another important feature of emergent phenomena: their tendency to affect other parts of the system that produced them. Genes control the inheritable traits of species, but species take the evolutionary game to a completely new level, and affect the distribution of genes themselves in complex ways. This is a feature of complex systems often overlooked by strict determinist deniers of free will. Emergent phenomena themselves are not merely affected by their surroundings, but interact dynamically with other parts of the system. By almost anyone's definition, the human mind is a complex system. Consciousness is an emergent phenomenon of the billions of neuron interactions in our brains, and seems to be able to influence the behavior of these neurons in novel ways. Some of this novelty may also be linked to quantum-level uncertainty in the states of the neurons involved. Determinist opponents of free will, hearing this, may reiterate their objection that quantum uncertainty cannot provide a foundation for the kind of rationally considered choices we associate with free will. But as we have already seen, this objection is unwarranted, because randomness and order are not incompatible concepts. As Taner Edis's magnetic field example showed, randomness is an inherent characteristic of deterministic laws. Quantum mechanics may supply more variety for these laws to act upon, and the neurons of our brain may be close enough to the quantum size level for this variability to be considerable.

How, then, does free will work? We do not completely understand, but we have clues. And just as we needed the mathematical development of calculus to clearly resolve Zeno's paradox, we may find that the burgeoning mathematics of complexity theory will finally help us dispel our conceptual confusions about free will. Currently, it seems probable that complexity theory, together with our growing understanding of cognitive neuroscience, will throw much light on the process of making willed decisions. We will better understand how the complex arrangement of neurons in our brains leads to emergent states of conscious awareness, and how the conscious mind feeds back on its neural networks to place itself in alternate conscious states. With time, we will also better comprehend how the brain converts sensory stimuli and knowledge of our environment into neural impulses that become part of the intricate network of causes and effects at work in our conscious minds. Finally, we will realize the conceptual confusions that cause us to see determinism and rational choice as incompatible, and will renounce our error. We will live in a deterministic world without fear, for we will no longer see determinism as a threat to the free will we cherish. [35]

After all this, I am personally inclined to think more and more that free will is an aspect of a universal intelligence, even if we find it difficult to perceive how such an intelligence manifests itself and where it lies. Zeno's paradox seems like a journey to infinity: we need an infinite number of steps in order to accomplish it. But we may be able to see where the end of the journey is. It's just that every step we make comprises an infinity itself. Real mathematics must be much more complex, even much more qualitative, than natural numbers. In the end we find ourselves, almost unexpectedly, at the end of the road. It seems remarkable, difficult to explain in simple words. It's a truth that seems to defy even the most simple and fundamental prerequisites of free will. But whatever the answer or the mathematical solution to this problem might be, we should feel happy and confident enough, because despite the challenges and the difficulties, our own thoughts contain the same infinity which is found in the universal intelligence too.

A quantum theory of consciousness

Beyond the logical approach
We know that we can create a program for a computer using yes and no filters. For example: if yes, then do routine A; otherwise go to routine B. What a computer doesn't know, however, is the alternative maybe. Even our most robust logical systems forget this alternative. Someone could argue that such a system would be inconsistent and unreliable. But, as we all know, this is what we really do most of the time: we seldom give a straightforward yes or no answer. After all, honesty is not always the best means towards success. This of course doesn't mean that nature forces us to be unethical or evil. Ethics, as we have already seen, forms the basis of our logic. But instead of a strict yes-or-no dilemma, there may be a more flexible, as well as more complicated, answer to give and problem to solve.

To be and not to be

We have previously referred to the laws of thought of George Boole. Here we will quote a passage where it is demonstrated that something cannot be true and false at the same time. Later on we will see that if we suggest that something is both right and wrong, then this could have some interesting implications.

Boole gives the following expression and explains:

x(1 - x) = 0 (1)

Let us, for simplicity of conception, give to the symbol x the particular interpretation of men, then 1 - x will represent the class of not-men. Now the formal product of the expressions of two classes represents that class of individuals which is common to them both. Hence x(1 - x) will represent the class whose members are at once men, and not men, and the equation (1) thus expresses the principle, that a class whose members are at the same time men and not men does not exist. In other words, that it is impossible for the same individual to be at the same time a man and not a man. Now let the meaning of the symbol x be extended from the representing of men, to that of any class of beings characterized by the possession of any quality whatever; and the equation (1) will then express that it is impossible for a being to possess a quality and not to possess that quality at the same time. But this is identically that principle of contradiction which Aristotle has described as the fundamental axiom of all philosophy. It is impossible that the same quality should both belong and not belong to the same thing. This is the most certain of all principles. Wherefore they who demonstrate refer to this as an ultimate opinion. For it is by nature the source of all the other axioms.

However, in quantum mechanics we can never be certain if x is, for example, a man or something else. Let's suppose that at the microcosm a little man, the homunculus of the alchemists, takes a form which is so fuzzy that we cannot be sure if it is a man or an electron. More specifically, if we deal with an electron we cannot be sure if it is male or female, meaning if it has an up or a down spin. The point is that in order to calculate the correct probabilities we have to take into account both these probable states.
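Boole's equation can be checked directly by treating x as an ordinary number restricted to the two truth values. A small sketch in Python (my own illustration):

```python
# Boole's law of contradiction: x(1 - x) = 0 for every truth value.
for x in (0, 1):
    assert x * (1 - x) == 0

# Conversely, in ordinary algebra the equation x*(1 - x) = 0 admits
# only the roots 0 and 1: the two classical truth values.
roots = [x for x in range(-5, 6) if x * (1 - x) == 0]
print(roots)  # [0, 1]
```

The algebra thus forces an either/or world; it is exactly this exclusivity that the quantum view below relaxes.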

For example, according to Wikipedia, consider an electron with two possible configurations, up and down. This describes the physical system of a qubit.

|ψ⟩ = c1|↑⟩ + c2|↓⟩

is the most general state. But these coefficients dictate probabilities for the system to be in either configuration. [36] So something can be, let's say, c1% right and c2% wrong. This is the difference with classical logic, which leaves aside all the maybes in between. We may give a further example from the work previously mentioned, Quantum physics in neuroscience and psychology: a neurophysical model of mind-brain interaction in the mind-body problem. An intentional act is an action that is intended to produce a feedback of a certain conceived or imagined kind. Of course, no intentional act is certain: one's intentions may not be fulfilled. Hence the intentional action merely puts into play a process that will lead either to a confirmatory feedback Yes, the intention is realized, or to the result No, the Yes response did not occur. The effect of this intentional mental act is represented mathematically by an equation that is one of the key components of quantum theory. This equation represents, within quantum mathematics, the effect of process 1 action upon the quantum state S of the system being acted upon. The equation is:

S → S' = PSP + (I - P)S(I - P).
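The way the two coefficients dictate probabilities can be sketched in a few lines of Python. The amplitude values below are my own illustrative choice; the only constraint is that the squared magnitudes sum to one:

```python
import math

# A qubit state c1|up> + c2|down>: the squared magnitudes of the
# (complex) coefficients give the probabilities of the two outcomes.
c1 = complex(3, 0) / 5      # amplitude for "up"   (illustrative)
c2 = complex(0, 4) / 5      # amplitude for "down" (illustrative)

p_up = abs(c1) ** 2         # probability of measuring "up"
p_down = abs(c2) ** 2       # probability of measuring "down"
print(p_up, p_down)
assert math.isclose(p_up + p_down, 1.0)   # something happens for sure
```

In this sense the system is 36% "yes" and 64% "no" at the same time, until a measurement forces one of the two answers.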

This formula exhibits the important fact that this process 1 action changes the state S of the system being acted upon into a new state S', which is a sum of two parts. The first part, PSP, represents, in physical terms, the possibility in which the experiential feedback called Yes appears, and the second part, (I-P)S(I-P), represents the alternative possibility No, this Yes feedback does not appear. Thus, an effect of the probing action is injected into the mathematical description of the physical system being acted upon.

The operator P is important. The action represented by P, acting both on the right and on the left of S, is the action of eliminating from the state S all parts of S except the Yes part. That particular retained part is determined by the choice made by the agent. The symbol I is the unit operator, which is essentially multiplication by the number 1, and the action of (I-P), acting both on the right and on the left of S, is, analogously, to eliminate from S all parts of S except the No parts.

Notice that process 1 produces the sum of the two alternative possible feedbacks, not just one or the other. Since the feedback must either be Yes or No=Not-Yes, one might think that process 1, which keeps both the Yes and the No possibilities, would do nothing. But that is not correct. This is a key point. It can be made absolutely clear by noticing that S can be written as a sum of four parts, only two of which survive the process 1 action:

S = PSP + (I - P)S(I - P) + PS(I - P) + (I - P)SP.

This formula is a strict identity. The dedicated reader can quickly verify it by collecting the contributions of the four occurring terms PSP, PS, SP and S, and verifying that all terms but S cancel out. This identity shows that the state S is a sum of four parts, two of which are eliminated by process 1.

But this means that process 1 has a non-trivial effect upon the state being acted upon: it eliminates the two terms that correspond neither to the appearance of a Yes feedback nor to the failure of the Yes feedback to appear. This result is the first key point: quantum theory has a specific causal process, process 1, which produces a non-trivial effect of an agents choice upon the physical description of the system being examined.
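The four-part identity and the effect of process 1 can be verified with concrete 2×2 matrices. The matrices below (the state S and the projector P) are my own toy values, not taken from the quoted paper:

```python
# Verify S = PSP + (I-P)S(I-P) + PS(I-P) + (I-P)SP for 2x2 matrices,
# and show that process 1 keeps only the first two parts.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
P = [[1, 0], [0, 0]]             # projector onto the "Yes" subspace
Q = [[I[i][j] - P[i][j] for j in range(2)] for i in range(2)]  # I - P
S = [[0.7, 0.2], [0.2, 0.3]]     # an arbitrary state matrix

PSP = mat_mul(mat_mul(P, S), P)  # the "Yes" part
QSQ = mat_mul(mat_mul(Q, S), Q)  # the "No" part
PSQ = mat_mul(mat_mul(P, S), Q)  # cross term, eliminated by process 1
QSP = mat_mul(mat_mul(Q, S), P)  # cross term, eliminated by process 1

assert mat_add(PSP, QSQ, PSQ, QSP) == S   # the strict identity
print(mat_add(PSP, QSQ))   # what survives process 1
```

The cross terms PSQ and QSP are not zero, so throwing them away really does change the state: exactly the non-trivial effect described above.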

So such a process reaches our own brain and the way it works. A more rigorous example is given by Wikipedia on entanglement: [37]

(1/√2)(|0⟩|1⟩ - |1⟩|0⟩)

What the previous formula says is that the yes and no answers, represented here by the vectors |0⟩ and |1⟩, are entangled in such a way that they depend on each other in order to give the right answer. See now how this is related to the whole of Boole's philosophy in his Laws of thought. He states: If we mean, Things which are xs, but not ys, or ys, but not xs, the expression will be x(1 - y) + y(1 - x), the symbol x standing for xs, y for ys. If, however, we mean, Things which are either xs, or, if not xs, then ys, the expression will be x + y(1 - x). This expression supposes the admissibility of things which are both xs and ys at the same time. It might more fully be expressed in the form xy + x(1 - y) + y(1 - x) = xy + x - xy + y - yx = x + y - yx = x + y(1 - x); but this expression, on addition of the two first terms, only reproduces the former one.

The yx cross-term is the one that may express the entangled factor, neglected by classical physics as meaningless or physically impossible.
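Boole's expansion, with the cross-term written out, can be verified over the two truth values. A quick sketch in Python (my own check):

```python
# Over the truth values {0, 1}, "x or y" with the overlap xy written
# out explicitly equals Boole's reduced form x + y(1 - x).
for x in (0, 1):
    for y in (0, 1):
        full = x * y + x * (1 - y) + y * (1 - x)   # overlap explicit
        reduced = x + y * (1 - x)                  # Boole's short form
        assert full == x + y - y * x == reduced
print("identity holds for all four truth-value pairs")
```

Classically the cross-term always cancels; the quantum suggestion above is that it need not.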

How the distributive law fails

In the former discussion we supposed some basic facts, for example that xy = yx. This is not always true. It fails in matrix algebra, and for quantum operators which do not commute. Such operators give rise to entanglement and non-locality in quantum mechanics. Here we may see an interesting case in quantum logic where the distributive law also fails, as Wikipedia explains: Quantum logic has some properties which clearly distinguish it from classical logic, most notably, the failure of the distributive law of propositional logic:

p and (q or r) = (p and q) or (p and r),

where the symbols p, q and r are propositional variables. To illustrate why the distributive law fails, consider a particle moving on a line and let

p = the particle has momentum in the interval [0, +1/6]
q = the particle is in the interval [-1, 1]
r = the particle is in the interval [1, 3].

Then we might observe that:

p and (q or r) = true, in other words, that the particle's momentum is between 0 and +1/6, and its position is between -1 and +3. On the other hand, the propositions p and q and p and r are both false, since they assert tighter restrictions on simultaneous values of position and momentum than is allowed by the uncertainty principle (they have combined uncertainty 1/3 < 1/2). So,

(p and q) or (p and r) = false.

Thus the distributive law fails. [38]

The explanation of the previous is as follows. According to the uncertainty principle, we cannot know both the momentum p and the position q (or r in the previous example) of a particle simultaneously. The uncertainty is quantified by the expression Δp·Δq ≥ 1/2 (Planck's constant being set equal to one). Δp·Δq is equivalent to p and q, while q or r is equivalent to the combined interval q + r. So, for p and (q or r), we get Δp·Δ(q + r) = (1/6)(2 + 2) = (1/6)·4 = 2/3 ≥ 1/2. Therefore, according to the uncertainty principle, p and (q or r) can be true. But each of (p and q) and (p and r) has a value of (1/6)·2 = 1/3 < 1/2, violating the uncertainty principle. So the distributive law fails.
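The arithmetic of the example can be laid out with exact fractions. A short sketch in Python, where the interval lengths stand in for the position uncertainties, as in the Wikipedia example:

```python
from fractions import Fraction

dp = Fraction(1, 6)        # momentum spread: the interval [0, +1/6]
dq = Fraction(2)           # length of the position interval [-1, 1]
dr = Fraction(2)           # length of the position interval [1, 3]
d_qr = Fraction(4)         # length of the union [-1, 3]
bound = Fraction(1, 2)     # uncertainty bound dp*dq >= 1/2 (hbar = 1)

assert dp * d_qr >= bound  # "p and (q or r)": 2/3 >= 1/2, allowed
assert dp * dq < bound     # "p and q": 1/3 < 1/2, forbidden
assert dp * dr < bound     # "p and r": 1/3 < 1/2, forbidden
print(dp * d_qr, dp * dq)  # 2/3 1/3
```

The left-hand side of the distributive law is permitted while both disjuncts on the right are forbidden, which is the whole failure in miniature.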

As Wikipedia says, quantum logic has been proposed as the correct logic for propositional inference generally, most notably by the philosopher Hilary Putnam, at least at one point in his career. This thesis was an important ingredient in Putnam's paper Is Logic Empirical?, in which he analyzed the epistemological status of the rules of propositional logic. Putnam attributes the idea that anomalies associated with quantum measurements originate with anomalies in the logic of physics itself to the physicist David Finkelstein. However, this idea had been around for some time and had been revived several years earlier by George Mackey's work on group representations and symmetry.

In other words, logic is theoretical while knowledge is empirical. When something moves, we measure its momentum at a previous point in space and time. At the moment we do our calculation, the moving object may have changed its momentum. The Planck constant, which is found in the uncertainty principle, incorporates units of space and time as well as of mass. This is also why pq ≠ qp in quantum mechanics. Both p and q act on each other as operators representing physical quantities. Our thought works the same way. It transforms the things it encounters, so that an object cannot be the same as its representation. But this relationship is even more fundamental. At a more theoretical or abstract level, things change when they are observed, but the observed object will also change the way we observe things. And this sort of entanglement is something that we always knew, despite the fact that we have often found it easier to ignore it.
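The failure of commutativity is easy to exhibit with concrete matrices. A sketch using the standard Pauli spin matrices σx and σz, a textbook example chosen here purely for illustration:

```python
# Two spin operators that do not commute, written as plain 2x2 lists.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

sigma_x = [[0, 1], [1, 0]]
sigma_z = [[1, 0], [0, -1]]

print(mul(sigma_x, sigma_z))   # [[0, -1], [1, 0]]
print(mul(sigma_z, sigma_x))   # [[0, 1], [-1, 0]]
assert mul(sigma_x, sigma_z) != mul(sigma_z, sigma_x)   # pq != qp
```

The order of the two measurements literally changes the result, which is the algebraic heart of the uncertainty principle.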

The infinite between two steps

Not only is momentum an uncertain quantity, but motion itself may be regarded as a paradox, as far as Zeno is concerned. In fact the whole field of differential calculus (velocity is the first derivative of position) can be disputed, as George Berkeley did in his Analyst: I know not whether it be worth while to observe, that possibly some Men may hope to operate by Symbols and Suppositions, in such sort as to avoid the use of Fluxions, Momentums, and Infinitesimals after the following manner. Suppose x to be one Absciss of a Curve, and z another Absciss of the same Curve. Suppose also that the respective Areas are xxx and zzz: and that z - x is the Increment of the Absciss, and zzz - xxx the Increment of the Area, without considering how great, or how small those Increments may be. Divide now zzz - xxx by z - x and the Quotient will be zz + zx + xx: and, supposing that z and x are equal, this same Quotient will be 3xx which in that case is the Ordinate, which therefore may be thus obtained independently of Fluxions and Infinitesimals. But herein is a direct Fallacy: for in the first place, it is supposed that the Abscisses z and x are unequal, without such supposition no one step could have been made; and in the second place, it is supposed they are equal; which is a manifest Inconsistency, and amounts to the same thing that hath been before considered. And there is indeed reason to apprehend, that all Attempts for setting the abstruse and fine Geometry on a right Foundation, and avoiding the Doctrine of Velocities, Momentums, &c. will be found impracticable, till such time as the Object and the End of Geometry are better understood, than hitherto they seem to have been. The great Author of the Method of Fluxions felt this Difficulty, and therefore he gave in to those nice Abstractions and Geometrical Metaphysics, without which he saw nothing could be done on the received Principles; and what in the way of Demonstration he hath done with them the Reader will judge. It must, indeed, be acknowledged, that he used Fluxions, like the Scaffold of a building, as things to be laid aside or got rid of, as soon as finite Lines were found proportional to them. But then these finite Exponents are found by the help of Fluxions. Whatever therefore is got by such Exponents and Proportions is to be ascribed to Fluxions: which must therefore be previously understood. And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the Ghosts of departed Quantities? [39]

These Ghosts of departed Quantities which Berkeley refers to, or Fluxions, are the infinitesimals of modern calculus introduced by Leibniz and Newton. A derivative of a function at one of its points is defined as the limit of its value at this point with respect to the change of its variable. For example, instantaneous velocity v is the change of position x with respect to time t at a certain point, as the time difference tends to zero. Typically, the expression is

v = lim(Δt→0) Δx/Δt.

The problem here is that as Δt goes to zero, the fraction becomes undefinable (division by zero).
So we use a process which is forbidden in order to define something which in turn is permitted. This is why Berkeley called infinitesimals ghosts.
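Berkeley's own example can be watched numerically: the quotient (z³ - x³)/(z - x) is well defined as long as z ≠ x, and it approaches 3x² as z closes in on x. A sketch in Python, with x = 2 as my arbitrary choice of point:

```python
# The quotient (z**3 - x**3) / (z - x) equals z*z + z*x + x*x while
# z != x, and it tends to 3*x*x as z approaches x, without our ever
# setting z exactly equal to x (which would divide by zero).
x = 2.0
for h in (0.1, 0.01, 0.001):
    z = x + h
    quotient = (z ** 3 - x ** 3) / (z - x)
    print(h, quotient)        # creeps toward 3 * x**2 = 12
```

The limit is approached but never computed at z = x itself, which is exactly the "ghost" Berkeley complained about and the modern limit definition makes respectable.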

Another way to approach the problem is by using a series. Archimedes used a geometric series to solve Zeno's paradox. If each next step is half the previous one, then the whole distance to be covered will be 1/2 + 1/4 + 1/8 + ... The sum equals 1; in other words, it is the whole distance. Still, the number of terms in the series is infinite. Again we are faced with the ancient problem of infinity.

Georg Cantor was a mathematician who quantified infinity at an astonishing level. As Wikipedia says, he formalized many ideas related to infinity and infinite sets during the late 19th and early 20th centuries. In the theory he developed, there are infinite sets of different sizes (called cardinalities). For example, the set of integers is countably infinite, while the infinite set of real numbers is uncountable. In one of his earliest papers, Cantor proved that the set of real numbers is more numerous than the set of natural numbers; this showed, for the first time, that there exist infinite sets of different sizes. He was also the first to appreciate the importance of one-to-one correspondences (hereinafter denoted 1-to-1 correspondence) in set theory. He used this concept to define finite and infinite sets, subdividing the latter into denumerable (or countably infinite) sets and uncountable sets (nondenumerable infinite sets). [40]
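Cantor's criterion of 1-to-1 correspondence can be illustrated in a few lines: the integers are countable because they can be paired off with the natural numbers, one by one. A sketch in Python (the pairing 0, 1, -1, 2, -2, ... is one standard choice):

```python
# A 1-to-1 correspondence between the naturals 0, 1, 2, 3, ... and
# the integers 0, 1, -1, 2, -2, ...: countability in miniature.
def nat_to_int(n):
    return (n + 1) // 2 if n % 2 else -(n // 2)

first = [nat_to_int(n) for n in range(7)]
print(first)                            # [0, 1, -1, 2, -2, 3, -3]
assert len(set(first)) == len(first)    # no integer is hit twice
```

Every integer is eventually reached and none is reached twice, so the two sets have the same size, even though one seems "twice as big" as the other.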

So not only are there infinities, but some infinities are greater than others. However, no matter how high the density of the numbers in a set is, what is left in between is still the void. Quantum mechanics has reached such a level of accuracy that it can define a minimum length in the universe, the so-called Planck length. This length is equal to 1.61×10^-35 meters. That's a very small distance. So, according to quantum mechanics, Zeno's paradox is resolved with the assumption that the intermediate steps are not infinitesimally small, but at least as large as the Planck length.

The key point here, however, is that space-time is not continuous. Differential calculus presupposes that a function is everywhere continuous, but the real world is full of discontinuities. At such points a derivative cannot be defined. In fact, Achilles in Zeno's paradox may travel the whole distance thanks to the discreteness of the intermediate steps. Nevertheless, a discontinuous space makes motion a logical impossibility only if we insist on travelling all the distance in between.

Non-local properties of the mind

Quantum entanglement and non-locality

The term non-locality refers to an instantaneous action at a distance. This paradox stems exactly from the supposition of continuity in space and time. Einstein set an upper limit to the maximum speed in the universe. This limit is the speed of light. This is what (special) relativity is all about. However, this does not solve the problem of non-locality, because again it presupposes a space-time continuum. Einstein also formulated a paradox, the so-called EPR paradox. If, according to quantum mechanics, the states of a system are undefinable before a measurement, then it is possible to affect one part of the system and measure the respective change at another part of the system instantaneously at the time of the measurement. This is a simple paradox based on momentum conservation (spooky action at a distance, as Einstein called it). But again it presupposes that momentum is a continuous function of space-time. This sort of thinking leads in fact to another paradox of the theory of relativity: the bending of space and time. But this is another story. What is of interest here is that experiments in quantum entanglement proved that instantaneous action at a distance happens. Entangled states of a system (quantum states in which some form of momentum, such as spin or polarization, is conserved) can interact with each other instantaneously, regardless of the distance in between. Whatever the explanation may be, we see that we have to radically change the way we view space and time. And at the root of this problem lies the (false) assumption of continuity.

Bells inequalities The man who quantified the EPR paradox was John Bell. His inequalities prove that if there exists a correlation between the variables you can have results that defy classical physics. In other words, this correlation is of a non-local nature, meaning that if someone tries to explain it using the notion of signal propagation as a means of communication between the observables, he would be forced to admit a faster than light signal transmission. I found an interesting article about Bells inequalities of David M. Harrison, which I present: The origins of this topic is a famous paper by Einstein, Rosen and Podolsky (EPR) in 1935; its title was Can Quantum-Mechanical Description of Physical Reality be Considered Complete? They considered what Einstein called the spooky action-at-a-distance that seems to be part of Quantum Mechanics, and concluded that the theory must be incomplete if not outright wrong. As you probably already know, Einstein never did accept Quantum Mechanics. One of his

objections was that God does not play at dice with the universe. Bohr responded: Quit telling God what to do! In the early 1950s David Bohm was a young Physics professor at Princeton University. He was assigned to teach Quantum Mechanics and, as is common, decided to write a textbook on the topic; the book is still a classic. Einstein was at Princeton at this time, and as Bohm finished each chapter of the book Einstein would critique it. By the time Bohm had finished the book Einstein had convinced him that Quantum Mechanics was at least incomplete. Bohm then spent many years in search of hidden variables, unobserved factors inside, say, a radioactive atom that determine when it is going to decay. In a hidden variable theory, the time for the decay to occur is not random, although the variable controlling the process is hidden from us.

In 1964 J.S. Bell published his theorem. It was cast in terms of a hidden variable theory. Since then, other proofs have appeared by d'Espagnat, Stapp, and others that are not in terms of hidden variables. Below we shall do a variation on d'Espagnat's proof that I devised.

Proving Bell's inequality We shall be slightly mathematical. The details of the math are not important, but there are a couple of pieces of the proof that will be important. The result of the proof will be that for any collection of objects with three different parameters, A, B and C:

The number of objects which have parameter A but not parameter B plus the number of objects which have parameter B but not parameter C is greater than or equal to the number of objects which have parameter A but not parameter C.

We can write this more compactly as:

Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C). This relationship is called Bell's inequality.

In class I often make the students the collection of objects and choose the parameters to be:

A: male B: height over 5' 8" (173 cm) C: blue eyes

Then the inequality becomes that the number of men students who do not have a height over 5' 8" plus the number of students, male and female, with a height over 5' 8" but who do not have blue eyes is greater than or equal to the number of men students who do not have blue eyes. I absolutely guarantee that for any collection of people this will turn out to be true.

It is important to stress that we are not making any statistical assumption: the class can be big, small or even zero size. Also, we are not assuming that the parameters are independent: note that there tends to be a correlation between gender and height.

Sometimes people have trouble with the theorem because we will be doing a variation of a technique called proof by negation. For example, here is a syllogism:

All spiders have six legs. All six legged creatures have wings. Therefore all spiders have wings.

If we ever observe a spider that does not have wings, then we know that at least one and possibly both of the assumptions of the syllogism are incorrect. Similarly, we will derive the inequality and then show an experimental circumstance where it is not true. Thus we will know that at least one of the assumptions we used in the derivation is wrong.

Also, we will see that the proof and its experimental tests have absolutely nothing to do with Quantum Mechanics.

Now we are ready for the proof itself. First, I assert that:

Number(A, not B, C) + Number(not A, B, not C) must be either 0 or a positive integer

or equivalently:

Number(A, not B, C) + Number(not A, B, not C) greater than or equal to 0

This should be pretty obvious, since either no members of the group have these combinations of properties or some members do.

Now we add Number(A, not B, not C) + Number(A, B, not C) to the above expression. The left hand side is:

Number(A, not B, C) + Number(A, not B, not C) + Number(not A, B, not C) + Number(A, B, not C)

and the right hand side is:

0 + Number(A, not B, not C) + Number(A, B, not C)

But this right hand side is just:

Number(A, not C)

since for all members either B or not B must be true. In the classroom example above, when we counted the number of men without blue eyes we include both those whose height was over 5' 8" and those whose height was not over 5' 8". Above we wrote since for all members either B or not B must be true. This will turn out to be important.

We can similarly collect terms and write the left hand side as:

Number(A, not B) + Number(B, not C)

Since we started the proof by asserting that the left hand side is greater than or equal to the right hand side, we have proved the inequality, which I re-state:

Number(A, not B) + Number(B, not C) greater than or equal to Number(A, not C)

We have made two assumptions in the proof. These are:

Logic is a valid way to reason. The whole proof is an exercise in logic, at about the level of the Fun With Numbers puzzles one sometimes sees in newspapers and magazines.

Parameters exist whether they are measured or not. For example, when we collected the terms Number(A, not B, not C) + Number(A, B, not C) to get Number(A, not C), we assumed that either not B or B is true for every member.
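Because the proof is pure counting, it can be checked mechanically for any finite collection. Here is a minimal sketch in Python; the random collections and index names are illustrative, not part of Harrison's text:

```python
import random

# Each object is a triple of booleans (A, B, C), e.g. (male, tall, blue_eyes).
# Bell's inequality:
#   Number(A, not B) + Number(B, not C) >= Number(A, not C)
# must hold for ANY collection: no statistical or independence assumption.

def count(objects, first, second):
    """Count objects having the first property but not the second."""
    return sum(1 for obj in objects if obj[first] and not obj[second])

A, B, C = 0, 1, 2
random.seed(0)
for trial in range(1000):
    size = random.randint(0, 50)  # the class can be big, small or even empty
    objects = [tuple(random.random() < 0.5 for _ in range(3))
               for _ in range(size)]
    lhs = count(objects, A, B) + count(objects, B, C)
    rhs = count(objects, A, C)
    assert lhs >= rhs  # never fails for ordinary objects
```

The assertion never fires, for exactly the reason given in the proof: every member counted in Number(A, not C) is also counted on the left-hand side, because either B or not B holds for it.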

Applying Bell's inequality to electron spin

Consider a beam of electrons from an electron gun. Let us set the following assignments for the three parameters of Bell's inequality: A: electrons are spin-up for an up being defined as straight up, which we will call an angle of zero degrees. B: electrons are spin-up for an orientation of 45 degrees. C: electrons are spin-up for an orientation of 90 degrees. Then Bell's inequality will read:

Number(spin-up zero degrees, not spin-up 45 degrees) + Number(spin-up 45 degrees, not spin-up 90 degrees) greater than or equal to Number(spin-up zero degrees, not spin-up 90 degrees)

But consider trying to measure, say, Number(A, not B). This is the number of electrons that are spin-up for zero degrees, but are not spin-up for 45 degrees. Being not spin-up for 45 degrees is, of course, being spin-down for 45 degrees.

We know that if we measure the electrons from the gun, one-half of them will be spin-up and one-half will be spin-down for an orientation of 0 degrees, and which will be the case for an individual electron is random. Similarly, if we measure the electrons with the filter oriented at 45 degrees, one-half will be spin-down and one-half will be spin-up.

But if we try to measure the spin at both 0 degrees and 45 degrees we have a problem.

The figure shows a measurement first at 0 degrees and then at 45 degrees. Of the electrons that emerge from the first filter, 85% will pass the second filter, not 50%. Thus for electrons that are measured to be spin-up for 0 degrees, 15% are spin-down for 45 degrees.

Thus measuring the spin of an electron at an angle of zero degrees irrevocably changes the number of electrons which are spin-down for an orientation of 45 degrees. If we measure at 45 degrees first, we change whether or not it is spin-up for zero degrees. Similarly for the other two terms in this application of the inequality. This is a consequence of the Heisenberg Uncertainty Principle. So this inequality is not experimentally testable.
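The 85% figure quoted above is not arbitrary: for spin-1/2 particles the standard quantum rule gives the probability of passing a second filter rotated by an angle theta as cos²(theta/2). A quick check (a sketch; the function name is mine):

```python
import math

def pass_probability(theta_degrees):
    """Probability that a spin-up electron passes a second filter
    rotated by theta degrees: P = cos^2(theta / 2) for spin-1/2."""
    return math.cos(math.radians(theta_degrees) / 2) ** 2

print(pass_probability(45))  # ~0.854, i.e. about 85% pass, 15% do not
print(pass_probability(90))  # ~0.5: filters at right angles pass half
```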

In our classroom example, the analogy would be that determining the gender of the students would change their height. Pretty weird, but true for measuring electron spin.

However, recall the correlation experiments that we discussed earlier. Imagine that the electron pairs that are emitted by the radioactive substance have a total spin of zero. By this we mean that

if the right hand electron is spin-up its companion electron is guaranteed to be spin-down provided the two filters have the same orientation.

Say in the illustrated experiment the left hand filter is oriented at 45 degrees and the right hand one is at zero degrees. If the left hand electron passes through its filter then it is spin-up for an orientation of 45 degrees. Therefore we are guaranteed that if we had measured its companion electron it would have been spin-down for an orientation of 45 degrees. We are simultaneously measuring the right-hand electron to determine if it is spin-up for zero degrees. And since no information can travel faster than the speed of light, the left hand measurement cannot disturb the right hand measurement. So we have beaten the Uncertainty Principle: we have determined whether or not the electron to the right is spin-up zero degrees, not spin-up 45 degrees by measuring its spin at zero degrees and its companion's spin at 45 degrees.

Now we can write the Bell inequality as:

Number(right spin-up zero degrees, left spin-up 45 degrees) + Number(right spin-up 45 degrees, left spin-up 90 degrees) greater than or equal to Number(right spin-up zero degrees, left spin-up 90 degrees)

This completes our proof of Bell's Theorem.
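Quantum mechanics makes a definite prediction for each of the three counts in this inequality. For total-spin-zero (singlet) pairs, the textbook prediction for the fraction of pairs found spin-up at angle a on one side and spin-up at angle b on the other is (1/2)sin²((a - b)/2). Plugging in the three angle pairs (a sketch of the arithmetic, not of the experiment):

```python
import math

def frac_up_up(a_deg, b_deg):
    """Singlet-pair prediction: fraction of pairs measured spin-up at
    angle a on one side and spin-up at angle b on the other,
    P = (1/2) * sin^2((a - b) / 2)."""
    return 0.5 * math.sin(math.radians(a_deg - b_deg) / 2) ** 2

lhs = frac_up_up(0, 45) + frac_up_up(45, 90)  # ~0.073 + ~0.073 = ~0.146
rhs = frac_up_up(0, 90)                       # ~0.25
print(lhs >= rhs)  # False: the quantum prediction violates the inequality
```

The left side comes to about 0.146, which falls below the right side's 0.25. This numerical violation is exactly what the experiments described next confirm.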

The same theorem can be applied to measurements of the polarisation of light, which is equivalent to measuring the spin of photon pairs.

The experiments have been done. For electrons the left polarizer is set at 45 degrees and the right one at zero degrees. A beam of, say, a billion electrons is measured to determine Number(right spin-up zero degrees, left spin-up 45 degrees). The polarizers are then set at 90 degrees/45 degrees, another billion electrons are measured, then the polarizers are set at 90 degrees/zero degrees for another billion electrons.

The result of the experiment is that the inequality is violated. The first published experiment was by Clauser, Horne, Shimony and Holt in 1969 using photon pairs. The experiments have been repeated many times since. In the last section we made two assumptions to derive Bell's inequality, which here become:

Logic is valid. Electrons have spin in a given direction even if we do not measure it. Now we have added a third assumption in order to beat the Uncertainty Principle:

No information can travel faster than the speed of light. We will state these a little more succinctly as:

Logic is valid. There is a reality separate from its observation. Locality. You will recall that we discussed proofs by negation. The fact that our final form of Bell's inequality is experimentally violated indicates that at least one of the three assumptions we have made has been shown to be wrong.

You will also recall that earlier we pointed out that the theorem and its experimental tests have nothing to do with Quantum Mechanics. However, the fact that Quantum Mechanics correctly predicts the correlations that are experimentally observed indicates that the theory too violates at least one of the three assumptions. Finally, as we stated, Bell's original proof was in terms of hidden variable theories. His assumptions were:

Logic is valid.

Hidden variables exist. Hidden variables are local.

Most people, including me, view the assumption of local hidden variables as very similar to the assumption of a local reality.

As can be easily imagined, many people have tried to wiggle out of this profound result. Some attempts have critiqued the experimental tests. One argument is that since we set the two polarizers at some set of angles and then collect data for, say, a billion electrons there is plenty of time for the polarizers to know each other's orientation, although not by any known mechanism. More recent tests set the orientation of the polarizers randomly after the electrons have left the source. The results of these tests are the same as the previous experiments: Bell's inequality is violated and the predicted Quantum correlations are confirmed. Still other tests have set the distance between the two polarizers at 11 km, with results again confirming the Quantum correlations.

Another critique has been that since the correlated pairs emitted by the source go in all directions, only a very small fraction of them actually end up being measured by the polarizers. Another experiment using correlated Beryllium atoms measured almost all of the pairs, with results again confirming the Quantum correlations.

There is another objection to the experimental tests that, at least so far, nobody has managed to get totally around. We measure a spin combination of, say, zero degrees and 45 degrees for a collection of electrons and then measure another spin combination, say 45 degrees and 90 degrees, for another collection of electrons. In our classroom example, this is sort of like measuring the number of men students whose height is not over 5' 8" in one class, and then using another class of different students to measure the number of students whose height is over 5' 8" but who do not have blue eyes. The difference is that a collection of, say, a billion electrons from the source in the correlation experiments always behaves identically, within small and expected statistical fluctuations, with every other collection of a billion electrons from the source. Since that fact has been verified many times for all experiments of all types, we assume it is true when

we are doing these correlation experiments. This assumption is an example of inductive logic; of course we assumed the validity of logic in our derivation.

Sometimes one sees statements that Bell's Theorem says that information is being transmitted at speeds greater than the speed of light. So far I have not seen such an argument that I believe is correct. If we are sitting by either of the polarisers we see that one-half the electrons pass and one-half do not; which is going to be the case for an individual electron appears to be random. Thus, the behavior at our polariser does not allow us to gain any information about the orientation of the other polariser. It is only in the correlation of the electron spins that we see something strange. d'Espagnat uses the word influence to describe what may be traveling at superluminal speeds.

Imagine we take a coin and carefully saw it in half so that one piece is a heads and the other is a tails. We put each half in a separate envelope and carry them to different rooms. If we open one of the envelopes and see a heads, we know that the other envelope contains a tails. This correlation experiment corresponds to spin measurements when both polarisers have the same orientation. It is when we have the polarisers at different orientations that we see something weird.

So far we don't know which of the assumptions we made in the proof are incorrect, so we are free to take our pick of one, two or all three. We shall close this section by briefly considering the consequences of discarding the assumption of the validity of logic.

What If Logic Is Invalid?

It has been suspected since long before Bell that Quantum Mechanics is in conflict with classical logic. For example, deductive logic is based on a number of assumptions, one of which is the Principle of the Excluded Middle: all statements are either true or false.

But consider the following multiple choice test question:

1. The electron is a wave.
2. The electron is a particle.
3. All of the previous.
4. None of the above.

From wave-particle duality we know that statements 1 and 2 are each both sort of true and sort of false. This seems to call into question the Principle of the Excluded Middle. Thus, some people have worked on a multi-valued logic that they hope will be more consistent with the tests of Bell's Theorem and therefore with Quantum Mechanics.

Mathematics itself can be viewed as just a branch of deductive logic, so if we revise the rules of logic we will need to devise a new mathematics. [41]

Harrison wonders about the validity of logic and mathematical reasoning. But I believe that the problem has more fundamentally to do with the peculiarity of nature (and consequently with the human mind). One consequence of Bell's theorem is that no local hidden variables can account for the observed correlations. David Bohm was the one who introduced a non-local hidden variable theory. We will return to this problem later on. It suffices here to say that a non-local hidden variable theory may make the problem more complicated instead of solving it, because the variables are then not only non-local but also hidden. This way the whole problem becomes more extended. Somehow it seems that the initial conditions set by the assumptions/experiment determine the final result in a definite way. Even if we make a change during the experiment, again we change the conditions at that instant, so that the whole experiment will progress accordingly. Still, we seem not to understand how exactly this mechanism works, so we talk about spooky action at a distance, hidden variables, strange correlations, etc.

Non-local character of the unconscious Consciousness is not just a passive procedure that consists of superimposed events. It is a spontaneous and independent mechanism of action and reaction that may determine which state of an event, or of a set of events, will be manifested against all the others, by an act of self-awareness. This way consciousness naturally evolves in a step-by-step, causally described progress. Nevertheless its origins are found in another world where spontaneity rules. The

underlying mechanism of the conscious mind is found in the unconscious. There, spontaneous inference of reality gives rise to the possibility of a non-locally interrelated set of things at the smaller scales, which nevertheless may be expressed at the larger scales of the universe.

What I would also like to add here is the possibility of a two-fold aspect of consciousness. On the one hand, the conscious mind functions and acts causally and, more or less, retrospectively, while, on the other hand, the unconscious is spontaneous and acausal, therefore non-local. Despite the fact that quantum entanglement needs a certain, and indeed very careful, preparation of a system, it may be true that in fact we are in a state of entanglement with everything else in the universe, although we are always faced with the paradoxical necessity to consciously observe the effects in order to realize this kind of universal entanglement.

A quantum acausal principle So the world of cause and effect belongs to logic, while the world of non-locality and spontaneity belongs to quantum thinking and the unconscious. I use the notion of consciousness for both the conscious and the unconscious because consciousness means much more than analytical reasoning. This way of thinking may soften our rigid world of rationality and may provide a guide towards the explanation of strange phenomena of thought, such as foresight, hunches and premonitions.

Unconscious inference requires an a priori, inherent and beforehand experience of the world. So somewhere inside us, or somewhere in the universe, there exist some forms, archetypes, operators, properties, universalities, whatever their name, which comprise the basic structure of our minds. These universal truths or entities may be expressed in a formal way as well as in casual language, with words, notions, or mathematical terms and symbols which might express the deeper connections and relations of ratios and symmetries in the world. This would be the beginning of a new world, since not only will science have been transformed into flexible and generalized methods, but common language will also include all these new notions at the highest and deepest level of understanding.

Consciousness makes the wave-function collapse

This sort of entanglement or coincidence between thought and the material world is expressed physically with the notion of the wave-function. The wave-function in quantum physics began as an extension of the equation of motion but ended up including all sorts of states. Mainly it is used to calculate a probability distribution (of something). But this distribution, even if it evolves with time, is not located at a specific point in space and time. In fact it is distributed to infinity. When a measurement, thus an act of observation, takes place, the wave-function collapses everywhere, simultaneously. But this is when we take a value of an observable locally. The rest of the wave-function remains obscure, and if we want to focus our attention on another of its parts or phases then we are faced with the uncertainty principle. It is the same as if we focus on a point of attention and try to figure out what something nearby might be without moving our eyes. The greater the distance, the fainter the picture we can get. But even at the point of focus the object we observe is obscured or agitated by the act of observation itself. This effect may not be measurable when we observe a moving bus, but as far as an electron is concerned, when we hit it with a photon from our microscope in order to measure it, the effect of the photon on the electron is significant. While measuring its position, for example, its momentum will have changed. But equivalently we may say that, because of the object we observe, our attention will shift. Thus the uncertainty principle!

Schrödinger's cat

According to Wikipedia, Schrödinger's cat is a thought experiment, sometimes described as a paradox, devised by Austrian physicist Erwin Schrödinger in 1935. It illustrates what he saw as the problem of the Copenhagen interpretation of quantum mechanics applied to everyday objects, resulting in a contradiction with common sense. The scenario presents a cat that may be both alive and dead, depending on an earlier random event. Although the original experiment was imaginary, similar principles have been researched and used in practical applications. The thought experiment is also often featured in theoretical discussions of the interpretations of quantum mechanics. In the course of developing this experiment, Schrödinger coined the term Verschränkung (entanglement).

Schrödinger intended his thought experiment as a discussion of the EPR article- named after its authors Einstein, Podolsky, and Rosen- in 1935. The EPR article highlighted the strange nature of quantum entanglement, which is a characteristic of a quantum state that is a combination of the states of two systems (for example, two subatomic particles), that once interacted but were then separated and are not each in a definite state. The Copenhagen interpretation implies that the state of the two systems collapses into a definite state when one of the systems is measured. Schrödinger and Einstein exchanged letters about Einstein's EPR article, in the course of which Einstein pointed out that the state of an unstable keg of gunpowder will, after a while, contain a superposition of both exploded and unexploded states.

To further illustrate, Schrödinger describes how one could, in principle, transpose the superposition of an atom to large-scale systems. He proposed a scenario with a cat in a sealed box, wherein the cat's life or death depended on the state of a subatomic particle. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead (to the universe outside the box) until the box is opened. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; quite the reverse, the paradox is a classic reductio ad absurdum. The thought experiment illustrates quantum mechanics and the mathematics necessary to describe quantum states. Intended as a critique of just the Copenhagen interpretation (the prevailing orthodoxy in 1935), the Schrödinger cat thought experiment remains a typical touchstone for limited interpretations of quantum mechanics. Physicists often

use the way each interpretation deals with Schrödinger's cat as a way of illustrating and comparing the particular features, strengths, and weaknesses of each interpretation.

Schrödinger wrote: One can even set up quite ridiculous cases. A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small that perhaps in the course of the hour, one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges, and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The psi-function of the entire system would express this by having in it the living and dead cat mixed or smeared out in equal parts. It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a blurred model for representing reality. In itself, it would not embody anything unclear or contradictory. There is a difference between a shaky or out-of-focus photograph and a snapshot of clouds and fog banks. Schrödinger's famous thought experiment poses the question, when does a quantum system stop existing as a superposition of states and become one or the other? (More technically, when does the actual quantum state stop being a linear combination of states, each of which resembles different classical states, and instead begin to have a unique classical description?) If the cat survives, it remembers only being alive. But explanations of the EPR experiments that are consistent with standard microscopic quantum mechanics require that macroscopic objects, such as cats and notebooks, do not always have unique classical descriptions. The thought experiment illustrates this apparent paradox.
Our intuition says that no observer can be in a mixture of states, yet the cat, it seems from the thought experiment, can be such a mixture. Is the cat required to be an observer, or does its existence in a single well-defined classical state require another external observer? Each alternative seemed absurd to Albert Einstein, who was impressed by the ability

of the thought experiment to highlight these issues. In a letter to Schrödinger dated 1950, he wrote: You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality, if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality- reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gunpowder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation. Since Schrödinger's time, other interpretations of quantum mechanics have been proposed that give different answers to the questions posed by Schrödinger's cat of how long superpositions last and when (or whether) they collapse.

The most commonly held interpretation of quantum mechanics is the Copenhagen interpretation. In the Copenhagen interpretation, a system stops being a superposition of states and becomes either one or the other when an observation takes place. This experiment makes apparent the fact that the nature of measurement, or observation, is not well-defined in this interpretation. The experiment can be interpreted to mean that while the box is closed, the system simultaneously exists in a superposition of the states decayed nucleus/dead cat and undecayed nucleus/living cat, and that only when the box is opened and an observation performed does the wave function collapse into one of the two states.

However, one of the main scientists associated with the Copenhagen interpretation, Niels Bohr, never had in mind the observer-induced collapse of the wave function, so that Schrödinger's cat did not pose any riddle to him. The cat would be either dead or alive long before the box is opened by a conscious observer. Analysis of an actual experiment found that measurement alone (for example by a Geiger counter) is sufficient to collapse a quantum wave function before there is any conscious observation of the measurement. The view that the observation is taken when a particle from the nucleus hits the detector can be developed into objective collapse theories.

The thought experiment requires an unconscious observation by the detector in order for magnification to occur.

In 1957, Hugh Everett formulated the many-worlds interpretation of quantum mechanics, which does not single out observation as a special process. In the many-worlds interpretation, both alive and dead states of the cat persist after the box is opened, but are decoherent from each other. In other words, when the box is opened, the observer and the possibly-dead cat split into an observer looking at a box with a dead cat, and an observer looking at a box with a live cat. But since the dead and alive states are decoherent, there is no effective communication or interaction between them. When opening the box, the observer becomes entangled with the cat, so observer states corresponding to the cat's being alive and dead are formed; each observer state is entangled or linked with the cat so that the observation of the cat's state and the cat's state correspond with each other. Quantum decoherence ensures that the different outcomes have no interaction with each other. The same mechanism of quantum decoherence is also important for the interpretation in terms of consistent histories. Only the dead cat or alive cat can be a part of a consistent history in this interpretation.

Roger Penrose criticises this: I wish to make it clear that, as it stands, this is far from a resolution of the cat paradox. For there is nothing in the formalism of quantum mechanics that demands that a state of consciousness cannot involve the simultaneous perception of a live and a dead cat.

However, the mainstream view (without necessarily endorsing many-worlds) is that decoherence is the mechanism that forbids such simultaneous perception.

A variant of the Schrödinger's cat experiment, known as the quantum suicide machine, has been proposed by cosmologist Max Tegmark. It examines the Schrödinger's cat experiment from the

point of view of the cat, and argues that by using this approach, one may be able to distinguish between the Copenhagen interpretation and many-worlds. [43]

Observer effect According to Wikipedia, the term observer effect refers to changes that the act of observation will make on a phenomenon being observed. This is often the result of instruments that, by necessity, alter the state of what they measure in some manner. A commonplace example is checking the pressure in an automobile tire; this is difficult to do without letting out some of the air, thus changing the pressure. This effect can be observed in many domains of physics.

The observer effect on a physical process can often be reduced to insignificance by using better instruments or observation techniques. However in quantum mechanics, which deals with very small objects, it is not possible to observe a system without changing the system, so the observer must be considered part of the system being observed. For an electron to become detectable, a photon must first interact with it, and this interaction will change the path of that electron. It is also possible for other, less direct means of measurement to affect the electron.

The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key topic is that of wave function collapse, for which some interpretations assert that measurement causes a discontinuous change into a non-quantum state, which no longer evolves. The superposition principle of quantum physics says that for a wave function, a measurement will yield one of the possible eigenvalues of the measured operator, leaving the system in the corresponding eigenfunction. Once we have measured the system, we know its current state, and this stops it from being in one of its other states. This means that the type of measurement that we do on the system affects the end state of the system.
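A toy model of that measurement rule for a two-state system, assuming nothing beyond the Born rule (outcome probabilities are squared amplitudes, and the state collapses to the eigenstate actually observed; the function and variable names are mine):

```python
import random

def measure(amplitudes):
    """Measure a normalized real two-state superposition [a0, a1].
    Returns the observed eigenvalue index and the collapsed state."""
    p0 = amplitudes[0] ** 2  # Born rule: probability = amplitude squared
    outcome = 0 if random.random() < p0 else 1
    collapsed = [1.0, 0.0] if outcome == 0 else [0.0, 1.0]
    return outcome, collapsed

state = [0.6, 0.8]  # superposition: 36% chance of state 0, 64% of state 1
outcome, state = measure(state)
# Once measured, the system is no longer in its other state: repeating the
# same measurement on the collapsed state always gives the same answer.
assert all(measure(state)[0] == outcome for _ in range(100))
```

The choice of which operator to measure fixes which eigenstates are possible end states, which is the point made in the paragraph above.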

Wikipedia also says that an important aspect of the concept of measurement has been clarified in some QM experiments where a single electron proved sufficient as an observer- there is no need for a conscious observer, and that additional problems related to decoherence arise when the observer too is modeled as a quantum system. [44]

This implies that the mere presence of an unconscious object can cause the wavefunction of its environment to collapse, revealing in this way the interconnectedness of all things. Objects and subjects can exist alone, but this independent existence has no meaning in a world of superpositions. In other words, there seems to be a property of things which has to do with this effect of entanglement, beyond any deterministic property they have when they stand on their own. When we shed light on an object in order to observe it, the light which illuminates it at the same time somehow obscures it. It creates an attentional blind spot, which is inevitable because of the necessity of observation. So as we try to understand things using our instruments, senses and thought, at the same time we shift their form and meaning, because of our ideas, our beliefs, the restrictions we impose on them, even because of the mere form of our presence. This is the deepest nature of the uncertainty principle.

Participatory principle

The inherent role of the observer (or even an unconscious inanimate object) in any physical process should be expected at least as far as a physical interaction between the parts of the process is concerned. Here, however, it is implied that the change in the system is caused not by the natural forces produced by the properties of matter but by the consciousness of the observer. So if we suppose a purposeful action on the part of the observer, we should accept an intelligent (either causal-conscious or acausal-unconscious) agent in nature which uses the properties of matter, triggers the forces and thus guides natural processes.

According to the site physics.about.com, the Participatory Anthropic Principle, or PAP, is the idea that the universe requires observers, because without observers the universe could not actually exist. This controversial claim is based on the traditional Copenhagen interpretation of quantum physics, which requires an act of observation to resolve the superposition of states in a quantum wavefunction. It is one particularly intriguing variant of the anthropic principle. It appears that PAP was first proposed by John Archibald Wheeler, though it's unclear to what extent Wheeler really intended this suggestion to be taken seriously. One of the earliest references to the concept was when Wheeler discussed the idea of living in a participatory universe related to his classic It from Bit concept, which rests at the heart of quantum information theory. Here is a quote from Wheeler in 1990: It from bit. Otherwise put, every it- every particle, every field of force, even the space-time continuum itself- derives its function, its meaning, its very existence entirely- even if in some contexts indirectly- from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom- a very deep bottom, in most instances- an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe.

At first glance, there is a fundamental problem with this approach in that it took several billion years after the Big Bang for the universe to reach any condition where it's reasonable to expect life to have formed (and certainly for it to have evolved through the known processes of Darwinian evolution). However, one possible way out of this is to accept that the universe existed in a superposition of states for billions of years until there was finally some sort of observer, at which point the wavefunction would have collapsed into the state that allowed that observation to take place. This scenario seems like it would be fully consistent with the Copenhagen interpretation of quantum physics, and certainly in line with the claims made by Wheeler in a 2006 radio interview: We are participators in bringing into being not only the near and here but the far away and long ago. We are, in this sense, participators in bringing about something of the universe in the distant past, and if we have one explanation for what's happening in the distant past, why should we need more?

The Participatory Anthropic Principle is a controversial theory and is not accepted by a majority of physicists as being a principle that actually applies to the universe. Though the Copenhagen interpretation is often applied to resolve the fundamental problems of quantum physics, such as those in the quantum double slit experiment, most physicists don't really believe that an observer is required to resolve all interactions. [45]

The figure above was made by Wheeler, who explains it as follows: Symbolic representation of the Universe as a self-excited system brought into being by self-reference. The universe gives birth to communicating participators. Communicating participators give meaning to the universe. With such a concept goes the endless series of receding reflections one sees in a pair of facing mirrors.

How much are things affected when we observe them? Can we devise a completely objective experiment? Do things have pre-established properties, and do they exist before we observe them? Do we exist when others don't pay attention to us, and how different do we become when somebody is watching us? I personally believe that things do pre-exist, but when we look at them and experiment with them we give them a personal and often highly subjective form and meaning. This also means that things change when we deal with them, but this process also changes us. However, we need some form of interaction, or entanglement, so that a reciprocal connection may be established. Otherwise things go by indifferently and unchanged.

Observation bias

How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can't catch smaller fish.

Knowledge about limitations of your data collection process affects what inferences you can draw from the data. In the case of the fish-size-estimation problem, a selection effect- the net's sampling only the big fish- vitiates any attempt to extrapolate from the catch to the population remaining in the water. Had your net instead sampled randomly from all the fish, then finding a hundred fishes all greater than a foot would have been good evidence that few if any of the fish remaining are much smaller.

These thoughts belong to Nick Bostrom from his site anthropic-principle.com. Bostrom more generally refers to the human factor which affects in a decisive and irreversible way what we observe. Anthropic reasoning, Bostrom continues, which seeks to detect, diagnose, and cure such biases, is a philosophical goldmine. Few fields are so rich in empirical implications, touch on so many important scientific questions, pose such intricate paradoxes, and contain such generous quantities of conceptual and methodological confusion that need to be sorted out. Working in this area is a lot of intellectual fun.
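Bostrom's net metaphor can be made concrete with a toy simulation. The pond, its size distribution and the six-inch mesh are all invented for illustration; the point is only that a catch filtered by the net says nothing about the part of the population the net cannot hold.

```python
import random

random.seed(1)
# Hypothetical pond: 100,000 fish with lengths (in inches) drawn uniformly.
pond = [random.uniform(1.0, 24.0) for _ in range(100_000)]

def catch(net_mesh, n=100):
    """Sample n fish, but the net only retains fish longer than its mesh size."""
    catchable = [f for f in pond if f > net_mesh]
    return random.sample(catchable, n)

biased = catch(net_mesh=6.0)   # this net screens out everything under 6 inches
fair = catch(net_mesh=0.0)     # a net that samples the pond at random

# The biased catch can never reveal the small fish, no matter how large it is.
assert min(biased) > 6.0
print(min(biased), min(fair))
```

The smallest fish in the biased catch is always above six inches by construction, while the smallest fish in the fair catch reflects the true lower tail of the pond.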

He uses the following example: Let's look at an example where an observation selection effect is involved: We find that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did (or we will trace our origin to a planet where intelligent life evolved, in case we are born in a space colony). Our data point- that intelligent life arose on our planet- is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. This datum therefore does not distinguish between the two hypotheses, provided that on both hypotheses intelligent life would have evolved somewhere. (On the other hand, if the intelligent-life-is-improbable hypothesis asserted that intelligent life was so improbable that it was unlikely to have evolved anywhere in the whole cosmos, then the evidence that intelligent life evolved on Earth would count against it. For this hypothesis would not have predicted our observation. In fact, it would have predicted that there would have been no observations at all.)

A related problem is the so-called fine-tuning of the universe for the appearance of intelligent life. Another example of reasoning that invokes observation selection effects is the attempt to provide a possible (not necessarily the only) explanation of why the universe appears fine-tuned for intelligent life in the sense that if any of various physical constants or initial conditions had been even very slightly different from what they are then life as we know it would not have existed. The idea behind this possible anthropic explanation is that the totality of spacetime might be very huge and may contain regions in which the values of fundamental constants and other parameters differ in many ways, perhaps according to some broad random distribution. If this is the case, then we should not be amazed to find that in our own region physical conditions appear fine-tuned. Owing to an obvious observation selection effect, only such fine-tuned regions are observed. Observing a fine-tuned region is precisely what we should expect if this theory is true, and so it can potentially account for available data in a neat and simple way, without having to assume that conditions just happened to turn out right through some immensely lucky- and arguably a priori extremely improbable- cosmic coincidence. [46]

Here we see the anthropic bias in the use and interpretation of data: if we assume that the universe is vast and may include different regions- or that there are parallel universes- with different physical constants or initial conditions, then presumably it is the nets that are to blame, the nets which we choose and use to search for proofs that fit our own taste.
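The selection effect behind this anthropic explanation can be sketched numerically. The "multiverse" below is a made-up ensemble of regions with one random constant each, and the life-permitting window is arbitrary; the sketch only shows that every observer, wherever one arises, necessarily finds a fine-tuned region.

```python
import random

random.seed(0)
# Toy multiverse: each region gets a random "constant"; observers only appear
# when that constant falls inside a narrow life-permitting window.
WINDOW = (0.495, 0.505)   # hypothetical fine-tuned range, about 1% of values
regions = [random.random() for _ in range(1_000_000)]
inhabited = [c for c in regions if WINDOW[0] < c < WINDOW[1]]

fraction_tuned = len(inhabited) / len(regions)
print(f"fine-tuned regions overall: {fraction_tuned:.3%}")

# Conditioned on containing an observer, every observed region is fine-tuned:
assert all(WINDOW[0] < c < WINDOW[1] for c in inhabited)
```

Fine-tuned regions are rare in the ensemble as a whole, yet the only regions that ever get observed are the fine-tuned ones; the observers' "net" samples nothing else.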

Not only may we filter the size of the fish we want to measure, but the fish we do catch are also insufficiently interpreted. In other words, it is not only the size of the nets that matters but also their quality. This brings to mind Young's double-slit experiment, where an electron behaves either as a particle or as a wave, depending on the experimental set-up. Here we are dealing not with the size but with the form of the fish. How many different forms might an electron have, of which we recognize only two?

But here we reach the problem not of the equipment but of the observer himself. He may have some free choices concerning the size of his nets, but does he have any choice concerning the interpretation he gives of the fish of whatever size? Reality consists not only of all the fish found in the ocean, but also of what we make of any separate set of fish. In other words, is our knowledge unlimited if we use all sizes of nets, or are we doomed to come to incomplete conclusions no matter how big our collection is?

Free choices as the variables of free will

Does the fact that observation changes the world around us imply free will on the part of intelligent agents, or is observation a more or less unconscious process which has basically to do with the inescapable entanglement between the observer and what is observed? I would say the second case is the more probable. Entanglement itself has this property. Changes happen instantaneously at an unconscious level, and only then do we realize what has happened, so that we take action. This may not leave us many choices, but this may not be the point either. Living in an entangled world of intelligent information doesn't mean that we can do whatever we want. Instead, the meaning could be that we understand this connection, so that we learn how to live in unison with ourselves and the world, and make the best out of it.

Von Neumann's theory of consciousness

When we observe the world in order to understand it, we usually forget that we do observe it. The way we see the world is the way we think it is. On the other hand, we also forget that we are part of the world, so in some sense the world learns about itself through our own observation. Classical physics neglected the role of the observer as if he were something outside nature.

Quantum physics showed the role of observation in the answers we get, but again it didn't treat the observer as part of his experiment. Von Neumann's approach to quantum mechanics was such an effort: to unify the mind of the observer with the material object being observed. Here, we will discuss this option according to Henry Stapp in his essay Von Neumann's formulation of quantum mechanics and the role of mind in nature.

The non-locality controversy

It is certainly true that science rests ultimately on what we know. That fact is the basis of the new point of view. However, the tremendous successes of the classical physical theory inaugurated by Galileo, Descartes, and Newton during the seventeenth century had raised the hope and expectation that human beings could extract from careful observation, and the imaginative creation of testable hypotheses, a valid idea of the general nature and rules of behavior of the reality in which our human knowledge is embedded. Giving up on that hope is indeed a radical shift. On the other hand, classical physical theory left part of reality out, namely our conscious experiences.

Hence it had no way to account either for the existence of our conscious experiences or for how knowledge can reside in those experiences. Thus bringing human experience into our understanding of reality seems to be a step in the right direction. It might allow science to explain, eventually, how we know what we know. But Copenhagen quantum theory is only a half-way house: it brings in human experience, but at the stiff price of excluding the rest of reality.

Yet how could the renowned scientists who created Copenhagen quantum theory ever believe, and sway most other physicists into believing, that a complete science could leave out the physical world? It is undeniable that we can never know for sure that a proposed theory of the world around us is really true. But that is not a sufficient reason to renounce, as a matter of principle, the attempt to form at least a coherent idea of what the world could be like, and rules by which it could work. Clearly some extraordinarily powerful consideration was in play.

The point is this: If nature really is nonlocal, then the way is open to the development of a rationally coherent theory of nature that integrates the subjective knowings introduced by Copenhagen quantum theory into an objectively existing and evolving physical reality. The basic framework is provided by the version of quantum theory constructed by John von Neumann.

John von Neumann was one of the most brilliant mathematicians and logicians of his age. He followed where the mathematics and logic led. From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe, and it should be treated as such. Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device.

The mathematical rules of quantum theory specify clearly how the measuring devices are to be included in the quantum mechanically described physical world. Von Neumann first formulated carefully the mathematical rules of quantum theory, and then followed where that mathematics led. It led first to the incorporation of the measuring devices into the quantum mechanically described physical universe, and eventually to the inclusion of everything built out of atoms and their constituents. Our bodies and brains thus become, in von Neumann's approach, parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation that heals the rupturing of the physical world introduced by the Copenhagen approach. It postulates, for each observer, that each experiential event is connected in a certain specified way to a corresponding brain event. The dynamical rules that connect mind and brain are very restrictive, and this leads to a mind-brain theory with significant explanatory power.

Non-locality and relativity

Von Neumann's objective theory immediately accounts for the faster-than-light transfer of information that seems to be entailed by the non-locality experiments. This feature- that there is some sort of objective instantaneous transfer of information- conflicts with the spirit of the theory of relativity. However, this quantum effect is of a subtle kind: it acts neither on material substance, nor on locally conserved energy-momentum, nor on anything else that exists in the classical conception of the physical world that the theory of relativity was originally designed to cover. It acts on a mathematical structure that represents, rather, information and propensities.

The physical world as information

Von Neumann makes clear the fact that he is trying to tie together the subjective perceptual and objective physical aspects of nature: it is inherently entirely correct that the measurement or related process of subjective perception is a new entity relative to the physical environment and is not reducible to the latter. Indeed, subjective perception leads to the intellectual inner life of the individual... experience only makes statements of the following type: an observer has made a certain (subjective) observation; and never any like this: a physical quantity has a certain value. In the final stage of his analysis he divides the world into parts I, II, and III, where part I was everything up to the retina of the observer, II was his retina, nerve tracts and brain, and III his abstract ego. Clearly, the abstract ego involves his consciousness. Von Neumann's formulation of quantum theory develops the dynamics of the interaction between these three parts.

The evolution of the physical universe involves three related processes. The first is the deterministic evolution of the state of the physical universe. It is controlled by the Schrödinger equation of relativistic quantum field theory. This process is a local dynamical process, with all the causal connections arising solely from interactions between neighboring localized microscopic elements. However, this local process holds only during the intervals between quantum events.

Each of these quantum events involves two other processes. The first is a choice of a Yes-No question by the mind-brain system. The second of these two processes is a choice by Nature of an answer, either Yes or No, to that question. This second choice is partially free: it is a random choice, subject to the statistical rules of quantum theory. The first choice is the analog in von Neumann theory of an essential process in Copenhagen quantum theory, namely the free choice made by the experimenter as to which aspect of nature is going to be probed. This choice of which aspect of nature is going to be probed, i.e., of which specific question is going to be put to nature, is an essential element of quantum theory: the quantum statistical rules cannot be applied until, and unless, some specific question is first selected.

In Copenhagen quantum theory this choice is made by an experimenter, and this experimenter lies outside the system governed by the quantum rules. This feature of Copenhagen quantum theory is not altered in the transition to von Neumann quantum theory: choice by an individual, of which question will be put to nature, is not controlled by any rules that are known or understood within contemporary physics. This choice on the part of the mind-brain-body system that constitutes the individual is, in this specific sense, a free choice: it is not governed by the physical laws of contemporary physics (i.e., quantum theory). This freedom constitutes a logical gap in the dynamical rules of quantum theory.

Only Yes-No questions are permitted: all other possibilities can be reduced to these. Thus each answer, Yes or No, injects one bit of information into the quantum universe. These bits of information are stored in the evolving objective quantum state of the universe, which is a compendium of these bits of information. But it evolves in accordance with the laws of atomic physics. Thus the quantum state has an ontological character that is in part matter-like, since it is expressed in terms of the variables of atomic physics, and evolves between events under the control of the laws of atomic physics. However, each event injects the information associated with a subjective perception by some observing system into the objective state of the universe.

This conceptualization of natural process arises not from some preconceived speculative intuition, but directly from an examination of the mathematical structure injected into science by our study of the structure of the relationships between our experiences. The quantum state of the universe is thus rooted in atomic properties, yet is an informational structure that interacts with, and carries into the future, the informational content of each mental event. This state has effective causal efficacy because it controls, via statistical laws, the propensities for the occurrence of subsequent events.

Once the physical world is understood in this way, as an atomistically stored compendium of locally efficacious bits of information, the instantaneous transfers of information along the preferred surfaces can now be understood to be changes, not just in human knowledge, as in the Copenhagen interpretation, but in an absolute state of objective information.

Mind-brain interaction and decoherence

Von Neumann quantum theory is essentially a theory of the interaction between the evolving objective state of the physical universe and a sequence of mental events, each of which is associated with a localized individual system. The theory specifies the general form of the interaction between subjective knowings associated with individual physical systems and the physical states of those systems. The mathematical structure automatically ensures that when the state of the individual physical system associated with a mental event is brought into alignment with the content of that mental event, the entire universe is simultaneously brought into alignment with that mental content. No special arrangement is needed to produce this key result: it is an unavoidable consequence of the quantum entanglements that are built into the mathematical structure.

An essential feature of quantum brain dynamics is the strong action of the environment upon the brain. This action creates a powerful tendency for the brain to transform almost instantly into an ensemble of components, each of which is very similar to an entire classically-described brain. I assume that this transformation does indeed occur, and exploit it in two important ways. First, this close connection to classical physics makes the dynamics easy to describe: classical language and imagery can be used to describe in familiar terms how the brain behaves. Second, this description in familiar classical terms makes it easy to identify the important ways in which the actual behavior differs from what classical physics would predict.

A key micro-property of the human brain pertains to the migration of calcium ions from the micro-channels through which these ions enter the interior of nerve terminals to the sites where they trigger the release of the contents of a vesicle of neurotransmitter. The quantum mechanical rules entail that each release of the contents of a vesicle of neurotransmitter generates a quantum splitting of the brain into different classically describable components, or branches. Evolutionary considerations entail that the brain must keep the brain-body functioning in a coordinated way and, more specifically, must plan and effectuate, in any normally encountered situation, a single coherent course of action that meets the needs of that individual. But due to the quantum splitting mentioned above, the quantum brain will tend to decompose into components that specify alternative possible courses of action. Thus the purely mechanical evolution of the state of the brain in accordance with the Schrödinger equation will normally cause the brain to evolve into a growing ensemble of alternative branches, each of which is essentially an entire classically described brain that specifies a possible plan of action.

This ensemble that constitutes the quantum brain is mathematically similar to an ensemble that occurs in a classical treatment when one takes into account the uncertainties in our knowledge of the initial conditions of the particles and fields that constitute the classical representation of a brain. This close connection between what quantum theory gives and what classical physics gives is the basic reason why von Neumann quantum theory is able to produce all of the correct predictions of classical physics. To unearth specific differences caused by quantum effects one can start from this similarity at the lowest-order approximation, which yields the classical results, but then dig deeper.

The passive and active roles of mind

The founders of quantum theory recognized that the mathematical structure of quantum theory is naturally suited for, and seems to require, bringing into the dynamical equations two separate aspects of the interaction between the physical universe and the minds of the experimenter/observers. The first of these two aspects is the role of the experimenter in choosing what to attend to; which aspect of nature he wants to probe; which question he wants to ask about the physical world. This is the active role of mind. The second aspect is the recognition, or coming to know, the answer that nature returns. This is the passive role of mind.

The active physical counterpart to the passive mental event

I have mentioned the Schrödinger evolution of the state S(t) of the universe. The second part of the orthodox quantum dynamics consists of an event that discards from the ensemble of quasi-classical elements mentioned above those elements that are incompatible with the answer that nature returns. This reduction of the prior ensemble of elements, which constitute the quantum mechanical representation of the brain, to the sub-ensemble compatible with the outcome of the query is analogous to what happens in classical statistical mechanics when new information about the physical system is obtained. However, in the quantum case one must in principle regard the entire ensemble of classically described brains as real, because interference between the different elements is in principle possible.

Each quantum event consists, then, of a pair of events, one physical, the other mental. The physical event reduces the initial ensemble that constitutes the brain prior to the event to the sub-ensemble consisting of those branches that are compatible with the informational content of the associated mental event.

This dynamical connection means that, during an interval of conscious thinking, the brain changes by an alternation between two processes. The first is the generation, by a local deterministic mechanical rule, of an expanding profusion of alternative possible branches, with each branch corresponding to an entire classically describable brain embodying some specific possible course of action. The quantum brain is the entire ensemble of these separate, but equally real, quasi-classical branches. The second process involves an event that has both physical and mental aspects. The physical aspect, or event, chops off all branches that are incompatible with the associated mental aspect, or event. For example, if the mental event is the experiencing of some feature of the physical world, then the associated physical event would be the updating of the brain's representation of that aspect of the physical world. This updating of the (quantum) brain is achieved by discarding from the ensemble of quasi-classical brain states all those branches in which the brain's representation of the physical world is incompatible with the information content of the mental event.

This connection is similar to a functionalist account of consciousness. But here it is expressed in terms of a dynamical interaction that is demanded by the requirement that the objective formulation of the theory yield the same predictions about connections between our conscious experiences that the empirically validated Copenhagen quantum theory gives. The interaction is the exact expression of the basic dynamical rule of quantum theory, which is the stipulation that each increment in knowledge is associated with a reduction of the quantum state to one that is compatible with the new knowledge.

The quantum brain is an ensemble of quasi-classical components. As just noted, this structure is similar to something that occurs in classical statistical mechanics, namely a classical statistical ensemble. But a classical statistical ensemble, though structurally similar to a quantum brain, is fundamentally a different kind of thing. It is a representation of a set of truly distinct possibilities, only one of which is real. A classical statistical ensemble is used when a person does not know which of the conceivable possibilities is real, but can assign a probability to each possibility. In contrast, all of the elements of the ensemble that constitute a quantum brain are equally real: no choice has yet been made among them. Consequently, and this is the key point, the entire ensemble acts as a whole in the determination of the upcoming mind-brain event.
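The difference between the two kinds of ensemble can be seen in a toy calculation. The qubit amplitudes and the Hadamard transform below are the standard textbook example, used here purely as an illustration: a coherent superposition, in which both branches are equally real, recombines deterministically, while a classical mixture of the same two possibilities does not.

```python
def hadamard(state):
    """Apply the Hadamard transform to a qubit written as amplitudes (a0, a1)."""
    a0, a1 = state
    r = 2 ** -0.5
    return (r * (a0 + a1), r * (a0 - a1))

def prob0(state):
    """Probability of finding the qubit in |0> (Born rule)."""
    return abs(state[0]) ** 2

# Quantum ensemble: both branches are equally real and can interfere,
# so the superposition recombines into |0> with certainty.
superposition = (2 ** -0.5, 2 ** -0.5)
quantum = prob0(hadamard(superposition))        # ~ 1.0

# Classical statistical ensemble: the system IS |0> or |1>, we merely don't
# know which; averaging over the two possibilities gives no interference.
classical = 0.5 * prob0(hadamard((1.0, 0.0))) + 0.5 * prob0(hadamard((0.0, 1.0)))
print(quantum, classical)                       # ~ 1.0 versus 0.5
```

The numerical gap between 1.0 and 0.5 is exactly the point Stapp makes: the quantum ensemble acts as a whole, whereas in the classical ensemble each element evolves alone.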

Each thought is associated with the actualization of some macroscopic quasi-stable features of the brain. Thus the reduction event is a macroscopic happening. Moreover, this event involves, dynamically, the entire ensemble of quasi-classical brain states. In the corresponding classical model each element of the ensemble evolves independently, in accordance with a micro-local law of motion that involves just that one branch alone. Thus there are basic dynamical differences between the quantum and classical models, and the consequences of these dynamical differences need to be studied in order to exhibit the quantum effects.

The only freedom in the theory- insofar as we leave Nature's choices alone- is the choice made by the individual about which question it will ask next, and when it will ask it. These are the only inputs of mind to the dynamics of the brain. This severe restriction on the role of mind is what gives the theory its predictive power. Without this restriction mind could be free to do anything, and the theory would have no consequences. Asking a question about something is closely connected to focusing one's attention on it. Attending to something is the act of directing one's mental power to some task. This task might be to update one's representation of some feature of the surrounding world, or to plan or execute some other sort of mental or physical action.

The key question is then: Can this freedom merely to choose which question is asked, and when it is asked, lead to any statistical influence of mind on the behavior of the brain, where a statistical influence is an influence on values obtained by averaging over the properly weighted possibilities?

The answer is Yes!

The Quantum Zeno Effect

There is an important and well-studied effect in quantum theory that depends on the timings of the reduction events arising from the queries put to nature. It is called the Quantum Zeno Effect. It is not diminished by interaction with the environment.

The effect is simple. If the same question is put to nature sufficiently rapidly and the initial answer is Yes, then any noise-induced diffusion, or force-induced motion, of the system away from the sub-ensemble where the answer is Yes will be suppressed: the system will tend to be confined to the sub-ensemble where the answer is Yes. The effect is sometimes called the "watched pot effect": according to the old adage, "a watched pot never boils"; just looking at it keeps it from changing. Also, a state can be pulled along in some direction by posing a rapid sequence of questions that changes sufficiently slowly over time. In short, according to the dynamical laws of quantum mechanics, the freedom to choose which questions are put to nature, and when they are asked, allows mind to influence the behavior of the brain.
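The suppression described above can be illustrated with a small numerical sketch. This is an idealized two-level system with equally spaced projective measurements; the drive frequency and timings are arbitrary illustrative values, not a model of any actual brain process:

```python
import math

def zeno_survival(omega, total_time, n):
    """Probability that an idealized two-level system is still found
    in its initial ('Yes') state after n equally spaced projective
    measurements, while a coherent drive of angular frequency omega
    rotates it away. Each interval rotates the state by omega*dt,
    and each measurement keeps it in 'Yes' with probability
    cos(omega*dt)**2."""
    dt = total_time / n
    return math.cos(omega * dt) ** (2 * n)

# Over a total time in which an unwatched state would rotate entirely
# out of 'Yes', the survival probability climbs toward 1 as the
# questioning rate n grows (the watched-pot effect):
for n in (1, 10, 100, 1000):
    print(n, round(zeno_survival(1.0, math.pi / 2, n), 4))
```

In the limit of infinitely frequent questioning the survival probability tends to 1, which is the sense in which rapid reposing of the same question can hold the state in place.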

A person is aware of almost none of the processing that is going on in his brain: unconscious brain action does almost everything. So it would be both unrealistic and theoretically unfeasible to give mind unbridled freedom: the questions posed by mind ought to be determined in large measure by brain.

What freedom is given to man?

According to this theory, the freedom given to Nature is simply to provide a Yes or No answer to a question posed by a subsystem. It seems reasonable to restrict in a similar way the choice given to a human mind.

A Simple Dynamical Model

It is easy to construct a simple dynamical model in which the brain does most of the work, in a mechanical way, and the mind, by means of choices between Yes and No options, merely gives top-level guidance.

Let {P} be the set of projection operators that act only on the brain-body of the individual and that correspond to possible mental events of the individual. Let P(t) be the P in {P} that maximizes the expectation value of P in the state S(t), where S(t) is the state of the universe at time t. This P(t) represents the best possible question that could be asked by the individual at time t. Let the question associated with P(t) be posed if this expectation value reaches a local maximum. If nature returns the answer Yes, then the mental event associated with P(t) occurs.
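A toy numerical sketch of this selection rule follows. It is purely illustrative: the four-dimensional state vector, the basis projectors, and the random seed are hypothetical stand-ins, not Stapp's actual formalism.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical 4-dimensional 'brain' state, normalized to unit length.
state = rng.normal(size=4)
state /= np.linalg.norm(state)

# One projector per basis axis; each stands in for a possible mental event.
projectors = [np.outer(e, e) for e in np.eye(4)]

# P(t) is chosen as the projector capturing the largest share of the state.
weights = [float(np.linalg.norm(P @ state) ** 2) for P in projectors]
best = int(np.argmax(weights))

# If Nature answers 'Yes' (with probability weights[best], the Born rule),
# the state is reduced to the corresponding subspace and renormalized.
collapsed = projectors[best] @ state
collapsed /= np.linalg.norm(collapsed)

print(best, round(weights[best], 3))
```

The point of the sketch is only the division of labor: the mechanical evolution produces the candidate projectors and their weights, and the sole discretion left to mind is whether, and how rapidly, the winning question is re-posed.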

Mental control comes in only through the option to rapidly pose this same question repeatedly, thus activating the Quantum Zeno Effect, which will tend to keep the state of the brain focused on the plan of action specified by P.

The Quantum Zeno Effect will not freeze up the brain completely. It merely keeps the state of the brain in the subspace where attention is focused on pursuing the plan of action specified by P.

In this model the brain does practically everything, but mind, by means of the limited effect of consenting to the rapid reposing of the question already constructed and briefly presented by brain, can influence brain activity by causing this activity to stay focused on the presented course of action.

Agreement with recent work on attention

Much experimental work on attention and effort has occurred since the time of William James. That work has been hampered by the absence of any physical theory that purports to explain how our conscious experiences influence activities in our brains. The behaviorist approach, which dominated psychology during the first half of the twentieth century, and which essentially abolished, in this field, not only the use of introspective data but also the very concept of consciousness, was surely motivated in part by the fact that consciousness had no natural place within the framework of classical physical theory. According to the principles of classical physical theory, consciousness makes no difference in behavior: all behavior is determined by microscopic causation without ever acknowledging the existence of consciousness. Thus philosophers who accepted the ideas of classical physics were driven to conclude that conscious experiences were either identical to corresponding classically describable activities of the brain, or were emergent properties.

The first idea, the identity theory of mind, seems impossible to reconcile with the fact that, according to the classical principles, the brain is an assembly of local elements behaving in accordance with the local laws of classical physics, and that all higher-order dynamical properties are just re-expressions of the local causal links between the local elements. But the existence of feelings and of other conscious experiences is not just a re-expression of the causal links described by the principles of classical physical theory. And any emergent property that emerges from a system whose behavior is completely specified by the classical principles is only trivially emergent, in the same sense as the "wheelness" often cited by Roger Sperry: "wheelness" did not exist in the physical world before wheels, and it exerts top-down causation via the causal links specified by the classical principles. But the emergence of "wheelness" is not analogous to the emergence of consciousness: the defining characteristics of the "wheelness" of a wheel follow rationally from a classical physics model of a wheel, but the defining experiential characteristics of the consciousness of a brain do not follow rationally from a classical physics model of the brain.

The failure of the behaviorist program led to the rehabilitation of attention during the early fifties, and many hundreds of experiments have been performed during the past fifty years for the purpose of investigating empirically those aspects of human behavior that we ordinarily link to our consciousness.

Harold Pashler describes a great deal of this empirical work, and also the intertwined theoretical efforts to understand the nature of an information-processing system that could account for the intricate details of the objective data. Two key concepts are the notions of a processing "Capacity" and of "Attention." The latter is associated with an internally directed selection between different possible allocations of the available processing Capacity. A third concept is "Effort," which is linked to incentives, and to reports by subjects of "trying harder."

Pashler organizes his discussion by separating perceptual processing from post-perceptual processing. The former covers processing that, first, identifies such basic physical properties of stimuli as location, color, loudness, and pitch, and, second, identifies stimuli in terms of categories of meaning. The latter covers the tasks of producing motor actions and cognitive actions beyond mere categorical identification. Pashler emphasizes that the empirical findings of attention studies specifically argue for a distinction between perceptual limitations and more central limitations involved in thought and the planning of action. The existence of these two different processes, with different characteristics, is a principal theme of Pashler's book.

In the quantum theory of mind-brain being described here there are two separate processes. First, there is the unconscious mechanical brain process governed by the Schrödinger equation. This brain processing involves dynamical units that are represented by complex patterns of neural activity (or, more generally, of brain activity) that are facilitated by use, and such that each unit tends to be activated as a whole by the activation of several of its parts: this explains the development of brain process through association. The brain evolves mechanically by the dynamical interplay of these dynamical units, and by feedback loops that strengthen or weaken appropriate input channels.

Each individual quasi-classical element of the ensemble of alternative possible brain states that constitutes the quantum brain creates, on the basis of clues, or cues, coming from various sources, a plan for a possible coherent course of action. Quantum uncertainties entail that a host of different possibilities will emerge, and hence that the quantum brain will evolve into a set of component classically describable brains representing different possible courses of action. This mechanical phase of the processing already involves some selectivity, because the various input clues contribute either more or less to the evolving brain process according to the degree to which these inputs activate, via associations, the patterns that survive and turn into the plan of action.

Mental intervention has, according to the quantum-physics-based theory described here, several distinctive characteristics. It consists of a sequence of discrete events each of which consents to an integrated course of action presented by brain. The rapidity of these events can be increased with effort. Effort-induced speed-up of the rate of occurrence of these events can, by means of the quantum Zeno effect, keep attention focused on a task. Between 100 and 300 msec of consent seem to be needed to fix a plan of action. Effort can, by increasing the number of events per second, increase the mental input into brain activity. Each conscious event picks out from the multitude of quasi-classical possibilities that comprise the quantum brain the sub-ensemble that is compatible with the conscious experience.

The correspondence between the mental event and the associated physical event is this: the physical event reduces the prior physical ensemble of alternative possibilities to the sub-ensemble compatible with the mental event. This connection will be recognized as the core interpretive postulate of Copenhagen quantum theory: the physical event reduces the prior state of the observed system to the part of it that is compatible with the experience of the observer. Examination of Pashler's book shows that this quantum-physics-based theory accommodates naturally all of the complex structural features of the empirical data that he describes. He emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This bottleneck is not automatic within classical physics. A classical model could easily produce simultaneously two responses in different modalities, say vocal and manual, to two different stimuli arriving via two different modalities, say auditory and tactile: the two processes could proceed via dynamically independent routes. Pashler notes that the bottleneck is undiminished in split-brain patients performing two tasks that, at the level of input and output, seem to be confined to different hemispheres. Pashler states: "The conclusion that there is a central bottleneck in the selection of action should not be confused with the... debate (about perceptual-level process). The finding that people seem unable to select two responses at the same time does not dispute the fact that they also have limitations in perceptual processing..." I have already mentioned the independent selectivity injected into brain dynamics by the purely mechanical part of the quantum mind-brain process.

The queuing effect for the mind-controlled motor responses does not exclude interference between brain processes that are similar to each other, and hence use common brain mechanisms. Pashler notes this distinction, and says that "the principles governing queuing seem indifferent to neural overlap of any sort studied so far." He also cites evidence suggesting that the hypothetical timer of brain activity associated with the cerebellum is basically independent of the central response-selection bottleneck.

The important point here is that there is in principle, in the quantum model, an essential dynamical difference between the unconscious processing carried out by the Schrödinger evolution, which generates via a local process an expanding collection of classically conceivable possible courses of action, and the process associated with the sequence of conscious events that constitutes a stream of consciousness. The former is not limited by the queuing effect, because all of the possibilities develop in parallel, whereas the latter does form the elements of a single queue. The experiments cited by Pashler all seem to support this clear prediction of the quantum approach.

An interesting experiment mentioned by Pashler involves the simultaneous tasks of doing an IQ test and giving a foot response to a rapidly presented sequence of tones of either 2000 or 250 Hz. The subjects' mental age, as measured by the IQ test, was reduced from adult to 8 years. This result supports the prediction of quantum theory that the bottleneck pertains both to intelligent behavior, which requires conscious processing, and to the selection of motor response.

Another interesting experiment showed that, when performing at maximum speed, with fixed accuracy, subjects produced responses at the same rate whether performing one task or two simultaneously: the limited capacity to produce responses can be divided between two simultaneously performed tasks. Pashler also notes that "recent results strengthen the case for central interference even further," concluding that memory retrieval is subject to the same discrete processing bottleneck that prevents simultaneous response selection in two speeded choice tasks.

In the section on Mental Effort, Pashler reports that incentives to perform especially well lead subjects to improve both speed and accuracy, and that the motivation had greater effects on the more cognitively complex activity. This is what would be expected if incentives lead to effort that produces increased rapidity of the events, each of which injects into the physical process, via quantum selection and reduction, bits of control information that reflect mental evaluation. Studies of sleep-deprived subjects suggest that in these cases effort works to counteract low arousal. If arousal is essentially the rate of occurrence of conscious events, then this result is what the quantum model would predict. Pashler notes that "performing two tasks at the same time, for example, almost invariably... produces poorer performance in a task and increases ratings in effortfulness," that "increasing the rate at which events occur in experimenter-paced tasks often increases effort ratings without affecting performance," and that "increasing incentives often raises workload ratings and performance at the same time." All of these empirical connections are in line with the general principle that effort increases the rate of conscious events, each of which inputs a mental evaluation and a selection or focusing of a course of action, and that this resource can be divided between tasks.

Additional supporting evidence comes from studies of the effect of the conscious process upon the storage of information in short-term memory. According to the physics-based theory, the conscious process merely actualizes a course of action, which then develops automatically, with perhaps some occasional monitoring. Thus if one sets in place the activity of retaining in memory a certain sequence of stimuli, then this activity can persist undiminished while the central processor is engaged in another task. This is what the data indicate. Pashler remarks that "these conclusions contradict the remarkably widespread assumption that short-term memory capacity can be equated with, or used as a measure of, central resources." In the theory outlined here, short-term memory is stored in patterns of brain activity, whereas consciousness is associated with the selection of a sub-ensemble of quasi-classical states. This distinction seems to account for the large amount of detailed data that bears on the question of the connection of short-term memory to consciousness.

Deliberate storage in, or retrieval from, long-term memory requires focused attention, and hence conscious effort. These processes should, according to the theory, use part of the limited processing capacity, and hence be detrimentally affected by a competing task that makes sufficient concurrent demands on the central resources. On the other hand, perceptual processing that involves conceptual categorization and identification without conscious awareness should not interfere with tasks that do consume central processing capacity. These expectations are what the evidence appears to confirm: "the entirety of...front-end processing is modality specific and operates independent of the sort of single-channel central processing that limits retrieval and the control of action. This includes not only perceptual analysis but also storage in STM (short term memory) and whatever may feed back to change the allocation of perceptual attention itself."

Pashler describes a result dating from the nineteenth century: mental exertion reduces the amount of physical force that a person can apply. He notes that "this puzzling phenomenon remains unexplained." However, it is an automatic consequence of the physics-based theory: creating physical force by muscle contraction requires an effort that opposes the physical tendencies generated by the Schrödinger equation. This opposing tendency is produced by the quantum Zeno effect, and is roughly proportional to the number of bits per second of central processing capacity that is devoted to the task. So if part of this processing capacity is directed to another task, then the applied force will diminish.

Pashler speculates on the possibility of a neurophysiological explanation of the facts he describes, but notes that the parallel versus serial distinction between the two mechanisms leads, in the classical neurophysiological approach, to the questions of what makes these two mechanisms so different, and what the connection between them is.

After analyzing various possible mechanisms that could cause the central bottleneck, Pashler says that "the question of why this should be the case is quite puzzling." Thus the fact that this bottleneck, and its basic properties, follow automatically from the same laws that explain the complex empirical evidence in the fields of classical and quantum physics means that the theory has significant explanatory power.

Of course, some similar sort of structure could presumably be worked into a classical model. But a general theory of all of nature that automatically explains a lot of empirical data in a particular field on the basis of general principles is normally judged superior to a special theory that is rigged after the fact to explain these data.

It needs to be emphasized that there is at present absolutely no empirical evidence that favors the classical model over the quantum model described above. The classical model would have to be implemented as a statistical theory, due to the uncertainties in the initial conditions, and such a statistical model is, to first order, the same as the simple quantum model described above. The quantum model has the advantage that at least it could be valid, whereas the classical model must necessarily fail when quantum effects become important. So nothing is lost by switching to quantum theory, but a lot is gained. Psychology and psychiatry gain the possibility of reconciling with neuroscience the essential psychological concept of the ability of our minds to guide our actions. And psycho-physics gains a dynamical model for the interaction of mind and brain. Finally, philosophy of mind is liberated from the dilemma of having to choose between the identity theory and the emergence of a causally inert mind: the emergence of a mind which lacks the capacity to act back on the physical world in the way needed to drive its evolution hand-in-hand with the evolution of the brain.

Many-worlds interpretation

Figure: The quantum-mechanical Schrödinger's cat paradox according to the many-worlds interpretation. In this interpretation, every event is a branch point: the cat is both alive and dead, even before the box is opened, but the alive and dead cats are in different branches of the universe, both of which are equally real, but which cannot interact with each other.

According to Wikipedia, the many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. Many-worlds implies that all possible alternative histories and futures are real, each representing an actual world (or universe). It is also referred to as MWI, the relative state formulation, the Everett interpretation, the theory of the universal wavefunction, the many-universes interpretation, or just many-worlds. The idea of MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction," developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state"; Everett originally called his approach the "Correlation Interpretation," where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt, who was responsible for the wider popularization of Everett's theory, which had been largely ignored for the first decade after publication. DeWitt's phrase "many-worlds" has become so much more popular than Everett's "Universal Wavefunction" or the Everett-Wheeler "Relative State Formulation" that many forget that this is only a difference of terminology; the content of both of Everett's papers and DeWitt's popular article is the same.

Although several versions of many-worlds have been proposed since Hugh Everett's original work, they all contain one key idea: the equations of physics that model the time evolution of systems without embedded observers are sufficient for modeling systems which do contain observers; in particular, there is no observation-triggered wavefunction collapse of the kind the Copenhagen interpretation proposes. The many-worlds interpretation shares many similarities with later post-Everett interpretations of quantum mechanics which also use decoherence to explain the process of measurement or wavefunction collapse. MWI treats the other histories or worlds as real, since it regards the universal wavefunction as the basic physical entity, obeying at all times a deterministic wave equation.

As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) are passed through the double slit, a calculation assuming wave-like behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves.
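The wave calculation mentioned above can be sketched numerically. This is an idealized far-field two-slit model with point slits and the small-angle approximation; the wavelength, slit separation, and screen distance are arbitrary illustrative values:

```python
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative probability of detecting a particle at screen position x
    in an idealized two-slit experiment. The two amplitudes differ in
    phase by 2*pi*path_diff/wavelength, and the detection probability
    is the squared modulus of their sum, which reduces to cos^2."""
    path_diff = slit_sep * x / screen_dist   # small-angle approximation
    phase = 2 * math.pi * path_diff / wavelength
    # |1 + e^{i*phase}|^2 / 4 = cos^2(phase / 2)
    return math.cos(phase / 2) ** 2

# Central bright fringe, and a dark fringe where the two paths
# differ by half a wavelength:
print(two_slit_intensity(0.0, 500e-9, 1e-4, 1.0))     # 1.0
print(two_slit_intensity(2.5e-3, 500e-9, 1e-4, 1.0))  # ~0.0
```

The wave-like calculation predicts where detections cluster, yet each individual detection registers at one definite point, which is the tension every interpretation must address.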

Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of collapse in which an indeterminate quantum system would probabilistically collapse down onto, or select, just one determinate outcome, in order to explain this phenomenon of observation. Wavefunction collapse was widely regarded as artificial and ad hoc, so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's Ph.D. work provided such an alternative interpretation. Everett stated that for a composite system - for example, a subject (the observer or measuring apparatus) observing an object (the observed system, such as a particle) - the statement that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled; we can only specify the state of one relative to the other, i.e., the states of the observer and the observed are correlated after the observation is made. This led Everett to derive, from the unitary, deterministic dynamics alone (i.e., without assuming wavefunction collapse), the notion of a relativity of states.

Everett noticed that the unitary, deterministic dynamics alone decreed that, after an observation is made, each element of the quantum superposition of the combined subject-object wavefunction contains two relative states: a collapsed object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject-object states proceeds with complete indifference to the presence or absence of the other elements, as if wavefunction collapse had occurred, which has the consequence that later observations are always consistent with the earlier observations. Thus the appearance of the object's wavefunction collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory: that the theory should define what is observed, not the observables define the theory.) Since the wavefunction merely appears to have collapsed, Everett reasoned, there was no need to assume that it actually had.

MWI removes the observer-dependent role in the quantum measurement process by replacing wavefunction collapse with quantum decoherence. Since the role of the observer lies at the heart of most if not all quantum paradoxes, this automatically resolves a number of problems: for example, Schrödinger's cat thought experiment, the EPR paradox, von Neumann's boundary problem, and even wave-particle duality. Quantum cosmology also becomes intelligible, since there is no longer any need for an observer outside of the universe. There is a wide range of claims that are considered many-worlds interpretations. It was often claimed by those who do not believe in MWI that Everett himself was not entirely clear as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds. Everett's son reported that Hugh Everett never wavered in his belief in his many-worlds theory. Everett was also reported to believe that his many-worlds theory guaranteed him immortality.

One of MWI's strongest advocates is David Deutsch. According to Deutsch, the single-photon interference pattern observed in the double-slit experiment can be explained by interference of photons in multiple universes. Viewed in this way, the single-photon interference experiment is indistinguishable from the multiple-photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing, he suggested that the parallelism that results from the validity of MWI could lead to a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it.

Asher Peres was an outspoken critic of MWI; for example, a section in his 1993 textbook had the title "Everett's interpretation and other bizarre theories." In fact, Peres not only questioned whether MWI is really an interpretation, but whether any interpretations of quantum mechanics are needed at all: an interpretation can be regarded as a purely formal transformation, which adds nothing to the rules of quantum mechanics. Positing the existence of an infinite number of non-communicating parallel universes is highly suspect to those who interpret it as a violation of Occam's razor, i.e., as failing to minimize the number of hypothesized entities (Occam's razor is just the principle of parsimony: among different hypotheses, choose the one with the fewest assumptions). However, MWI advocates draw precisely the opposite conclusion, by applying Occam's razor to the set of assumptions rather than to the set of entities; just as the large number of elementary particles is not a gross violation of Occam's razor, one counts the types, not the tokens. Max Tegmark remarks that the alternative to many worlds is "many words," an allusion to the complexity of von Neumann's collapse postulate. According to Martin Gardner, the other worlds of MWI have two different interpretations, real or unreal, and he claims that Stephen Hawking and Steven Weinberg both favor the unreal interpretation. Gardner also claims that the non-real interpretation is favored by the majority of physicists, whereas the realist view is only supported by MWI experts such as Deutsch and Bryce DeWitt.

MWI is considered by some to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others claim MWI is directly testable. Everett regarded MWI as falsifiable since any test that falsifies conventional quantum theory would also falsify MWI. [47]

Quantum suicide and immortality

One of the interesting aspects of MWI is that it may imply a Quantum Theory of Immortality (QTI). According to Wikipedia, quantum suicide is a thought experiment that was originally published independently by Hans Moravec in 1987 and Bruno Marchal in 1988, and was independently developed further by Max Tegmark in 1998. It attempts to distinguish between the Copenhagen interpretation of quantum mechanics and the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide regardless of the odds. [48]

Tegmark (1997) describes the quantum suicide experiment as follows: "The apparatus is a quantum gun which each time its trigger is pulled measures the z-spin of a particle [particles can be spin up or spin down, seemingly at random]. It is connected to a machine gun that fires a single bullet if the result is down and merely makes an audible click if the result is up... The experimenter first places a sand bag in front of the gun and tells her assistant to pull the trigger ten times. All [QM interpretations] predict that she will hear a seemingly random sequence of shots and duds such as bang-click-bang-bang-bang-click-click-bang-click-click. She now instructs her assistant to pull the trigger ten more times and places her head in front of the barrel. This time the shut-up-and-calculate [non-MWI interpretations of QM] have no meaning for an observer in the dead state... and the [interpretations] will differ in their predictions. In interpretations where there is an explicit non-unitary collapse, she will be either dead or alive after the first trigger event, so she should expect to perceive perhaps a click or two (if she is moderately lucky), then game over, nothing at all. In the MWI, on the other hand, the... prediction is that [the experimenter] will hear click with 100% certainty. When her assistant has completed this unenviable assignment, she will have heard ten clicks, and concluded that the collapse interpretations of quantum mechanics [all but the MWI] are ruled out at a confidence level of 1 - 0.5^n ~ 99.9%. If she wants to rule them out at ten sigma, she need merely increase n by continuing the experiment a while longer. Occasionally, to verify that the apparatus is working, she can move her head away from the gun and suddenly hear it going off intermittently. Note, however, that [almost all instances] will have her assistant perceiving that he has killed his boss." [49]
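Tegmark's quoted confidence level is simple arithmetic: under a collapse interpretation each click has probability 1/2, so surviving n = 10 trigger pulls has probability 0.5^10 = 1/1024. A quick check:

```python
# Confidence with which the surviving experimenter rules out
# collapse interpretations after hearing n clicks in a row:
# under collapse, n consecutive clicks have probability 0.5**n.
def confidence(n):
    return 1 - 0.5 ** n

print(confidence(10))  # 0.9990234375, i.e. ~99.9%
```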

The meaning of the description above is that, according to the many-worlds interpretation, since all outcomes occur at the same time, the experimenter will always find herself alive in some parallel world, so that she will never hear the gun going off. The paradox is that at some point the assistant, who never puts his own head in front of the barrel, will almost certainly see the experimenter dead. The story is reminiscent of Russian roulette, but in a quantum mechanical way. Physicist Lev Vaidman argued that someone should not agree to play games such as quantum Russian roulette, because the version of themselves that survives will be in a world with a low weight. Quantum Russian roulette is similar to Russian roulette except that the gun is triggered by a quantum event. This means that the players will branch every time they pull the trigger, so that they both live and die. One version of each player is guaranteed to win, but there will be many more worlds where they die.
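The branching bookkeeping described above can be sketched with a toy enumeration: under a 50/50 quantum trigger a surviving branch splits at every pull while a dead branch stops branching, so after n pulls there is exactly one live branch of weight 0.5^n alongside n dead ones. A minimal sketch (all names are mine):

```python
def branch_worlds(n_pulls):
    """Enumerate (status, weight) branches after n_pulls of a 50/50 quantum gun.
    Dead branches stop branching; the live branch keeps splitting."""
    worlds = [("alive", 1.0)]
    for _ in range(n_pulls):
        next_worlds = []
        for status, weight in worlds:
            if status == "alive":
                next_worlds.append(("alive", weight / 2))  # click: survive
                next_worlds.append(("dead", weight / 2))   # bang: this branch ends
            else:
                next_worlds.append((status, weight))       # dead worlds persist unchanged
        worlds = next_worlds
    return worlds

worlds = branch_worlds(10)
alive = [w for s, w in worlds if s == "alive"]
print(len(alive), len(worlds))  # one live branch among eleven worlds in total
```

This is exactly Vaidman's point: the guaranteed survivor carries only the tiny weight 0.5^n, while almost all of the weight sits in worlds where the player has died.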

The concept of quantum suicide extends the idea of quantum Russian roulette in order to differentiate between the collapse and Bohm approaches and some versions of the Everett approach. The idea behind quantum suicide is that you would never observe a world where you instantaneously die, and so, from your perspective, every time you pull the trigger you live. This would not be the case if every eventuality were not realized. The idea was first proposed by the roboticist Hans Moravec in 1987 and the philosopher Bruno Marchal in 1988; the Swedish-American cosmologist Max Tegmark extended it in 1998. Moravec considered a situation where scientists are unable to turn on a particle accelerator because every time they try

something stops them. He suggested that this is the only thing that we could possibly observe if the effect of the particle accelerator was to instantly destroy the Earth. [50]

Beyond any paradox which arises, it is interesting to wonder whether quantum suicide can offer a better understanding of some new, yet unknown, state of existence; in other words, some kind of life after death. Is it a parallel world where we find ourselves after death? According to quantum suicide, we will never realize being really dead! (Except for the rest of the world, who will be grieving over us.) However, I believe it is worth thinking about what a state of being means, how much we exist in our present lifetime, and how many different levels of existence we may occupy at the same time. Consciousness must play a significant role; for example, is there any kind of life after death if personal consciousness is lost? Or is it a state of consciousness shift? Apparently, personal experience or existence is not so important, compared with information conservation, reproduction and perpetuation. The many-worlds, or many-minds, approach may help us understand how this passage or shift may someday really occur.

Hidden variables Hidden variables in quantum mechanics are used to explain non-locality while at the same time preserving realism. Perhaps the most famous hidden-variables theory is Bohm's interpretation of quantum mechanics. He introduces the notion of the quantum potential as a hidden variable to explain the non-local connection in quantum entanglement. Here, we will discuss two articles: first, Bell's article on the problem of hidden variables, and then one of Bohm's articles presenting his interpretation.

Bell's analysis of hidden variables The demonstrations of von Neumann and others, that quantum mechanics does not permit a hidden variable interpretation, are reconsidered. It is shown that their essential axioms are unreasonable. It is urged that in further examination of this problem an interesting axiom would be that mutually distant systems are independent of one another.

This is a very interesting point made by John Bell at the beginning of his essay On the Problem of Hidden Variables in Quantum Mechanics. In quantum entanglement, if we consider that the distant parts interact with each other, then we should suppose a faster-than-light transmission of information. But if the two parts are independent of each other, then the phenomenon should occur through a common organizing factor or principle, which connects the distant parts separately, i.e. non-locally. To know the quantum mechanical state of a system implies, in general, only statistical restrictions on the results of measurements. It seems interesting to ask if this statistical element can be thought of as arising, as in classical statistical mechanics, because the states in question are averages over better defined states for which individually the results would be quite determined. These hypothetical dispersion free states would be specified not only by the quantum mechanical state vector but also by additional hidden variables - hidden because if states with prescribed values of these variables could actually be prepared, quantum mechanics would be observably inadequate.

Whether this question is indeed interesting has been the subject of debate. The present paper does not contribute to that debate. It is addressed to those who do find the question interesting, and more particularly to those among them who believe that the question concerning the existence of such hidden variables received an early and rather decisive answer in the form of von Neumann's proof on the mathematical impossibility of such variables in quantum theory. An attempt will be made to clarify what von Neumann and his successors actually demonstrated. This will cover, as well as von Neumann's treatment, the recent version of the argument by Jauch and Piron, and the stronger result consequent on the work of Gleason. It will be urged that these analyses leave the real question untouched. In fact it will be seen that these demonstrations require from the hypothetical dispersion free states, not only that appropriate ensembles thereof should have all measurable properties of quantum mechanical states, but certain other properties as well. These additional demands appear reasonable when results of measurement are loosely identified with properties of isolated systems. They are seen to be quite unreasonable when one remembers with Bohr the impossibility of any sharp distinction between the behavior of atomic

objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear.

Here Bell points to the fact that, in contrast to quantum mechanics, pre-established properties of the entangled system may be treated as hidden variables, since quantum mechanics supposes that these properties cannot exist because the system is in a state of superposition before measurement. Then Bell makes some assumptions and considers von Neumann's proof for the supposed non-existence of hidden variables. The authors of the demonstrations to be reviewed were concerned to assume as little as possible about quantum mechanics. This is valuable for some purposes, but not for ours. We are interested only in the possibility of hidden variables in ordinary quantum mechanics and will use freely all the usual notions. Thereby the demonstrations will be substantially shortened. A quantum mechanical system is supposed to have observables represented by Hermitian operators in a complex linear vector space. Every measurement of an observable yields one of the eigenvalues of the corresponding operator. Observables with commuting operators can be measured simultaneously. A quantum mechanical state is represented by a vector in the linear state space.

The question at issue is whether the quantum mechanical states can be regarded as ensembles of states further specified by additional variables, such that given values of these variables together with the state vector determine precisely the results of individual measurements. These hypothetical well-specified states are said to be dispersion free

Consider now the proof of von Neumann that dispersion free states, and so hidden variables, are impossible. His essential assumption is: Any real linear combination of any two Hermitian operators represents an observable, and the same linear combination of expectation values is the expectation value of the combination. This is true for quantum mechanical states; it is required by von Neumann of the hypothetical dispersion free states also. But for a dispersion free state (which has no statistical character) the expectation value of an observable must equal one of its

eigenvalues. [But the eigenvalues of a sum of non-commuting operators are not, in general, sums of the individual eigenvalues, so the required additivity of expectation values cannot hold for such states.] Therefore, dispersion free states are impossible. If the state space has more dimensions, we can always consider a two-dimensional subspace; therefore, the demonstration is quite general.

The essential assumption can be criticized as follows. At first sight the required additivity of expectation values seems very reasonable, and it is rather the non-additivity of allowed values (eigenvalues) which requires explanation. Of course the explanation is well known: A measurement of a sum of non-commuting observables cannot be made by combining trivially the results of separate observations on the two terms; it requires a quite distinct experiment. But this explanation of the non-additivity of allowed values also establishes the non-triviality of the additivity of expectation values. The latter is a quite peculiar property of quantum mechanical states, not to be expected a priori. There is no reason to demand it individually of the hypothetical dispersion free states, whose function it is to reproduce the measurable peculiarities of quantum mechanics when averaged over... Thus the formal proof of von Neumann does not justify his informal conclusion: It is therefore not, as is often assumed, a question of reinterpretation of quantum mechanics- the present system of quantum mechanics would have to be objectively false in order that another description of the elementary process than the statistical one be possible. It was not the objective measurable predictions of quantum mechanics which ruled out hidden variables. It was the arbitrary assumption of a particular (and impossible) relation between the results of incompatible measurements either of which might be made on a given occasion but only one of which can in fact be made.
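Bell's point about the non-additivity of allowed values can be made concrete with the Pauli spin matrices: for any state the expectation values of sigma_x and sigma_y add, yet the eigenvalues of sigma_x + sigma_y are ±√2, not sums of the individual eigenvalues ±1. A quick check (illustrative code, not from Bell's paper):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli sigma_x
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli sigma_y

print(np.linalg.eigvalsh(sx))       # eigenvalues -1, +1
print(np.linalg.eigvalsh(sx + sy))  # eigenvalues -sqrt(2), +sqrt(2)

# Expectation values ARE additive for quantum states...
psi = np.array([1, 1j]) / np.sqrt(2)               # an arbitrary normalized spinor
ev = lambda A: np.real(psi.conj() @ A @ psi)
assert np.isclose(ev(sx) + ev(sy), ev(sx + sy))

# ...but no dispersion free state could assign each operator one of its own
# eigenvalues and keep that additivity: +-1 plus +-1 can never equal +-sqrt(2).
```

This is exactly the sense in which von Neumann's additivity axiom, harmless for ensemble averages, becomes an unreasonable demand on individual dispersion free states.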

A linear combination of observables guarantees their independence. If they are somehow correlated to each other, such a combination is supposed not to exist. So, we may either find such a combination of the members of a set and prove that the members are independent, or not find any combination, in which case the members can be either dependent or independent. In other words, we can prove independence, but only assume dependence. Consequently, according to this, locality can at any time be proved, while non-locality is an eternal assumption.

Bell then discusses the problem of locality and separability. Up till now we have been resisting arbitrary demands upon the hypothetical dispersion free states. However, as well as reproducing quantum mechanics on averaging, there are features which can reasonably be desired in a hidden variable scheme. The hidden variables should surely have some spatial significance and should evolve in time according to prescribed laws. These are prejudices, but it is just this possibility of interpolating some (preferably causal) space-time picture, between preparation of and measurements on states, that makes the quest for hidden variables interesting to the unsophisticated. The ideas of space, time, and causality are not prominent in the kind of discussion we have been considering above. To the writer's knowledge the most successful attempt in that direction is the 1952 scheme of Bohm for elementary wave mechanics. By way of conclusion, this will be sketched briefly, and a curious feature of it stressed.

Consider for example a system of two spin-1/2 particles. The quantum mechanical state is represented by a wave function. This is governed by the Schrödinger equation. For simplicity we have taken neutral particles with magnetic moments, and an external magnetic field H has been allowed to represent spin analyzing magnets. The hidden variables are then two vectors X1 and X2, which give directly the results of position measurements. Other measurements are reduced ultimately to position measurements. For example, measurement of a spin component means observing whether the particle emerges with an upward or downward deflection. The variables X1 and X2 are supposed to be distributed in configuration space with the probability density appropriate to the quantum mechanical state. Consistently with this, X1 and X2 are supposed to vary with time.

The curious feature is that the trajectory equations for the hidden variables have in general a grossly non-local character. If the wave function is factorable before the analyzing fields become effective (the particles being far apart), this factorability will be preserved. The Schrödinger equation also separates, and the trajectories of X1 and X2 are determined separately by equations involving H(X1) and H(X2), respectively. However, in general, the wave function is not factorable. The trajectory of particle 1 then depends in a complicated way on the trajectory and wave function of particle 2, and so on the analyzing fields acting on particle 2 - however remote these may be from particle 1. So in this theory an explicit causal mechanism exists whereby the disposition of one piece of apparatus affects the results obtained with a distant piece. In fact the Einstein-Podolsky-Rosen paradox is resolved in the way which Einstein would have liked least.

More generally, the hidden variable account of a given system becomes entirely different when we remember that it has undoubtedly interacted with numerous other systems in the past and that the total wave function will certainly not be factorable. The same effect complicates the hidden variable account of the theory of measurement, when it is desired to include part of the apparatus in the system.

Bohm of course was well aware of these features of his scheme, and has given them much attention. However, it must be stressed that, to the present writer's knowledge, there is no proof that any hidden variable account of quantum mechanics must have this extraordinary character. It would therefore be interesting, perhaps, to pursue some further impossibility proofs, replacing the arbitrary axioms objected to above by some condition of locality, or of separability of distant systems. [51] In the previous example, Bell uses two independent hidden variables that somehow determine the position of the spin vectors of the two particles. In Bohm's analysis, these variables are related to probabilistic trajectories of the particles in the quantum potential. The particles seem to know where to go because they are guided by this potential, or, according to a similar interpretation, they are guided by some sort of pilot waves (de Broglie waves). In any case, the quantum potential acts as the unifying factor that binds together the distant particles non-locally, i.e. without any interaction between the particles. This way, the observed quantum system can be both non-local and separable.
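The guidance idea is easy to state concretely: in the pilot-wave picture the particle velocity is v = (ℏ/m) Im(∂xψ/ψ), evaluated along the trajectory. The sketch below, my own illustration rather than anything from Bell's or Bohm's papers, integrates a few such trajectories for a freely spreading Gaussian packet; the trajectories simply dilate with the packet and never cross.

```python
import numpy as np

hbar = m = 1.0
sigma = 1.0

def psi(x, t):
    # Freely spreading Gaussian packet (unnormalized; normalization cancels in v)
    c = 1 + 1j * hbar * t / (2 * m * sigma**2)
    return np.exp(-x**2 / (4 * sigma**2 * c)) / np.sqrt(c)

def velocity(x, t, eps=1e-6):
    # Bohmian guidance law v = (hbar/m) * Im(psi'/psi); derivative by central differences
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return (hbar / m) * np.imag(dpsi / psi(x, t))

# Integrate a few trajectories with simple Euler steps up to t = 2
x = np.array([-2.0, -1.0, 1.0, 2.0])
dt = 0.001
for step in range(2000):
    x = x + dt * velocity(x, step * dt)

t = 2.0
spread = np.sqrt(1 + (hbar * t / (2 * m * sigma**2))**2)  # analytic spreading factor
print(x / spread)  # ≈ the initial positions: trajectories just dilate with the packet
```

Dividing each final position by the packet's spreading factor recovers the starting points: the particles deterministically ride the dilating probability flow, which is the hallmark of Bohmian motion in a free Gaussian packet.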

Bohm's interpretation in terms of hidden variables Bohm, in his essay A Suggested Interpretation of the Quantum Theory in Terms of Hidden Variables, begins his analysis as follows.

The usual interpretation of the quantum theory is based on an assumption having very far-reaching implications, viz., that the physical state of an individual system is completely specified by a wave function that determines only the probabilities of actual results that can be obtained in a statistical ensemble of similar experiments. This assumption has been the object of severe criticisms, notably on the part of Einstein, who has always believed that, even at the quantum level, there must exist precisely definable elements or dynamical variables determining (as in classical physics) the actual behavior of each individual system, and not merely its probable behavior. Since these elements or variables are not now included in the quantum theory and have not yet been detected experimentally, Einstein has always regarded the present form of the quantum theory as incomplete, although he admits its internal consistency.

Most physicists have felt that objections such as those raised by Einstein are not relevant, first, because the present form of the quantum theory with its usual probability interpretation is in excellent agreement with an extremely wide range of experiments, at least in the domain of distances larger than 10⁻¹³ cm, and, secondly, because no consistent alternative interpretations have as yet been suggested. The purpose of this paper (and of a subsequent paper hereafter denoted by II) is, however, to suggest just such an alternative interpretation. In contrast to the usual interpretation, this alternative interpretation permits us to conceive of each individual system as being in a precisely definable state, whose changes with time are determined by definite laws, analogous to (but not identical with) the classical equations of motion. Quantum-mechanical probabilities are regarded (like their counterparts in classical statistical mechanics) as only a practical necessity and not as a manifestation of an inherent lack of complete determination in the properties of matter at the quantum level. As long as the present general form of Schroedinger's equation is retained, the physical results obtained with our suggested alternative interpretation are precisely the same as those obtained with the usual interpretation.

We shall see, however, that our alternative interpretation permits modifications of the mathematical formulation which could not even be described in terms of the usual interpretation. Moreover, the modifications can quite easily be formulated in such a way that their effects are insignificant in the atomic domain, where the present quantum theory is in such good agreement with experiment, but of crucial importance in the domain of dimensions of the order of 10⁻¹³ cm,

where, as we have seen, the present theory is totally inadequate. It is thus entirely possible that some of the modifications describable in terms of our suggested alternative interpretation, not in terms of the usual interpretation, may be needed for a more thorough understanding of phenomena associated with very small distances. We shall not, however, actually develop such modifications in any detail in these papers. After this article was completed, the author's attention was called to similar proposals for an alternative interpretation of the quantum theory made by de Broglie in 1926, but later given up by him partly as a result of certain criticisms made by Pauli and partly because of additional objections raised by de Broglie himself. However, all of the objections of de Broglie and Pauli could have been met if only de Broglie had carried his ideas to their logical conclusion. The essential new step in doing this is to apply our interpretation in the theory of the measurement process itself as well as in the description of the observed system. Such a development of the theory of measurements is given in Paper II, where it will be shown in detail that our interpretation leads to precisely the same results for all experiments as are obtained with the usual interpretation. The foundation for doing this is laid in Paper I, where we develop the basis of our interpretation, contrast it with the usual interpretation, and apply it to a few simple examples, in order to illustrate the principles involved.

Before discussing the fundamentals of his theory, Bohm makes some remarks about the mainstream quantum theory. The usual physical interpretation of the quantum theory centers around the uncertainty principle. Now, the uncertainty principle can be derived in two different ways. First, we may start with the assumption already criticized by Einstein, namely, that a wave function that determines only probabilities of actual experimental results nevertheless provides the most complete possible specification of the so-called quantum state of an individual system. With the aid of this assumption and with the aid of the de Broglie relation, p = ℏk, where k is the wave number associated with a particular Fourier component of the wave function, the uncertainty principle is readily deduced. From this derivation, we are led to interpret the uncertainty

principle as an inherent and irreducible limitation on the precision with which it is correct for us

even to conceive of momentum and position as simultaneously defined quantities. For if, as is done in the usual interpretation of the quantum theory, the wave intensity is assumed to determine only the probability of a given position, and if the k-th Fourier component of the wave function is assumed to determine only the probability of a corresponding momentum, p = ℏk, then it becomes a contradiction in terms to ask for a state in which momentum and position are simultaneously and precisely defined.
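The Fourier-space reading of the uncertainty principle sketched above is easy to verify numerically: for a Gaussian wave function the product of the position spread and the wave-number spread comes out at the minimum value 1/2, so that Δx·Δp = ℏ/2 with p = ℏk. A short sketch on an arbitrary grid of my choosing:

```python
import numpy as np

# Sample a Gaussian wave function on a grid
N, L, sigma = 4096, 40.0, 1.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx_grid = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))  # |psi|^2 has standard deviation sigma

# Position spread from |psi|^2
p_x = np.abs(psi)**2
p_x /= p_x.sum()
dx = np.sqrt((p_x * x**2).sum() - (p_x * x).sum()**2)

# Wave-number spread from the Fourier transform |psi_hat|^2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx_grid)
p_k = np.abs(np.fft.fft(psi))**2
p_k /= p_k.sum()
dk = np.sqrt((p_k * k**2).sum() - (p_k * k).sum()**2)

print(dx * dk)  # ≈ 0.5: the Gaussian saturates the bound dx * dk >= 1/2
```

Narrowing the packet in position (smaller sigma) widens its Fourier transform in exactly compensating fashion, which is the whole content of the first derivation Bohm describes.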

A second possible derivation of the uncertainty principle is based on a theoretical analysis of the processes with the aid of which physically significant quantities such as momentum and position can be measured. In such an analysis, one finds that because the measuring apparatus interacts with the observed system by means of indivisible quanta, there will always be an irreducible disturbance of some observed property of the system. If the precise effects of this disturbance could be predicted or controlled, then one could correct for these effects, and thus one could still in principle obtain simultaneous measurements of momentum and position, having unlimited precision. But if one could do this, then the uncertainty principle would be violated. The uncertainty principle is, as we have seen, however, a necessary consequence of the assumption that the wave function and its probability interpretation provide the most complete possible specification of the state of an individual system. In order to avoid the possibility of a contradiction with this assumption, Bohr and others have suggested an additional assumption, namely, that the process of transfer of a single quantum from observed system to measuring apparatus is inherently unpredictable, uncontrollable, and not subject to a detailed rational analysis or description. With the aid of this assumption, one can show that the same uncertainty principle that is deduced from the wave function and its probability interpretation is also obtained as an inherent and unavoidable limitation on the precision of all possible measurements. Thus, one is able to obtain a set of assumptions, which permit a self-consistent formulation of the usual interpretation of the quantum theory.

The above point of view has been given its most consistent and systematic expression by Bohr, in terms of the principle of complementarity. In formulating this principle, Bohr suggests that at the atomic level we must renounce our hitherto successful practice of conceiving of an individual system as a unified and precisely definable whole, all of whose aspects are, in a

manner of speaking, simultaneously and unambiguously accessible to our conceptual gaze. Such a system of concepts, which is sometimes called a model, need not be restricted to pictures, but may also include, for example, mathematical concepts, as long as these are supposed to be in a precise (i.e., one-to-one) correspondence with the objects that are being described. The principle of complementarity requires us, however, to renounce even mathematical models. Thus, in Bohr's point of view, the wave function is in no sense a conceptual model of an individual system, since it is not in a precise (one-to-one) correspondence with the behavior of this system, but only in a statistical correspondence.

In place of a precisely defined conceptual model, the principle of complementarity states that we are restricted to complementary pairs of inherently imprecisely defined concepts, such as position and momentum, particle and wave, etc. The maximum degree of precision of definition of either member of such a pair is reciprocally related to that of the opposite member. This need for an inherent lack of complete precision can be understood in two ways. First, it can be regarded as a consequence of the fact that the experimental apparatus needed for a precise measurement of one member of a complementary pair of variables must always be such as to preclude the possibility of a simultaneous and precise measurement of the other member. Secondly, the assumption that an individual system is completely specified by the wave function and its probability interpretation implies a corresponding unavoidable lack of precision in the very conceptual structure, with the aid of which we can think about and describe the behavior of the system.

It is only at the classical level that we can correctly neglect the inherent lack of precision in all of our conceptual models; for here, the incomplete determination of physical properties implied by the uncertainty principle produces effects that are too small to be of practical significance. Our ability to describe classical systems in terms of precisely definable models is, however, an integral part of the usual interpretation of the theory. For without such models, we would have no way to describe, or even to think of, the result of an observation, which is of course always finally carried out at a classical level of accuracy. If the relationships of a given set of classically describable phenomena depend significantly on the essentially quantum-mechanical properties of matter, however, then the principle of complementarity states that no single model is possible

which could provide a precise and rational analysis of the connections between these phenomena. In such a case, we are not supposed, for example, to attempt to describe in detail how future phenomena arise out of past phenomena. Instead, we should simply accept without further analysis the fact that future phenomena do in fact somehow manage to be produced, in a way that is, however, necessarily beyond the possibility of a detailed description. The only aim of a mathematical theory is then to predict the statistical relations, if any, connecting these phenomena.

Bohm then criticizes the standard interpretation of the quantum theory. The usual interpretation of the quantum theory can be criticized on many grounds. In this paper, however, we shall stress only the fact that it requires us to give up the possibility of even conceiving precisely what might determine the behavior of an individual system at the quantum level, without providing adequate proof that such a renunciation is necessary. The usual interpretation is admittedly consistent; but the mere demonstration of such consistency does not exclude the possibility of other equally consistent interpretations, which would involve additional elements or parameters permitting a detailed causal and continuous description of all processes, and not requiring us to forego the possibility of conceiving the quantum level in precise terms. From the point of view of the usual interpretation, these additional elements or parameters could be called hidden variables. As a matter of fact, whenever we have previously had recourse to statistical theories, we have always ultimately found that the laws governing the individual members of a statistical ensemble could be expressed in terms of just such hidden variables. For example, from the point of view of macroscopic physics, the coordinates and momenta of individual atoms are hidden variables, which in a large scale system manifest themselves only as statistical averages. Perhaps then, our present quantum-mechanical averages are similarly a manifestation of hidden variables, which have not, however, yet been detected directly.

Now it may be asked why these hidden variables should have so long remained undetected. To answer this question, it is helpful to consider as an analogy the early forms of the atomic theory, in which the existence of atoms was postulated in order to explain certain large-scale effects,

such as the laws of chemical combination, the gas laws, etc. On the other hand, these same effects could also be described directly in terms of existing macrophysical concepts (such as pressure, volume, temperature, mass, etc.); and a correct description in these terms did not require any reference to atoms. Ultimately, however, effects were found which contradicted the predictions obtained by extrapolating certain purely macrophysical theories to the domain of the very small, and which could be understood correctly in terms of the assumption that matter is composed of atoms. Similarly, we suggest that if there are hidden variables underlying the present quantum theory, it is quite likely that in the atomic domain, they will lead to effects that can also be described adequately in the terms of the usual quantum-mechanical concepts; while in a domain associated with much smaller dimensions, such as the level associated with the fundamental length of the order of 10⁻¹³ cm, the hidden variables may lead to completely new effects not consistent with the extrapolation of the present quantum theory down to this level.

If, as is certainly entirely possible, these hidden variables are actually needed for a correct description at small distances, we could easily be kept on the wrong track for a long time by restricting ourselves to the usual interpretation of the quantum theory, which excludes such hidden variables as a matter of principle. It is therefore very important for us to investigate our reasons for supposing that the usual physical interpretation is likely to be the correct one. To this end, we shall begin by repeating the two mutually consistent assumptions on which the usual interpretation is based:

(1) The wave function with its probability interpretation determines the most complete possible specification of the state of an individual system. (2) The process of transfer of a single quantum from observed system to measuring apparatus is inherently unpredictable, uncontrollable, and unanalyzable.

Let us now inquire into the question of whether there are any experiments that could conceivably provide a test for these assumptions. It is often stated in connection with this problem that the mathematical apparatus of the quantum theory and its physical interpretation form a consistent whole and that this combined system of mathematical apparatus and physical interpretation is tested adequately by the extremely wide range of experiments that are in agreement with

predictions obtained by using this system. If assumptions (1) and (2) implied a unique mathematical formulation, then such a conclusion would be valid, because experimental predictions could then be found which, if contradicted, would clearly indicate that these assumptions were wrong. Although assumptions (1) and (2) do limit the possible forms of the mathematical theory, they do not limit these forms sufficiently to make possible a unique set of predictions that could in principle permit such an experimental test. Thus, one can contemplate practically arbitrary changes in the Hamiltonian operator, including, for example, the postulation of an unlimited range of new kinds of meson fields each having almost any conceivable rest mass, charge, spin, magnetic moment, etc. And if such postulates should prove to be inadequate, it is conceivable that we may have to introduce non-local operators, non-linear fields, S-matrices, etc. This means that when the theory is found to be inadequate (as now happens, for example, at distances of the order of 10⁻¹³ cm), it is always possible, and, in fact, usually quite natural, to assume that the theory can be made to agree with experiment by some as yet unknown change in the mathematical formulation alone, not requiring any fundamental changes in the physical interpretation. This means that as long as we accept the usual physical interpretation of the quantum theory, we cannot be led by any conceivable experiment to give up this interpretation, even if it should happen to be wrong. The usual physical interpretation therefore presents us with a considerable danger of falling into a trap, consisting of a self-closing chain of circular hypotheses, which are in principle unverifiable if true. The only way of avoiding the possibility of such a trap is to study the consequences of postulates that contradict assumptions (1) and (2) at the outset.
Thus, we could, for example, postulate that the precise outcome of each individual measurement process is in principle determined by some at present hidden elements or variables; and we could then try to find experiments that depended in a unique and reproducible way on the assumed state of these hidden elements or variables. If such predictions are verified, we should then obtain experimental evidence favoring the hypothesis that hidden variables exist. If they are not verified, however, the correctness of the usual interpretation of the quantum theory is not necessarily proved, since it may be necessary instead to alter the specific character of the theory that is supposed to describe the behavior of the assumed hidden variables.

We conclude then that a choice of the present interpretation of the quantum theory involves a real physical limitation on the kinds of theories that we wish to take into consideration. From the arguments given here, however, it would seem that there are no secure experimental or theoretical grounds on which we can base such a choice, because this choice follows from hypotheses that cannot conceivably be subjected to an experimental test and because we now have an alternative interpretation.

Then Bohm presents his own interpretation. We shall now give a general description of our suggested physical interpretation of the present mathematical formulation of the quantum theory. We shall carry out a more detailed description in subsequent sections of this paper. We begin with the one-particle Schrodinger equation, and shall later generalize to an arbitrary number of particles. This wave equation is

iℏ ∂ψ/∂t = -(ℏ²/2m)∇²ψ + V(x)ψ.   (1)

Now ψ is a complex function, which can be expressed as

ψ = R exp(iS/ℏ),   (2)

where R and S are real. We readily verify that the equations for R and S are

∂R/∂t = -(1/2m)[R∇²S + 2∇R·∇S],   (3)

∂S/∂t = -[(∇S)²/2m + V(x) - (ℏ²/2m)(∇²R/R)].   (4)

It is convenient to write P(x) = R²(x), or R = P^(1/2), where P(x) is the probability density. We then obtain

∂P/∂t + ∇·(P∇S/m) = 0,   (5)

∂S/∂t + (∇S)²/2m + V(x) - (ℏ²/4m)[∇²P/P - (1/2)(∇P)²/P²] = 0.   (6)

Now, in the classical limit (ℏ → 0) the above equations are subject to a very simple interpretation. The function S(x) is a solution of the Hamilton-Jacobi equation. If we consider an ensemble of particle trajectories which are solutions of the equations of motion, then it is a well-known theorem of mechanics that if all of these trajectories are normal to any given surface of constant S, then they are normal to all surfaces of constant S, and ∇S(x)/m will be equal to the velocity vector, v(x), for any particle passing the point x. Equation (5) can therefore be re-expressed as

∂P/∂t + ∇·(Pv) = 0.   (7)

This equation indicates that it is consistent to regard P(x) as the probability density for particles in our ensemble. For in that case, we can regard Pv as the mean current of particles in this ensemble, and Eq. (7) then simply expresses the conservation of probability. Let us now see to what extent this interpretation can be given a meaning even when ℏ ≠ 0. To do this, let us assume that each particle is acted on, not only by a classical potential, V(x), but also by a quantum-mechanical potential,

U(x) = -(ℏ²/2m)(∇²R/R).   (8)

Then Eq. (6) can still be regarded as the Hamilton-Jacobi equation for our ensemble of particles, v(x) = ∇S(x)/m can still be regarded as the particle velocity, and Eq. (5) can still be regarded as describing conservation of probability in our ensemble. Thus, it would seem that we have here the nucleus of an alternative interpretation for Schrodinger's equation.
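The chain of definitions above, from ψ to R and S, to the velocity ∇S/m, to the quantum-mechanical potential U, can be checked numerically. The sketch below is my illustration, not part of Bohm's paper; it uses units where ℏ = m = 1 and an assumed 1-D Gaussian packet of width sigma carrying momentum k, extracts R and S on a grid, and recovers both the Bohmian velocity and U.

```python
import numpy as np

hbar = m = 1.0
k = 2.0                    # assumed packet momentum (units hbar = m = 1)
sigma = 1.0                # assumed packet width
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# psi = R exp(iS/hbar): a real Gaussian envelope times a plane wave
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k * x)

R = np.abs(psi)
S = hbar * np.unwrap(np.angle(psi))   # unwrap removes the 2*pi phase jumps

# Bohmian velocity v = (dS/dx)/m; for this packet it should equal k/m everywhere
v = np.gradient(S, dx) / m
print(round(v[2000], 6))   # 2.0, i.e. k/m

# Quantum potential U = -(hbar^2/2m)(R''/R); analytically 1/(4 sigma^2) = 0.25 at x = 0
U = -(hbar**2 / (2 * m)) * np.gradient(np.gradient(R, dx), dx) / R
print(U[2000])
```

For a pure Gaussian R the quantum potential is an inverted parabola, largest at the packet centre, which is what pushes Bohmian trajectories apart as the packet spreads.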

The first step in developing this interpretation in a more explicit way is to associate with each electron a particle having precisely definable and continuously varying values of position and momentum. The solution of the modified Hamilton-Jacobi equation (4) defines an ensemble of possible trajectories for this particle, which can be obtained from the Hamilton-Jacobi function, S(x), by integrating the velocity, v(x) = ∇S(x)/m. The equation for S implies, however, that the particle moves under the action of a force which is not entirely derivable from the classical potential, V(x), but which also obtains a contribution from the quantum-mechanical potential, U(x) = -(ℏ²/2m)(∇²R/R). The function P(x) is not completely arbitrary, but is partially determined in terms of S(x) by the differential Eq. (3). Thus R and S can be said to codetermine each other. The most convenient way of obtaining R and S is, in fact, usually to solve Eq. (1) for the Schrodinger wave function, ψ, and then to use the relations R = |ψ| and S = ℏ arg(ψ).

Since the force on a particle now depends on a function of the absolute value, P(x), of the wave function, ψ(x), evaluated at the actual location of the particle, we have effectively been led to regard the wave function of an individual electron as a mathematical representation of an objectively real field. This field exerts a force on the particle in a way that is analogous to, but not identical with, the way in which an electromagnetic field exerts a force on a charge, and a meson field exerts a force on a nucleon. In the last analysis, there is, of course, no reason why a particle should not be acted on by a ψ-field, as well as by an electromagnetic field, a gravitational field, a set of meson fields, and perhaps by still other fields that have not yet been discovered.

The analogy with the electromagnetic (and other) fields goes quite far. For, just as the electromagnetic field obeys Maxwell's equations, the ψ-field obeys Schrodinger's equation. In both cases, a complete specification of the fields at a given instant over every point in space determines the values of the fields for all times. In both cases, once we know the field functions, we can calculate the force on a particle, so that, if we also know the initial position and momentum of the particle, we can calculate its entire trajectory.

In this connection, it is worthwhile to recall that the use of the Hamilton-Jacobi equation in solving for the motion of a particle is only a matter of convenience and that, in principle, we can always solve directly by using Newton's laws of motion and the correct boundary conditions. The equation of motion of a particle acted on by the classical potential, V(x), and the quantum-mechanical potential, Eq. (8), is

m(d²x/dt²) = -∇[V(x) + U(x)].   (8a)

It is in connection with the boundary conditions appearing in the equations of motion that we find the only fundamental difference between the ψ-field and other fields, such as the electromagnetic field. For in order to obtain results that are equivalent to those of the usual interpretation of the quantum theory, we are required to restrict the value of the initial particle momentum to p = ∇S(x). From the application of Hamilton-Jacobi theory to Eq. (6), it follows that this restriction is consistent, in the sense that if it holds initially, it will hold for all time. Our suggested new interpretation of the quantum theory implies, however, that this restriction is not inherent in the conceptual structure. We shall see in Sec. 9, for example, that it is quite consistent in our interpretation to contemplate modifications in the theory which permit an arbitrary relation between p and ∇S(x). The law of force on the particle can, however, be so chosen that in the atomic domain, p turns out to be very nearly equal to ∇S(x), while in processes involving very small distances, these two quantities may be very different. In this way, we can improve the analogy between the ψ-field and the electromagnetic field (as well as between quantum mechanics and classical mechanics). Another important difference between the ψ-field and the electromagnetic field is that, whereas Schrodinger's equation is homogeneous in ψ, Maxwell's equations are inhomogeneous in the electric and magnetic fields. Since inhomogeneities are needed to give rise to radiation, this means that our present equations imply that the ψ-field is not radiated or absorbed, but simply changes its form while its integrated intensity remains constant. This restriction to a homogeneous equation is, however, like the restriction to p = ∇S(x), not inherent in the conceptual structure of our new interpretation. Thus, in Sec. 9, we shall show that one can consistently postulate inhomogeneities in the equation governing ψ which produce important effects only at very small distances, and negligible effects in the atomic domain. If such inhomogeneities are actually present, then the ψ-field will be subject to being emitted and absorbed, but only in connection with processes associated with very small distances.
Once the ψ-field has been emitted, however, it will in all atomic processes simply obey Schrodinger's equation as a very good approximation. Nevertheless, at very small distances, the value of the ψ-field would, as in the case of the electromagnetic field, depend to some extent on the actual location of the particle.

Let us now consider the meaning of the assumption of a statistical ensemble of particles with a probability density equal to P(x) = R²(x) = |ψ(x)|². From Eq. (5), it follows that this assumption is consistent, provided that ψ satisfies Schrodinger's equation, and v = ∇S(x)/m. This probability density is numerically equal to the probability density of particles obtained in the usual interpretation. In the usual interpretation, however, the need for a probability description is regarded as inherent in the very structure of matter, whereas in our interpretation it arises because from one measurement to the next we cannot in practice predict or control the precise location of a particle, as a result of corresponding unpredictable and uncontrollable disturbances introduced by the measuring apparatus. Thus, in our interpretation, the use of a statistical ensemble is (as in the case of classical statistical mechanics) only a practical necessity, and not a reflection of an inherent limitation on the precision with which it is correct for us to conceive of the variables defining the state of the system.

Moreover, it is clear that if in connection with very small distances we are ultimately required to give up the special assumptions that ψ satisfies Schrodinger's equation and that v = ∇S(x)/m, then |ψ(x)|² will cease to satisfy a conservation equation and will therefore also cease to be able to represent the probability density of particles. Nevertheless, there would still be a true probability density of particles which is conserved. Thus, it would become possible in principle to find experiments in which |ψ(x)|² could be distinguished from the probability density, and therefore to prove that the usual interpretation, which gives |ψ(x)|² only a probability interpretation, must be inadequate. Moreover, we shall see that with the aid of such modifications in the theory, we could in principle measure the particle positions and momenta precisely, and thus violate the uncertainty principle. As long as we restrict ourselves to conditions in which Schrodinger's equation is satisfied, and in which v = ∇S(x)/m, however, the uncertainty principle will remain an effective practical limitation on the possible precision of measurements. This means that at present, the particle positions and momenta should be regarded as hidden variables, since as we shall see we are not now able to obtain experiments that localize them to a region smaller than that in which the intensity of the ψ-field is appreciable. Thus, we cannot yet find clear-cut experimental proof that the assumption of these variables is necessary, although it is entirely possible that, in the domain of very small distances, new modifications in the theory may have to be introduced, which would permit a proof of the existence of the definite particle position and momentum to be obtained.

We conclude that our suggested interpretation of the quantum theory provides a much broader conceptual framework than that provided by the usual interpretation, for all of the results of the usual interpretation are obtained from our interpretation if we make the following three special assumptions, which are mutually consistent: (1) that the ψ-field satisfies Schrodinger's equation; (2) that the particle velocity is restricted to v = ∇S(x)/m; (3) that we do not predict or control the precise location of the particle, but have, in practice, a statistical ensemble with probability density P(x) = |ψ(x)|². The use of statistics is, however, not inherent in the conceptual structure, but merely a consequence of our ignorance of the precise initial conditions of the particle.

As we shall see, it is entirely possible that a better theory of phenomena involving distances of the order of 10⁻¹³ cm or less would require us to go beyond the limitations of these special assumptions. Our principal purpose in this paper is to show, however, that if one makes these special assumptions, our interpretation leads in all possible experiments to the same predictions as are obtained from the usual interpretation.

It is now easy to understand why the adoption of the usual interpretation of the quantum theory would tend to lead us away from the direction of our suggested alternative interpretation. For in a theory involving hidden variables, one would normally expect that the behavior of an individual system should not depend on the statistical ensemble of which it is a member, because this ensemble refers to a series of similar but disconnected experiments carried out under equivalent initial conditions. In our interpretation, however, the quantum-mechanical potential, U(x), acting on an individual particle depends on a wave intensity, P(x), that is also numerically equal to a probability density in our ensemble. In the terminology of the usual interpretation of the quantum theory, in which one tacitly assumes that the wave function has only one interpretation, namely, in terms of a probability, our suggested new interpretation would look like a mysterious dependence of the individual on the statistical ensemble of which it is a member. In our interpretation, such a dependence is perfectly rational, because the wave function can consistently be interpreted both as a force and as a probability density.

Bohm goes on to further formulate his theory. We shall discuss here two cases which he uses to illustrate his assumptions: the double-slit experiment and the barrier problem. Let us now consider a scattering problem. Because it is comparatively easy to analyze, we shall discuss a hypothetical experiment in which an electron is incident in the z direction with an initial momentum, p₀, on a system consisting of two slits. After the electron passes through the slit system, its position is measured and recorded, for example, on a photographic plate.

Now, in the usual interpretation of the quantum theory, the electron is described by a wave function. The incident part of the wave function is ψ₀ = exp(ip₀z/ℏ); but when the wave passes through the slit system, it is modified by interference and diffraction effects, so that it will develop a characteristic intensity pattern by the time it reaches the position measuring instrument. The probability that the electron will be detected between x and x+dx is |ψ(x)|²dx. If the experiment is repeated many times under equivalent initial conditions, one eventually obtains a pattern of hits on the photographic plate that is very reminiscent of the interference patterns of optics.

In the usual interpretation of the quantum theory, the origin of this interference pattern is very difficult to understand. For there may be certain points where the wave function is zero when both slits are open, but not zero when only one slit is open. How can the opening of a second slit prevent the electron from reaching certain points that it could reach if this slit were closed? If the electron acted completely like a classical particle, this phenomenon could not be explained at all. Clearly, then, the wave aspects of the electron must have something to do with the production of the interference pattern. Yet, the electron cannot be identical with its associated wave, because the latter spreads out over a wide region. On the other hand, when the electron's position is measured, it always appears at the detector as if it were a localized particle.

The usual interpretation of the quantum theory not only makes no attempt to provide a single precisely defined conceptual model for the production of the phenomena described above, but it asserts that no such model is even conceivable. Instead of a single precisely defined conceptual model, it provides a pair of complementary models, viz., particle and wave, each of which can be made more precise only under conditions which necessitate a reciprocal decrease in the degree of precision of the other. Thus, while the electron goes through the slit system, its position is said to be inherently ambiguous, so that if we wish to obtain an interference pattern, it is meaningless to ask through which slit an individual electron actually passed. Within the domain of space within which the position of the electron has no meaning, we can use the wave model and thus describe the subsequent production of interference. If, however, we tried to define the position of the electron as it passed the slit system more accurately by means of a measurement, the resulting disturbance of its motion produced by the measuring apparatus would destroy the interference pattern. Thus, conditions would be created in which the particle model becomes more precisely defined at the expense of a corresponding decrease in the degree of definition of the wave model. When the position of the electron is measured at the photographic plate, a similar sharpening of the degree of definition of the particle model occurs at the expense of that of the wave model.

In our interpretation of the quantum theory, this experiment is described causally and continuously in terms of a single precisely definable conceptual model. As we have already shown, we must use the same wave function as is used in the usual interpretation; but instead we regard it as a mathematical representation of an objectively real field that determines part of the force acting on the particle. The initial momentum of the particle is obtained from the incident wave function, exp(ip₀z/ℏ), as p = ∂S/∂z = p₀. We do not in practice, however, control the initial location of the particle, so that although it goes through a definite slit, we cannot predict which slit this will be. The particle is at all times acted on by the quantum-mechanical potential, U = -(ℏ²/2m)(∇²R/R). While the particle is incident, this potential vanishes because R is then a constant;
but after it passes through the slit system, the particle encounters a quantum-mechanical potential that changes rapidly with position. The subsequent motion of the particle may therefore become quite complicated. Nevertheless, the probability that a particle shall enter a given region, dx, is, as in the usual interpretation, equal to |ψ(x)|²dx. We therefore deduce that the particle can never reach a point where the wave function vanishes. The reason is that the quantum-mechanical potential, U, becomes infinite when R becomes zero. If the approach to infinity happens to be through positive values of U, there will be an infinite force repelling the particle away from the origin. If the approach is through negative values of U, the particle will go through this point with infinite speed, and thus spend no time there. In either case, we obtain a simple and precisely definable conceptual model explaining why particles can never be found at points where the wave function vanishes. If one of the slits is closed, the quantum-mechanical potential is correspondingly altered, because the ψ-field is changed, and the particle may then be able to reach certain points which it was unable to reach when both slits were open. The slit is therefore able to affect the motion of the particle only indirectly, through its effect on the Schrodinger ψ-field. Moreover, if the position of the electron is measured while it is passing through the slit system, the measuring apparatus will, as in the usual interpretation, create a disturbance that destroys the interference pattern. In our interpretation, however, the necessity for this destruction is not inherent in the conceptual structure; and as we shall see, the destruction of the interference pattern could in principle be avoided by means of other ways of making measurements, ways which are conceivable but not now actually possible.
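The picture of a definite trajectory guided by the ψ-field can be sketched numerically. The example below is my illustration, not Bohm's own calculation: a 1-D stand-in for the two-slit geometry, in units ℏ = m = 1, where the two slits are modeled as two freely spreading Gaussian packets (separation and width are assumed values) and one particle is carried along by the guidance equation dx/dt = (ℏ/m) Im(ψ'/ψ).

```python
import numpy as np

hbar = m = 1.0
a, s0 = 3.0, 0.7          # assumed "slit" positions +/-a and packet width

# Complex width of a freely spreading Gaussian (standard free-particle result)
def st(t):
    return s0 * (1 + 1j * hbar * t / (2 * m * s0**2))

def psi_and_grad(x, t):
    """Superposition of the two slit packets and its x-derivative.
    The common normalization prefactor cancels in the velocity ratio below."""
    w = st(t)
    total, grad = 0j, 0j
    for x0 in (+a, -a):
        g = (2 * np.pi * w**2)**(-0.25) * np.exp(-(x - x0)**2 / (4 * s0 * w))
        total += g
        grad += -(x - x0) / (2 * s0 * w) * g
    return total, grad

def trajectory(x0, t_max=6.0, dt=1e-3):
    """Euler-integrate the Bohmian guidance equation dx/dt = (hbar/m) Im(psi'/psi)."""
    x, t = x0, 0.0
    while t < t_max:
        p, dp = psi_and_grad(x, t)
        x += dt * (hbar / m) * (dp / p).imag
        t += dt
    return x

xf = trajectory(3.0)      # a particle that left the upper slit
print(round(xf, 2))       # expected to remain on the upper side of the symmetry plane
```

By the symmetry of the two-packet superposition, the velocity field vanishes on the midplane, so trajectories never cross it: which slit the particle went through remains well defined throughout, exactly as the quoted passage claims.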

Considering the barrier problem, Bohm says, According to classical physics, a particle can never penetrate a potential barrier having a height greater than the particle's kinetic energy. In the usual interpretation of the quantum theory, it is said to be able, with a small probability, to leak through the barrier. In our interpretation of the quantum theory, however, the potential provided by the Schrodinger ψ-field enables it to ride over the barrier, but only a few particles are likely to have trajectories that carry them all the way across without being turned around.

We shall merely sketch in general terms how the above results can be obtained. Since the motion of the particle is strongly affected by its ψ-field, we must first solve for this field with the aid of Schrodinger's equation. Initially, we have a wave packet incident on the potential barrier; and because the probability density is equal to |ψ(x)|², the particle is certain to be somewhere within this wave packet. When the wave packet strikes the repulsive barrier, the ψ-field undergoes rapid changes which can be calculated if desired, but whose precise form does not interest us here. At this time, the quantum-mechanical potential, U(x) = -(ℏ²/2m)(∇²R/R), undergoes rapid and violent fluctuations. The particle orbit then becomes very complicated and, because the potential is time-dependent, very sensitive to the precise initial relationship between the particle position and the center of the wave packet. Ultimately, however, the incident wave packet disappears and is replaced by two packets, one of them a reflected packet and the other a transmitted packet having a much smaller intensity. Because the probability density is |ψ|², the particle must end up in one of these packets. The other packet can subsequently be ignored. Since the reflected packet is usually so much stronger than the transmitted packet, we conclude that during the time when the packet is inside the barrier, most of the particle orbits must be turned around, as a result of the violent fluctuations in the quantum-mechanical potential. [52]
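The "small probability of leaking through" can be made quantitative in the textbook case of a plane wave incident on a rectangular barrier (a standard result, not specific to Bohm's treatment). For E < V0 the transmission coefficient is T = [1 + V0² sinh²(κa) / (4E(V0 - E))]⁻¹ with κ = sqrt(2m(V0 - E))/ℏ. A sketch, with arbitrary illustrative parameters in units ℏ = m = 1:

```python
import math

def transmission(E, V0, a, m=1.0, hbar=1.0):
    """Transmission coefficient through a rectangular barrier of height V0
    and width a, for incident energy E < V0 (standard textbook formula)."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * math.sinh(kappa * a)**2) / (4 * E * (V0 - E)))

T = transmission(E=1.0, V0=2.0, a=3.0)
print(T)   # a small tunneling probability, of order 1e-3
```

The sinh² term grows exponentially with the barrier width, which is why, in Bohm's language, almost all trajectories are turned around inside a thick barrier and only a few ride over it.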

Psychic operators and a new formulation of science

In all the previous discussion we have made an essential assumption: thought works according to nature. In fact, thought is a product of natural processes. This seems obvious, but we usually tend to forget it. The participatory role of the thinker, or observer, was brought forward by modern quantum mechanics, whereas it was ignored in classical physics. Quantum mechanics raised issues concerning some paradoxes of nature, and therefore paradoxes of the human mind. Even if quantum theory is incomplete, we already know, thanks to Gödel, that logic certainly is, so that the pursuit of absolute knowledge, or of a theory of everything, is nothing more than wishful thinking.

But the main problem is still one of interpretation. What is, for example, non-locality? Is it instantaneous action at a distance, or is it just the elusive character of the microcosm? I believe that even the basic notion of a point-particle is elusive. Particles could well be extended objects in space-time, naively represented by points, so that a probability distribution is essential to find their location.

The assumption that the properties of things do not exist before measurement is in fact inescapable. We know nothing about things before we observe them. But the act of observation, which is a form of interaction, changes both the object and the way we think. It brings an object forward to our consciousness, and this is not the same object as the real one, but an object in consciousness. So the form and the content both change because of this reciprocal interaction.

Quantum mechanics also revealed a world of much more flexible mathematical symbols. The wave function is not just a wave equation but also a state representation; operators are universals acting on physical objects, while Dirac's bra-ket vectors are not only numerical but also symbolic: they may contain, for example, arrows for spin. They can also merge to produce an inner product, or they can be conjugated like verbs. This new quantum dawn in the world of science brings a promise of a great unification, not only of forces but also of different fields of science. I can't see, for example, any fundamental obstacle to expressing an emotion, or any psychic state, in the form of a Dirac vector. Then we could relate such psychic states to natural phenomena and see what happens. This is a genuinely fruitful modern notation, and the theory does not forbid such hybridisms. Bohm's Wholeness and the Implicate Order together with Jung's Synchronicity: An Acausal Connecting Principle are still my two favorite books. Such a unification of the physis and the psyche, of body and mind, would be for me the greatest fulfillment of all.
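To make the suggestion concrete: the mechanics of bra-ket notation, a ket, its conjugate bra, an inner product, an operator expectation value, fit in a few lines of linear algebra. The "psychic" labels below (calm, excited, and the "arousal" operator) are invented for illustration only; nothing here is standard psychology, only standard Dirac formalism.

```python
import numpy as np

# A toy two-component "ket" basis; the labels are invented for illustration
calm    = np.array([1.0, 0.0])
excited = np.array([0.0, 1.0])

# A normalized superposition |s> = (|calm> + |excited>)/sqrt(2)
s = (calm + excited) / np.sqrt(2)

# <s|s>: np.vdot conjugates its first argument, i.e. it forms the bra
norm = np.vdot(s, s)
print(norm)            # approximately 1.0

# A Hermitian "arousal" operator with eigenvalues 0 (calm) and 1 (excited)
A = np.diag([0.0, 1.0])
expectation = np.vdot(s, A @ s).real
print(expectation)     # approximately 0.5: an equal mixture of the two basis states
```

Whatever one thinks of applying this to emotions, the exercise shows why the notation is attractive: states, superpositions, and measurable quantities all live in one compact algebraic scheme.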

The holographic brain

Since ancient times there has been a fundamental question of whether a visible object exists both in the real world and in our minds. Our brain continuously, instinctively, and instantaneously makes all the necessary arrangements and comparisons to assure us of what is real and what is illusory. Even when it fails, this is due to some kind of trick (for example, light intensity, weather conditions, degree of concreteness of the object, etc.). But in any case we can recognize that things regarded as separated in space and time are parts of a single objective reality. We see an object, and immediately our brain hurries to confirm or discard any relevant information. This resembles the ancient view of a medium necessary to mediate between the eyes and the soul of the observer so that vision may be established. The medium in such a case is light, and the soul is all the sensory, emotional and mental responses of our brain. The process of what we call thinking may not happen exclusively in one part of the brain; it may include many different areas working together to give us a form and shape of the object corresponding to the visual stimuli as completely and precisely as possible.

According to Sheldrake, in his aforementioned article The Sense of Being Stared At, the theory that there is a detailed representation of the external world within the brain is by no means universally believed within academic circles. It is under attack by skeptical neuroscientists and philosophers. The more that is known about the eyes and the brain, the less likely the internal representation theory seems. The resolving power of the eyes is limited; each eye has a blind spot of which we remain unaware; the eyes are in frequent motion, saccading from point to point in the visual field three to four times a second. As Alva Noë has summarized the problem: how, on the basis of this fragmented and discontinuous information, are we able to enjoy the impression of seamless consciousness of an environment that is detailed, continuous, complex and high resolution?

The most radical solution to this problem is to suppose that the visual world is not an illusion, and is not inside the brain at all. It is where it seems to be, in the external world. The leading proponent of this view was J.J. Gibson (1979) in his ecological approach to perception. Rather than the brain building up an internal model of the environment, vision involves the whole animal and is concerned with the guidance of action. For Gibson, perception is active and direct. The animal moves its eyes, head and body, and it moves through the environment. Visual perception is not a series of static snapshots, but a dynamic visual flow. As Gibson put it, Information is conceived as available in the ambient energy flux, not as signals in a bundle of nerve fibers. It is information about both the persisting and the changing features of the environment together. Moreover, information about the observer and his movements is available, so that self-awareness accompanies perceptual awareness.

Max Velmans currently argues in favor of a theory of this kind. He discusses the example of a subject S looking at a cat as follows: According to reductionists there seems to be a phenomenal cat in S's mind, but this is really nothing more than a state of her brain. According to the reflexive model, while S is gazing at the cat, her only visual experience of the cat is the cat she sees out in the world. If she is asked to point to this phenomenal cat (her cat experience), she should point not to her brain but to the cat as perceived, out in space beyond the body surface... I assume that the brain constructs a representation or mental model of what is happening, based on the input from the initiating stimulus... Visual representations of a cat, for example, include encoding for shape, location and extension, movement, surface texture, color, and so on... Let us suppose that the information encoded in the subject's brain is formed into a kind of neural projection hologram. A projection hologram has the interesting quality that the three-dimensional image it encodes is perceived to be out in space, in front of its two-dimensional surface.

Velmans makes it clear that the idea of holographic projection is only an analogy, and stresses that he thinks perceptual projection is subjective and non-physical, occurring only in phenomenal as opposed to physical space. Nevertheless, these projections extend beyond the skull and generally coincide with physical space.

Recording a hologram

If our brain tries to understand the world in a holistic manner, then a hologram is a good example of how our brain may work. According to Wikipedia, holography is a technique that enables a light field, which is generally the product of a light source scattered off objects, to be recorded and later reconstructed when the original light field is no longer present, due to the absence of the original object. Holograms are recorded using a flash of light that illuminates a scene and then imprints on a recording medium, much in the way a photograph is recorded. In addition, however, part of the light beam must be shone directly onto the recording medium; this second light beam is known as the reference beam. A hologram requires a laser as the sole light source.

When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source. The interference pattern can be said to be an encoded version of the scene, requiring a particular key, that is, the original light source, in order to view its contents. This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field that is identical to the one originally produced by the scene and scattered onto the hologram. The image effect this produces in a person's retina is known as a virtual image.
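The record-then-reilluminate cycle can be sketched numerically. The 1-D off-axis model below is a simplified illustration with arbitrary parameters: an object field interferes with a tilted plane-wave reference, only the intensity (the "hologram") is kept, and the object is then recovered by demodulating at the reference carrier and low-pass filtering, the Fourier-domain analogue of re-illuminating the plate with the reference beam.

```python
import numpy as np

N = 4096
x = np.linspace(-1, 1, N, endpoint=False)
k0 = 400 * np.pi                      # reference-beam carrier frequency (arbitrary tilt)

# A smooth, band-limited "object" field
obj = np.exp(-x**2 / 0.05) * (1 + 0.5 * np.cos(20 * np.pi * x))

ref = np.exp(1j * k0 * x)             # tilted plane-wave reference beam

# Recording: only the interference *intensity* reaches the plate
hologram = np.abs(obj + ref)**2

# Reconstruction: demodulate at the carrier, then keep only the baseband term
demod = hologram * np.exp(1j * k0 * x)
spec = np.fft.fft(demod)
freq = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
spec[np.abs(freq) > k0 / 2] = 0       # low-pass: this surviving term is the object field
recon = np.fft.ifft(spec).real

corr = np.corrcoef(recon, obj)[0, 1]
print(corr)                           # close to 1: the object field is recovered
```

Writing the intensity out as |O|² + |R|² + OR* + O*R makes the "key" metaphor in the quoted passage literal: multiplying by the reference beam again shifts the O*R term back to baseband, and filtering discards the rest.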

A basic characteristic of holograms is that in a holographic reconstruction each region of the photographic plate contains the whole image. However, if we cut the plate into smaller pieces, the original image becomes obscure and less detailed. Furthermore, holography not only requires special techniques, such as lasers with fixed wavelength, in order to work, but also needs particular conditions to provide a clear holographic image. For example, to prevent external light from interfering, holograms are usually taken in darkness, or in low-level light of a different color from the laser light used in making the hologram. [53]

The particular conditions necessary to make a hologram may be rare in nature but could be a common feature of the brain, because its functions are complex enough to make all the necessary arrangements from the moment it receives an image from the eyes until the time this image is completely processed. We could say that our eyes see, but our brain watches. The fact that our brain is multi-functional, instead of processing and storing images in a linear, inflexible way, is shown by modern experiments in many scientific fields.

The holonomic brain theory, for example, was originated by Karl Pribram, who noticed that rats didn't forget how to perform tasks even if large parts of their brain were removed. In this model, each sense functions as a lens, refocusing wave patterns either by perceiving a specific pattern or context as swirls, or by discerning discrete grains or quantum units. According to Pribram, the tuning of wave frequency in cells of the primary visual cortex plays a role in visual imaging, while such tuning in the auditory system has been well established for decades. This holographic idea led to the coining of the term holonomic to describe the notion in wider contexts than just holograms. Pribram has written: What the data suggest is that there exists in the cortex, a multidimensional holographic-like process serving as an attractor or set point toward which muscular contractions operate to achieve a specified environmental result. The specification has to be based on prior experience (of the species or the individual) and stored in holographic-like form. Activation of the store involves patterns of muscular contractions (guided by basal ganglia, cerebellar, brain stem and spinal cord) whose sequential operations need only to satisfy the target encoded in the image of achievement, much as the patterns of sequential operations of heating and cooling must meet the setpoint of the thermostat. [54]

Holonomic brain theory

According to Scholarpedia, The Holonomic Brain Theory describes a type of process that occurs in fine fibered neural webs. The process is composed of patches of local field potentials described mathematically as windowed Fourier transforms or wavelets. The Fourier approach to sensory perception is the basis for the holonomic theory of brain function. Holonomic processes have more recently been called Quantum Holography by Walter Schempp (1993) in their application to image processing in tomography as in PET scans and functional Magnetic Resonance Imaging (fMRI)- and even more recently for processing images in digital cameras. Dennis Gabor (1946) had pioneered the use of windowed Fourier processes for use in communication theory and noted its similarity to its use in describing quantum processes in subatomic physics. Gabor therefore called his units of communication quanta of information. Karl Pribram's holonomic theory is based on evidence that the dendritic receptive fields in sensory cortexes are described mathematically by Gabor functions.

Taking the visual system as an example, the form of an optical image is transformed by the retina into a quantum process that is transmitted to the visual cortex. Each dendritic receptive field thus represents the spread of the properties of that form originating from the entire retina. Taken together, cortical receptive fields form patches of dendritic local field potentials described mathematically by Gabor functions. Note that the spread of properties occurs within each patch; there is no spread of the Fourier process over the large extent of the entire cortex. In order to serve the perceptual process the patches must become assembled by the operation of nerve impulses in axonal circuits. Processing the vibratory sensory inputs in audition and in tactile sensation proceeds somewhat similarly.
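A Gabor function is simply a sinusoid confined by a Gaussian window, which is what makes it a "windowed" Fourier element: localized like a receptive-field patch, oscillatory like a Fourier component. A minimal one-dimensional sketch (the receptive-field models in the literature use 2-D Gabor filters with an orientation parameter; the parameters below are purely illustrative):

```python
import numpy as np

def gabor(x, sigma=1.0, freq=2.0, phase=0.0):
    """1-D Gabor function: a cosine carrier under a Gaussian envelope."""
    envelope = np.exp(-x**2 / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * freq * x + phase)
    return envelope * carrier

x = np.linspace(-4.0, 4.0, 801)
g = gabor(x)

# Localized: the response dies off away from the center of the patch...
assert abs(g[0]) < 1e-2 and abs(g[-1]) < 1e-2
# ...yet oscillatory inside it, like a local Fourier component.
assert g.max() > 0.9 and g.min() < -0.5
```

The Gaussian window is what keeps the Fourier-like process confined to a patch rather than spread over the whole cortex.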

But Gabor and similar wavelet functions, though useful in communication and computations, fail to serve as the properties of images and objects that guide us in the space-time world we navigate. In order to attain such properties an inverse Fourier transformation has to occur. Fortunately the Fourier process is readily invertible; the same transformation that begets the holographic domain gets us back into space-time. The inverse Fourier transformation is accomplished by movement. In vision, nystagmoid movements define pixels, points which are mathematically defined by Point Attractors. Larger eye and head movements define groupings of points which can readily be recognized as moving space-time figures. Such groupings are mathematically defined as Symmetry Groups. The brain processes involved are organized by a motor cortex immediately adjacent to the primary visual cortex. Similar motor strips are located adjacent to other sensory input systems.
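The invertibility the passage relies on is easy to verify numerically: the transform that carries a space-time signal into the spectral domain also carries it back, with nothing lost beyond round-off. The signal here is arbitrary random data:

```python
import numpy as np

# The Fourier transform is readily invertible: forward into the
# spectral ("holographic") domain, then back into space-time.
rng = np.random.default_rng(1)
signal = rng.standard_normal(256)             # arbitrary space-time pattern

spectrum = np.fft.fft(signal)                 # into the spectral domain
recovered = np.real(np.fft.ifft(spectrum))    # and back again

assert np.allclose(recovered, signal)         # the round trip loses nothing
```

This is only the mathematical point, of course; the theory's further claim, that movement performs the inverse transformation in the brain, is an empirical hypothesis, not something the mathematics alone establishes.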

The holonomic theory of brain function has two roots: 1) the experimental evidence accrued during the 1960s and 1970s that mapped certain brain processes as local field potentials as well as bursts of electrical discharges traversing circuits, and 2) the mathematical insights of Dennis Gabor in the 1940s as realized in optical imaging by Emmett Leith in the early 1960s. The experimental mapping procedure originated with Stephen Kuffler. Kuffler (1953), working with the visual system, took the common clinical procedure of mapping visual fields into the microelectrode laboratory. A visual field is described as the part of the environment that a person can see with one eye without moving that eye. Maps of this field are routinely recorded by means of the verbal response of the person to a spot of light on an appropriate medium such as graph paper.

For the verbal response of the human, Kuffler substituted the response of a single neuron recorded from a microelectrode implanted in the visual system of an animal. Because the record was made from the domain of a single neuron rather than the whole visual system, the map portrayed what was going on in the dendritic arbor of that neuron. The map was no longer a map of what was seen by the whole visual system but only that part, the receptive dendritic field, viewed by the particular neuron.

The dendritic arbor (figure above) is made up of fibers for the most part too fine to support propagated action potentials, or spikes. Rather, the local field potential changes oscillate between moderate excitation (postsynaptic depolarization) and inhibition (postsynaptic hyperpolarization). The maps therefore represent a distribution of oscillations of electrical potentials within a particular dendritic arbor. Hubel and Wiesel (1959), working in Kuffler's laboratory, discovered that the visual cortex responded more effectively to an elongated line or bar presented at a specific orientation than to a spot of light. However, a decade later many laboratories found that oriented gratings composed of lines at different spacings, rather than single lines, were the effective stimulus to engage a neuron in the visual cortex. These gratings were characterized by their spatial frequency: scanning the grating produces an alternation between light and dark, the frequency of alternation depending on the spacing of the grating. An example in the somatosensory cortex of three receptive fields and their contour maps produced by a tactile grating is presented in the figure below.

An example of the three dimensional representation of the surface distribution and associated contour map of the electrical response to buccal nerve stimulation.

Spatial decay of a synaptic potential initiated by an input onto a dendrite.
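The notion of spatial frequency used above can be made concrete with a short sketch: a grating is an alternation of light and dark whose frequency is set by the bar spacing, so halving the spacing doubles the frequency. The luminance profile and units below are illustrative, not from any experiment:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000, endpoint=False)  # position across the visual field

def grating(spacing):
    """Sinusoidal luminance profile of bars with the given spacing."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * x / spacing))

def dominant_frequency(profile):
    """Strongest spatial frequency, in cycles per field width."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    return int(np.argmax(spectrum))

coarse = grating(0.2)   # widely spaced bars
fine = grating(0.1)     # half the spacing

assert dominant_frequency(coarse) == 5   # 5 cycles across the field
assert dominant_frequency(fine) == 10    # halving the spacing doubles the frequency
```

It is this single number, cycles per unit of visual field, that the cortical neurons described above were found to tune to, rather than the position of any individual line.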

There are four common misconceptions about the application of holographic and holonomic theories- that is, holonomic procedures- to brain function. The first and most important of these is that, contrary to what is shown in the figure above, the processing that occurs in the dendritic arbor, in the receptive field, is performed by propagated nerve impulses. Finding that impulses do occur in certain dendrites readily produces such a misconception. An excellent example appears in Eric Kandel's 2006 autobiography In Search of Memory. Kandel found such impulses in the dendrites of the hippocampus early in his career: By applying the powerful methodologies of cell biology, Alden and I easily picked some low hanging intellectual fruit. We found that action potentials [nerve impulses] in the pyramidal cells of the hippocampus originated at more than one site within the cell. We had good evidence to suggest that action potentials in pyramidal cells of the hippocampus can also begin in the dendrites

This proved to be an important discovery. Up to that time most scientists, including Dominick Purpura and Harry Grundfest, thought that dendrites could not be excited and therefore could not generate action potentials.

Wilfrid Rall, a major theorist and model builder at NIH, had developed a mathematical model showing how dendrites of motor neurons function. This model was based on the fundamental assumption that the cell membrane of dendrites is passive: it does not contain voltage-gated sodium channels and therefore cannot support an action potential. The intracellular signals we recorded were the first evidence to the contrary, and our finding later proved to be a general principle of neuronal function. The problem that Kandel's finding poses can be called the tyranny of names. Those of us who have been concerned with processes occurring in fine-fibered webs have been too prone to focus on dendrites per se. Kandel's finding has been repeatedly confirmed, as has his conclusion, which has been restated in his (as well as other) otherwise excellent neuroscience texts. Dendrites, defined as afferents to neural cell bodies, come in all sizes. The biggest of them all are the afferent peripheral nerves entering the spinal cord. Such large fibers readily support the propagation of nerve impulses. Large diameter fibers occur both as afferent (dendritic) and efferent (axonal) fibers in neural circuits. The hippocampal dendrites, though not as large as peripheral nerves, have sizable diameters. The very fact that Kandel and others can make intracellular recordings from these hippocampal dendrites attests to their considerable size. The webs wherein holonomic processes occur (in the hippocampus and elsewhere) are made up of pre- and postsynaptic slim branches of larger fibers. Fine fibered webs occur in the brain, both at the ends of branching axons and within dendritic arbors. The holonomic brain theory is founded in the processing that occurs in fine fiber webs wherever they occur.

[The tyranny of names was called to my attention when, in the early 1950s, I found responses in the precentral motor cortex of the brain evoked by sciatic stimulation. It took much subsequent research and weeks of phone conversations and visits by neuroscience friends Clint Woolsey and Wade Marshall to witness demonstrations in my laboratory to convince them- and me- that the precentral cortex is actually a sensory cortex for intentional action, not just an efferent path to muscles from the brain.] Contrast Kandel's statement with another, made repeatedly over the decades by Ted Bullock (1981): In 1957 it was possible to say These considerations also lead us to the suggestion that much of normal nervous function occurs without impulses [emphasis in the original] but mediated by graded activity, not only as response but also as stimulus (Bullock 1957). The notion had appealed to me for some time (Bullock 1945) and in 1947 I wrote in a review: The far-reaching implications of the assumption that neurons can affect each other by means distinct from classical impulses in synaptic pathways are obvious. I referred to Bremer (1944) and Gerard (1941), who influenced me most in this view, which remained for a long time ignored in the conventional orthodoxy. [Currently therefore,] I propose that a circuit in our context of nervous tissue is an oversimplified abstraction involving a limited subset of communicated signals...

That, in fact, there are many parallel types of signals and forms of response, often skipping over neighbors [that are] in direct contact and acting upon more or less specified classes of nearby or even remote elements. Thus the true picture of what is going on could not be drawn as a familiar circuit; and specified influence would not be properly called connectivity, except for a very limited subset of neighbors. Instead of the usual terms neural net or local circuit I would suggest we think of a neural throng, that is a form of densely packed social gathering with more structure and goals than a mob.

Diagram of microstructure of synaptic domains in cortex. The ensemble of overlapping circles represents the junctions between branches of input axons and cortical dendrites.

Or take the author's statements in the 1991 Brain and Perception. In Chap. 4, Pribram describes one example of the manner in which some cortical dendrites interact: Receptive fields in the sensory cortex are composed of polarizations occurring in dendritic arbors of cortical neurons. According to the holonomic brain theory these polarizations collectively interact to produce the receptive field properties mapped during single neuron recording (figure above). [The recording electrode, that is, the relevant axon, samples that interaction.] Dendrites are fitted with spines that resemble little cilia, or hairs, protruding perpendicularly from the dendritic fiber. These spines have bulbs at their endings, knob-like heads that make contact with branches of axons and other dendrites to form synapses. Activity in axons and in other dendrites, such as those stemming from reciprocal synapses, produces depolarizations and hyperpolarizations in the dendritic spines.

Shepherd, Rall, Perkel and their colleagues modeled the process whereby these postsynaptic events occurring in the spine heads interact. The issue is this: The stalks of the spines are narrow and therefore impose a high resistance to conduction (active or passive) toward the dendritic branch. Spine head depolarizations (as well as hyperpolarizations) must therefore interact with one another if they are to influence the action potentials generated at the axon hillock of the parent cell of the dendrite. The interactions (dromic and antidromic) among dendritic potentials (by means of which the signal becomes effective at the next stage of processing) thus depend on the simultaneous activation of both pre- and postsynaptic sites.

According to Shepherd, the relative efficacy of distal dendritic inputs would [in this manner] be greatly enhanced... information might thus be processed through precise timing of specific inputs to different neighboring spines... These precise interactions would greatly increase the complexity of information processing that can take place in distal dendrites.

Another common misconception is that the Fourier transformation is globally spread across the entire brain cortex. This has led to misleading statements such as The brain is a hologram. Only one particular brain process is holonomic, the one taking place in the transactions occurring in its fine fibered web. From the outset in the early 1960s, when Pribram proposed the theory, he noted that the spread function (as it is appropriately called) is limited to a receptive field of an individual neuron in a cortical sensory system- and he actually thought that this was a serious problem for the theory until it was shown by radio astronomers that such limited regions could be patched together to encompass large regions of observations.

Despite these precise early descriptions, psychophysicists and others in the scientific community spent much time and effort to show that a global Fourier transformation would not work to explain sensory function. Few paid heed to patch holography- which Pribram had dubbed holonomy and which engineers and mathematicians call a windowed Fourier Transformation.

The third common misconception regarding holography and holonomy is that these processes deal with waves. Waves occur in space and in time. The Fourier transformation deals with the intersections among waves, the interference patterns created by differences among their phases. The amplitudes of these intersections are Fourier coefficients, discrete numbers that are used for computation. These numbers are useful in statistical calculations. The statistical and the spectral computations are readily convertible into one another: successive terms in the Fourier series correspond to orders in statistics and thus can serve as vectors in graphs.
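One concrete instance of this convertibility is Parseval's theorem: the variance of a zero-mean signal (a statistic) equals the summed power of its Fourier coefficients (a spectral quantity), so the two descriptions carry the same number. A minimal check on random data:

```python
import numpy as np

# Parseval's theorem as a bridge between the statistical and spectral
# descriptions: variance equals total spectral power.
rng = np.random.default_rng(2)
x = rng.standard_normal(512)
x = x - x.mean()                              # zero mean, so variance = mean power

coeffs = np.fft.fft(x) / len(x)               # discrete Fourier coefficients
spectral_power = np.sum(np.abs(coeffs) ** 2)  # summed power of the coefficients

assert np.isclose(spectral_power, np.var(x))  # same number, two descriptions
```

Whether one works with the coefficients or with the statistics is then a matter of convenience, which is the point the next paragraph takes up.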

Convertibility raises the question of the value of having multiple mathematical representations of data. In the author's experience, which reflects earlier discussions in quantum physics, the spectral representation displays a more nuanced picture, while the statistical/vector representation is more computationally friendly.

A final common misconception that needs to be dealt with is that all memory storage is holonomic (holographic). This misconception stems from juxtaposing memory storage with memory retrieval. However, in order for retrieval to occur, the memory must be stored in such a way that it can become retrieved. In other words, retrieval is dependent on storing a code. The retrieval process, the encoding, is stored in the brain's circuitry. We can, therefore, distinguish a deep holonomic store (which can be content addressable) from a surface pattern (such as naming) of stored circuitry. Thus the deep dis-membered holonomic store can be re-membered. Holographic and holonomic processes are holistic, as their names imply. This attribute has endeared the concept to humanists and some philosophers and scientists. However, many of these proponents of a holistic view conflate two very different forms of holism. In biology and psychology a well-known form, which I like to call wholism, is captured in the saying that the whole is more than and different from the sum of its parts. Reductionist and materialist scientists and philosophers like this form of wholism because they can discern emergent properties as they investigate higher orders and can try to reduce the higher order to the lower ones, either as to their properties or the theoretical terms that describe their relations. By contrast, holographic and holonomic processes are truly holistic in that they spread patterns everywhere and everywhen to entangle the parts with one another. In this domain, space and time no longer exist and therefore neither does causality in Aristotle's sense of efficient causation. This relation between cause and effect has served well as the coin of much of current science and the philosophy of science. However, Aristotle's more comprehensive formal or formative causation is more appropriate to descriptions of more complex orders such as language and those composed by holographic and holonomic brain processes. Holism in this form is related to holy and healthy. My hope has been that as scientists begin to understand and accept the validity of holonomic processes as truly scientific, this understanding will help resolve the current estrangement between the sciences and the humanities, and between sophisticated pursuits of science and sophisticated pursuits of religion. [55]

Bohm's Wholeness and the Implicate Order

It is extremely interesting, not only from a scientific perspective but also for purely personal reasons, to ask and to investigate how the perception of reality is generated. Most people take reality as a matter of course, something which has always existed and whose essence is never going to change. However, in the course of human history things once considered correct, true, or real have been revised, so that they no longer have the same meaning as before, or are even no longer considered elements of reality at all. Therefore, what we think about reality and what reality is in itself may be two completely different things. This relationship between human intelligence and reality is analyzed by David Bohm in his book, Wholeness and the Implicate Order. As he says: It is useful here to consider how such a distinction may have arisen. It is well known, for example, that a young child often finds it difficult to distinguish the contents of his thought from real things (e.g., he may imagine that these contents are visible to others, as they are visible to him, and he may be afraid of what others call imaginary dangers). So while he tends to begin the process of thinking naively (i.e. without being explicitly conscious that he is thinking), at some stage he becomes consciously aware of the process of thought, when he realizes that some of the things that he seems to perceive are actually only thoughts and therefore no things (or nothing) while others are real (or something).

Primitive man must often have been in a similar situation. As he began to build up the scope of his practical technical thought in his dealings with things, such thought images became more intense and more frequent. In order to establish a proper balance and harmony in the whole of his life he probably felt the need to develop his thought about totality in a similar way. In this latter kind of thought, the distinction between thought and thing is particularly liable to become confused. Thus, as men began to think of the forces of nature and of gods, and as artists made realistic images of animals and gods, sensed as possessing magical or transcendent powers, man was led to engage in a kind of thought without any clear physical referent that was so intense, so unremitting, and so realistic that he could no longer maintain a clear distinction between mental image and reality.

Such experiences must eventually have given rise to a deeply felt urge to clear up this distinction (expressed in questions such as Who am I?, What is my nature?, What is the true relationship of man, nature and the gods?, etc.), for to remain permanently confused about what is real and what is not is a state that man must ultimately find to be intolerable, since it not only makes impossible a rational approach to practical problems but it also robs life of all meaning.

It is clear, then, that sooner or later man in his overall process of thought would engage in systematic attempts to clear up this distinction. One can see that at some stage it has to be felt in this process that it is not enough to know how to distinguish particular thoughts from particular things. Rather, it is necessary to understand the distinction universally. Perhaps, then, the primitive man or the young child may have a flash of insight in which he sees, probably without explicitly verbalizing it, that thought as a whole has to be distinguished from the whole of what is not thought. Bohm's approach is particularly informative, since he virtually draws the boundary between what is natural or real and what is absurd, a product of imagination. However, to understand this distinction between what is real and what is imaginary, we must first understand both the nature of reality and the nature of human thought. With respect to the former, the question of what reality is, Bohm tells us: I regard the essence of the notion of process as given by the statement: Not only is everything changing, but all is flux. That is to say, what is is the process of becoming itself, while all objects, events, entities, conditions, structures, etc., are forms that can be abstracted from this process.

The best image of process is perhaps that of the flowing stream, whose substance is never the same. On this stream, one may see an ever-changing pattern of vortices, ripples, waves, splashes, etc., which evidently have no independent existence as such. Rather, they are abstracted from the flowing movement, rising and vanishing in the total process of the flow. Such transitory subsistence as may be possessed by these abstracted forms implies only a relative independence or autonomy of behavior, rather than absolutely independent existence as ultimate substances.

In other words, according to Bohm, reality is not something that preexists; rather, it is a choice of human consciousness among a number of possible configurations at a more fundamental level of information flow, or of processes. We will now see the latter aspect, how this level of fundamental processes is related to the level of human perception: Having discussed what the notion of process implies concerning the nature of reality, let us now consider how this notion should bear on the nature of knowledge. Clearly, to be consistent, one has to say that knowledge, too, is a process, an abstraction from the one total flux, which latter is therefore the ground both of reality and of knowledge of this reality. Of course, one may fairly readily verbalize such a notion, but in actual fact it is very difficult not to fall into the almost universal tendency to treat our knowledge as a set of basically fixed truths, and thus not of the nature of process (e.g., one may admit that knowledge is always changing but say that it is accumulative, thus implying that its basic elements are permanent truths which we have to discover). Indeed, even to assert any absolutely invariant element of knowledge (such as all is flux) is to establish in the field of knowledge something that is permanent; but if all is flux, then every part of knowledge must have its being as an abstracted form in the process of becoming, so that there can be no absolutely invariant elements of knowledge.

Therefore, knowledge of reality is also a part of the process of reality, so that knowledge is itself an endless process of acquiring reality. The interesting part, however, is how this processual character of knowledge is connected with human perception:

What, then, is the relationship of intelligence to thought? Briefly, one can say that when thought functions on its own, it is mechanical and not intelligent, because it imposes its own generally irrelevant and unsuitable order drawn from memory. Thought is, however, capable of responding, not only from memory but also to the unconditioned perception of intelligence that can see, in each case, whether or not a particular line of thought is relevant and fitting.

One may perhaps usefully consider here the image of a radio receiver. When the output of the receiver feeds back into the input, the receiver operates on its own, to produce mainly irrelevant and meaningless noise, but when it is sensitive to the signal on the radio wave, its own order of inner movement of electric currents (transformed into sound waves) is parallel to the order in the signal, and thus the receiver serves to bring a meaningful order originating beyond the level of its own structure into movements on the level of its own structure. One might then suggest that in intelligent perception, the brain and nervous system respond directly to an order in the universal and unknown flux that cannot be reduced to anything that could be defined in terms of knowable structures. In this way, human perception of the world is a state of coordination with a universal evolutionary process, where human perception, depending on its sensitivity, perceives one or another combination of things as reality. However, according to Bohm, full realization of the world can only be achieved with the perception of its own wholeness: While it is thus clear that ultimately thought and thing cannot properly be analyzed as separately existent, it is also evident that in man's immediate experience some such analysis and separation has to be made, at least provisionally, or as a point of departure. Indeed, the distinction between what is real and what is mere thought and therefore imaginary or illusory is absolutely necessary, not only for success in practical affairs but also if we are in the long run even to maintain our sanity

Indeed, all man-made features of our general environment are, in this sense, extensions of the process of thought, for their shapes, forms, and general orders of movement originate basically in thought, and are incorporated within this environment, in the activity of human work, which is guided by such thought. Vice versa, everything in the general environment has, either naturally or through human activity, a shape, form, and mode of movement, the content of which flows in through perception, giving rise to sense impressions which leave memory traces and thus contribute to the basis of further thought.

In this whole movement, content that was originally in memory continually passes into and becomes an integral feature of the environment, while content that was originally in the environment passes into and becomes an integral feature of memory, so that (as pointed out earlier) the two participate in a single total process, in which analysis into separate parts (e.g. thought and thing) has ultimately no meaning. The key question is, then: Can we be aware of the ever-changing and flowing reality of this actual process of knowledge? If we can think from such awareness, we will not be led to mistake what originates in thought for what originates in reality that is independent of thought. And thus, the art of thinking with totality as its content may develop in a way that is free of the confusion inherent in those forms of thought which try to define, once and for all, what the whole of reality is, and which therefore lead us to mistake the content of such thought for the overall order of a total reality that would be independent of thought.

Is there, finally, an objective reality independent of the needs and purposes of humans? But how could this be possible, since we tend to color everything with our own needs, beliefs and ambitions? In other words, if reality is closely and inescapably connected with our thoughts, in a reciprocal relationship, then what we have to do is to be as active and conscious as possible with respect to this process of the truth of knowledge, which obviously never ceases and never ends. In Bohm's own words: Any particular form of thinking about the totality does indeed indicate a way of looking at our whole contact with reality, and thus it has implications for how we may act in this contact. However, each such way of looking is limited, in the sense that it can lead to overall order and harmony only up to some point, beyond which it must cease to be relevant and fitting... Ultimately, the actual movement of thought embodying any particular notion of totality has to be seen as a process, with ever-changing form and content. If this process is carried out properly, with attention to and awareness of thought in its actual flux of becoming, then one will not fall into the habit of treating the content tacitly as a final and essentially static reality that would be independent of thought.

Even this statement about the nature of our thinking is, however, itself only a form in the total process of becoming, a form which indicates a certain order of movement of the mind, and a certain disposition needed for the mind to engage harmoniously in such movement. So there is nothing final about it. Nor can we tell where it will lead. Evidently, we have to be open to further fundamental changes of order in our thought as we go on with the process. Such changes have to come about in fresh and creative acts of insight, which are necessary for the orderly movement of such thought. What we are suggesting in this chapter is, then, that only a view of knowledge as an integral part of the total flux of process may lead generally to a more harmonious and orderly approach to life as a whole, rather than to a static and fragmentary view which does not treat knowledge as process, and which splits knowledge off from the rest of reality.

During the course of human history there have been several cases where human thought had to abandon earlier, insufficient views about the world for the sake of new, effective ones. Whether this paradigm shift, as it is called, leads to a simple improvement of our world view or rather to a radical revision apparently depends on the necessity that drives it. Bohm considers that this new knowledge can be achieved in two ways, through accommodation and assimilation: It is pertinent to this subject to consider Piaget's description of all intelligent perception in terms of two complementary movements, accommodation and assimilation... Examples of accommodation are fitting, cutting to a pattern, adapting, imitating, conforming to rules, etc. On the other hand, to assimilate is to digest or to make into a comprehensive and inseparable whole (which includes oneself). Thus, to assimilate means to understand. It is clear that in intelligent perception, primary emphasis has in general to be given to assimilation, while accommodation tends to play a relatively secondary role in the sense that its main significance is as an aid to assimilation

Clearly, such perception can appropriately take place at almost any time, and does not have to be restricted to unusual and revolutionary periods in which one finds that the older orders can no longer be conveniently adapted to the facts. Rather, one may be continually ready to drop old notions of order in various contexts, which may be broad or narrow, and to perceive new notions that may be relevant in such contexts. Thus, understanding the fact by assimilating it into new orders can become what could perhaps be called the normal way of doing scientific research.

It is clear then, that changes of order and measures in the theory ultimately lead to new ways of doing experiments and to new kinds of instruments, which in turn lead to the making of corresponding ordered and measured facts of new kinds. In this development, the experimental fact serves in the first instance as a test for theoretical notions… As long as such a common measure prevails, then of course the theory used need not be changed. If the common measure is found not to be realized, then the first step is to see whether it can be re-established by means of adjustments within the theory without a change in its underlying order. If, after reasonable efforts, a proper accommodation of this kind is not achieved, then what is needed is a fresh perception of the whole fact…

As relativity and quantum theory have shown that it has no meaning to divide the observing apparatus from what is observed, so the considerations discussed here indicate that it has no meaning to separate the observed fact (along with the instruments used to observe it) from the theoretical notions of order that help to give shape to this fact. As we go on to develop new notions of order going beyond those of relativity and quantum theory, it will thus not be appropriate to try immediately to apply these notions to current problems that have arisen in the consideration of the present set of experimental facts. Rather, what is called for in this context is very broadly to assimilate the whole of the fact in physics into the new theoretical notions of order. After this fact has generally been digested, we can begin to glimpse new ways in which such notions of order can be tested and perhaps extended in various directions… Fact and theory are thus seen to be different aspects of one whole in which analysis into separate but interacting parts is not relevant.

Then, in order to indicate this relationship between the wholeness of the natural world and the fragmented way humans perceive it, Bohm discusses the concepts of the lens and the hologram: An example of the very close relationship between instrumentation and theory can be seen by considering the lens, which was indeed one of the key features behind the development of modern scientific thought. The essential feature of a lens is, as indicated in Figure 6.1, that it forms an image in which a given point P in the object corresponds (in a high degree of approximation) to a point Q in the image. By thus bringing the correspondence of specified features of object and image into such sharp relief, the lens greatly strengthened man's awareness of the various parts of the object and of the relationship between these parts. In this way, it furthered the tendency to think in terms of analysis and synthesis. Moreover, it made possible an enormous extension of the classical order of analysis and synthesis to objects that were too far away, too big, too small, or too rapidly moving to be thus ordered by means of unaided vision. As a result, scientists were encouraged to extrapolate their ideas and to think that such an approach would be relevant and valid no matter how far they went, in all possible conditions, contexts, and degrees of approximation.
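The one-to-one correspondence between object point P and image point Q that Bohm attributes to the lens can be sketched with the standard thin-lens equation. This is a textbook relation, not something taken from Bohm's text; the function name and sample numbers are purely illustrative:

```python
# Thin-lens imaging: each object point maps to exactly one image point,
# illustrating the point-to-point (P -> Q) correspondence of the lens.

def image_point(f, d_object, y_object):
    """Return (image distance, image height) for a thin lens of focal length f."""
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i  ->  d_i = 1/(1/f - 1/d_o)
    d_image = 1.0 / (1.0 / f - 1.0 / d_object)
    magnification = -d_image / d_object
    return d_image, magnification * y_object

# An object 30 cm from a 10 cm lens images at 15 cm, inverted and halved:
d_i, y_i = image_point(f=10.0, d_object=30.0, y_object=2.0)
print(d_i, y_i)  # 15.0 -1.0
```

Every object point yields one sharp image point; it is exactly this analyzability into corresponding parts that the hologram, discussed next, lacks.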

However, relativity and quantum theory imply undivided wholeness, in which analysis into distinct and well-defined parts is no longer relevant. Is there an instrument that can help give a certain immediate perceptual insight into what can be meant by undivided wholeness, as the lens did for what can be meant by analysis of a system into parts? It is suggested here that one can obtain such insight by considering the hologram.

As shown in Figure 6.2, coherent light from a laser is passed through a half-silvered mirror. Part of the beam goes on directly to a photographic plate, while another part is reflected so that it illuminates a certain whole structure. The light reflected from this whole structure also reaches the plate, where it interferes with that arriving there by a direct path. The resulting interference pattern which is recorded on the plate is not only very complex but also usually so fine that it is not even visible to the naked eye. Yet, it is somehow relevant to the whole illuminated structure, though only in a highly implicit way.

This relevance of the interference pattern to the whole illuminated structure is revealed when the photographic plate is illuminated with laser light. As shown in Figure 6.3, a wave-front is then created which is very similar in form to that coming off the original illuminated structure. By placing the eye in this way, one in effect sees the whole of the original structure, in three dimensions, and from a range of possible points of view (as if one were looking at it through a window). If we then illuminate only a small region R of the plate, we still see the whole structure, but in somewhat less sharply defined detail and from a decreased range of possible points of view (as if we were looking through a smaller window). It is clear, then, that there is no one-to-one correspondence between parts of an illuminated object and parts of an image of this object on the plate. Rather, the interference pattern in each region R of the plate is relevant to the whole structure, and each region of the structure is relevant to the whole of the interference pattern on the plate… Because of the wave properties of light, even a lens cannot produce an exact one-to-one correspondence. A lens can therefore be regarded as a limiting case of a hologram.
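The property Bohm describes, that a small region R of the plate still reconstructs the whole scene at reduced sharpness, has a loose computational analogue in the Fourier transform, which underlies one class of holograms. In the sketch below (assuming numpy; the image content and window size are arbitrary), we keep only a small region of the frequency plane and still recover a blurred version of the entire image:

```python
import numpy as np

# Fourier analogy to the hologram: every region of the frequency plane
# carries information about the whole image. Keeping only a small window
# of the spectrum still reconstructs the full scene, at lower sharpness.
rng = np.random.default_rng(0)
image = rng.random((64, 64))                       # a stand-in "scene"
spectrum = np.fft.fftshift(np.fft.fft2(image))

window = np.zeros_like(spectrum)
c = 32
window[c - 8:c + 8, c - 8:c + 8] = spectrum[c - 8:c + 8, c - 8:c + 8]  # region R

blurred = np.real(np.fft.ifft2(np.fft.ifftshift(window)))

# The coarse reconstruction still correlates with the whole original image,
# even though 15/16 of the "plate" was discarded.
corr = np.corrcoef(image.ravel(), blurred.ravel())[0, 1]
print(corr > 0.1)  # True
```

The analogy is only partial (a real hologram records interference with a reference beam), but it makes vivid the contrast with the lens: no window of the spectrum corresponds to any particular part of the scene.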

How is the information stored in a hologram and displayed in space propagated? Apparently, it is transmitted through light or, more generally, through whatever medium carries it, so that an overall procedure of transferring information is produced, which Bohm calls the holomovement:

A more striking example of implicate order can be demonstrated in the laboratory, with a transparent container full of a very viscous fluid, such as treacle, and equipped with a mechanical rotator that can stir the fluid very slowly but very thoroughly. If an insoluble droplet of ink is placed in the fluid and the stirring device is set in motion, the ink drop is gradually transformed into a thread that extends over the whole fluid. The latter now appears to be distributed more or less at random so that it is seen as some shade of grey. But if the mechanical stirring device is now turned in the opposite direction, the transformation is reversed, and the droplet of dye suddenly appears, reconstituted.

When the dye was distributed in what appeared to be a random way, it nevertheless had some kind of order which is different, for example, from that arising from another droplet originally placed in a different position. But this order is enfolded or implicated in the grey mass that is visible in the fluid. Indeed, one could thus enfold a whole picture. Different pictures would look indistinguishable and yet have different implicate orders, which differences would be revealed when they were explicated, as the stirring device was turned in a reverse direction… Suppose, then, that after thus enfolding a large number of droplets, we turn the stirring device in a reverse direction, but so rapidly that the individual droplets are not resolved in perception. Then we will see what appears to be a solid object (e.g. a particle) moving continuously through space. This form of a moving object appears in immediate perception primarily because the eye is not sensitive to concentrations of dye lower than a certain minimum, so that one does not directly see the whole movement of the dye. Rather, such perception relevates a certain aspect. That is to say, it makes this aspect stand out in relief while the rest of the fluid is seen only as a grey background within which the related object seems to be moving.

How true could this object, which appears through its hologram, be? Surely, the drops of ink are real, although the appearing fish is a fantastic creature. This fantastic object, however, is what determines the whole process, and gives the process the possibility of description and its conceptual context. It is still possible to extend this reasoning to known examples from the everyday world and from the world of physics. In quantum entanglement, for example, instead of treating the two particles as separate, we may regard them as parts of a whole object or process, so that we acquire a different and more complete understanding of the phenomenon:
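The enfolding and unfolding of the droplet can be mimicked with any invertible "stirring" map. The sketch below is an illustrative toy, not a fluid simulation: it uses Arnold's cat map (a standard reversible mixing transformation on a grid) in place of Bohm's treacle, enfolds a localized droplet into an apparently random scatter, and recovers it exactly by running the inverse map:

```python
import numpy as np

# Toy version of Bohm's ink-droplet experiment: a reversible "stirring"
# map enfolds a localized droplet into an apparently random pattern,
# and running the inverse map unfolds it exactly.
N = 64
droplet = np.zeros((N, N), dtype=int)
droplet[30:34, 30:34] = 1  # the localized drop of dye

def stir(grid):
    # Arnold's cat map: (x, y) -> (2x + y, x + y) mod N, an invertible shear.
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = np.zeros_like(grid)
    out[(2 * x + y) % N, (x + y) % N] = grid[x, y]
    return out

def unstir(grid):
    # Inverse map: the value at (x, y) returns to (x - y, 2y - x) mod N.
    x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    out = np.zeros_like(grid)
    out[(x - y) % N, (2 * y - x) % N] = grid[x, y]
    return out

mixed = droplet
for _ in range(7):
    mixed = stir(mixed)            # the dye now looks scattered ("enfolded")

recovered = mixed
for _ in range(7):
    recovered = unstir(recovered)  # reverse stirring: the droplet reappears

print(np.array_equal(droplet, recovered))  # True
```

As in the treacle, the scattered state is not random at all: it carries an implicate order that a reversed stirring makes explicate again.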

One can see in the quantum context a significant similarity to the orders of movement that have been described in terms of the simple examples discussed above. Thus, as shown in Figure 6.8 elementary particles are generally observed by means of tracks that they are supposed to make in detecting devices (photographic emulsions, bubble chambers, etc.). Such a track is evidently to be regarded as no more than an aspect appearing in immediate perception… To describe it as the track of a particle is then to assume in addition that the primarily relevant order of movement is similar to that in the immediately perceived aspect.

What could a holographic understanding of the world really offer? Obviously, it could help us perceive the world as a whole, of which we see only a part. Such a holistic approach would help us recognize the implicate procedures which constitute the world at a more fundamental level. In this sense, the division of individual processes into autonomous parts becomes inappropriate, since what becomes important in the description is coexistence and cooperation within an indivisible whole: However, the whole discussion of the new order implicit in the quantum theory shows that such a description cannot coherently be maintained. For example, the need to describe movement discontinuously in terms of quantum jumps implies that the notion of a well-defined orbit of a particle that connects the visible marks constituting the track cannot have any meaning. In any case, the wave-particle properties of matter show that the overall movement depends on the total experimental arrangement in a way that is not consistent with the idea of autonomous motion of localized particles…

In both cases, there appears in immediate perception an explicate order that cannot consistently be regarded as autonomous. In the example of the dye, the explicate order is determined as an intersection of the implicate order of the whole movement of the fluid and an implicate order of distinctions of density of dye that are relevated in sense perception. In the quantum context, there similarly will be an intersection of an implicate order of some whole movement corresponding to what we have called, for example, the electron, and another implicate order of distinctions that are relevated (and recorded) by our instruments. Thus, the word electron should be regarded as no more than a name by which we call attention to a certain aspect of the holomovement, an aspect that can be discussed only by taking into account the entire experimental situation and that cannot be specified in terms of localized objects moving autonomously through space. And, of course, every kind of particle which in current physics is said to be a basic constituent of matter will have to be discussed in the same sort of terms (so that such particles are no longer considered as autonomous and separately existent). Thus, we come to a new general physical description in which everything implicates everything in an order of undivided wholeness. [56]

David Bohm employed the hologram as an instance of a characteristic implicate order, that is, order hidden within a deeper level of reality, which is manifested, or becomes explicate, in our world under certain conditions, such as the observable results of instruments. In Bohm's words: There is the germ of a new notion of order here. This order is not to be understood solely in terms of a regular arrangement of objects or as a regular arrangement of events. Rather, a total order is contained, in some implicit sense, in each region of space and time.

With respect to the implicate order, Bohm asked us to consider the possibility that physical law should refer primarily to an order of undivided wholeness in a content of description similar to that indicated by the hologram rather than to an order of analysis of such content into separate parts… What this means can be depicted in the previous figure, taken from Bohm's book: Let's suppose that we have a tank filled with water and two cameras taking shots of a single fish swimming in this tank. Each camera shows a different angle of the fish. If these two different views of the fish are projected on two screens placed in another room, a spectator in that room will imagine that there are two different fish swimming in the tank. Only when he sees the original tank and the cameras will he realize the illusion. Our minds may be filled with pre-fabricated forms of objects and notions that we try to match each time we look at or think of the world, and try to adapt to any kind of changes. We may look from different angles or see things differently, but the basic structure of our brain and eyes doesn't change. What changes is the rearrangement of our views about a world that constantly changes. If modern science in the future finds a way to identify the areas in the brain corresponding to such changes, then we will have a better understanding of how we understand, because the strangeness of the world is mainly due to our strange minds, rather than to some imperceptible or incomprehensible reality we are faced with.

Sheldrake's morphic fields

The sense of being stared at is an article by Rupert Sheldrake, which can be found on his website. It deals with a feeling which, more or less, most of us have had at some moment, although the framework under which the phenomenon occurs is rather vague. Sheldrake, paradoxically in my opinion, links the phenomenon with vision, since, as I have already said, it is more likely a matter of perception or intuition. However, the account he gives of how the operation of vision has been understood through the ages is very interesting. These views can be divided into two main categories: extramission and intromission theories. To the first category belong the beliefs that vision is created by the eyes, which are supposed to emit some kind of radiation or flow. The adherents of this theory were mainly ancient philosophers, such as Pythagoras and Plato. Sheldrake in his article says:

In ancient Greece, early in the fifth century BC, members of the Pythagorean School proposed an early version of extramission theory, suggesting that a visual current was projected outwards from the eye. Also, the philosopher Empedocles (c. 492-432 BC) proposed that the eyes sent out their own rays; they were like lanterns with their own internal light. Sight proceeded from the eyes to the object seen.

On the other hand, there was an early version of the intromission theories, namely that vision is caused by some type of radiation or flow which falls on the eyes. Democritus was one of the first supporters of this view: Meanwhile, the atomist philosopher Democritus (c. 460-371 BC) advocated an early version of the intromission theory. He was a prototypic materialist, propounding the doctrine that ultimate reality consists of particles of matter in motion. He proposed that material particles streamed off the surface of things in all directions; vision depended on these particles entering the eye. In order to account for coherent images, he supposed that the particles were joined together in thin films that travelled into the eye.

There have also been theories which combined these two, let's say, extreme versions of the theory of vision: Nevertheless, some atomists admitted that influences could move both ways, not just into the eyes, but also outwards from the looker. One reason for accepting outward-moving influences was the belief in the evil eye, whereby some people could allegedly harm others by looking at them with envy or other negative emotions. Democritus explained the evil eye as mediated by images moving outward from the eyes, charged with hostile mental contents, that remain persistently attached to the person victimized, and thus disturb and injure both body and mind. A belief in the power of envious gazes to bring about negative effects was common in the ancient world, and is still widespread in Greece and many other countries.

The philosopher Plato (427-347 BC) adopted the idea of an outward-moving current, but proposed that it combined with light to form a single homogeneous body stretching from the eye to the visible object. This extended medium was the instrument of visual power reaching out from the eye. Through it, influences from the visible object passed to the soul. In effect, Plato combined intromission and extramission theories with the idea of an intermediate medium between the object and the eye.

Aristotle (384-322 BC) followed Plato in emphasizing the importance of an intermediate medium between the eye and the object seen, but he rejected both the intromission and extramission theories. Nothing material passed in or out of the eye during vision. He called the intermediate medium the transparent. He thought of light not as a material substance, but as a state of the transparent, resulting from the presence of a luminous body. The visible object was the source or cause of a change in the transparent, through which influences were transmitted instantaneously to the soul of the observer.

Basically, what we have here is a summary, from ancient times already, of the views concerning what vision could be. With respect to the belief in the evil eye, we see that, whether the phenomenon exists or not, it has been wrongly related to vision, since it has more to do with bad (or good) thoughts, which are supposed to be transmitted from person to person, even if we accept that the eyes are the place in the body from which this energy comes out.

However, an important shift in the way we understand the whole process of vision came with Euclid, who put the phenomenon in geometrical terms: The final major contribution of classical antiquity was that of the mathematicians, starting with the geometer Euclid (active around 300 BC). Euclid's approach was strictly mathematical and excluded practically all aspects of vision that could not be reduced to geometry. He adopted an extramission theory, and emphasized that vision was an active process, giving the example of looking for a pin, and at first not seeing it, but then finding it. There is a change in what is seen as a result of this active process of looking and finding, even though the light entering the eye remains the same.

Euclid recognized that light played a part in vision, but he said very little about the way it was related to the visual rays projecting outwards from the eyes. He assumed that these rays travelled in straight lines, and he worked out geometrically how eyes projected the images we see outside ourselves. He also clearly stated the principles of mirror reflection, recognizing the equality of what we now call the angles of incidence and reflection, and he explained virtual images in terms of the movement of visual rays outwards from the eyes.

In the previous figures we see the process of vision as it was more or less described by Euclid (even if he believed that the light rays were produced by the eyes) and as it is taught in modern textbooks: Rays from a point on the object are reflected at the mirror and appear to come from a point behind the mirror where the eye imagines the rays intersect when produced backwards.

The second of these diagrams belongs to Newton, who in his book on optics writes that the reflected rays incident on the spectator's eyes make the same Picture in the bottom of the Eyes as if they had come from the Object really placed at a without the Interposition of the Looking-glass; and all Vision is made according to the place and shape of that Picture (Book I, Axiom VIII).

Intriguing, however, is the fact that modern research has found that a large percentage of people, regardless of age, hold a strong belief that vision is due to some sort of outflowing radiation. In the same article, Sheldrake says: In his study of children's intellectual development, Piaget (1973) found that children under the age of 10 or 11 thought vision involved an outward-moving influence from the eyes. Gerald Winer and his colleagues have confirmed Piaget's finding in a recent series of surveys in Ohio. Eighty per cent of the children in Grade 3 (aged 8-9) agreed that vision involved both the inward and outward movement of rays, energy or something else.

Winer and his colleagues were surprised, indeed shocked, by these findings. They were especially surprised to find that belief in the ability to feel the looks of unseen others increased with age, with 92% of older children and adults answering yes to the question Do you ever feel that someone is staring at you without actually seeing them look at you? They commented: the belief in the ability to feel stares, which occurs at a high level among children as well as adults, seems, if anything, to increase with age, as if irrationality were increasing rather than declining between childhood and adulthood!

These findings may be surprising, but they don't necessarily suggest that the sense of being stared at is a real phenomenon caused by some sort of radiation emitted by the human eye: there are, after all, still many people who believe, for example, that the sun rotates around the earth, which of course is wrong. On the other hand, it wouldn't be appropriate to exclude the phenomenon without examining the available evidence.

In any case, Sheldrake uses the concept of the hologram to propose his own theory about vision:

My own hypothesis is that projection takes place through perceptual fields, extending out beyond the brain, connecting the seeing animal with that which is seen. Vision is rooted in the activity of the brain, but is not confined to the inside of the head (Sheldrake, 1994; 2003). Like Velmans, I suggest that the formation of these fields depends on the changes occurring in various regions of the brain as vision takes place, influenced by expectations, intentions and memories. Velmans suggests that this projection takes place in a way that is analogous to a field phenomenon, as in a hologram. I suggest that the perceptual projection is not just analogous to but actually is a field phenomenon…

Perceptual fields are related to a broader class of biological fields involved in the organization of developing organisms and in the activity of the nervous system… The idea of biological fields has been an important aspect of developmental biology since the 1920s, when the hypothesis of morphogenetic fields was first proposed (Gurwitsch, 1922). These fields underlie processes of biological morphogenesis...

Most biologists hope that morphogenetic fields will eventually be explained in terms of the known fields of physics, or in terms of patterns of diffusion of chemicals, or by other known kinds of physico-chemical mechanism. Models of these fields in terms of chemical gradients may indeed help to explain an early stage of morphogenesis, called pattern formation, in which different groups of cells make different proteins. But morphogenesis itself, the formation of structures like limbs, eyes and flowers, involves more than making the right proteins in the right cells at the right times. The cells, tissues and organs form themselves into specific structures in a way that is still very poorly understood, and it is here that morphogenetic fields would play an essential role shaping and guiding the developmental processes. My proposal is that morphogenetic fields are not just a way of talking about known phenomena like gradients of chemicals, but are a new kind of field.

Then Sheldrake explains how these perceptual fields could be related to the sense of being stared at:

Morphogenetic fields are part of a larger class of fields called morphic fields, which includes behavioral, social and perceptual fields. Such fields can be modeled mathematically in terms of attractors within vector fields.

According to this hypothesis, it is in the nature of morphic fields to bind together and coordinate patterns of activity into a larger whole. Morphic fields guide systems under their influence towards attractors, and they stabilize systems through time by means of self-resonance. They are also influenced by a resonance across time and space from previous similar systems, by a process called morphic resonance. Thus they contain an inherent memory, both of a system's own past, and a kind of collective or pooled memory from previous similar systems elsewhere. Through repetition a morphic field becomes increasingly habitual…

To understand the sense of being stared at, we need a further postulate, namely that these perceptual fields interact with the fields of the person or animal on which attention is focussed. Ex hypothesi, all people and animals have their own morphic fields, so this interaction would require an action of like upon like, a field-field interaction.
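Sheldrake's remark that morphic fields can be modeled in terms of attractors within vector fields at least has a standard mathematical meaning, which a minimal dynamical system makes concrete. The sketch below is an illustrative toy, not Sheldrake's own model: the flow dx/dt = -(x - a) pulls every starting point toward the attractor a:

```python
# Toy illustration of "attractors within vector fields": the flow
# dx/dt = -(x - a) draws every trajectory toward the fixed point a.
# An illustrative dynamical system, not Sheldrake's actual formalism.

def flow_to_attractor(x0, a, dt=0.1, steps=200):
    """Euler-integrate dx/dt = -(x - a) starting from x0."""
    x = x0
    for _ in range(steps):
        x += -(x - a) * dt
    return x

# Widely separated starting points all converge on the same attractor:
for x0 in (-5.0, 0.0, 12.0):
    print(round(flow_to_attractor(x0, a=3.0), 4))  # each prints 3.0
```

This captures only the uncontroversial part of the claim: attractor dynamics is ordinary mathematics; what would make morphic fields a new kind of field is everything else in the hypothesis.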

Thus, according to Sheldrake, human thought produces some sort of fields, referred to as perceptual fields, which, as a kind of morphic field, interact with the other morphic fields of nature, such as the perceptual fields of other people, through a form of resonance which he calls morphic resonance. What is characteristic of Sheldrake's theory is that he uses a specific notion from physics (the term field) in order to describe human thought, which however does not have field characteristics. Even if a human being, or generally a living organism, emits some sort of field during its mental activity or biological operation, there is no reason why these fields should be of a new kind, instead of already known ones, such as sound waves, optical signals, pulses, or even some still unknown waves of electromagnetic nature. Moreover, the very concept of a field is used to explain interaction through information transmission. If morphogenetic fields were indeed fields in this strict sense, they couldn't spread arbitrarily and instantaneously, as the case of the sense of being stared at implies.

Perhaps Aristotle was right when he said that what matters is the medium, and not the transmitter or the receiver, since this morphic resonance between living creatures emphasizes the importance of the connection between them, and not so much whether this connection is active or passive. This is also the characteristic of non-locality, i.e. the instantaneous connection between the observer and what is observed, which implies a connection without transmission of information, so that the notion of a field is inappropriate here. Non-locality, however, points to modern quantum physics and to the phenomenon of so-called quantum entanglement.

Sheldrake says about this: Christopher Clarke argues that entanglement may not only play an important part in vision, but also that quantum entanglement is an essential aspect of conscious perception. Consciousness itself somehow arises from entangled systems: If the qualitative aspect of perception (the so-called qualia) are produced by quantum entanglement between the states of the brain and the states of perceived objects, then the supports of conscious loci are not just the brain, but the whole of perceived space. In other words I am spread out over the universe by virtue of my connectivity with other beings. Clarke further suggests that in living organisms quantum entanglement may help to account for their holistic properties: If we consider a living, and hence coherent, entity, then the entanglement will take over the individual states of the parts, which will no longer be definable, and replace them with the quantum state of the entangled whole.
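Clarke's statement that in an entangled whole the parts "will no longer be definable" is, for two quantum bits, a standard textbook computation rather than speculation. A minimal sketch (assuming numpy; the sample states are illustrative): a Bell state cannot be factored into separate states of the two parts, which shows up as a Schmidt rank greater than 1:

```python
import numpy as np

# For a two-qubit state, reshape the 4-vector into a 2x2 matrix and count
# nonzero singular values (the Schmidt rank). Rank 1 means the state
# factors into separate states of the parts; rank 2 means only the whole
# has a definite state, which is Clarke's point about entanglement.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)            # (|00> + |11>)/sqrt(2)
product = np.kron(np.array([1.0, 0.0]),
                  np.array([0.6, 0.8]))                        # |0> tensor (0.6|0> + 0.8|1>)

def schmidt_rank(state):
    """Number of nonzero singular values of the reshaped two-qubit state."""
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12))

print(schmidt_rank(product))  # 1: separable, each part has its own state
print(schmidt_rank(bell))     # 2: entangled, no individual states exist
```

Whether this mathematical fact bears on conscious perception, as Clarke suggests, is of course a separate and far more contentious question.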

Sheldrake's article also proposes a way of connecting quantum physics with evolutionary biology, through what is called quantum Darwinism: A team of physicists at Los Alamos has recently proposed a form of preferential perception of quantum states that becomes habitual, in a way that sounds not unlike the activity of habitual perceptual fields discussed above.

A Nature news report in 2004 explained how this new hypothesis arose from the question, If, as quantum mechanics says, observing the world tends to change it, how is it that we can agree on anything at all? Why doesn't each person leave a slightly different version of the world for the next person to find? The answer is called quantum Darwinism: Certain special states of a system are promoted above others by a quantum form of natural selection.... Information about these states proliferates and gets imprinted on the environment. So observers coming along and looking at the environment in order to get a picture of the world tend to see the same preferred states. Rather than decoherence being a problem for this view, it is an essential feature. As Ollivier's co-author Zurek put it, Decoherence selects out of the quantum mush those states that are stable. These stable states are called pointer states. Through a Darwin-like selection process these states proliferate as many observers see the same thing. In Zurek's words, One might say that pointer states are most fit. They survive monitoring by the environment to leave descendents that inherit their properties.

If a pointer state links an observer to someone she is looking at, such preferred states of quantum decoherence might underlie the sense of being stared at. Indeed a preferred habitual quantum state may be another way of talking about a perceptual field. [57]

As regards the concept of decoherence in quantum mechanics, this refers to the moment when the wavefunction collapses due to observation, and a specific configuration of the system under observation is crystallized, against all the other coexisting, superimposed states which were at that moment available (possible).

With respect to the sense of being stared at, the theory of perceptual fields, and of morphogenetic fields in general, is an attempt to interpret a situation which in fact has to do neither with vision nor with perceptual fields between human brains. It probably has to do with an abstract and largely unconscious perception of an event, which is not transmitted as information, and therefore does not have field characteristics. More likely it has to do with a momentary perception of an event, which is interpreted by the brain in a particular way. How exactly this phenomenon takes place, and whether it is authentic rather than primarily an a posteriori, retrograde confirmation of the perception of a momentary event, is a matter for further investigation.

Towards the conquest of the 6th sense

A holistic view of the process of vision would point to the fact that when humans perceive the world all the senses participate, so that perception is produced in two steps: firstly, there is a holistic but unconscious gathering of what is perceived; secondly, consciousness takes over to establish the perceived reality. This means that there already exists an instinctive, pre-registered knowledge of what exists, which is then affirmed by our consciousness.

This process is not unrealistic or paradoxical, since it covers all the aforementioned phenomena concerning vision. In fact, this is what is going on when we have a feeling that something is going to happen and then the event happens. The perception of the event on the first level is more or less unconscious. It bypasses any rational processes of the brain, and this is why it seems to happen instantaneously. Perception on the second level includes the known physical senses as well as the logical processing of the brain, so the rationalization of the event on this level takes time.

We could identify this instantaneous and unconscious perception of events with a 6th sense, even if it has nothing to do with a real, physical sense. The whole phenomenon may not correspond to a special place in the sensory system (e.g. what has been called the third eye) or the body, or an area in the brain. It is more or less a phenomenon that involves the totality of the brain and thought. We could say that it is a sense of our total existence within the world and its events. The problem of instantaneity may be dealt with as follows: The observer and the observed form an entangled system, through which all senses and other means of communication work. Since at the first stage the connection is implicate and unconscious, there is no transmission of information in any kind of organized form. So the principle of locality is not violated, meaning that no information can travel faster than the speed of light. At the second level, where all the functions of logical reasoning and physical channels of sensory communication are activated, information exchange and processing takes time under all the restrictions of locality and the succession rules of cause and effect. Under this kind of interpretation, what we feel is that we are somewhere and sometime, and that events take place within the space-time that we live in, together with other things and living beings, which can sense our presence in the same way. It is an unconscious, spontaneous and instinctive procedure or feeling, which then matures and takes form and content through our senses and the rational processes of the brain, just to get dressed afterwards in the definition and interpretation that we are physically and mentally accustomed to give.

This is a really strange phenomenon. On the one hand our logic says that it cannot be true, because what is left of it after our awakened reasoning is nothing but a dream that quickly fades away. On the other hand we don't want to dismiss it, because we know that something happened even if we don't know how to explain it. But through some kind of mental awareness and meditation, or even some new form of indirect attention and discrete analytic reasoning, we may be able to grasp it and begin to materialize and practice it within the real world. In such an ambitious and advanced perspective, modern technology could help experimentally and practically, together with all the accumulated knowledge, so that this 6th sense, or more generally this symmetric aspect of the world, may become a common feature and everyday practice in the future.

This does not necessarily mean the physical activation of a new sense, like the senses of vision and hearing, with a corresponding organ or center somewhere in the body or the brain. It has more to do with a holistic understanding of reality, where what we see and realize are phenomena that include ourselves, together with an acute and trained perception of things and events that we usually pass by and leave unobserved. Neither should we take for granted the existence of some sort of real fields that are produced by the brain and travel from mind to mind, nor that there exists any kind of information that travels backwards from the future to the present, from objects to our eyes, which we allegedly perceive unconsciously with some sort of mantic power. Instead we could realize our co-existence or resonance with the rest of the world, the symmetry of our nature with space, time, things and living beings, including other people, and concentrate our efforts on understanding and experiencing the kaleidoscope of frequencies and motions that fill the universe and ourselves. [58]

Delayed choice experiments and the Bohm approach

According to Wikipedia, Wheeler's delayed choice experiment is a thought experiment in quantum physics proposed by John Archibald Wheeler in 1978. The results Wheeler predicted have since been confirmed by actual experiment. Wheeler's experiment is a variation on the famous double-slit experiment. In Wheeler's version, the method of detection used in the experiment can be changed after a photon passes the double slit, so as to delay the choice of whether to detect the path of the particle, or to detect its interference with itself. Since the measurement itself seems to determine how the particle passes through the double slits, and thus its state as a wave or particle, Wheeler's experiment has been useful in trying to understand certain strange properties of quantum particles. Several implementations of the experiment (1984–2007) showed that the act of observation ultimately determines whether the photon will behave as a particle or wave, verifying the unintuitive results of the thought experiment.

The conventional double-slit experiment shows that determining which path a particle takes prevents the interference pattern from forming. To avoid the notion that the photon somehow knows when the other slit is open or closed (or is being watched), Wheeler suggested detecting which slit the photon used only long after it passed through the slits. Wheeler asked what happens when a single photon, presumably already determined to get detected as part of a two-slit interference pattern, suddenly gets detected in a path coming from only one slit. Does the interference pattern then disappear?

In terms of the traditional double-slit apparatus, the Wheeler delayed choice experiment is to put telescopes that are pointed directly at each of the two slits behind the removable detector wall. If the photon goes through telescope A it is argued that it must have come by way of slit A, and if it goes through telescope B it is argued that it must have come by way of slit B.

Wheeler planned a thought experiment in which two ways of observing an incoming photon could be used, and the decision of which one to use could be made after the photon had cleared the double-slit part of the apparatus. At that point a detection screen could either be raised or lowered. If the detection screen were to be put in place, Wheeler fully expected that the photon would interfere with itself and (if many more photons were permitted to follow it to the screen) would form part of a series of fringes due to interference. If, on the other hand, the detection screen were to be removed, then: Sufficiently far beyond the region of the plate, the beams from upper and lower slits cease to overlap and become well separated. There place photodetectors. Let each have an opening such that it records with essentially 100 percent probability a quantum of energy arriving in its own beam, and with essentially zero probability a quantum arriving in the other beam. In that case, he argues, one of the two counters will go off and signal in which beam, and therefore from which slit, the photon has arrived. [59]

Here we will follow David Bohm's analysis of the delayed-choice experiments, from Hiley and Callaghan's article Delayed-choice experiments and the Bohm approach: The delayed choice experiments of the type introduced by Wheeler and extended by Englert, Scully, Sussmann and Walther [ESSW], and others, have formed a rich area for investigating the puzzling behavior of particles undergoing quantum interference. The surprise provided by the original delayed choice experiment led Wheeler to the conclusion that no phenomenon is a phenomenon until it is an observed phenomenon, a radical explanation which implied that the past has no existence except as it is recorded in the present. However Bohm, Dewdney and Hiley have shown that the Bohm interpretation gives a straightforward account of the behavior of the particle without resorting to such a radical explanation.

The subsequent modifications of this experiment led both Aharonov and Vaidman and [ESSW] to conclude that the resulting Bohm-type trajectories in these new situations produce unacceptable properties. In this paper we show that this conclusion is not correct and that, if the Bohm interpretation is used correctly, it gives a local explanation which actually corresponds exactly to the standard quantum mechanics explanation offered by Englert, Scully, Sussmann and Walther [ESSW].

The idea of a delayed choice experiment was first introduced by Wheeler and discussed in further detail by Miller and Wheeler. They wanted to highlight the puzzling behavior of a single particle in an interferometer when an adjustment is made to the interferometer by inserting (or removing) a beam splitter at the last minute. Wheeler argued that this presents a conceptual problem even when discussed in terms of standard quantum mechanics (SQM), because the results seemed to imply that there was a change in behavior from wave-like phenomenon to particle-like phenomenon, or vice versa, well after the particle entered the interferometer.

Figure 1: Sketch of the Wheeler delayed choice experiment

The example Miller and Wheeler chose to illustrate this effect was the Mach-Zehnder interferometer shown in Figure 1. In this setup a movable beam splitter BS2 can either be inserted or removed just before the electron is due to reach the region I2. When BS2 is not in position, the electron behaves like a particle, following one of the paths, 50% of the time triggering D1 and the other 50% of the time triggering D2. However, when the beam splitter is in place, the electron behaves like a wave, following both paths, and the resulting interference directs all the particles into D1. Wheeler's claim is that delaying the choice of fixing the position of the final beam splitter forces the electron somehow to decide whether to behave like a particle or a wave long after it has passed the first beam splitter BS1, but before it has reached I2. Experiments of this type, which have been reviewed in Greenstein and Zajonc, confirm the predictions of quantum theory and raise the question: How is this possible? Wheeler resolves the problem in the following way: Does this result mean that present choice influences past dynamics, in contravention of every formulation of causality? Or does it mean, calculate pedantically and don't ask questions?

Neither; the lesson presents itself rather like this, that the past has no existence except as it is recorded in the present. Although Wheeler claims to be supporting Bohr's position, Bohr actually comes to a different conclusion and writes: In any attempt of a pictorial representation of the behavior of the photon we would, thus, meet with the difficulty: to be obliged to say, on the one hand, that the photon always chooses one of the two ways and, on the other hand, that it behaves as if it passed both ways. Bohr's conclusion is not that the past has no existence until a measurement is made, but rather that it is no longer possible to give pictures of quantum phenomena as we do in classical physics. For Bohr the reason lay in the indivisibility of the quantum of action, as he put it, which implies that it is not possible to make a sharp separation between the properties of the observed system and the observing apparatus. Thus it is meaningless to talk about the path taken by the particle, and in consequence we should simply give up attempts to visualize the process. Thus Bohr's position was, to put it crudely, calculate because you cannot get answers to such questions, a position that Wheeler rejects.

But it should be quite clear from the literature that many physicists even today do not accept either Bohr's or Wheeler's position, and continue to search for other possibilities of finding some form of image to provide a physical understanding of what could actually be going on in these situations; hence the continuing debate about the delayed choice experiment.

By now it is surely well known that the Bohm interpretation (BI) does allow us to form a picture of such a process and reproduce all the known experimental results. Indeed Bohm, Hiley and Dewdney have already shown how the above Miller-Wheeler experiment can be understood in a consistent manner while maintaining the particle picture. There is no need to invoke non-locality here and the approach clearly shows there is no need to invoke the past only coming into being by action in the present.

Do quantum particles follow trajectories? There is a deeply held conviction, as typified by Zeh, that a quantum particle cannot and does not have well-defined simultaneous values of position and momentum. This, surely, is what the uncertainty principle is telling us. Actually it is not telling us this. What the uncertainty principle does say is that we cannot measure simultaneously the exact position and momentum of a particle. This fact is not in dispute. But not being able to measure these values simultaneously does not mean that they cannot exist simultaneously for the particle. Equally, we cannot be sure that a quantum particle actually does not have simultaneous values of these variables, because there is no experimental way to rule out this possibility either. The uncertainty principle only rules out simultaneous measurements. It says nothing about particles having or not having simultaneous x and p. Thus both views are logically possible.

As we have seen Wheeler adopts an extreme position that not only do the trajectories not exist, but that the past does not exist independently of the present either. On the other hand the BI assumes particles could have exact values of position and momentum and then simply explores the consequences of this assumption. Notice we are not insisting that the particles do actually have a simultaneous position and momentum. How could we in view of the discussion in the previous paragraph?

If we adopt the assumption that quantum particles do have simultaneous x and p, which are, of course, unknown to us without measurement, then we must give up the insistence that the actual values of dynamical variables possessed by the particle are always given by the eigenvalues of the corresponding dynamical operators. Such an insistence would clearly violate what is well established through theorems such as those of Gleason and of Kochen and Specker. All we insist on is that a measurement produces, in the process of interaction with a measuring instrument, an eigenvalue corresponding to the operator associated with that particular instrument. The particles have values for the complementary dynamical variable, but these are not the eigenvalues of the corresponding dynamical operator in the particular representation defined by the measuring instrument.

This implies that the measurement can and does change the complementary variables. In other words, measurement is not a passive process; it is an active process changing the system under investigation in a fundamental and irreducible way. This leads to the idea that measurement is participatory in nature, remarkably a conclusion also proposed by Wheeler himself. Bohm and Hiley explain in more detail how this participatory nature manifests itself in the BI. It arises from the quantum potential induced by the measuring apparatus. We will bring this point out more clearly as we go along.

By assuming that a particle has simultaneously a precise position and momentum, we can clearly still maintain the notion of a particle trajectory in a quantum process. Bohm and Hiley, and Holland, collect together a series of results that show how it is possible to define trajectories that are consistent with the Schrödinger equation. The mathematics is unambiguous and exactly that used in the standard formalism. It is simply interpreted differently. The equation for the trajectories contains not only the classical potential in which the particle finds itself, but an additional term called the quantum potential, which suggests there is a novel quality of energy appearing only in the quantum domain.

Both the trajectory and the quantum potential are determined by the real part of the Schrödinger equation that results from the polar decomposition of the wave function. We find that the amplitude of the wave function completely determines the quantum potential. In its simplest form this suggests that some additional physical field may be present, the properties of which are somehow encoded in the wave function. One of the features of our early investigations of the BI was to find out precisely what properties this field must have in order to provide a consistent interpretation of quantum processes. It is the reasonableness of the physical nature of this potential and the trajectories that is the substance of the criticisms of Aharonov and Vaidman, of ESSW and of Scully...

Details of Wheeler's delayed choice experiment

Let us now turn to consider specific examples and begin by recalling the Wheeler delayed choice experiment, using a two-beam interference device based on a Mach-Zehnder interferometer as shown in figure 1. We will assume that the particles enter one at a time and that each can be described by a Gaussian wave packet of width very much smaller than the dimensions of the apparatus, so that the wave packets only overlap in regions I1 and I2. Otherwise the wave packets have zero overlap.

The specific region of interest is I2, which contains the movable beam splitter BS2. In BI it is the quantum potential in this region that determines the ultimate behavior of the particle. This in turn depends upon whether the BS2 is in place or not at the time the particle approaches the region I2. The position of BS2 only affects the particle behavior as it approaches the immediate neighborhood of I2. Thus there is no possibility of the present action determining the past in the way Wheeler suggests. The past is past and cannot be affected by any activity in the present. This is because the quantum potential in I2 depends on the actual position of BS2 at the time the particle reaches I2. We will now show how the results predicted by the BI agree exactly with the experimental predictions of SQM.

Interferometer with BS2 removed

Let us begin by first quickly recalling the SQM treatment of the delayed-choice experiment. When BS2 is removed (see figure 2), the wave function arriving at D1 is −ψ1 (the π/2 phase changes arise from reflections at the mirror surfaces). This clearly gives a probability |ψ1|² that D1 fires. The corresponding wave function arriving at D2 is iψ2, giving a probability |ψ2|² that D2 fires.

If BS1 is a 50/50 beam splitter, then each particle entering the interferometer will have a 50% chance of firing one of the detectors. This means that the device acts as a particle detector, because the particle will either take path 1, BS1→M1→D1, triggering the detector D1, or it will travel down path 2, BS1→M2→D2, triggering detector D2. Now let us turn to consider how the BI analyses this experiment. Here we must construct an ensemble of trajectories, each individual trajectory corresponding to the possible initial values of position of the particle within the incident wave packet. One set of trajectories will follow the upper arm of the apparatus, while the others follow the lower arm. We will call a distinct group of trajectories a channel. Thus the wave function in channel 1 will be ψ1(r,t) = R1(r,t)exp[iS1(r,t)]

away from the regions I1 and I2, so that the Bohm momentum of the particle will be given by p1(r,t) = ∇S1(r,t)

Figure 2: Interferometer acting as a particle detector.

and the quantum potential acting on these particles will be given by Q1(r,t) = −(1/2m)(∇²R1(r,t)/R1(r,t)). There will be a corresponding expression for particles travelling in channel 2.
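As an aside not in the original paper, the quantum potential Q = −(1/2m)∇²R/R can be evaluated numerically for a Gaussian amplitude and checked against the closed-form result. This is a minimal sketch in units with ħ = 1; the grid, mass and packet width are arbitrary choices for illustration.

```python
import numpy as np

# Quantum potential Q = -(1/2m) * R''/R (hbar = 1), for a Gaussian
# amplitude R(x) = exp(-x^2 / (4 sigma^2)).
m, sigma = 1.0, 1.0
x = np.linspace(-3, 3, 2001)
dx = x[1] - x[0]
R = np.exp(-x**2 / (4 * sigma**2))

d2R = np.gradient(np.gradient(R, dx), dx)   # finite-difference R''
Q_num = -(1.0 / (2 * m)) * d2R / R

# Analytic result for this Gaussian: Q = (1/(4 m sigma^2)) * (1 - x^2/(2 sigma^2))
Q_exact = (1.0 / (4 * m * sigma**2)) * (1 - x**2 / (2 * sigma**2))

# Maximum discrepancy away from the grid edges: small discretization error
print(np.max(np.abs(Q_num[50:-50] - Q_exact[50:-50])))
```

Note that Q depends only on the shape of the amplitude, not on its overall scale, which is why multiplying the wave function by a constant leaves the quantum potential unchanged.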

All of this is straightforward except in the region I2, which is of particular interest to the analysis. Here the wave packets from each channel overlap and there will be a region of interference, because the two wave packets are coherent. To find out how the trajectories behave in this region, we must write ψ = ψ1 + iψ2 = Rexp[iS] and then use p = ∇S and Q = −(1/2m)(∇²R/R)

Thus to analyze the behavior in the region I2, we must write Rexp[iS] = R1exp[iS1] + iR2exp[iS2], so that R² = R1² + R2² + 2R1R2cosS′, where S′ = S2 − S1 + π/2. The last equation clearly shows the presence of an interference term in the region I2, since there is a contribution from each beam 1 and 2, which depends on the phase difference S′.
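As a quick numerical check (not part of the source), the interference formula follows directly from expanding |ψ1 + iψ2|²; the factor i contributes an extra π/2 to the phase difference between the two beams. The amplitudes and phases below are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(0)
R1, R2 = rng.uniform(0.1, 2.0, 2)        # arbitrary real amplitudes
S1, S2 = rng.uniform(0, 2 * np.pi, 2)    # arbitrary phases

# psi = psi1 + i*psi2, with psi_k = R_k exp(i S_k)
psi = R1 * np.exp(1j * S1) + 1j * R2 * np.exp(1j * S2)

lhs = abs(psi) ** 2
rhs = R1**2 + R2**2 + 2 * R1 * R2 * np.cos(S2 - S1 + np.pi / 2)
print(np.isclose(lhs, rhs))  # -> True
```

The cross term 2R1R2cosS′ is exactly the interference contribution; it vanishes wherever either amplitude is zero, i.e. outside the overlap region I2.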

We show the behavior of the quantum potential in this region in figure 4. The particles following the trajectories then bounce off this potential as shown in figure 3 so that the particles in channel 1 end up triggering D2, while the trajectories in channel 2 end up triggering D1. We sketch the overall behavior of the channels in figure 5. Notice that in all of this analysis the quantum potential is local.

Figure 3: Trajectories in the region I2 without BS2 in place.

Interference experiment with beam splitter BS2 in place

Let us now consider the case when BS2 is in place (see figure 6). We will assume that beam splitter BS2 is also a 50/50 splitter. Using SQM, the wave function at D1 is −(ψ1 + ψ2), while the wave function at D2 is i(ψ2 − ψ1)

Figure 4. Calculation of the quantum potential in region I2 without BS2 in place

Since R1 = R2, and the wave functions are still in phase, the probability of triggering D1 is unity, while the probability of triggering D2 is zero. This means that all the particles end up triggering D1. Thus we have a 100% response at D1 and a zero response at D2, and conclude that the apparatus acts as a wave detector, so that we follow Wheeler and say (loosely) that in SQM the particle travels down both arms, finally ending up in detector D1. In this case the other detector D2 always remains silent.
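These two regimes, 50/50 particle-like counts without BS2 and a single firing detector with BS2 in place, can be reproduced with the standard 50/50 beam-splitter matrix, in which each reflection contributes a factor i. This is an illustrative sketch rather than the paper's own apparatus model, and which output port collects all the counts depends on the sign conventions chosen.

```python
import numpy as np

# Standard 50/50 beam-splitter matrix: reflection picks up a pi/2 phase (i)
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
# Each mirror contributes the same pi/2 phase to both arms
MIRROR = 1j * np.eye(2)

psi_in = np.array([1, 0], dtype=complex)  # particle enters one input port

# BS2 removed: amplitudes in the two arms after BS1 and the mirrors
psi_out = MIRROR @ BS @ psi_in
print(np.abs(psi_out) ** 2)      # -> [0.5 0.5]: particle-like 50/50 counts

# BS2 in place: the two paths recombine and interfere
psi_int = BS @ MIRROR @ BS @ psi_in
print(np.abs(psi_int) ** 2)      # -> [0. 1.]: all counts in a single detector
```

The unitarity of the beam-splitter matrix guarantees that the probabilities in the two output ports always sum to one in both configurations.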

How do we explain these results in the BI? First we must notice that at beam splitter BS1 the top half of the initial positions in the Gaussian packet are reflected, while the bottom half are transmitted. This result is discussed in detail in Dewdney and Hiley. As the two channels converge on beam splitter BS2, the trajectories in channel 1 are now transmitted through it, while those in channel 2 are reflected. Thus all the trajectories end up triggering D1. It is straightforward to see the reason for this. The probability of finding a particle reaching D2 is zero, and therefore all the particles in channel 1 must be transmitted. The resulting trajectories are sketched in figure 7.

The delayed choice version of the interferometer

Now let us turn to consider what happens when beam splitter BS2 can be inserted or removed once the particle has entered the interferometer, having passed BS1 but not yet reached BS2. We saw above that this caused a problem if we followed the line of argument used by Wheeler. Applying the BI presents no such problem. The particle travels in one of the channels regardless

Figure 5. Sketch of the Bohm trajectories without BS2 in place

Figure 6: Interferometer acting as a wave detector

of whether BS2 is in position or not. The way it travels once it reaches the region I2 depends on whether BS2 is in position or not. This in turn determines the quantum potential in that region, which in turn determines the ultimate behavior of the particle.

If the beam splitter is absent when a particle reaches I2, it is reflected into a perpendicular direction no matter which channel it is actually in, as shown in figure 5. If BS2 is in place, then the quantum potential will be such as to allow the particle in channel 1 to travel through BS2, whereas if the particle is in channel 2 it will be reflected at BS2, so that all the particles enter detector D1 as shown in figure 7.

Figure 7: Sketch of trajectories with BS2 in place.

The explanation of the delayed choice results is thus very straightforward and depends only on the local properties of the quantum potential in the region of I2 at the time the particle enters that region. The value of the quantum potential in I2 is determined only by the actual position of BS2. Hence there is no delayed choice problem here. There is no need to claim that no phenomenon is a phenomenon unless it is an observed phenomenon. The result simply depends on whether BS2 is in position or not at the time the particle reaches I2, and this is independent of any observer being aware of the outcome of the experiment. Remember, the BI is an ontological interpretation and the final outcome is independent of the observer's knowledge.

Measurement in quantum mechanics

In the above analysis we have seen that the BI gives a perfectly acceptable account of how the energy is exchanged, and the claims that the trajectories are surreal have not been substantiated. One of the confusions that seems to have led to this incorrect conclusion lies in the role measurement plays in the BI.

One of the claims of the BI is that it does not have a measurement problem. A measurement process is simply a special case of a quantum process. One important feature that was considered by Bohm and Hiley was to emphasize the role played by the macroscopic nature of the measuring instrument. Their argument ran as follows. During the interaction of this instrument with the physical system under investigation, the wave functions of all the components overlap and become fused. This fusion process can produce a very complex quantum potential, which means that during the interaction the relevant variables of the observed system and the apparatus can undergo rapid fluctuation before settling into distinct sets of correlated movement. For example, if the system is a particle, it will enter into one of a set of distinct channels, each channel corresponding to a unique value of the variable being measured. It should be noted that in this measurement process the complementary variables get changed in an unpredictable way, so that the uncertainty principle is always satisfied, thus supporting the claim concerning participation made in section 2.

All of this becomes very clear if we consider the measurement of the spin components of a spin one-half particle using a Stern-Gerlach magnet. As the particle enters the field of the Stern-Gerlach magnet, the interaction with the field produces two distinct channels. One will be deflected upwards, generating a channel that corresponds to spin up. The other channel will be deflected downwards, to give the channel corresponding to spin down. In this case there are no rapid fluctuations, as the calculations of Dewdney, Holland and Kyprianidis show. Nevertheless the interaction with the magnetic field of the Stern-Gerlach magnet produces two distinct channels, one corresponding to the spin state (+) and the other to (−). There is no quantum potential linking the two beams as long as the channels are kept spatially separate.

Thus it appears from this argument that a necessary feature of a measurement process is that we must produce spatially separate and macroscopically distinct channels. To put this in the familiar language of SQM, we must find a quantum process that produces separate, non-overlapping wave packets in space, each wave packet corresponding to a unique value of the variable being measured. In technical terms this means that we must ensure that there is no intersection between the supports of each distinct wave packet, e.g. Sup(ψi) ∩ Sup(ψj) = ∅ for i ≠ j. Clearly this argument cannot work for the case of the cavity shown in figure 2. Indeed Sup(ψ0) ∩ Sup(ψ1) ≠ ∅, since both the excited and unexcited fields are supported in the same cavity. This was

one of the main factors why Dewdney, Hardy, and Squires, and Hiley, Callaghan and Maroney were content to introduce non-local exchanges of energy as a solution to the ESSW challenge. What these authors all assumed was that the non-overlapping of wave packets was a necessary and sufficient condition. What we have argued above is that it is not a necessary condition but it is merely a sufficient condition. What is necessary is for there to be a unit probability of the cavity being in a particular energy eigenstate, all others, of course, being zero. What this ensures is that as long as the energy is fixed in the cavity, there will be no quantum potential coupling between the occupied particle channel and the unoccupied channel. This also ensures that the particle will always behave in a way that is independent of all the other possibilities. This means that in the example shown in figure 2 considered above, the atom passing through the cavity travels straight through the region I2 and fires detector D2. This also means that in the language of SQM, the atom behaves as if the wave function had collapsed. Again in this language it looks as if the cavity is behaving as a measuring instrument even though there has been no amplification to the macroscopic level and no irreversible process has occurred. This is why Bohm and Hiley emphasised that in the Bohm approach there is no fundamental difference between ordinary processes and what SQM chooses to call measurement processes.
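The role of non-overlapping supports can be illustrated numerically. The Gaussian packets, their width and their separation below are arbitrary assumptions for illustration: when two normalized packets are well separated, the overlap of their amplitudes is negligibly small, which is exactly the condition under which the cross (interference) term vanishes.

```python
import numpy as np

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

def packet(center, sigma=1.0):
    """Normalized Gaussian wave-packet amplitude centered at `center`."""
    psi = np.exp(-(x - center) ** 2 / (4 * sigma**2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Amplitude overlap integral: this is what multiplies the interference term
overlap_sep = np.sum(np.abs(packet(-8.0)) * np.abs(packet(8.0))) * dx
overlap_same = np.sum(np.abs(packet(0.0)) * np.abs(packet(0.0))) * dx

print(overlap_sep, overlap_same)  # well separated: ~0; coincident supports: 1
```

In the cavity example, by contrast, the excited and unexcited field states occupy the same region, so this overlap does not vanish and the support argument alone cannot be invoked.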

Notice that we do not need to know whether the cavity has had its energy increased or not for the interference terms to be absent. This is because we have an ontological theory, which means that there is a well-defined process actually occurring regardless of our knowledge of the details of this process. This process shows no interference effects in the region I2 whether we choose to look at the cavity or not. We can go back at a later time, after the particle has had time to pass through the cavity and through the region I2 but before D1 or D2 have fired, and find out the state of the cavity, as ESSW have proposed. What we will find is that this measurement in no way affects the subsequent behavior of the atom, and D2 will always fire if the cavity is found in an excited state. Thus there is no call for a notion such as present action determining the past, as Wheeler suggests.

All we are doing in measuring the energy in the cavity at a later time is finding out which process actually took place. The fact that this process may require irreversible amplification is of no relevance to the vanishing of the interference effects. In other words, there is no need to demand that the measurement is not complete until some irreversible macroscopic process has been recorded. These results confirm the conclusions already established by Bohm and Hiley, and there is certainly no need to argue that no phenomenon is a phenomenon until it is an observed phenomenon.

Conclusion

What we have shown in this paper is that some of the specific criticisms of the Bohm interpretation involving delayed choice experiments are not correct. The properties of the trajectories that led Scully to term them surreal were based on an incorrect use of the BI. Furthermore, if the approach is used correctly then there is no need to invoke non-locality to explain the behavior of the particles in relation to the added cavity. The results then agree exactly with what Scully predicts using what he calls standard quantum mechanics.

This of course does not mean that non-locality is removed from the BI. In the situation discussed by Einstein, Podolsky and Rosen, it is the entangled wave function that produces the non-local quantum potential, which in turn is responsible for the corresponding non-local correlation of the particle trajectories. The mistake that has been made by those attempting to answer the criticism is to assume that wave functions of the type shown in the equations above are similar in this respect to EPR entangled states. They are not, because of the specific properties of systems like the micro-maser cavity and the polarized magnetic target. The essential property of these particular systems is that we can attach a unit probability to one of the states even though we do not know which state this is. The fact that a definite result has actually occurred is all that we need to know. When this situation arises, then all of the other potential states give no contribution to the quantum potential or the guidance condition, so that there is no interference.

This is not the same situation as in the case of the EPR entangled wave function. There, neither particle is in a well-defined individual state. This is reflected in the fact that there is only a 50% chance of finding one of the two possible states of either particle. Therefore the interference between the two states in the entanglement is not destroyed, and it is this interference that leads to quantum non-locality. However, the interference is destroyed once we have a process that puts one of the particles into a definite state. In conventional terms this can be used as a record of the result, and the process is then called a measurement. But in the BI there is no need to record the result. The fact that one result will occur with probability one is sufficient to destroy interference. Thus delaying an examination of the reading is irrelevant. The process has occurred, and that is enough to destroy interference.
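The point that a unit branch probability destroys interference can be illustrated with a toy numerical sketch. This is a hypothetical two-branch superposition with made-up plane-wave branches, not the Bohm formalism itself; the names and wave numbers are chosen purely for illustration:

```python
import numpy as np

# Toy sketch: intensity of a two-branch superposition a1*psi1 + a2*psi2
# evaluated on a screen coordinate x (hypothetical plane-wave branches).
x = np.linspace(-5, 5, 1001)
psi1 = np.exp(1j * 2.0 * x)
psi2 = np.exp(-1j * 2.0 * x)

def intensity(p1, p2):
    """|sqrt(p1)*psi1 + sqrt(p2)*psi2|^2 for branch probabilities p1, p2."""
    psi = np.sqrt(p1) * psi1 + np.sqrt(p2) * psi2
    return np.abs(psi) ** 2

# Equal weights: the cross term 2*sqrt(p1*p2)*cos(4x) produces fringes.
fringes = intensity(0.5, 0.5)
# One branch with probability one: the cross term vanishes; no fringes.
flat = intensity(1.0, 0.0)

print(fringes.max() - fringes.min())  # large swing: interference present
print(flat.max() - flat.min())        # essentially zero: interference destroyed
```

The cross term is weighted by the square root of the product of the branch probabilities, so as soon as one branch carries probability one the interference contribution drops out identically, which is the arithmetic behind the argument above.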

It should be noted that in all of these discussions we offer no physical explanation of why there is a quantum potential or why the guidance condition takes the form it does. The properties we have used follow directly from the Schrödinger equation itself and from the assumption we have made that the particle possesses a simultaneous actual position and momentum. As Polkinghorne has pointed out, the key question is why we have the Schrödinger equation in the first place. Recently de Gosson has shown rigorously that the Schrödinger equation exists in the covering groups of the symplectic group of classical physics, and that the quantum potential arises by projecting down onto the underlying group. One of us, Hiley, has recently argued that a similar structure can arise by regarding quantum mechanics as arising from a non-commutative geometry, where it is only possible to generate manifolds by projection into the so-called shadow manifolds. Here the mathematical structure is certainly becoming clearer, but the implications for the resulting structure of physical processes need further investigation. [60]

Information theory
According to Wikipedia, information theory is a branch of applied mathematics, electrical engineering, bioinformatics, and computer science involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data. Since its inception it has broadened to find applications in many other areas, including statistical inference, natural language processing, cryptography, neurobiology, the evolution and function of molecular codes, model selection in ecology, thermal physics, quantum computing, plagiarism detection and other forms of data analysis.

Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPGs), and channel coding (e.g. for Digital Subscriber Line (DSL)). The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, and electrical engineering. Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. Important sub-fields of information theory are source coding, channel coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, and measures of information. [61]

Shannon's entropy

[Figure: Entropy H(X) (i.e. the expected surprisal) of a coin flip, measured in bits, graphed versus the fairness of the coin Pr(X=1), where X=1 represents a result of heads.]

As Wikipedia says, a key measure of information is entropy, which is usually expressed by the average number of bits needed to store or communicate one symbol in a message. Entropy quantifies the uncertainty involved in predicting the value of a random variable. For example, specifying the outcome of a fair coin flip (two equally likely outcomes) provides less information (lower entropy) than specifying the outcome of a roll of a die (six equally likely outcomes).

Shannon entropy is the average unpredictability in a random variable, which is equivalent to its information content. The concept was introduced by Claude E. Shannon in his 1948 paper "A Mathematical Theory of Communication". Shannon entropy provides an absolute limit on the best possible lossless encoding or compression of any communication, assuming that the communication may be represented as a sequence of independent and identically distributed random variables. Shannon's source coding theorem shows that, in the limit, the average length of the shortest possible representation to encode the messages in a given alphabet is their entropy divided by the logarithm of the number of symbols in the target alphabet.
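The definitions above can be sketched in a few lines of Python. This is a minimal illustration; the function name `entropy` and the four-symbol target alphabet are assumptions made for the example:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy H = -sum(p * log(p)), in units of log base `base`."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

coin = [0.5, 0.5]    # fair coin: two equally likely outcomes
die = [1 / 6] * 6    # fair die: six equally likely outcomes

print(entropy(coin))  # 1.0 bit
print(entropy(die))   # log2(6) ≈ 2.585 bits

# Source coding theorem: to encode die rolls in an alphabet of k symbols,
# the shortest average representation approaches H / log2(k) symbols.
k = 4
print(entropy(die) / math.log2(k))  # ≈ 1.29 four-ary symbols per roll
```

This makes the coin-versus-die comparison from the earlier paragraph concrete: the die's outcome is less predictable, so specifying it carries more bits.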

A single toss of a fair coin has an entropy of one bit. A series of two fair coin tosses has an entropy of two bits. The number of fair coin tosses is its entropy in bits. The entropy rate for a fair coin toss is one bit per toss. However, if the coin is not fair, then the uncertainty, and hence the entropy rate, is lower. This is because, if asked to predict the next outcome, we could choose the most frequent result and be right more often than wrong. The difference between what we know, or predict, and the information that the unfair coin toss reveals to us is less than one heads-or-tails message, or bit, per toss. As another example, the entropy rate of English text is between 1.0 and 1.5 bits per letter, or as low as 0.6 to 1.3 bits per letter, according to estimates by Shannon based on experiments where humans were asked to predict the next letter in a sample of English text. [62]

So we see that information entropy (or Shannon's entropy) is a measure of the amount of information contained in a message. The more random the message, the higher its entropy. An absolutely random message contains the maximum amount of information, while a highly organized message, for example the one comprising this sentence, contains a minimum degree of entropy. It has been said that humans are at odds with nature because they tend to minimize entropy by arranging things around them, whereas things left alone become disorganized. But humans are part of nature, so nature, through human beings, retrieves a high level of order and complexity. Thus we fight against entropy on a daily basis.
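The fair and unfair coins discussed above can be checked directly with the binary entropy function, the curve the earlier figure describes. A minimal sketch; `binary_entropy` is just an illustrative name:

```python
import math

def binary_entropy(p):
    """Entropy rate, in bits per toss, of a coin with heads-probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))  # 1.0: fair coin, maximally unpredictable
print(binary_entropy(0.9))  # ≈ 0.469: biased coin reveals less per toss
print(binary_entropy(1.0))  # 0.0: two-headed coin, nothing to learn
```

As the text says, predicting "heads" for the 90/10 coin is right far more often than wrong, and the entropy rate drops well below one bit per toss accordingly.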

Another important aspect is whether a message can contain an infinite amount of entropy or whether its entropy is limited to a certain, finite, amount. In other words, the question is whether a message, any message, contains a certain degree of pre-established order: a couple of patterns which reveal the general form of the message, as well as something about its content. More simply put, do all messages include a kind of meaning which is necessary for a message to be different from all the rest? Somebody could say that we are the ones who give meaning to a message. Still, we also consist of packets of information, so that if we attribute a meaning to things, this meaning is also found within us. We will soon return to this subject of meaning and information.
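One practical way to see that a message with pre-established patterns carries less entropy than a random one is to run both through a general-purpose compressor. This is a rough illustration only, using compressed size as a crude stand-in for information content:

```python
import os
import zlib

# A strictly periodic sequence needs little information to specify,
# while random noise resists compression almost entirely.
periodic = b"AB" * 50_000         # 100 kB of pure repetition
random_data = os.urandom(100_000) # 100 kB of noise

print(len(zlib.compress(periodic)))     # a few hundred bytes
print(len(zlib.compress(random_data)))  # close to 100,000 bytes
```

The repetitive message collapses to a tiny description of its pattern, while the random one is essentially incompressible, mirroring the claim that randomness maximizes entropy.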

Active information
Not all information comprises a message. In other words, not all information is active. We usually ignore most of the information around us, choosing just a bit of it, either because it is intense or because it is important to us. Communication also needs a correspondence between the transmitter and the receiver in order to be achieved. In other words, both the transmitter and the receiver have to have the right form so that they can exchange active information between them. But how does information become active? It acquires a certain meaning only when it is perceived by us and by our sensory organs. The part of information which becomes functional comprises active information. Since we need a suitable receptor in our brain, this implies an archetypal structure of thought. It is true that most information is not expressed but remains inactive in the unconscious. In this way a sort of collective memory is formed, the part of which remaining inactive or unexpressed is called the collective unconscious. So it seems that we need not only a cause but also a reason in order to think. We will now discuss the article "Active Information, Meaning and Form" by F. David Peat. In this article, it is proposed, in the spirit of open speculation, that science is now ready to accommodate a new principle, that of active information, which will take its place alongside energy and matter. Information connects to concepts such as form and meaning, which are currently being debated in a variety of fields, from biology and the neurosciences to consciousness studies and the nature of dialogue. It may well provide the integrating factor between mind and matter.

Towards the end of the 1980s David Bohm introduced the notion of active information into his ontological interpretation of quantum theory. His idea was to use the activity of information as a way of explaining the actual nature of quantum processes and, in particular, the way in which a single physical outcome emerges out of a multiplicity of possibilities. Initially this idea of information as a physical activity was tied to Bohm's particular theory but, as this essay proposes, it is possible to go further and elevate information to the level of a new physical concept, one that can be placed alongside matter and energy.

Information connects to clusters of fertile ideas being debated in physics, biology, consciousness research and the neurosciences. These are grouped around the notions of information, form and meaning, each of which is discussed below.

At first sight information plays no role in the world of physics. Facts, data and information are contained in books, or collated by scientists, but have no independent, objective existence in the physical world apart from their interpretation by human subjects. Shannon and Weaver's information theory, for example, is concerned with the way data (bits of information) travel along a telephone line or within some other transmission system. The meaning of a particular message is irrelevant; what is significant is only the mechanism of its encoding and decoding, plus the relative roles played by noise and redundancy.

Yet a deeper investigation suggests that things are more subtle and less obvious. The notion of entropy, for example, is present in both information theory and thermodynamics. It is another concept that began with an uncertain ontology. On the one hand, in thermodynamics entropy is related to well-defined variables, like temperature and heat capacity; on the other, it is spoken of, somewhat subjectively, as the degree of disorder within a system, or as the breakdown of order and information, or the degradation of the information content of a message.

There is the question of the amount of information required to define a system. An apparently ordered system, exhibiting strictly periodic behavior, requires little information for its definition. Highly complex or random systems, on the other hand, are defined by a potentially infinite amount of information. But are these purely objective measures, or do they always depend upon some human subject to assign significance? Prigogine is attempting to place the concept of entropy on a clearer objective basis, but the debate continues. The mystery of information deepens with Bekenstein's discovery of the relationship between the amount of information passing through an event horizon, entropy content, and the radius of a black hole. Here is a totally objective, quantitative definition of information within physics. (In this case, however, it is the amount of information rather than its actual content or meaning that is significant.)

The breakthrough in giving information a more physical role comes with Bohm's proposal that information plays an active role in quantum systems. Bohm's 1952 hidden variable papers proposed an alternative approach to quantum theory in which the electron is a real particle guided by a new kind of force, the quantum potential. While at first sight Bohm's theory appears somewhat classical (electrons have real paths), the quantum potential is entirely novel. Unlike all other potentials in physics, its effects do not depend upon the strength or size of the potential but only on its form. It is for this reason that distant objects can exert a strong influence on the motion of an electron. Bohm's approach to his own theory became more subtle over the years, and he soon began to speak not only of the form of the quantum potential but also of the information it contains. The action of the quantum potential is not to push or pull the electron along its path. Rather, Bohm likened it to a radar signal that guides a ship approaching a harbor. The information within the radar signal acts, via a computer or automated steering device, to change the direction of the ship. Information itself does not push the ship; rather it in-forms the gross energy of the engines.
Information therefore allows a distinction to be made between what could be called raw or unformed energy and a more subtle energy, an activity that can be identified with information. This information acts on raw energy to give it form.

The actual nature of the information and the way it is carried is not yet entirely clear. Is it really correct, for example, to speak of a field of information, since information does not fall off with distance, nor is it associated with energy in the usual sense? Possibly the notion of field should be widened or, at the quantum level, we should be talking about pre-space structures, or about algebraic relationships that precede the structure of space and time. Information would have an objective nature. It would play an active role in giving form to energy and be responsible for quantum processes. As a field of active information it would provide a collective, global form for a superconductor or superfluid. Information would be co-present as an aspect of physical law, but also through what appear to be more subjective elements such as meaning and significance. In particular, information may be responsible for global processes in the brain and have a role to play in the nature of consciousness.

Form is a key concept in biology. The function of everything from the activity of an enzyme to a cell or organ is related to its physical form. Growth from the fertilized cell to the adult is a process of differentiation and transformation of form; hence biologists from Aristotle to Waddington, Sheldrake and Goodwin have postulated notions of morphic fields.

The universal nature of form and its transformation was, in the 1960s, the subject of a new branch of mathematics, René Thom's Catastrophe Theory. Form has associated with it the idea of a Gestalt, of global patterns, perception and non-locality; such notions connect with the functioning of consciousness and with the immune system.

Form has its role to play in physics. In classical physics it is the form of the Hamiltonian that remains invariant under canonical transformations. In this way, Newtonian mechanics can be transformed from the mechanical interaction of individual particles into global form-preserving processes. Likewise, general relativity is about the invariance of form under all possible coordinate transformations. In this sense, motion under gravity has to do with the preservation of form. One could perhaps generalize the concept of inertia to that of the law of persistence of form.

Most dramatically, form appears in the guise of the wave function. It is the global form of the wave function (symmetric or anti-symmetric) that is responsible for the existence of Fermi-Dirac or Bose-Einstein statistics. The fact that such forms are non-factorizable (into spatially independent components) is the deep reason for quantum non-locality (Bell's mysterious correlation between distant particles). The form of the wave function is ultimately responsible for collective modes in physics: plasma, superfluid, superconductor and hypothetical Froehlich systems. The form of the wave function orchestrates each of an astronomical number of particles into a highly coordinated dance. Bohm's quantum potential is unique in that the magnitude of its effects on the motion of electrons does not arise from its strength or intensity but from the form of the potential, that is, its particular complex shape. It is for this reason that the effects of the quantum potential do not fall off with distance and that well separated quantum objects can remain strongly correlated.

It is highly suggestive that form may also be responsible for global quantum properties within the brain that give rise to consciousness. Form, a global property as opposed to a local one, may have something to do with the evolution of space-time structure out of some more primitive quantum pre-space. Penrose, for example, proposes that the quantum mechanical collapse of the wave function is a global phenomenon connected with the geometrical properties of space-time. He also speculates that global quantum processes have a role to play in the liaison between consciousness and brain structure.

These are compelling speculations that revolve around the same cluster of ideas and connect different areas of interest, such as consciousness, life and fundamental physics. They raise the question: how does the global nature of form relate to active information? Is information a new principle of the physical world that applies in a wide variety of fields of interest? The answer to this question must begin with a period of sorting out and clarification of basic ideas and their multiple interconnections.

If form begins with biology (and leads into quantum theory), meaning surely starts in psychology. It was Carl Jung who stressed the role of meaning in synchronicity: that region where form and pattern spill over the boundaries between mind and matter. For Jung the key was the deep internal significance associated with an experience of synchronistic patterns, a significance that did not end at the boundaries of personal consciousness. Meaning was both subjective and objective. As Wolfgang Pauli emphasized, just as psychology had uncovered the objective in the psyche (the collective or objective unconscious), so physics must find the subjective in matter. Jung termed this speculum between matter and mind the psychoid; its integrating factor is meaning. In the context of Dialogue groups Bohm spoke of a field of meaning shared by all participants. He also stressed that the way to bring about effective social change is through an overall change of meaning. Meaning, which could be thought of as a field of form, Bohm associated with the immune system. The immune system is what keeps the body whole and its processes coordinated; it is another manifestation of meaning, and if meaning is degraded the body becomes sick. Bohm stressed that his maxim "a change of meaning is a change of being" was to be taken literally. Laboratory research suggests that shifts in meaning bring about subtle restructuring of nerve pathways and the sensitivities of connections. Meaning, which is normally taken to be subjective, turns out to have an objective, physical consequence.

Meaning can act on matter and, presumably, matter on meaning. (The significance of what we see or think is affected by the electrochemical environment of our bodies.) Does the idea extend from consciousness into the physical world? I believe it does. Information is, in some way, encoded in the wave function, or in some sort of field of form, or in some set of pre-quantum algebraic relationships. Yet what information is encoded? One solution is that all information about the entire universe is encoded, or enfolded, within the global form. (Or, as Bohm might have said, within the implicate order.) Yet only that which has meaning, or significance, for the electron is active. Consciousness becomes a certain dynamical aspect of this underlying field or order. Mind is fundamentally distributed throughout the material world. [63]

Meaning and information

Here we can return to the question which we posed earlier: does information contain an intrinsic meaning? Many researchers from many scientific fields believe so. The psychiatrist Carl Jung saw archetypes underlying the formation of all psychic processes and structures. The biologist Rupert Sheldrake states that morphic fields guide matter in order to form complex biological structures. George Boole, as we said, thought that ethics, in the form of logic, comprises the basic core of the human intellect. In information theory, it is the message that should have its own meaning; otherwise its entropy would be infinite and the message would be destroyed. Here we will discuss the relation between information and meaning according to David Bohm. When we use the term meaning, this includes significance, purpose, intention and value. However, these are only points of departure into the exploration of the meaning of meaning. Evidently, we cannot hope to do this in a few sentences. Rather, it has to be unfolded as we go along. In any case, there can be no exhaustive treatment of the subject, because there is no limit to meaning. Here, we can usefully bring in Korzybski's statement that whatever we say a thing is, it isn't. It may be similar to what we say, but it is also something more and something different. Reality is therefore inexhaustible, and so evidently is meaning. What is needed is thus a creative attitude to the whole, allowing for the constantly fresh perception of reality, which requires the unending creation of new meanings. This is especially significant in the exploration of the meaning of meaning.

Meaning is inseparably connected with information. The operative notion here is that information has to do with form. Literally, to inform means to put form into something. First of all, information has to be held in some form, which is carried either in a material system (e.g. a printed page) or in some energy (e.g. a radio wave). We find that in general a pure form cannot exist by itself, but has to have its subsistence in some kind of material or energetic basis; and this is why information has to be carried on such a basis. Thus, even the information in our sense impressions and in our thought processes has been found to be carried by physical and chemical processes taking place in the nervous system and the brain.

What is essential for a form to constitute information is that it shall have a meaning. For example, words in a language that we cannot read have no meaning, and therefore convey no information to us. Gregory Bateson has said, "Information is a difference that makes a difference." But to be more precise, we should put it this way: information is a difference of form that makes a difference of content, i.e., meaning. (For example, a difference in the forms of letters on a printed page generally makes a difference in what they mean.)

Just how is information related to meaning? To go into this question, it is useful to consider the notion of active information. As an example, let us take a radio wave, whose form carries information representing either sound or pictures. The radio wave itself has very little energy. The receiver, however, has a much greater energy (e.g. from the power source). The structure of the radio is such that the form carried by the radio wave is imposed on the much greater energy of the receiver. The form in the radio wave thus literally informs the energy in the receiver, i.e. puts its form into this energy, and this form is eventually transformed (which means 'form carried across') into related forms of sound and light. In the radio wave, the form is initially inactive, but as the form enters into the electrical energy of the receiver, we may say that the information becomes active. In general, this information is only potentially active in the radio wave; it becomes actually active only when and where there is a receiver which can respond to it with its own energy.

A similar notion holds in a computer. The form is held in the silicon chips, which have very little energy, but this form enters into the much greater energy of the overall activities of the computer, and may even act outside the computer (e.g. in a ship or an airplane controlled by an automatic pilot guided by the information in radar waves).

In all these cases, we have been considering devices made by human beings, that respond actively to information. However, in modern molecular biology, it is assumed that the DNA molecule constitutes a code (i.e. a language), and that the RNA molecules read this code, and are thus in effect informed as to what kind of proteins they are to make. The form of the DNA molecule thus enters into the general energy and activity of the cell. At any given moment, most of the form is inactive, as only certain parts of it are being read by the RNA, according to the stage of growth and the circumstances of the cell. Here, we have a case in which the notion of active information does not depend on anything constructed by human beings. This shows that the idea of active information is not restricted to a human context, and suggests that such information may apply quite generally.

It is clear, of course, that the notion of active information also applies directly to human experience. For example, when the form of a road sign is apprehended in the brain and nervous system, the form is immediately active as meaning (e.g. if the traffic sign says "stop", the human being brings the car to a halt).

A still more striking example is that of a person who encounters a shadow on a dark night. If this persons previous experience is such as to suggest that there may be assailants in the neighborhood, the meaning of an assailant may be immediately attributed to this form. The result will be an extensive and powerful activity of mind and body, including the production of adrenaline, the tensing of the muscles, and an increase in the rate of the heart. But if, on closer inspection, this person sees further evidence indicating that it is only a shadow, all this activity stops, and the body and mind become quiet again. It is clear then that any form to which meaning can be attributed may constitute information. This is generally potentially active, and may become actually active in the mind and body of a human being under suitable conditions. Such relationships of activity in mind and body have been called psychosomatic, where psyche means mind or soul and soma means the body. This suggests two separate systems that interact. But the examples that we have been discussing indicate a relationship much closer than mere interaction of separate entities. Rather, what is suggested is that they are merely two sides or aspects of an overall process, separated in thought for convenience of analysis, but inseparably united in reality.

I would like to suggest then that the activity, virtual or actual, in the energy and in the soma is the meaning of the information, rather than to say that the information affects an entity called the mind which in turn operates somehow on the matter of the body. So the relationship between active information and its meaning is basically similar to that between form and content, which we know is a distinction without a real difference or separation between the elements distinguished.

To help focus attention on this kind of distinction, I shall suggest the term soma-significant, instead of psychosomatic. In doing this, I am generalizing the notion of soma to include all matter. Each manifestation of matter has form, and this form has meaning (at least potentially, if not actually). So we see quite generally that soma is significant. But in turn, this significance may give rise to further somatic activity (e.g. as with the shadow on a dark night). We shall call this activity signa-somatic. So we have the two inseparable movements of soma becoming significant and the significance becoming a somatic activity. This holds not only for human beings, but also for computers (e.g. computers can now recognize forms and act in a way that differs according to differences of form). Similarly the RNA in the cell can respond to the form of the DNA, so that the soma of the DNA becomes significant, and this acts signa-somatically to produce proteins that differ according to differences in the form of the DNA. The actions of soma-significant and signa-somatic processes can thus be extended beyond the domain of human experience, and even beyond the domain of devices constructed by human beings.

It is important to consider the fact that the activity of meaning may be only virtual, rather than actual. Virtual activity is more than a mere potentiality. Rather, it is a kind of suspended action. For example, the meaning of a word or of any other form may act as imagination. Although there is no visible outward action, there is nevertheless still an action, which evidently involves the somatic activity of brain and nervous system, and may also involve the hormones and muscular tension, if the meaning has a strong emotional charge. However, at some stage, this action may cease to be suspended, so that an outward action results. For example, in reading a map the forms on the paper constitute information, and its meaning is apprehended as a whole set of virtual activities (e.g. in the imagination), representing the actions that we might take in the territory represented by the map. But among these, only one will be actualized externally, according to where we find ourselves to be at the moment. The information on the map is thus potentially and virtually active in many ways, but actually and externally active at most in one way.

If, however, we can find no place, at least for the moment, to which the map is actually relevant, all such external activity may be suspended. As has indeed already been indicated, this sort of suspension of outward activity is nevertheless still a kind of inward activity that flows out of the total meaning of the available information (which now includes the realization that there is no place to which the map is actually relevant). More generally, then, all action (including what is called inaction) takes place at a given moment directly and immediately according to what the total situation means to us at the moment. That is to say, we do not first apprehend the meaning of the information and then choose to act or not act accordingly. Rather, the apprehension of meaning is, at the very same time, the totality of the action in question (even if this should include the action of suspending outward activity).

This inseparable relationship of meaning and action can be understood in more detail by considering that meaning indicates not only the significance of something, but also our intention toward it. Thus "I mean to do something" signifies "I intend to do it." This double meaning of the word meaning is not just an accident of our language; rather, it contains an important insight into the overall structure of meaning.

To bring this out, we first note that an intention generally arises out of a previous perception of the meaning or significance of a certain total situation. This gives all the relevant possibilities and implies reasons for choosing which of these is better. As a simple example, one may consider the various foods that one may eat. The actual choice may be made according to the significance of the food as something that one likes or dislikes, but it may depend further on the meaning of the knowledge that one has about the nutrient qualities of the food. More generally, such a choice, whether to act or not to act, will depend on the totality of significance at that moment. The source of all this activity includes not only perception and abstract or explicit knowledge, but also what Polanyi called tacit knowledge; i.e., knowledge containing concrete skills and reactions that are not specifiable in language (as for example is demonstrated in riding a bicycle). Ultimately, it is this whole significance, including all sorts of potential and virtual actions, that gives rise to the overall intention, which we sense as a feeling of being ready to respond in a certain way.

It must be kept in mind, however, that most of the meaning in this process is implicit. Indeed, whatever we say or do, we cannot possibly describe in detail more than a very small part of the total significance that we may sense at any given moment. Moreover, when such significance gives rise to an intention, it too will be almost entirely implicit, at least in the beginning. For example, implicit in one's present intention to write or speak is a whole succession of words that one does not know in detail until one has actually spoken or written them. Moreover, in speaking or writing, these words are not chosen one by one. Rather, many words seem to be enfolded in any given momentary intention, and these emerge in a natural order, which is also enfolded.

Meaning and intention are thus seen to be inseparably related, as two sides or aspects of one activity. In actuality, they have no distinct existence, but for the sake of description we distinguish them (as we have done also with information and meaning). Meaning unfolds into intention, and intention into action, which, in turn, has further significance, so that there is, in general, a circular flow, or a cycle.

Closely related to meaning and intention is value. Thus, to say "This means a great deal to me" signifies "This has a very high value to me." The word "value" has the same root as "valor," and it therefore suggests a kind of strength or virtue. Generally speaking, that which has for us a broad and deep significance will give rise to a sense of value, which arouses us to some kind of response, and infuses us with a corresponding strength or intensity of the kind of energy that is needed to carry out our intention. Without such a sense of value, we will have little interest and energy, and our action will tend to be weak and ineffective. It is thus clear that meanings implying some kind of high value will bring about strong and firm intentions. When such intentions are focused on a determinate end or aim (once again dependent on the overall meaning) they are called will. Thus, intention, value and will may be seen as key aspects of the soma-significant and signa-somatic cycle. It follows then that all three of these, together with meaning, flow and merge into each other in an unbroken movement. The distinctions between them are only in thought. These distinctions are useful in trying to understand and talk about this process, but should not be taken to correspond to any real separation between them.

Thus far, we have been discussing how already-known meanings take part in the cycle described above. Generally speaking, such meanings implicitly contain a disposition to act in a corresponding way. Thus, if our view of a road suggests that it is level, our bodies will immediately be disposed to walk accordingly. Moreover, if there are unexpected pot-holes in the road, these may trip us up until we see the meaning of the new situation, and thus immediately alter the disposition of our bodies. All meanings indeed imply (or enfold) various kinds of such disposition to act (or not to act), and these are an essential part of the signa-somatic activity of meaning.

As long as the action flowing out of a given set of such already-known meanings is coherent and appropriate, this sort of disposition will constantly be reinforced, until it becomes a habit, or a fixed disposition. But sooner or later, a situation will be encountered in which this disposition is no longer appropriate. It is then necessary to suspend the older dispositions, and to observe, to learn, and to perceive a new meaning, implying a new disposition.

As an example, consider a very young child, to whom bright objects have always signified goodness, happiness, pleasant excitement, etc., in which are implied a disposition to reach out and take hold of such objects. Suppose now that for the first time the child encounters a fire, and acts according to its habitual disposition. It will burn itself and withdraw its hand. The next time the child sees a fire, the initial disposition to reach out for it will be inhibited by the memory of the pain. When action is thus suspended, the mental energy in the intention to act will tend to go into the calling up of images of previous experiences with such objects. These will include not only images of many pleasing bright objects, but also the memory of the fire, which was pleasing when experienced far enough away but painful in the experience of contact. In a way, these images now constitute a new level of somatic form, resembling that of the original objects, but of a more subtle nature. This form is, as it were, scanned or surveyed from a yet deeper and more subtle level of inward activity.

We emphasize again that in such a process, that which was previously the meaning (i.e. the images and their significance) is now being treated as a somatic form. The child can operate on this form, much as it can operate on the forms of ordinary objects. Thus, the child is able to follow the image of the fire, as it gets closer, and at a certain point it evokes a memory-based image of pain. Out of this emerges a new meaning, enabling the child to solve the problem of determining an appropriate relationship to the fire, without having to be in danger of burning itself again. In this new meaning, the fire is pleasant when the hand is far enough away and painful when it is too close. And a new disposition arises, which is to approach the fire more carefully and gradually, to find the 'best' distance from it. As the child engages in many similar learning experiences, there arises a still more subtle and more general disposition to learn in this way in approaching all sorts of objects. This makes for facility and skill in using the imagination in many different contexts to solve a wide range of problems of this general nature.

It is clear that this process can be carried to yet more subtle and more abstract levels of thought. In each stage, what was previously a relatively subtle meaning can, as in the case of the fire, now be regarded as a relatively somatic form. The latter, in turn, can give rise to an intention to act on it. The energy of this intention is able then to give rise to an ever-changing sequence of images with yet more subtle meanings. This takes place in ways that are similar to those that took place with the image of the fire. Evidently, this process can go on indefinitely, to levels of ever greater subtlety. (The word "subtle" is based on a root signifying "finely woven," and its meaning is "rarefied, highly refined, delicate, elusive, indefinable and intangible.")

Each of these levels may then be seen from the mental or from the material side. From the mental side it is an information content with a certain sense of meaning as a subtle virtual activity. But from the material side it is an actual activity that operates to organize the less subtle levels, and the latter thus serve as the material on which such an operation takes place. Thus, at each stage, the meaning is the link or bridge between the two sides.

It is being proposed then that a similar relationship holds even at indefinitely greater levels of subtlety. The suggestion is that this possibility of going beyond any specifiable level of subtlety is the essential feature on which intelligence is based. That is to say, the whole process is not intrinsically limited by any definable pattern of thought, but is in principle constantly open to fresh, creative and original perceptions of new meanings.

This way of looking at the subject contrasts strongly with the commonly-held notion, to which I have referred earlier, that matter and mind are considered to be separate substances. In the view that I have been proposing, the mental and the material are two sides of one overall process that are (like form and content) separated only in thought and not in actuality. So there is only one energy which is the basis of all reality. The form, as apprehended on the mental side, gives shape to the activity of this energy, which later acts on less subtle forms of process that constitute, for this activity, the material side. Each part thus plays both roles, i.e., the mental and the material, but in different contexts and connections. There is never any real division between mental and material sides, at any stage of the overall process.

This implies, in contrast to the usual view, that meaning is an inherent and essential part of our overall reality, and is not merely a purely abstract and ethereal quality having its existence only in the mind. Or to put it differently, in human life, quite generally, meaning is being. Thus, if one were to ask what sort of person a given individual is, one would have to include all his or her characteristic tendencies and dispositions to act, which, as we have seen, come out of what everything means to that person. Thus our meanings flow into our being, and because the somatic forms in this being are significant, such being flows back into meaning. Each thus comes to reflect the other. But ultimately, each is the other. For the activity to which information gives rise is our being, and this being is actuality and action that are thus informed. So meaning and being are separated only in thought, but not in actuality. They are but two aspects of one overall reality.

It is clear that because there is no limit to the levels of subtlety of meaning that are possible, the being flowing out of meaning is in principle infinite and inexhaustible. One can see that this also follows in another way, by noting that all meaning is to some degree ambiguous, because each content depends on some context. But this latter in turn can become a content, which depends on a yet broader context (which may include many levels of subtlety), and so on indefinitely. So meanings are inherently incomplete, and subject to change, as they are incorporated in broader, deeper, and more subtle meanings, arising in new contexts.

The notion that meaning is being has in this way been extended to inanimate matter at the level of the most fundamental laws of physics that are known to us so far. Thus, if we were to ask what an electron is, we would have to include in the answer to this a description of how it behaves under various circumstances. According to classical physics, an electron is an entity that moves mechanically and is deflected only by external forces and pressures, that do not in general significantly reflect distant features of its environment. But according to the quantum theory, an electron is something that can significantly respond to information from distant features of its environment, and this mode of response, which is the meaning of the information, is essential to what the electron is.

In analogy to what has been said about human experiences, the particles constituting matter in general may be considered to represent a more gross (explicate) somatic level of activity, while the Schrödinger wave field corresponds to a finer, subtler, more implicate and mind-like level. In human experience, however, it has been proposed that each mind-like level can be regarded as a somatic bearer of form when seen from a yet finer and more subtle level. This would imply firstly that the information represented by the Schrödinger wave field is being carried by a finer and subtler level of matter that has not yet been revealed more directly. But even more important, it also implies that there may be a finer and more subtle level of information that guides the Schrödinger field, as the information on the Schrödinger field guides the particles. But this in turn is a yet more subtle somatic form, which is acted on by a still more subtle kind of information, and so on. Such a hierarchy could in principle go on indefinitely. This means, of course, that the current quantum mechanical laws are only simplifications and abstractions from a vast totality, of which we are only scratching the surface. That is to say, in physical experiments and observations carried out thus far, deeper levels of this totality have not yet revealed themselves.

In this way, we arrive at a notion of matter in general which is closely parallel to what was proposed earlier with regard to the relationship of mind and matter in the human being. How then are these two hierarchies of active information, the material and the mental, related? Or are there actually two distinct and independently existent hierarchies?

It is being proposed here that there is in fact only one such hierarchy. In this, the more subtle levels, some of which we experience as thoughts, feelings, intention, will, etc., merge continuously with the less subtle levels. And therefore, what we experience as mind is ultimately connected, soma-significantly and signa-somatically, to the Schrödinger wave field and to the particles. In this way, we can account for how matter at the ordinary level is knowable through what is called mind, and how the latter can affect what is called the soma of the body, and through this, matter more broadly. So we do not have a split between mind and matter in general.

As with information and meaning, they are two sides of one process, separable only in thought but not in actuality.

This implies of course that human consciousness is not something altogether outside the overall universe of matter. But matter has now come to signify a totality of being, ultimately of a subtlety beyond all definable limits. And thus, it may equally be called mind, or mind-matter, or matter-mind. In this one totality, meaning provides all being and, indeed, all existence. [64]

Turing test

The Turing test is about creating a machine with intelligence adequate to recognize and correct its own logical fallacies, a universal computer with unlimited powers of intelligence.

According to the website dedicated to Alan Turing, he wrote the paper about his test while employed at the Computing Laboratory at Manchester University. This was where the world's first stored-program digital computer had been engineered. The prospect of artificial intelligence was raised as an issue for the general public from the very start, stimulated by the success of wartime science and technology. In 1949 the brain surgeon Geoffrey Jefferson spoke out against it in a lecture, The Mind of Mechanical Man. In response, Turing told the London Times that their research would be directed to finding out "... to what extent [the machine] could think for itself." Of course, this was in no way the official purpose of the Computing Laboratory, and was at variance with anything Newman or Williams would have said. Controversial and provocative, Alan Turing went on to write the 1950 paper as his own individual contribution, making very few references to other people's ideas.

Nevertheless, Turing had some useful external stimuli. Besides Jefferson and Max Newman, the chemist and philosopher Michael Polanyi, and the zoologist and neurophysiologist J. Z. Young, were active participants in discussions with him. A large interdisciplinary Discussion on the Mind and the Computing Machine was held at Manchester on 27 October 1949.

Reading the transcript is rather like reading the conversations generated by computers, described on the next page. Few of the discussants can stick to a point or actually address a question! But it is nevertheless a striking document. From the discussion of Gödel's theorem, to the reference to neural networks, to the connection with detailed brain physiology, all the topics are completely relevant today. In the midst of this came a joke against Turing (and perhaps Newman): the question "Are mathematicians human beings?" Such murmurs are liable to provoke a revolt of the nerds, and Turing was entirely willing to be the revolutionary. He was fully aware that his propositions contradicted widely held assumptions about the uniqueness of human abilities, and happily called himself a heretic putting forward an unpopular view.

Turing's discussion of intelligent machinery did not begin in 1950. It went back to the 1936 Turing machine modeling the human mind. Turing's 1950 paper is best read as the successor to two earlier papers, unpublished in Turing's own lifetime: a 1947 talk and a 1948 report, both accessible in the Turing Archive. These have more technical and mathematical detail, and add much to the 1950 paper. However, the 1950 paper was the first properly published work. According to a letter in the Turing Archive, Bertrand Russell enjoyed it very much. Turing's argument was of course in line with Russell's materialist and atheist philosophy, and indeed Turing poked fun at religion in this paper, albeit somewhat superficially. The BBC invited him to give a talk on its new highbrow radio Third Programme in 1951, and Turing took part in a further radio discussion in January 1952. There is no known recording of his hesitant voice, but the scripts can be read in the Turing Archive: the 1951 talk and the 1952 discussion.

Both broadcasts have further points of interest regarding Turing's discussion of Artificial Intelligence. So do two further Turing texts: another 1951 lecture, entitled Intelligent Machinery: A Heretical Theory, and an article on computer chess-playing. The original typescripts of these can also be seen in the Turing Archive.

But it was in the 1950 paper that Turing held, most fully and confidently, that computers would, in time, be programmed to acquire abilities rivaling human intelligence. Even where he saw difficulties and was doubtful about what could be achieved, he advocated experiment. He saw this not as a dogma, but as an important conjecture, to guide future research.

The paper has many aspects to it: an exposition of the discrete state machine model, which gets away from all earlier discussion of homunculi, life forces, and so on, and places the discussion within a clear logical framework; a rather corner-cutting account of computability and the universal machine, as Turing had discovered them in 1936; and answers to many specific objections to the prospect of intelligent machinery, broadly reflecting discussions such as that held in 1949.

It also offers constructive suggestions for how artificial intelligence might be arrived at, including what are in modern terms both top-down and bottom-up approaches. But the most famous element of the paper lies in its test. Turing put forward the idea of an "imitation game," in which a human being and a computer would be interrogated under conditions where the interrogator would not know which was which, the communication being entirely by textual messages. Turing argued that if the interrogator could not distinguish them by questioning, then it would be unreasonable not to call the computer intelligent, because we judge other people's intelligence from external observation in just this way.

The test allows Turing to avoid any discussion of what consciousness is. It seems to provide a scientific, objective criterion of what is being discussed, but with the rather odd necessity of imitation and deceit coming into it, for the machine is obliged to assert a falsity, whilst the human being is not. Turing's imitation game is now usually called the Turing test for intelligence.
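The structure of the game can be sketched as a simple protocol. Everything concrete below (the canned respondents, the single question, the coin-flip interrogator) is hypothetical scaffolding for illustration; Turing's paper specifies only the protocol, not the players:

```python
import random

# Minimal sketch of the imitation-game protocol: an interrogator exchanges
# textual messages with two hidden respondents and must name the machine.
# The respondent functions are trivial stand-ins, not real agents.

def human(question):
    return "I'd have to think about that."

def machine(question):
    # A machine that imitates the human respondent perfectly.
    return "I'd have to think about that."

def imitation_game(interrogate, questions):
    # Randomly hide the two respondents behind the labels A and B.
    players = [human, machine]
    random.shuffle(players)
    respondents = {"A": players[0], "B": players[1]}
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in respondents.items()}
    guess = interrogate(transcripts)  # interrogator names the machine
    actual = "A" if respondents["A"] is machine else "B"
    return guess == actual

def naive_interrogator(transcripts):
    # With indistinguishable replies, nothing beats a coin flip.
    return random.choice(["A", "B"])

random.seed(0)
trials = [imitation_game(naive_interrogator, ["Can you write me a sonnet?"])
          for _ in range(1000)]
success_rate = sum(trials) / len(trials)
# When the machine's replies are indistinguishable from the human's, the
# interrogator is right only about half the time -- Turing's operational
# criterion for crediting the machine with intelligence.
```

The design point is that intelligence is judged purely from the transcripts, never from inspecting the players, which is exactly the external-observation argument of the paragraph above.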

Turing's paper is still frequently cited, and people still discover new things in it. It has certainly generated an enormous number of academic discussions. Turing envisaged in this paper that machine intelligence could take off once it reached a critical mass. This picture is expounded in a science-fiction novel, The Turing Option, co-written by Harry Harrison and the AI pioneer Marvin Minsky. In 2008, Ray Kurzweil received much media attention for speaking at an American Association for the Advancement of Science meeting and saying that human-level AI would be achieved by 2030.

The logical consequence of this thesis is that machines would not merely match but would surpass human abilities. Transhumanists take this as a completely serious starting-point. Thus the Singularity Institute for Artificial Intelligence states that "In the coming decades, humanity will likely create a powerful artificial intelligence. The Singularity Institute... exists to confront this urgent challenge, both the opportunity and the risk." The article "How long before superintelligence?" by Nick Bostrom gives essentially the same argument as that of Turing's 1950 paper, though without worrying so much about the objections. So does "When will computer hardware match the human brain?" by Hans Moravec. Turing estimated 10^10 bits of storage to be sufficient, but of course this now seems ridiculously low for computer storage. Modern writers give other figures and use Moore's Law to estimate future power. In one way Turing's predictions certainly fell short of what has happened in 50 years, because miniaturisation has gone far beyond what he imagined possible. He was right, though, in citing the finite speed of light as an essential factor determining constraints on technology.
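The kind of extrapolation these writers make is easy to reproduce. All figures below are illustrative assumptions, not settled numbers: Turing's 10^10-bit storage estimate, a nominal 10^14-bit "brain-scale" figure of the sort Moravec discusses, and an 18-month doubling time:

```python
import math

# Toy Moore's-Law extrapolation: years needed for capacity to grow from a
# starting figure to a target figure, doubling every `doubling_years` years.
def years_to_reach(start_bits, target_bits, doubling_years=1.5):
    # Number of doublings required, times the time per doubling.
    doublings = math.log2(target_bits / start_bits)
    return doublings * doubling_years

turing_estimate = 1e10   # assumed: Turing's 1950 storage figure, in bits
brain_estimate = 1e14    # assumed: a brain-scale figure of Moravec's kind

print(round(years_to_reach(turing_estimate, brain_estimate), 1))  # 19.9
```

Under these assumptions only about twenty years of doubling separate the two figures, which is why such arguments project near-term dates; the conclusion is entirely driven by the assumed inputs.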

The restriction to textual communication is now less significant, as computer files are used for all the media and sensory inputs of virtual reality. A suggestion for an "Ultimate Turing Test" exploits this fact. What this shows is that the mathematics of computability is the real bedrock on which the whole question rests. Anything a computer can do is computable; anything computable can be done on a computer (as a universal machine); and if what the brain does is computable, it can be imitated by a computer. That is the fundamental argument.
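The universal-machine claim in this argument can be made concrete in miniature: one fixed program can run *any* Turing machine, given only its table of rules. The particular machine below (a unary incrementer) and the tape encoding are assumptions chosen for brevity:

```python
# A minimal Turing-machine simulator: one function interprets any machine
# described by a transition table, illustrating universality in miniature.
def run(table, tape, state="q0", head=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; blank cells are "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine (an assumption for illustration): scan right over a unary
# number and append one "1", i.e. compute n + 1.
increment = {
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("1", "R", "halt"),
}

print(run(increment, "111"))  # unary 3 becomes unary 4: "1111"
```

Swapping in a different `table` changes the machine being imitated without touching `run` itself, which is the whole content of "anything computable can be done on a computer."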

Turing himself argued in this paper that the question of uncomputability in mathematics was not in fact relevant to the question of mental faculties. In 1961 the Oxford philosopher J. R. Lucas published a paper on the significance of Gödel's theorem which argued to the contrary. Gödel himself also criticised Turing's assertions about human minds in the 1960s. Turing's view was defended by his wartime colleague, the mathematician and statistician I. J. Good. It was later much elaborated by Douglas Hofstadter in his 1979 book Gödel, Escher, Bach. In 1989, Roger Penrose published The Emperor's New Mind, which took a completely fresh approach, connecting uncomputability with unknown laws governing quantum physics. His work Shadows of the Mind followed in 1994, making a specific suggestion about the physics of the brain. A good entry point into this argument is the on-line paper "Beyond the Doubting of a Shadow," Penrose's response to criticisms of Shadows of the Mind.

Nowadays the thesis that no physical process can go beyond the bounds of computability is known as the physical Church-Turing thesis. In discussing the "Argument from Continuity of the Nervous System," Turing's 1950 paper comes very close to asserting this thesis. But this is one area where Turing's post-1950 texts are well worth studying, because in the 1951 radio talk Turing briefly gave a different discussion, this time bringing in the difficulty posed by quantum mechanics. This radio talk also expressed more concern about the significance of uncomputability. [65]

Intelligent design

Is a machine able to reach human intelligence, or is there some God factor that doesn't allow it? In Turing's words, it is a matter of uncomputability. But could we really build someday a machine that could compute the word "God"? Gödel proved that the system of logic is incomplete, in other words limited. So how can a human being with a limited intellectual capacity build a computer with unlimited mental power? It has been suggested that human intelligence, and living beings in general, are superior to inanimate matter, due to a sort of life force. So what does information have to do with all this? Is God found within the information which makes the difference, and how could this factor be defined? I personally believe that, whatever the factor may be, divine or trivial, even a computer algorithm needs some initial conditions, axioms, in order to begin working. In the case of humans, these conditions have to do with notions such as creativity, self-awareness, spontaneity, ambition, and many more which a computer certainly lacks. I am not suggesting here that an intelligent design is a prerequisite of human intelligence or of progress in general; but in order to explain why humans evolved over other creatures (and certainly over inanimate things such as computing machines), we should consider some facts or factors which have less to do with mathematics or logic, and more with faith, imagination and intuition.

According to the site intelligentdesign.org, intelligent design refers to a scientific research program as well as a community of scientists, philosophers and other scholars who seek evidence of design in nature. The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection. Through the study and analysis of a system's components, a design theorist is able to determine whether various natural structures are the product of chance, natural law, intelligent design, or some combination thereof. Such research is conducted by observing the types of information produced when intelligent agents act. Scientists then seek to find objects which have those same types of informational properties which come from intelligence. Intelligent design has applied these scientific methods to detect design in irreducibly complex biological structures, the complex and specified information content in DNA, the life-sustaining physical architecture of the universe, and the geologically rapid origin of biological diversity in the fossil record during the Cambrian explosion approximately 530 million years ago.

Is intelligent design the same as creationism? No. The theory of intelligent design is simply an effort to empirically detect whether the apparent design in nature acknowledged by virtually all biologists is genuine design (the product of an intelligent cause) or is simply the product of an undirected process such as natural selection acting on random variations. Creationism typically starts with a religious text and tries to see how the findings of science can be reconciled to it. Intelligent design starts with the empirical evidence of nature and seeks to ascertain what inferences can be drawn from that evidence. Unlike creationism, the scientific theory of intelligent design does not claim that modern biology can identify whether the intelligent cause detected through science is supernatural.

Is intelligent design a scientific theory? Yes. The scientific method is commonly described as a four-step process involving observations, hypothesis, experiments, and conclusion. Intelligent design begins with the observation that intelligent agents produce complex and specified information (CSI). Design theorists hypothesize that if a natural object was designed, it will contain high levels of CSI. Scientists then perform experimental tests upon natural objects to determine if they contain complex and specified information. One easily testable form of CSI is irreducible complexity, which can be discovered by experimentally reverse-engineering biological structures to see if they require all of their parts to function. When ID researchers find irreducible complexity in biology, they conclude that such structures were designed.
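The "complexity" half of CSI is usually cashed out as improbability under a chance hypothesis. As a neutral, toy illustration only (this is standard Shannon information theory, not Dembski's full CSI measure, which additionally requires an independently given "specification" of the pattern), one can compute the surprisal of a sequence under a uniform-chance model; the DNA string below is an arbitrary example:

```python
import math

# Toy surprisal calculation: how many bits of "complexity" a pattern has
# under a uniform-chance hypothesis. This is ordinary information theory,
# not the ID literature's full complex-specified-information measure.
def surprisal_bits(pattern, alphabet_size):
    # Under uniform chance, P(pattern) = (1/alphabet_size) ** len(pattern),
    # so -log2 P(pattern) = len(pattern) * log2(alphabet_size).
    return len(pattern) * math.log2(alphabet_size)

dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # arbitrary 39-base example
print(surprisal_bits(dna, 4))  # 2 bits per base over a 4-letter alphabet: 78.0
```

The controversy summarized in the following paragraphs is precisely about the other half: whether any principled, testable notion of "specification" can be attached to such a number.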

Beyond the debate over whether intelligent design is scientific, a number of critics argue that existing evidence makes the design hypothesis appear unlikely, irrespective of its status in the world of science. For example, Jerry Coyne asks why a designer would not stock oceanic islands with reptiles, mammals, amphibians, and freshwater fish, despite the suitability of such islands for these species. Coyne also points to the fact that the flora and fauna on those islands resemble those of the nearest mainland, even when the environments are very different, as evidence that species were not placed there by a designer. Previously, in Darwin's Black Box, Behe had argued that we are simply incapable of understanding the designer's motives, so such questions cannot be answered definitively. Odd designs could, for example, have been placed there by the designer for artistic reasons, to show off, for some as-yet undetectable practical purpose, or for some not guessable reason. Coyne responds that in light of the evidence, either life resulted not from intelligent design, but from evolution; or the intelligent designer is a cosmic prankster who designed everything to make it look as though it had evolved.

Intelligent design proponents such as Paul Nelson avoid the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design. Behe cites Paley as his inspiration, but he differs from Paley's expectation of a perfect creation and proposes that designers do not necessarily produce the best design they can. Behe suggests that, like a parent not wanting to spoil a child with extravagant toys, the designer can have multiple motives for not giving priority to excellence in engineering. He says that the argument for imperfection critically depends on a psychoanalysis of the unidentified designer, yet the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are. This reliance on inexplicable motives of the designer makes intelligent design scientifically untestable. Asserting the need for a designer of complexity also raises the question "What designed the designer?" Intelligent design proponents say that the question is irrelevant to, or outside the scope of, intelligent design. Richard Wein counters that the unanswered questions an explanation creates must be balanced against the improvements in our understanding which the explanation provides: invoking an unexplained being to explain the origin of other beings (ourselves) is little more than question-begging, and the new question raised by the explanation is as problematic as the question which the explanation purports to answer. Richard Dawkins sees the assertion that the designer does not need to be explained not as a contribution to knowledge, but as a thought-terminating cliché. In the absence of observable, measurable evidence, the very question "What designed the designer?" leads to an infinite regression from which intelligent design proponents can only escape by resorting to religious creationism or logical contradiction. [66]

Gödel's incompleteness theorems, free will and mathematical thought

We have already investigated the problem of logic: consistency leads to contradiction, which needs revision, which puts the power of logic in question; but the objection is itself a product of logic, without which the original question could not be posed. So logic created the problem in the first place; and the cycle repeats itself.

As far as free will is concerned, we should trace the power of innovation and creativity somewhere within the context of information. We didn't really invent free will; we just exercise and use it. So our ability to act on our own could go back to some fundamental properties of information, such as propagation, preservation and recombination. We have inherited all our basic instincts from the essence of information itself!

We will refer here to some excerpts from "Gödel's incompleteness theorems, free will and mathematical thought," by Solomon Feferman:

Gödel on minds and machines. Gödel first laid out his thoughts in this direction in what is usually referred to as his 1951 Gibbs lecture, "Some basic theorems on the foundations of mathematics and their implications." The text of this lecture was never published in his lifetime, though he wrote of his intention to do so soon after delivering it. After Gödel died, it languished with a number of other important essays and lectures in his Nachlass until it was retrieved for publication in Volume III of Gödel's Collected Works.

There are essentially two parts to the Gibbs lecture, both drawing conclusions from the incompleteness theorems. The first part concerns the potentialities of mind vs. machines for the discovery of mathematical truths. The second part is an argument aimed to disprove the view that mathematics is only our own creation, and thus to support some version of platonic realism in mathematics; only the first part concerns us here. Gödel there highlighted the following dichotomy: "Either the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems..."

By a diophantine problem is meant a proposition of the language of Peano Arithmetic of a relatively simple form whose truth or falsity is to be determined; its exact description is not important to us. Gödel showed that the consistency of a formal system is equivalent to a diophantine problem, to begin with by expressing it in the form that no number codes a proof of a contradiction. According to Gödel, his dichotomy is a "mathematically established fact" which is a consequence of the incompleteness theorem. However, all that he says by way of an argument for it is the following: "[I]f the human mind were equivalent to a finite machine then objective mathematics not only would be incompletable in the sense of not being contained in any well-defined axiomatic system, but moreover there would exist absolutely unsolvable problems, where the epithet absolutely means that they would be undecidable, not just within some particular axiomatic system, but by any mathematical proof the mind can conceive."
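To fix notation, a standard rendering of Feferman's point (this formalization is mine, not part of the quoted text): the consistency of a system S can be expressed arithmetically as the statement that no number codes a proof of a contradiction, and the second incompleteness theorem says a consistent S cannot prove this statement about itself.

```latex
% Consistency expressed arithmetically ("no number codes a proof of a contradiction"):
\mathrm{Con}(S) \;\equiv\; \neg\exists n\,\mathrm{Proof}_S\!\left(n,\ \ulcorner 0=1 \urcorner\right)

% Second incompleteness theorem, for S effectively presented and sufficiently strong:
\text{if } S \nvdash 0=1, \quad \text{then } S \nvdash \mathrm{Con}(S)
```

It is this Con(S), put into diophantine form, that would be the "absolutely unsolvable problem" if the mind were equivalent to the machine S.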

By a finite machine here Gödel means a Turing machine, and by a well-defined axiomatic system he means an effectively specified formal system; as explained above, he takes these to be equivalent in the sense that the set of theorems provable in such a system is the same as the set of theorems that can be effectively enumerated by such a machine. Thus, to say that the human mind is equivalent to a finite machine even within the realm of pure mathematics is another way of saying that what the human mind can in principle demonstrate in mathematics is the same as the set of theorems of some formal system. By objective mathematics Gödel means the totality of true statements of mathematics, which includes the totality of true statements of arithmetic. Then the assertion that objective mathematics is incompletable is simply a consequence of the second incompleteness theorem. Examined more closely, Gödel's argument is that if the human mind were equivalent to a finite machine, or, what comes to the same thing, an effectively presented formal system S, then there would be a true statement that could never be humanly proved, namely Con(S). So that statement would be absolutely undecidable by the human mind, and moreover it would be equivalent to a diophantine statement. Note, however, the tacit assumption that the human mind is consistent; otherwise, it is equivalent to a formal system in a trivial way, namely one that proves all statements. Actually, Gödel apparently accepts a much stronger assumption, namely that we prove only true statements; but for his argument, only the weaker assumption is necessary...

Though he took care to formulate the possibility that the second term of the disjunction holds, there's a lot of evidence outside of the Gibbs lecture that Gödel was convinced of the antimechanist position as expressed in the first disjunct. So why didn't Gödel state that outright in the Gibbs lecture instead of the more cautious disjunction in the dichotomy? The reason was simply that he did not have an unassailable proof of the falsity of the mechanist position. Indeed, despite his views concerning the impossibility of physico-chemical explanations of human reason, he raised some caveats in a series of three footnotes to the Gibbs lecture, the second of which is as follows: "[I]t is conceivable ... that brain physiology would advance so far that it would be known with empirical certainty, 1) that the brain suffices for the explanation of all mental phenomena and is a machine in the sense of Turing; 2) that such and such is the precise anatomical structure and physiological functioning of the part of the brain which performs mathematical thinking."

Some twenty years later, Georg Kreisel made a similar point in terms of formal systems rather than Turing machines: "[I]t has been clear since Gödel's discovery of the incompleteness of formal systems that we could not have mathematical evidence for the adequacy of any formal system; but this does not refute the possibility that some quite specific system encompasses all possibilities of (correct) mathematical reasoning ... In fact the possibility is to be considered that we have some kind of nonmathematical evidence for the adequacy of such [a system]." I shall call the genuine possibility entertained by Gödel and Kreisel the mechanist's empirical defense (or escape hatch) against claims to have proved that mind exceeds mechanism on the basis of the incompleteness theorems.

Lucas on minds and machines

The first outright such claim was made by the Oxford philosopher J. R. Lucas in his article "Minds, machines and Gödel" (Lucas 1961): "Gödel's theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines." His argument is to suppose that there is a candidate machine M (called by him a cybernetical machine) that enumerates exactly the mathematical sentences that can be established to be true by the human mind, hence exactly what can be proved in a formal system for humanly provable truths. Assuming that, "[we] now construct a Gödelian formula in this formal system. This formula cannot be proved-in-the-system. Therefore the machine cannot produce the corresponding formula as being true. But we can see that the Gödelian formula is true: any rational being could follow Gödel's argument, and convince himself that the Gödelian formula, although unprovable-in-the-system, was nonetheless true ... This shows that a machine cannot be a complete and adequate model of the mind. It cannot do everything that a mind can do, since however much it can do, there is always something which it cannot do, and a mind can. Therefore we cannot hope ever to produce a machine that will be able to do all that a mind can do: we can never, not even in principle, have a mechanical model of the mind." Paul Benacerraf and Hilary Putnam soon objected to Lucas's argument on the grounds that he was assuming it is known that one's mind is consistent, since Gödel's theorem only applies to consistent formal systems. But Lucas had already addressed this as follows: "a mind, if it were really a machine, could not reach the conclusion that it was a consistent one. [But] for a mind which is not a machine no such conclusion follows. It therefore seems to me both proper and reasonable for a mind to assert its own consistency: proper, because although machines, as we might have expected, are unable to reflect fully on their own performance and powers, yet to be able to be self-conscious in this way is just what we expect of minds: and reasonable, for the reasons given. Not only can we fairly say simply that we know we are consistent, apart from our mistakes, but we must in any case assume that we are, if thought is to be possible at all; and finally we can, in a sense, decide to be consistent, in the sense that we can resolve not to tolerate inconsistencies in our thinking and speaking, and to eliminate them, if ever they should appear, by withdrawing and cancelling one limb of the contradiction."

In this last, there is a whiff of the assertion of human free will. Lucas is more explicit about the connection in the conclusion to his essay: "If the proof of the falsity of mechanism is valid, it is of the greatest consequence for the whole of philosophy. Since the time of Newton, the bogey of mechanist determinism has obsessed philosophers. If we were to be scientific, it seemed that we must look on human beings as determined automata, and not as autonomous moral agents ... But now, though many arguments against human freedom still remain, the argument from mechanism, perhaps the most compelling argument of them all, has lost its power. No longer on this count will it be incumbent on the natural philosopher to deny freedom in the name of science: no longer will the moralist feel the urge to abolish knowledge to make room for faith. We can even begin to see how there could be room for morality, without its being necessary to abolish or even to circumscribe the province of science. Our argument has set no limits to scientific enquiry: it will still be possible to investigate the working of the brain. It will still be possible to produce mechanical models of the mind. Only, now we can see that no mechanical model will be completely adequate, nor any explanations in purely mechanist terms. We can produce models and explanations, and they will be illuminating: but, however far they go, there will always remain more to be said. There is no arbitrary bound to scientific enquiry: but no scientific enquiry can ever exhaust the infinite variety of the human mind."

According to Lucas, then, FMT (the Formalist-Mechanist Thesis) is in principle false, though there can be scientific evidence for the mechanical workings of the mind to some extent or other insofar as mathematics is concerned. What his arguments do not countenance is the possibility of obtaining fully convincing empirical support for the mechanist thesis, namely that eventually all evidence points to mind being mechanical, though we cannot ever hope to supply a complete perfect description of a formal system which accounts for its workings. Moreover, such a putative system need not necessarily be consistent. Without such a perfect description of a consistent system as a model of the mind, the argument for Gödel's theorem cannot apply. Lucas, in response to such a suggestion, has tried to shift the burden to the mechanist: "The consistency of the machine is established not by the mathematical ability of the mind, but on the word of the mechanist" (Lucas 1996), a burden that the mechanist can refuse to shoulder by simply citing this empirical defense. Finally, the compatibility of FMT with a non-mechanistic account of thought in general would still leave an enormous amount of room for morality and the exercise of free will.

Despite such criticisms, Lucas has stoutly defended to the present day his case against the mechanist on Gödelian grounds. One can find on his home page most of his published rejoinders to various of these criticisms, as well as further useful references to the debate. The above quotations do not by any means exhaust the claims and arguments in his thoroughly thought-out discussions. [67]

Towards a theory of consciousness

We may say that all dead-ends and paradoxes of thought are fallacies of logic. A key feature of logic is bivalence: something must be either true or false. Gödel, however, showed that truth is something that cannot always be logically proved. In this way he showed the vulnerability of logic against physical reality. This is also an indication, if not a proof, that logic is inherent to humans, not empirical, and, strange as it may seem, that ethics as well as instincts are subsets of logic; that is, commands of right and wrong. Even our genetic code is a logical set of commands, with genes in a binary state of on and off. But what could there be, out there or inside, here and now, which could lift the paradoxes of thought and all moral obstacles? Apart from some new approaches to classical logic, during the last century a new form of logic has been established: quantum logic. According to quantum logic, something can be simultaneously true and false, without this being in contradiction with physical reality. Quantum physics would also demolish other strongholds of classical logic, such as causality, locality and determinism, thus revealing a new way of thinking, of natural intelligence and of Reason.

Is logic empirical?
According to Wikipedia, this is the title of two articles, by Hilary Putnam and Michael Dummett, dealing with the question of whether the empirical data of quantum mechanics can overturn the usual classical logic. Putnam used an analogy between the laws of logic and the laws of geometry: we once accepted Euclidean geometry (flat space) as the true geometry of space, whereas currently we use non-Euclidean geometries (curved space) for the description of space-time.

Putnam also uses the uncertainty principle (that we cannot know the position and the momentum of a quantum object simultaneously) to indicate the need for a new logic, i.e. quantum logic.

Dummett in turn indicates that a realistic vision of the world through logic preserves the law of distributivity, in the same way that it maintains the principle of consistency. For Dummett, realism and its preservation cannot explain how some things can be neither right nor wrong, and he thus considers the need for an intuitionistic logic, according to which ambiguity and metaphysics are a reality. [68]

Putnam in his article states: "We must now ask: what is the nature of the world if the proposed interpretation of quantum mechanics is the correct one? The answer is both radical and simple. Logic is as empirical as geometry. It makes as much sense to speak of physical logic as of physical geometry. We live in a world with a non-classical logic. Certain statements - just the ones we encounter in daily life - do obey classical logic, but this is so because the corresponding subspaces form a very special lattice under the inclusion relation: a so-called Boolean lattice. Quantum mechanics itself explains the approximate validity of classical logic in the large, just as non-Euclidean geometry explains the approximate validity of Euclidean geometry in the small." [69]
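Putnam's "special lattice" remark can be made concrete. In the lattice of subspaces of a Hilbert space (here just the plane), the distributive law p ∧ (q ∨ r) = (p ∧ q) ∨ (p ∧ r) fails, and this failure is the formal sense in which quantum logic departs from Boolean logic. A minimal numerical sketch (the helper functions are illustrative, not taken from any quoted source):

```python
import numpy as np

TOL = 1e-10

def join(A, B):
    """'Or' of two subspaces: the span of both sets of basis columns."""
    M = np.hstack([A, B])
    if M.shape[1] == 0:
        return M
    U, s, _ = np.linalg.svd(M)
    return U[:, :int(np.sum(s > TOL))]

def meet(A, B):
    """'And' of two subspaces: their intersection, via the null space of [A, -B]."""
    M = np.hstack([A, -B])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > TOL))
    null = Vt[rank:].T                 # solutions of A x = B y
    vecs = A @ null[:A.shape[1]]       # the common vectors themselves
    if vecs.shape[1] == 0 or np.linalg.norm(vecs) < TOL:
        return np.zeros((A.shape[0], 0))
    U, s2, _ = np.linalg.svd(vecs)
    return U[:, :int(np.sum(s2 > TOL))]

# Three distinct lines through the origin, as 2x1 basis matrices
p = np.array([[1.0], [0.0]])                 # x-axis
q = np.array([[0.0], [1.0]])                 # y-axis
r = np.array([[1.0], [1.0]]) / np.sqrt(2)    # diagonal

lhs = meet(p, join(q, r))                    # p AND (q OR r): q OR r is the whole plane
rhs = join(meet(p, q), meet(p, r))           # (p AND q) OR (p AND r): both meets are {0}
print(lhs.shape[1], rhs.shape[1])            # dimensions 1 and 0 -- distributivity fails
```

The left side is the whole line p (dimension 1), while the right side collapses to the zero subspace (dimension 0); in a Boolean lattice the two would coincide.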

But is geometry really empirical? Is it sufficient that we recognize objects and distances in space with our senses, or is it necessary that abstract thought constructs geometrical properties? Does an ant or a bat, for example, recognize any kind of geometry? Even the conclusions of quantum physics can be reproduced in the binary code of simple Boolean logic. What does this mean? That there is in fact no quantum logic. There is quantum theory, which analyzes phenomena beyond human perception and experience, and which humans try to understand using the only logic they have. One way or the other, it is a conflict between experience, which is created through the senses, and theory, which stems from the real world and from phenomena which exceed human understanding.

The conflict between experience and theory looks like a game of chess. Our eyes detect the position of the various pieces on the chessboard, while our mind simultaneously calculates the next strategic moves, using some reasonable, permissible combinations. The rules by which both the senses and the brain operate are fixed and unbreakable; otherwise we lose the game of reality. This, of course, prevents us neither from thinking originally nor from believing in ghosts. But it is important to understand that all our visions and bold theories stem from functions and structures of human nature. The answer to the question "What lies outside us?" is truly found within ourselves, together with all our theories, beliefs and intuitions.

A bit of intelligent information

Fuzzy Logic Diagram

For what reason do we believe that humans are the smartest species of all? The simplest answer is: because it was humans who asked the question. But the question remains whether there was any intelligence within the universe before humans appeared to pose the question. Again the answer is rather simple: obviously yes, otherwise we wouldn't be here to ask. The essential question is therefore whether humans are intelligent, or whether this is just a bit of a stupid question.

Perhaps it is also simple to show that intelligence has necessarily been present from the beginning of the universe. Otherwise there would be no physical laws according to which matter was organized, which in turn made us think about it. The world exists but also changes. In biology it is considered that a series of mutations in the DNA progressively leads to the genetic diversity of species. Mutations have to do with changes in the genetic information stored in the DNA. In this sense it is information which changes and recombines its constituent pieces, and this often takes place independently of the factors which activate the differentiation, or of the place where the information resides.

This general and not necessarily physical nature of information, and consequently of the evolution of species, shows us that this evolution as well as what we call species has more to do with a conceptual rather than a biological context. Of course, one can say that there is no intelligence outside living matter. However, this argument can very easily be falsified: it is sufficient to consider the case of artificial intelligence, or even more abstract non-living entities, such as light, physical fields, energy in general, and the laws of nature, which seem to be able to self-organize, to propagate, to conserve, to transform, to be created and be destroyed, just as any biological being can do.

So we arrive in a straightforward way at the last stronghold of a concept of intelligence fragmented and detached from the rest of reality: that of awareness. Could one now claim that what distinguishes people as truly intelligent beings is the fact that they have some sense of themselves? Still, however, this self-reference is actually a common feature of all natural systems. As we know, in biology, living organisms have the ability of self-regulation, thanks to which they can survive changes in the natural environment. In general, all physical systems have this ability of self-preservation, through thermodynamic processes, and in the context of conservation of energy. At the level of pure information, it is information itself which is self-organized, which can spread and vary, and all this happens at a primordial and fundamental level, long before any biological or even physical process.

We could focus on this last property of self-reference, as a necessary condition for the building of human intelligence and even more of awareness, and analyze it further. Thought, and not by chance, has inherited the same characteristics that nature has. We have seen that matter, and information in general, in the form of energy fields, natural laws, ways of thinking, and so on, has (apart from the almost magical property of spontaneous birth) the ability of self-organization. In the case of human thought, and in particular of logic, self-organization appears in the form of logical loops, i.e. repetitions of trains of thought which lead to the confirmation of events in our minds. We could talk about a process of squaring the cycles, where each next cycle or logical loop overlaps with the previous one, until our judgment considers a logical conclusion valid. Here we are not so much interested in the details of this procedure but in the procedure itself. In other words, is the condition of self-reference not only necessary but also sufficient in order to explain self-awareness?

It is true that we usually find it hard to appreciate the diversity and specificity that exists around us. And this happens not only at the human level, where, for example, a Westerner might say that all Easterners look identical. We find it much more difficult to discern the differences within other species: all dogs of a specific breed look identical (except perhaps our own). This of course has often led us to a kind of genetic and intellectual racism. But an animal is not less intelligent because it doesn't talk, nor are we because we cannot climb trees.

This is the phenomenon of intellectual divergence, as I would call it. It has to do with the diversification of information at a level equivalent to, but different from, that of biology. Information is organized in nature so as to provide coherent structures, just as it is differentiated so that any structure is different from any other which has ever existed. Moreover, when such a natural system of information comes into contact with another, processes are activated which lead to further mutual diversification. This informational kind of divergence is what often makes us recognize our own intelligence in contrast with that of other people, or our intelligence as a species against that of all other species. This, of course, is mainly a problem of misinformation.

We have talked about self-reference with regard to the organization and self-sustainability of information. Although self-reference is a term used in logic, information alone is not logical, unless of course we consider logic a special case of self-reference. Indeed, logic forms a closed system, we could say, since even though logic may wonder about the outside world, it still remains something which refers to itself. Logic, moreover, is extremely selfish. It constantly seeks confirmation by self-reference and makes comparisons at every opportunity with any other logic it may encounter. However, we should not be too harsh in our criticism of this selfish nature of logic, since it derives from the same feature of information: self-regulation. Through this capacity of setting up its bits or quanta, information is able to comprise a coherent and unique entity.

Self-reference goes far beyond the limits of biology, psychology or logic, and reaches the core of the very nature of the world. A similar notion at the most fundamental level, where from nothing begins something, is that of vibration. Everything vibrates, even the vacuum. And this vibration is nothing more than a self-sustained, time-consistent return to equilibrium, a continuous playback of the initial conditions, a form of eternal recovery of the spontaneous nature which originally gave birth to the system itself. This incessant oscillation is precisely what human thought performs, from the past to the future, having the present as a central point of reference, between right and wrong or good and evil, from the meaning of life to nothingness.

The issue here, however, is how thought manages to disengage from this infinite loop of self-repetition. In other words, how do humans pass from imitation to creativity? First of all, it is worth mentioning that we often fool ourselves that we are creative while, at the same time, we imitate. Take, for example, the writing of a poem. Essentially, it is a recombination, by imagination, of concepts, feelings and words, according to some grammatical rules.

No matter how pedestrian it may sound, this endless return of memory to its roots, even when these roots are imaginary, is the main mechanism which thought and nature itself use: superimposed trains of thought, or circles within circles, just like the ripples made by a pebble dropped in the water. What is amazing, however, is how intelligence, by this mechanism of self-reference, is able to become enriched with new knowledge, to make progress and acquire magical skills, such as intuition, foresight, and of course creativity.

Perhaps the answer is, once again, hopelessly simple: humans, using their intelligence, learn little by little how to produce their own unique thoughts. This is of course a fundamentally self-referential procedure, where each next thought refers to the previous one before making a step forward to produce the next thought, and so on. This procedure may be summed up in the following hypothesis:

Hypothesis: Every system of information (such as that of human thought) contains a proposition (or a group of propositions) which is self-referential, and which may upgrade the system to such an extent that the system finally surpasses itself.

What could this hypothesis really mean? What could the key proposition unlocking the secrets of inspiration and of genius be? Perhaps the sentence "Thought is creative" is sufficient to lead someone to creativity. Of course, social opportunities and material resources will have to assist, but at the level of pure information it is essentially a matter of self-awareness; a kind of squared perception, a strange loop, or simply self-reference. The key, therefore, for someone to become intelligent is to realize intelligence as a fundamental presupposition. From that moment onwards, one acquires the capability to manipulate one's thought at will. In other words, while we need to take an initial look in the mirror to become aware of ourselves, we then have to combine many other personalities and faces of reality in order to get a complete picture of nature and the meaning of the world.

In fact, the most shocking event for an intelligence cut off from the world is when it comes into contact with this world, realizing that reality is something very different from what it had imagined. How is it possible to have been so terribly wrong? Can we say that the phenomenon of intellectual bias, or divergence, is sufficient to explain the diversity between all species and human beings? How is it possible that the same self-referential procedure by which consciousness is constructed can lead to results that are foreign even to the very consciousness that originally produced them? I would say, in a few words, that the secret in this process is that each next mental event arises not just from the immediately preceding one, but also from the sum of all previous ones. In other words, rather than a process of the form 1, 2, 3, 4, etc., we have a process of the form 1, 2, 3, 6, 12, 24, etc., where each number, e.g. 24, results from the sum of all the previous ones, and so on. This of course leads to intellectual divergence exponentially. Obviously, there isn't any decisive stage or any unique and specific piece of information which leads to a shift from imitation to creativity. Instead, it is the sum of all information gradually leading to this shift. Human thought is constructed on endless repetitions of infinite loops with the characteristic of self-reference; it is revised by its logical paradoxes, it learns from its mistakes, it is rewarded by the right conclusions, and it is driven this way towards knowledge. Intellectual divergence is an expression of how quickly the sequences of trains of thought can depart from the initial point of reference and diversify in order to reach an independent and original intellectual level. Furthermore, the memory of the series of trains of thought emphasizes that, instead of a mere spontaneous way of thinking referring to its own origin and wondering about any possible progress, what is more important is the overall totality of trains of thought which preceded and which guide intellectual development. This is in fact the way that a fragmented and withdrawn self-referential way of thinking is driven towards the outside world and wholeness.
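The arithmetic of the two processes above can be sketched in a few lines (the function names are mine, purely illustrative):

```python
def divergent(n):
    """Creative process: each new 'thought' is the sum of ALL previous ones."""
    seq = [1, 2, 3]                 # seed terms, as in the text
    for _ in range(n - 3):
        seq.append(sum(seq))        # next term = running total so far
    return seq

def imitative(n):
    """Imitative process: each thought merely follows the preceding one."""
    return list(range(1, n + 1))

print(divergent(8))    # [1, 2, 3, 6, 12, 24, 48, 96]
print(imitative(8))    # [1, 2, 3, 4, 5, 6, 7, 8]
```

After the seed, each new term equals the running total, so the total doubles at every step: the "creative" sequence grows exponentially while the "imitative" one grows only linearly, which is the exponential divergence described above.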

A new theory of the relationship of mind and matter

Now we will discuss an article by David Bohm concerning the eternal problem of the relationship between mind and matter: The problem of the relationship of mental and physical sides of reality has long been a key one, especially in Western philosophy. Descartes gave a particularly clear formulation of the essential difficulties when he considered matter as extended substance (i.e. as occupying space) while mind was regarded as thinking substance (which clearly does not occupy space). He pointed out that in mind, there can be clear and distinct thoughts that correspond in content to distinct objects that are separated in space. But these thoughts are not in themselves actually located in separate regions of space, nor do they seem to be anything like separate material objects in other ways. It appears that the natures of mind and matter are so different that one can see no basis for a relationship between them

Descartes solved the problem by assuming that God, who created both mind and matter, is able to relate them by putting into the minds of human beings the clear and distinct thoughts that are needed to deal with matter as extended substance. It was of course also implied by Descartes that the aims contained in thoughts had somehow to be carried out by the body, even though he asserted that thought and the body had no domain in common. It would seem (as was indeed suggested at the time by Malebranche) that nothing is left but to appeal to God to arrange the desired action somehow. However, since that time, such an appeal to the action of God has generally ceased to be accepted as a valid philosophical argument. But this leaves us with no explanation of how mind and matter are related

The new approach described in this article is made possible from the side of matter by the quantum theory, which is currently the most basic theory of the nature of matter that we have. Certain philosophers of mind would criticize bringing physics into the study of mind in this way, because they assume mind to be of such a different (and perhaps emergent) quality that physics is not relevant to it (even though they also assume that mind has a material base in the brain). Such criticisms are inspired, in large part, by the belief that physics is restricted to a classical Newtonian form, which in essence ultimately reduces everything to a mechanism of some kind.

However, the quantum theory, which is now basic, implies that the particles of physics have certain primitive mind-like qualities which are not possible in terms of Newtonian concepts (though, of course, they do not have consciousness). This means that on the basis of modern physics even inanimate matter cannot be fully understood in terms of Descartes's notion that it is nothing but a substance occupying space and constituted of separate objects. Vice versa, it will be argued that mind can be seen to have always a physical aspect, though this may be very subtle. Thus, we are led to the possibility of a real relationship between the two, because they never have the absolute distinction of basic qualities, that was assumed by Descartes and by others, such as the emergent materialists.

The question of the relationship of mind and matter has already been explored to some extent in some of my earlier work in physics. In this work, which was originally aimed at understanding relativity and quantum theory on a basis common to both, I developed the notion of the enfolded or implicate order. The essential feature of this idea was that the whole universe is in some way enfolded in everything and that each thing is enfolded in the whole. From this it follows that in some way, and to some degree, everything enfolds or implicates everything, but in such a manner that under typical conditions of ordinary experience, there is a great deal of relative independence of things. The basic proposal is then that this enfoldment relationship is not merely passive or superficial. Rather, it is active and essential to what each thing is. It follows that each thing is internally related to the whole, and therefore, to everything else. The external relationships are then displayed in the unfolded or explicate order in which each thing is seen, as has already indeed been indicated, as relatively separate and extended, and related only externally to other things. The explicate order, which dominates ordinary experience as well as classical (Newtonian) physics, thus appears to stand by itself. But actually, it cannot be understood properly apart from its ground in the primary reality of the implicate order.

Because the implicate order is not static but basically dynamic in nature, in a constant process of change and development, I called its most general form the holomovement. All things found in the unfolded, explicate order emerge from the holomovement in which they are enfolded as potentialities and ultimately they fall back into it. They endure only for some time, and while they last, their existence is sustained in a constant process of unfoldment and re-enfoldment, which gives rise to their relatively stable and independent forms in the explicate order.

The above description then gives, as I have shown in more detail elsewhere, a valid intuitively graspable account of the meaning of the properties of matter, as implied by the quantum theory. It takes only a little reflection to see that a similar sort of description will apply even more directly and obviously to mind, with its constant flow of evanescent thoughts, feelings, desires, and impulses, which flow into and out of each other, and which, in a certain sense, enfold each other... Or to put it differently, the general implicate process of ordering is common both to mind and to matter. This means that ultimately mind and matter are at least closely analogous and not nearly so different as they appear on superficial examination. Therefore, it seems reasonable to go further and suggest that the implicate order may serve as a means of expressing consistently the actual relationship between mind and matter...

A brief account of the causal interpretation of the quantum theory will now be given. The first step in this interpretation is to assume that the electron, for example, actually is a particle, following a well-defined trajectory (like a planet around the sun). But it is always accompanied by a new kind of quantum field. Now, a field is something that is spread out over space. We are already familiar, for example, with the magnetic field, shown to spread throughout space by means of iron filings around a magnet or a current-carrying wire. Electric fields spreading out from a charged object are also well known. These fields combine to give electromagnetic waves, radiating out through space (e.g. radio waves).

The quantum field is, however, not simply a return to these older concepts, but it has certain qualitatively new features. These imply a radical departure from Newtonian physics. To see one of the key aspects of this departure, we begin by noting that fields can generally be represented mathematically by certain expressions that are called potentials. In physics, a potential describes a field in terms of a possibility or potentiality that is present at each point of space for giving rise to action on a particle which is at that point. What is crucial in classical (Newtonian) physics is then that the effect of this potential on a particle is always proportional to the intensity of the field. One can picture this by thinking of the effect of water waves on a bobbing cork, which gets weaker and weaker as the waves spread out. As with electric and magnetic fields, the quantum field can also be represented in terms of a potential which I call the quantum potential. But unlike what happens with electric and magnetic potentials, the quantum potential depends only on the form, and not on the intensity, of the quantum field. Therefore, even a very weak quantum field can strongly affect the particle. It is as if we had a water wave that could cause a cork to bob up with full energy, even far from the source of the wave. Such a notion is clearly fundamentally different from the older Newtonian ideas. For it implies that even distant features of the environment can strongly affect the particle...
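Bohm's point about form versus intensity can be made explicit. In the standard presentation of the causal interpretation, the wave function of a single particle of mass m is written in polar form, with amplitude R and phase S, and the particle obeys a Newton-like law with one extra term beyond the classical potential V:

```latex
\psi = R\,e^{iS/\hbar}, \qquad
m\frac{d\mathbf{v}}{dt} = -\nabla\left(V + Q\right), \qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}.
```

Because R appears in both the numerator and the denominator of Q, multiplying the field by any constant leaves Q unchanged: the quantum potential depends only on the shape of the wave, not on its amplitude. This is exactly why, in the passage above, even a very weak quantum field can strongly affect the particle.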

Or to put it differently, what is basically new here is the feature that we have called non-locality, i.e. the ability for distant parts of the environment (such as the slit system) to affect the motion of the particle in a significant way (in this case through its effect on the quantum field).

I would like to suggest that we can obtain a further understanding of this feature by proposing a new notion of active information that plays a key role in this context... Our proposal is then to extend this notion of active information to matter at the quantum level. The information in the quantum level is potentially active everywhere, but actually active only where the particle is. Such a notion suggests, however, that the electron may be much more complex than we thought. This suggestion goes against the whole tradition of physics over the past few centuries, which is committed to the assumption that as we analyze matter into smaller and smaller parts, their behavior grows simpler and simpler. Yet, assumptions of this kind need not always be correct. Thus, for example, large crowds of human beings can often exhibit a much simpler behavior than that of the individuals who make them up...

The notion of active information implies the possibility of a certain kind of wholeness of the electron with distant features of its environment. This is in certain ways similar to Bohr's notion of wholeness, but it is different in that it can be understood in terms of the concept of a particle whose motion is guided by active information. On the other hand, in Bohr's approach, there is no corresponding way to make such wholeness intelligible.

The meaning of this wholeness is, however, much more fully brought out by considering not a single electron as we have done thus far, but rather a system consisting of many such particles. Here several new concepts appear. First, two or more particles can affect each other strongly through the quantum potential even when they are separated by long distances. This is similar to what happened with the slits, but it is more general. Such non-local action at long distances has been confirmed in experiments aimed at testing whether the Bell criterion that I mentioned earlier is satisfied. Secondly, in a many-particle system, the interaction of the particles may be thought of as depending on a common pool of information belonging to the system as a whole, in a way that is not analyzable in terms of pre-assigned relationships between individual particles...
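The "common pool of information" has a direct mathematical counterpart in the causal interpretation. For a system of N particles described by a single wave function (in the same polar form as before), the quantum potential is one function defined on the whole configuration:

```latex
\Psi(\mathbf{x}_1,\ldots,\mathbf{x}_N) = R\,e^{iS/\hbar}, \qquad
Q(\mathbf{x}_1,\ldots,\mathbf{x}_N) = -\sum_{i=1}^{N}\frac{\hbar^2}{2m_i}\,\frac{\nabla_i^2 R}{R}.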

A more detailed analysis shows that the quantum potential for the whole system then constitutes a non-local connection that brings about the above described organized and orderly pattern of electrons moving together without scattering. We may here make an analogy to a ballet dance, in which all the dancers, guided by a common pool of information in the form of a score, are able to move together in a similar organized and orderly way, to go around an obstacle and reform their pattern of movement.

If the basic behavior of matter involves such features as wholeness, non-locality and organization of movement through common pools of information, how then do we account for ordinary large scale experience, in which we find no such features? It can be shown that at higher temperatures, the quantum potential tends to take the form of independent parts, which implies that the particles move with a corresponding independence. It is as if, instead of engaging in a ballet dance, people were moving independently, each with his own separate pool of information. They would then constitute a crowd, in which the organized movement of the ballet has broken up.

It follows from the above that the possibilities for wholeness in the quantum theory have an objective significance. This is in contrast to what happens in classical physics, which must treat a whole as merely a convenient way of thinking about what is considered to be in reality nothing but a collection of independent parts in a mechanical kind of interaction. On the other hand, in the quantum theory, the ballet-like behavior in superconductivity, for example, is clearly more like that of an organism than like that of a mechanism. Indeed, going further, the whole notion of active information suggests a rudimentary mind-like behavior of matter, for an essential quality of mind is just the activity of form, rather than of substance. Thus, for example, when we read a printed page, we do not assimilate the substance of the paper, but only the forms of the letters, and it is these forms which give rise to an information content in the reader which is manifested actively in his or her subsequent activities. A similar mind-like quality of matter reveals itself strongly at the quantum level, in the sense that the form of the wave function manifests itself in the movements of the particles. This quality does not, however, appear to a significant extent at the level at which classical physics is a valid approximation.

Let us now approach the question from the side of mind. We may begin by considering briefly some aspects of the nature of thought. Now, a major part of the significance of thought is just the activity to which a given structure of information may give rise. We may easily verify this in our subjective experience. For example, suppose that on a dark night, we encounter some shadows. If we have information that there may be assailants in the neighborhood, this may give rise immediately to a sense of danger, with a whole range of possible activities (fight, flight, etc.). This is not merely a mental process, but includes an involuntary and essentially unconscious process of hormones, heart-beat, and neurochemicals of various kinds, as well as physical tensions and movements. However, if we look again and see that it is only a shadow that confronts us, this thought has a calming effect, and all the activity described above ceases. Such a response to information is extremely common (e.g. information that X is a friend or an enemy, good or bad, etc.). More generally, with mind, information is thus seen to be active in all these ways, physically, chemically, electrically, etc.

Such activity is evidently similar to that which was described in connection with automatic pilots, radios, computers, DNA, and quantum processes in elementary particles such as electrons. At first sight, however, there may still seem to be a significant difference between these two cases. Thus, in our subjective experience action can, in some cases at least, be mediated by reflection in conscious thought, whereas in the various examples of activity of objective information given here, this action is immediate. But actually, even if this happens, the difference is not as great as might appear. For such reflection follows on the suspension of physical action. This gives rise to a train of thought. However, both the suspension of physical action and the resulting train of thought follow immediately from a further kind of active information implying the need to do this.

It seems clear from all this that at least in the context of the processes of thought, there is a kind of active information that is simultaneously physical and mental in nature. Active information can thus serve as a kind of link or bridge between these two sides of reality as a whole. These two sides are inseparable, in the sense that information contained in thought, which we feel to be on the mental side, is at the same time a related neurophysiological, chemical, and physical activity (which is clearly what is meant by the material side of this thought).

We have however up to this point considered only a small part of the significance of thought. Thus, our thoughts may contain a whole range of information content of different kinds. This may in turn be surveyed by a higher level of mental activity, as if it were a material object at which one were looking. Out of this may emerge a yet more subtle level of information, whose meaning is an activity that is able to organize the original set of information into a greater whole. But even more subtle information of this kind can, in turn, be surveyed by a yet more subtle level of mental activity, and at least in principle this can go on indefinitely. Each of these levels may then be seen from the material side. From the mental side, it is a potentially active information content. But from the material side, it is an actual activity that operates to organize the less subtle levels, and the latter serve as the material on which such operation takes place. Thus, at each level, information is the link or bridge between the two sides...

One may then ask: what is the relationship of these two processes? The answer that I want to propose here is that there are not two processes. Rather, I would suggest that both are essentially the same. This means that that which we experience as mind, in its movement through various levels of subtlety, will, in a natural way, ultimately move the body by reaching the level of the quantum potential and of the dance of the particles. There is no unbridgeable gap or barrier between any of these levels. Rather, at each stage some kind of information is the bridge. This implies that the quantum potential acting on atomic particles, for example, represents only one stage in the process.

The content of our own consciousness is then some part of this over-all process. It is thus implied that in some sense a rudimentary mind-like quality is present even at the level of particle physics, and that as we go to subtler levels, this mind-like quality becomes stronger and more developed. Each kind and level of mind may have a relative autonomy and stability. One may then describe the essential mode of relationship of all these as participation, recalling that this word has two basic meanings, to partake of, and to take part in. Through enfoldment, each relatively autonomous kind and level of mind to one degree or another partakes of the whole. Through this it partakes of all the others in its gathering of information. And through the activity of this information, it similarly takes part in the whole and in every part. It is in this sort of activity that the content of the more subtle and implicate levels is unfolded (e.g. as the movement of the particle unfolds the meaning of the information that is implicit in the quantum field and as the movement of the body unfolds what is implicit in subtler levels of thought, feeling, etc.).

For the human being, all of this implies a thoroughgoing wholeness, in which mental and physical sides participate very closely in each other. Likewise, intellect, emotion, and the whole state of the body are in a similar flux of fundamental participation. Thus, there is no real division between mind and matter, psyche and soma. The common term psychosomatic is in this way seen to be misleading, as it suggests the Cartesian notion of two distinct substances in some kind of interaction (if not through the action of God, then perhaps in some other way).

Extending this view, we see that each human being similarly participates in an inseparable way in society and in the planet as a whole. What may be suggested further is that such participation goes on to a greater collective mind, and perhaps ultimately to some yet more comprehensive mind in principle capable of going indefinitely beyond even the human species as a whole. [70]

Beyond the principle of analogy

The notion of analogy has been used since ancient times, in various contexts and in different ways. This fact shows the vastness of the notion. After all, the principle of analogy is a principle of logic. The word itself reveals its meaning: logos, a ratio, part, fraction, or relation between two or more things. The meaning of logos is not only logic but cause as well ("In the beginning was the Logos"), in which case the principle underlines the path we have to take: from the Beginning, by using some, let's say, accelerating pace, through analogies and comparisons, to our final result or goal. The principle lies within the foundations of human thought, and grows with the process of reasoning, so that by dividing and conquering the subject of our study we can reform it like a garment to fit perfectly the shape of our logic.

Up until this point, we have followed an analysis in accordance with the principles of our own logic, because it is important to realize that the laws of nature are functions of our intelligence. However, we have concealed, consciously or unconsciously, a deeper reality of the relation between us and the world: the universe and what we think of the universe are not necessarily the same thing. This is the basic assumption of the so-called anthropic principle. This principle states that the universe was programmed to produce intelligent life at some stage of its history, since all universal constants seem to be suitable for such a purpose (fine tuning). Still, this sort of intelligent design lies within the sphere of our own intelligence. The fact that we exist and that we are intelligent doesn't mean that the universe had such an intention from the beginning. Even the notion of intention itself is understood by means of our own causality. We can make here the following statement regarding the principle of analogy:

We can understand the world as long as the principle of analogy is valid in nature. But if it is just a principle of human logic, then, by applying it to nature, we will lead ourselves to paradoxes and absurdities.

We may have the ability to understand the world because nature works the same way as our mind does. If we belong to nature, contained in it, can the process of dividing, comparing, and rebuilding offer us a sufficient or a complete picture of nature and ourselves, within the context of the anthropic principle and the principle of analogy? If we really are part of nature, then it may be a matter of time before we earn a bigger share of the world, until we have it perfectly understood. The degree of this perfection, however, depends on whether our logic can understand everything by itself. The answer to the previous question given by Gödel, with his incompleteness theorem, is negative. Because, according to this theorem, our assumptions about the world (our logical foundation) are arbitrary, so another assumption is needed to justify the first one, then another one, and so on ad infinitum. This infinite regress of our thought, during the process of logically proving its major, or even minor, problems, would result in losing ourselves and our consciousness on an endless route without return. Luckily enough, our principle presupposes the reversibility of logical processes, by virtue of the symmetry between the parts under comparison. In other words, if the steps we make on the way back are small and steady, then we can preserve the memory of our journey, without losing any information necessary for this trip.

The reversibility of natural processes and the notion of symmetry are the fundamental assumptions guaranteeing that when we get back we will find things, more or less, as we left them. However, a route, let's say, from a point A to a point B, is never the same as the route from B to A. For example, it can be a slope or, in any case, time will have elapsed. In fact, when we return we find a new point C. How can we resolve this paradox? That, while in nature there are few (if any, because of the ruthless second law of thermodynamics) reversible phenomena and everything, sooner or later, decays, logical processes, on the contrary, return intact, without having lost any of their rigor or their memory of the journey and of its cause? Have we misunderstood nature (for example, time may not be a one-way process), or is it just our logic failing to conceive its own deeper essence? The principle of analogy assures that as we are, so is the world, and the opposite. Even if we make up a small fraction of the world, this world lies inside us, minuscule, dim, but as a totality, so that we may find a way to express it in all its aspects, with an appropriate scale of magnification. This possibility, even if it doesn't promise full accuracy, nevertheless reassures us that nature functions the same way as our logic does. Besides, our logic could be nothing else but a part of nature.

Therefore, I would suggest, in the context of this principle, that the personal expansion of human genius corresponds to the evolution of intelligence in the universe. The symmetries we recognize in nature, for example the shape of a flower, reappear in human consciousness, which, by obtaining in this way the necessary tools for comparison, is able to restore its relation with the world. Reversibility ensures the preservation of information, hence of our memories, so that man, returning to his roots, can ask questions about his future purposes. Besides, our destination is at the same time the fate of the human universe, which, in turn, is a proper subset of the real universe. Thus, the observer realizes that what he observes exists simultaneously outside and within his consciousness, while he is a fundamental part of the whole experiment. This sort of entanglement between us and the cosmos, between an observer and what he observes, has become a common notion in modern physics, which regards itself as part of the phenomena it studies. As a result of this realization, the entangled states of systems have been recognized. Quantum entanglement represents a natural phenomenon as well as a state of the human mind, and has also set the limits between the old and the new age of thought. Naïve realism falters, since at the moment of entanglement the whole universe collapses above, or inside, the head of the observer. Causality is abolished, since quantum entanglement takes place instantaneously, and it is replaced by an idea of the world in which space and time may never have existed outside our consciousness. And if the participation of the observer in the cosmic being and becoming is the missing link between man and the universe, then the principle of analogy turns into an ultimate form of symmetry, into an identity of what we want from the world and what the world really expects from us. [71]

Global consciousness
The issue of the so-called paranormal phenomena has existed since ancient times. The timelessness of this subject shows that something seems to be happening, although even today we are not in a position to explain what. Telepathy, psychokinesis (PK), psychometry, foresight: all are vague terms which together make up the notion of a sixth sense, the importance and the nature of which seems for some reason to be incomprehensible. Do all these phenomena amount to just a logical paradox or a human illusion, or is it exactly this illusion that hides a new world which we are barely able to perceive?

Modern science is still investigating these phenomena, trying to understand their meaning. Relatively recently, a well-organized attempt has been made by a team of scientists on a global scale, with quite remarkable results. It is the Global Consciousness Project, which began in 1998. The group uses the so-called RNGs (Random Number Generators), i.e. machines which produce sets of numbers at random. Then, this random sequence of numbers is compared with another one which is produced while subjects are trying to influence the machine. On a broader scale, the randomly generated sets of numbers are compared with sets of numbers produced by the machines at certain times when social events of particular importance take place. In a text that this group published, titled Correlations of continuous random data, the following is stated: The interaction of consciousness and physical systems is most often discussed in theoretical terms, usually with reference to the epistemological and ontological challenges of quantum theory. Less well known is a growing literature reporting experiments that examine the mind-matter relationship empirically. Here we describe data from a global network of physical random number generators that shows unexpected structure apparently associated with major world events. Arbitrary samples from the continuous, four-year data archive meet rigorous criteria for randomness, but pre-specified samples corresponding to events of broad regional or global importance show significant departures of distribution parameters from expectation. These deviations also correlate with a quantitative index of daily news intensity. Focused analyses of data recorded on September 11, 2001, show departures from random expectation in several statistics. Contextual analyses indicate that these cannot be attributed to identifiable physical interactions and may be attributable to some unidentified interaction associated with human consciousness.

A typical example is the correlation of randomly generated numbers with the numeric sequences which were generated by the machines on 11 September 2001, when the attack on the World Trade Center occurred. The research team found that the series produced by the machines deviated significantly from a random sequence. This is obviously an amazing conclusion, since, if the processing of the data and the theoretical prediction of randomness are correct, the results indicate that humans, at an individual or collective level, can with their consciousness affect physical systems, such as RNG machines, and that human thoughts may produce an unknown form of force, which not only affects other minds or objects, but also spacetime itself: Quantum indeterminate electronic random number generators (RNG) are designed to produce random sequences with near maximal entropy. Yet under certain circumstances such devices have shown surprising departures from theoretical expectations. There are controversial but substantial claims that measurable deviations of statistical distribution parameters may be correlated, for unknown reasons, with conditions of importance to humans. To address this putative correlation in a rigorous way, a long-term, international collaboration was instituted to collect standardized data continuously from a globally distributed array of RNGs. First deployed in August 1998, the network uses independent physical random sources designed for serial computer interfacing and employs secure data-collection and networking software. Data from the remote devices are collected through the Internet and stored in an archival database... The data archive, continuously updated, is freely accessible through the Internet and can be investigated for correlations with data from many disciplines: earth sciences, meteorology, astronomy, economics, and other metrics of natural or human activity...

On September 11 the global network of 37 online RNGs displayed strong deviations in several statistics. The registered prediction for this event designated an examination period from 08:35 to 12:45 (EDT local time)... However, a trend exhibiting substantial excess in the network variance measure began early in the morning and continued for more than two days, until roughly noon on September 13. A cumulative deviation plot shows a notable departure from the expectation for this statistic... The trend beginning at the time of the World Trade Center attack and maximizing 51 hours later is statistically unlikely, as shown by iterative resampling analysis...

In summary, we find evidence for a small, but replicable effect on data from a global network of random generators that is correlated with designated periods of intense collective human activity or engagement, but not with any physical sources of influence... We attribute this result to a real correlation that should be detectable in future replications as well as in analyses using correlates independent from the project prediction registry...

The post hoc analyses presented here indicate possible extensions of this research. For example, the September 11 results imply that there is a correlation between the intensity or impact of an event and the strength of deviations present in the data. The September 11 event is arguably the most extreme in the database in terms of its social, psychological, emotional, and global impact. As the analysis has shown, it also exhibits the largest and most consistent deviations in the database on the statistical measures we have investigated. It will be important to develop strategies to test this conjecture over the full set of replications and in newly acquired data. The September 11 analysis also suggests that the effect detected in the formal replications is distributed over the database and is not isolated to the prediction periods. The statistical significance of these excursions is limited to roughly three normal deviations. Thus, as isolated, post hoc analyses, none of these individually would be sufficient to conclude a causal or other direct link between the September 11 events and the measured deviations. In light of the formal result, however, these analyses do suggest that independent metrics spanning the database and consistent with the experimental hypothesis may reveal other correlations with our statistical measures. This suggestion is supported by the news index analysis in which deviations in the RNG data correlate with an objective measure of news intensity. It is likely that more sophisticated metrics with optimized statistical power could provide independent verification of the results generated by the ongoing experiment as well as the capability to probe secondary correlates in the data. [72]
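The statistics quoted above (per-second z-scores combined across the network, squared and cumulated so that random data wanders near zero) can be sketched in a few lines. The bit-length, device count, and duration below are illustrative assumptions, not the project's actual parameters; the point is only to show what a cumulative-deviation trace of genuinely random data looks like under the null hypothesis.

```python
import math
import random

def trial_z(rng, bits=200):
    """One simulated RNG trial: count the 1-bits in a random word.
    Under the null hypothesis the count has mean bits/2 and standard
    deviation sqrt(bits/4), which yields a z-score."""
    ones = bin(rng.getrandbits(bits)).count("1")
    return (ones - bits / 2) / math.sqrt(bits / 4)

def cumulative_netvar(n_devices=37, n_seconds=200, seed=1):
    """Per second: combine the device z-scores into a Stouffer Z,
    square it, and cumulate (Z^2 - 1).  Since E[Z^2] = 1 for random
    data, the trace is a random walk around zero; a persistent climb
    is the kind of 'departure' the project reports."""
    rng = random.Random(seed)
    cum, trace = 0.0, []
    for _ in range(n_seconds):
        zs = [trial_z(rng) for _ in range(n_devices)]
        stouffer = sum(zs) / math.sqrt(n_devices)
        cum += stouffer ** 2 - 1.0
        trace.append(cum)
    return trace

trace = cumulative_netvar()
print(round(trace[-1], 2))  # final cumulative deviation
```

With truly random sources the final value stays within a few multiples of sqrt(2 * n_seconds); the project's claim is precisely that around designated events the real network's trace escapes such bounds.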

The relationship between consciousness and the rest of the world, as exotic as it may sound, is a matter with which modern quantum theory is dealing. The phenomenon of quantum entanglement refers to a mysterious kind of interaction not only between microscopic physical systems, but also between the observer and the observed physical system, as well as the entire experimental setup he uses to study the system. Still, what remains to be answered is of what nature this force of consciousness is, how it is produced, and how it affects physical systems. Also, we need to specify what we mean when we say consciousness, and how, or why, we distinguish consciousness from the rest of the world. For, apart from the fact that the mathematics we use to determine whether a phenomenon is random or not may be insufficient, the very concepts of chance, or of the relationship between man and nature, may be ambiguous and limited. [72]

Towards an existential singularity

The term technological singularity refers to the hypothetical appearance in the future of a higher-than-human intelligence. Since the capacities of such an intelligence would be difficult to understand from the point of view of current human intelligence, the event of a technological singularity may be considered an intellectual event horizon, beyond which the future is almost impossible to see. However, the advocates of this theory expect such an event to cause a mental Big Bang, at which point super-intelligence will create increasingly powerful brains. According to Wikipedia, the first use of the term singularity in this context was made by von Neumann. In the mid-1950s he spoke of ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue. The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain. [73] Ray Kurzweil also wrote a book called The Singularity Is Near. At the beginning of this book the following is stated: At the onset of the twenty-first century, humanity stands on the verge of the most transforming and thrilling period in its history. It will be an era in which the very nature of what it means to be human will be both enriched and challenged, as our species breaks the shackles of its genetic legacy and achieves inconceivable heights of intelligence, material progress, and longevity.

For over three decades, the great inventor and futurist Ray Kurzweil has been one of the most respected and provocative advocates of the role of technology in our future. In his classic The Age of Spiritual Machines, he presented the daring argument that with the ever-accelerating rate of technological change, computers would rival the full range of human intelligence at its best. Now, in The Singularity Is Near, he examines the next step in this inexorable evolutionary process: the union of human and machine, in which the knowledge and skills embedded in our brains will be combined with the vastly greater capacity, speed, and knowledge-sharing ability of our own creations.

This merging is the essence of the Singularity, an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today: the dawning of a new civilization that will enable us to transcend our biological limitations and amplify our creativity. In this new world, there will be no clear distinction between human and machine, or between real reality and virtual reality. We will be able to assume different bodies and take on a range of personae at will. In practical terms, human aging and illness will be reversed; pollution will be stopped; world hunger and poverty will be solved. Nanotechnology will make it possible to create virtually any physical product using inexpensive information processes and will ultimately turn even death into a soluble problem.

While the social and philosophical ramifications of these changes will be profound, and the threats they pose considerable, The Singularity Is Near maintains a radically optimistic view of the future course of human development. As such, it offers a view of the coming age that is both a dramatic culmination of centuries of technological ingenuity and a genuinely inspiring vision of our ultimate destiny. [74]

The six epochs Among other things, Kurzweil uses the following plan to show the stages of human development:

The six epochs, according to Kurzweil

Epoch One: Physics and Chemistry Recent theories of physics concerning multiple universes speculate that new universes are created on a regular basis, each with its own unique rules, but that most of these either die out quickly or else continue without the evolution of any interesting patterns (such as Earth-based biology has created) because their rules do not support the evolution of increasingly complex forms. It's hard to imagine how we could test these theories of evolution applied to early cosmology, but it's clear that the physical laws of our universe are precisely what they need to be to allow for the evolution of increasing levels of order and complexity.

Epoch Two: Biology and DNA In the second epoch, starting several billion years ago, carbon-based compounds became more and more intricate until complex aggregations of molecules formed self-replicating mechanisms, and life originated. Ultimately, biological systems evolved a precise digital mechanism (DNA) to store information describing a larger society of molecules. This molecule and its supporting machinery of codons and ribosomes enabled a record to be kept of the evolutionary experiments of this second epoch.

Epoch Three: Brains The third epoch started with the ability of early animals to recognize patterns, which still accounts for the vast majority of the activity in our brains. Ultimately, our own species evolved the ability to create abstract mental models of the world we experience and to contemplate the rational implications of these models. We have the ability to redesign the world in our own minds and to put these ideas into action.

Epoch Four: Technology Combining the endowment of rational and abstract thought with our opposable thumb, our species ushered in the fourth epoch and the next level of indirection: the evolution of human-created technology. This started out with simple mechanisms and developed into elaborate automata (automated mechanical machines). Ultimately, with sophisticated computational and communication devices, technology was itself capable of sensing, storing, and evaluating elaborate patterns of information. To compare the rate of progress of the biological evolution of intelligence to that of technological evolution, consider that the most advanced mammals have added about one cubic inch of brain matter every hundred thousand years, whereas we are roughly doubling the computational capacity of computers every year.

Epoch Five: The Merger of Technology with Human Intelligence Looking ahead several decades, the Singularity will begin with the fifth epoch. It will result from the merger of the vast knowledge embedded in our own brains with the vastly greater capacity, speed, and knowledge-sharing ability of our technology. The fifth epoch will enable our human-machine civilization to transcend the human brain's limitations of a mere hundred trillion extremely slow connections. The Singularity will allow us to overcome age-old human problems and vastly amplify human creativity. We will preserve and enhance the intelligence that evolution has bestowed on us while overcoming the profound limitations of biological evolution. But the Singularity will also amplify the ability to act on our destructive inclinations, so its full story has not yet been written.

Epoch Six: The Universe Wakes Up In the aftermath of the Singularity, intelligence, derived from its biological origins in human brains and its technological origins in human ingenuity, will begin to saturate the matter and energy in its midst: the "dumb" matter and mechanisms of the universe will be transformed into exquisitely sublime forms of intelligence, which will constitute the sixth epoch in the evolution of patterns of information. This is the ultimate destiny of the Singularity and of the universe.

Kardashev's scale

Figure of a Dyson swarm surrounding a star

Kurzweil's analysis addresses human evolution mainly from the point of view of biology and information theory. A different approach, from the point of view of physics and cosmology, is that of Kardashev (1964), who introduced the eponymous scale to measure the technological level of a civilization. Although the measures are arbitrary, the scale quantifies the level of development of a civilization according to the amount of power that civilization consumes. The scale is as follows:

Type I civilization (They have fully utilized the energy resources of their own planet).

A Type I civilization extracts its energy, information, and raw materials from fusion power, hydrogen, and other high-density renewable resources; it is capable of interplanetary spaceflight, interplanetary communication, mega-scale engineering, colonization, medical and technological singularity, planetary engineering, government, trade and defense, and stellar system-scale influence; but it is still vulnerable to extinction.

Type II civilization (They have fully utilized the energy resources of their own solar system, the sun included).

A Type II civilization extracts fusion energy, information, and raw materials from multiple solar systems; it's capable of evolutionary intervention, interstellar travel, interstellar communication, stellar engineering, and star cluster-scale influence; the resulting proliferation and diversification would theoretically negate the probability of extinction.

Type III civilization (They have fully utilized the energy resources of their own galaxy).

A Type III civilization extracts fusion energy, information, and raw materials from all possible star clusters; it's capable of intergalactic travel via wormholes, intergalactic communication, galactic engineering, and galaxy-scale influence.

Type IV civilization (They have fully utilized the energy resources of their own universe).

A Type IV civilization extracts energy, information, and raw materials from all possible galaxies; it's nearly immortal and omnipotent, possessing the ability of instantaneous matter-energy transformation and teleportation, as well as the ability of time travel and universal-scale influence; in fiction, these civilizations may be perceived as omnipresent/omnipotent gods.

Type V civilization and beyond: Such hypothetical civilizations have either transcended their universe of origin or arisen within a multiverse or other higher-order membrane of existence, and are capable of universe-scale manipulation of individual discrete universes from an external frame of reference.

Human civilization is somewhere below Type I, as it is able to exploit only a fraction of the energy of its planet. This is why human civilization has also been called a Type 0 civilization. It is estimated that we are somewhere at 0.72 on the scale, and that we will reach Type I in about 100-200 years, Type II in a few thousand years, and Type III in 100,000 to a million years from now. [75]
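The 0.72 figure presumably comes from Carl Sagan's logarithmic interpolation of Kardashev's scale (an assumption on my part; the text does not name the formula), under which each whole type corresponds to consuming 10^10 times more power. A minimal sketch:

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Carl Sagan's logarithmic interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts. Type I is pegged at
    10^16 W, Type II at 10^26 W, and Type III at 10^36 W, so each
    whole type corresponds to a factor of 10^10 in power consumption."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power consumption is on the order of 2 x 10^13 W
# (a round assumed figure), which lands close to the 0.7 quoted above.
print(round(kardashev_rating(2e13), 2))   # roughly 0.73
print(round(kardashev_rating(1e16), 2))   # Type I threshold: 1.0
```

On this formula a civilization must increase its power use about 26% per rating point of 0.01, which is why the climb from 0.72 to Type I is measured in centuries rather than decades.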

The seventh epoch (The new Creation) All previous paradigms may be summarized in the following table, with one extra addition of my own:

In the later epochs humans will probably be so different from modern ones as to be unrecognizable. They may have abolished their material or carnal body and become more like energy spheres or spiritual entities. In any case, what will prevail is not the human type as a species but species of information, advanced intelligent forms that may be very different from us or from anything we have ever imagined. In this context, the danger of self-destruction has more to do with information itself than with a simple life form: this is the moment where the end and the beginning come face to face, where meeting the creator means performing the ultimate self-referential loop, at the end of one universal circle and at the beginning of a new one. This is something quite unimaginable.

Paradigm shift

The duck-rabbit optical illusion

As Wikipedia says, a paradigm shift (or revolutionary science) is, according to Thomas Kuhn in his influential book The Structure of Scientific Revolutions (1962), a change in the basic assumptions, or paradigms, within the ruling theory of science. It stands in contrast to his idea of normal science. According to Kuhn, "A paradigm is what members of a scientific community, and they alone, share." [76]

This means that, in a given historical period, the scientific community accepts one theory or interpretation of a natural phenomenon over all other possible interpretations of the same phenomenon. Kuhn demonstrates this using the duck-rabbit optical illusion, a figure one sees either as a duck or as a rabbit. A paradigm shift, according to Kuhn, takes place when science is faced with an anomaly, or singularity, which cannot be explained by the existing paradigm. When several such anomalies accumulate, a scientific crisis occurs, during which new ideas are tested and a mental battle takes place between the supporters of the new paradigm and those of the old one. Finally, a new paradigm is established.

Some examples are as follows:

- The transition in cosmology from a Ptolemaic to a Copernican system.

- The acceptance of the theory of biogenesis (that all life comes from life), as opposed to the theory of spontaneous generation; the transition began in the 17th century and was not completed until the 19th century, with Pasteur.

- Darwin's theory of evolution, which replaced Lamarck's theory that an organism can pass on characteristics acquired during its lifetime to its offspring.

- The acceptance of Mendelian inheritance, as opposed to pangenesis, in the early 20th century.

- Newton's law of universal gravitation, which explained the way heavenly bodies move.

- The replacement of the ether by Einstein's theory of relativity.

- Quantum theory, by which classical determinism was abandoned.

Moore's law According to Wikipedia, Moore's law describes a long-term trend in the history of computing. In particular, the number of transistors that can be inexpensively placed on an integrated circuit doubles approximately every two years. This trend has continued for more than half a century and is expected to continue until 2020 or later. Moore's law has also been extended to other computer components, such as processing speed, memory capacity, or even the number of screen pixels. Kurzweil, among other things, proposed an extension of Moore's law. He says that an analysis of the history of technology shows that technological change is exponential, contrary to the common intuitive linear view. So we will experience not 100 years of progress in the 21st century, but more than 20,000 years' worth... Within a few decades, artificial intelligence will surpass human intelligence, resulting in the singularity: a technological paradigm shift so fast and deep that it will represent a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, humans immortal in terms of software, and levels of intelligence that will expand outward in the universe at the speed of light.
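Kurzweil's contrast between linear intuition and exponential reality is simple arithmetic: a quantity that doubles every fixed period T grows by a factor of 2^(t/T) after t units of time. A minimal sketch, assuming the commonly quoted two-year doubling period (the function name and parameters are illustrative, not from the text):

```python
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Factor by which a quantity grows if it doubles every fixed
    period: 2 ** (t / T)."""
    return 2 ** (years / doubling_period_years)

# Linear intuition says 50 years of progress is about 50x one year's worth;
# doubling every two years instead gives a factor of 2^25:
print(f"{growth_factor(50):,.0f}")  # over 33 million
```

This is why exponential forecasts look absurdly conservative at first and absurdly aggressive later: the same rule that gives a modest factor of 2 over two years gives a factor of tens of millions over a working lifetime.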

Each time a technology approaches an obstacle, according to Kurzweil, a new technology will be invented to overcome that obstacle. He even predicts that such paradigm shifts will become increasingly common, leading to an increasingly rapid technological progress. He believes that a technological singularity will take place before the end of the 21st century, in 2045.

The limits of Moore's law Perhaps Kurzweil's view is over-optimistic, because if it were true then most of us would likely live to see a time when, for example, human life expectancy progresses so fast that we would never die. But in fact development in any field of science or technology cannot remain (positively) either linear or exponential forever.

Moore himself stated: "(Exponential growth) can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens... In terms of size (of transistors) you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far - but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit." Some, however, place the limits of Moore's law much further in the future. Lawrence Krauss and Glenn Starkman put an upper limit of 600 years from now, based on an estimate of the total data-processing capacity of any possible system within the universe. Consequently, it seems that Moore's law may remain in force for a very long time. [77]

Dirac's delta function

Although we have already made a step from linear to exponential growth in describing human technological progress, there is at least one more step toward the truth of our destination. Evolution is neither stable nor unidirectional. Our history is characterized by periods both of growth and of recession. Furthermore, technology does not fully cover the meaning of progress. For the moment, however, we will focus only on the concept of singularity, in order to understand that a singularity can only be recognized if afterwards there is something more which can be compared with what already existed. The best way, mathematically at least, of describing a singularity is the famous Dirac delta function. As shown in the figure, it is a distribution which is zero everywhere and infinite at a single point, at the top, where the singularity is found. The delta function can be derived from a Gaussian distribution that is compressed, while keeping constant the area under the curve, until it becomes a delta function.
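The limiting process just described can be written out explicitly: the delta function is the limit of Gaussians of shrinking width whose area stays fixed at one.

```latex
\delta(x) \;=\; \lim_{\sigma \to 0^{+}} \frac{1}{\sigma\sqrt{2\pi}}\,
  e^{-x^{2}/(2\sigma^{2})},
\qquad
\int_{-\infty}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\,
  e^{-x^{2}/(2\sigma^{2})}\,dx \;=\; 1
\quad \text{for every } \sigma > 0 .
```

As the width σ shrinks, the peak height 1/(σ√(2π)) diverges while the curve narrows, which is exactly the "pressed" Gaussian of constant area described above.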

This function has many applications. For example, it may represent the creation of a particle (a singularity point) in a field. In our case, it could indicate the point of technological singularity. However, we may use instead the smoother Gaussian distribution, with its well-known bell shape. In this case, the singularity is again found at the top of the distribution. But the difference, in comparison with the exponential function, is that after the anomaly, or singularity, the values of the function progressively fall back to zero, unless something happens so that the parameters change. Such a function is much more realistic, and indicates, on the one hand, that the higher one ascends the more likely one is to fall and, on the other hand, that technology alone is not enough to cover all the aspects of progress. In other words, an increase in the number of transistors in a processor will never, by itself, make a computer able to understand or write a poem, for example. The question, of course, of what the function of intelligence called inspiration or creativity is, and where this function is located, remains unanswered. Any attempt to answer it, however, leads us to a way of thinking which has more to do with what we commonly call instinct, soul, or spirit, than with the processing capacity of intelligence. In the final analysis, it is the very structure of human nature which brings us on track and in harmony with the deepest essence of the world, something that a robot cannot understand or achieve.
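The contrast between the two growth pictures can be made concrete numerically: an exponential increases monotonically, while a bell curve rises to its peak (the singularity) and then decays back toward zero. A toy sketch; the rate, peak time, and width are arbitrary illustrative parameters, not fitted to anything:

```python
import math

def exponential(t: float, rate: float = 0.5) -> float:
    """Unbounded exponential growth."""
    return math.exp(rate * t)

def gaussian(t: float, peak_time: float = 5.0, width: float = 2.0) -> float:
    """Bell-shaped curve: rises, peaks at peak_time, then falls back to zero."""
    return math.exp(-((t - peak_time) ** 2) / (2 * width ** 2))

exp_vals = [exponential(t) for t in range(11)]
bell_vals = [gaussian(t) for t in range(11)]

# the exponential only ever grows...
assert all(a < b for a, b in zip(exp_vals, exp_vals[1:]))
# ...while the bell curve peaks at t = 5 and then declines
assert bell_vals.index(max(bell_vals)) == 5
assert bell_vals[10] < bell_vals[5]
```

The point of the comparison is qualitative: on a bell-shaped trajectory, the "singularity" is a maximum that is followed by decline rather than by indefinite acceleration.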

Existential singularity As regards the, let us say, over-optimistic statements of Kurzweil, as well as other similar views about the replacement of the human brain by bio-machines and artificial intelligence, we can say not only that technological progress cannot remain indefinitely exponential, but also that the improvement of artificial intelligence and the progress of robotics and biotechnology do not guarantee the replacement of the human mind. Technology, in general, does not necessarily promote culture, nor is it culture itself. This is shown if we take a look back in history, at the last great technological revolution, i.e. the industrial revolution. The promise then was the abolition of slavery and of manual labor thanks to the automation offered by the machines. However, not only has exploitative human labor not been abolished, but the demand for cheap labor has instead increased, in order to meet the needs of the wealthy technocrats. History therefore indicates that it is not the tools that make the difference but the hands, the brains, and the intentions of those possessing the instruments. Moreover, recalling Gödel's incompleteness theorems, we should be somewhat suspicious about the alleged unlimited capacities of a machine, since Gödel showed that a machine cannot do everything with respect to intelligence and computing power. Let alone incorporating into a machine the mechanism of inspiration, a program that would give the machine an original way of processing data. Human thought is in fact a product of evolution, and is enriched by feelings and, more generally, mental states, which a machine is not able to possess. At this point, I would like to set out seven stages in the historical process of thought, from the first stages of information within the universe to the later stages of organized and well-manifested intelligence. The seven stages of thought:

1) Diffused information in the universe; energy in the form of hot plasma.

2) First regulated information in the form of matter and energy.

3) First rudimentary biological self-replicating structures.

4) Semi-intelligence stage: Dominated by half-conscious forms of information and instincts, which constantly repeat themselves, without any real ability of transcendence.

5) Primitive intelligence: Half-conscious, since it is still dominated by instincts, although the first traces of creativity and genuine altruism appear.

6) Advanced intelligence: The achievements and originality of human thought are supplemented and strengthened by artificial intelligence.

7) Consciousness state shift: existential singularity (the universal thought): Realization of the limits of artificial intelligence. Paradigm shift towards a deeper and holistic way of thinking, in harmony with nature and the universe.

With respect to the concept of the consciousness state shift, we can here formulate a basic thesis concerning the evolution of human thought, as it passes from mimicry to creativity:

There is a critical point of consciousness beyond which humans pass from mimicry to creativity, breaking the vicious circle of mechanical repetition of information, and at which point they reach a new level of thought, where they may deliberately create new information from all previous information. We can call this stage existential singularity, beyond which humans become truly intelligent beings. Once one reaches this level, a consciousness state shift occurs, the infinite loops of ordinary logic break apart, and ingenious thought acquires a timeless and universal character.

Existential singularity, therefore, is the final step of human consciousness. It is found far ahead of technological singularity. An interesting question which can be raised here is whether a civilization has to pass through the technological-material stage before it reaches dematerialization, as I would call the state of full consciousness, the ultimate paradigm shift. I think it is yet unknown whether spirituality can be achieved without technology, or whether at some stage intelligence can get rid of material technology and pass into a pure state of being. The only certainty is that, having (although not fully) overcome the danger of nuclear self-destruction, we are becoming more and more interested in the environment, as well as in issues concerning the quality of life (although we are still willing to exploit other people for our own sake), and we are beginning to reach a level of posing advanced questions concerning the origins of our thought, and its corresponding purposes and destination.

1. An Investigation of the Laws of Thought
2. The nature and significance of Gödel's incompleteness theorems
3. Penrose triangle
4. Parallel postulate
5. Non-Euclidean geometry
6. Strange loop
7. Reduction to infinity
8. The facts of perception
9. Visual perception
10. Retro-vision: A new aspect of vision?
11. Backward Causation
12. The Sense of Being Stared At
13. Retention and protention
14. The extended present
15. The Archetypes of the Collective Unconscious
16. Synchronicity: An acausal connecting principle
17. Atom and Archetype: The Pauli/Jung Letters, 1932-1958
18. The psychology of attention
19. Attention
20. Free Will
21. Mind-Body problem
22. Quantum physics in neuroscience and psychology: a neurophysical model of mind-brain interaction
23. Memes: The new replicators
24. The Meme Machine
25. Game theory
26. Newcomb's Paradox
27. Prisoner's dilemma
28. What's Memetics, Game Theory, Free Will, and Transfinite Math Got to Do with It?
29. Zero-sum game
30. Wheeler's delayed choice experiment
31. Generalized absorber theory and the Einstein-Podolsky-Rosen paradox
32. Closed timelike curve
33. Bernd Schneider's Star Trek Site
34. Grandfather paradox
35. Zeno's Paradox and the Problem of Free Will
36. Quantum superposition
37. Quantum entanglement
38. Quantum logic
39. The Analyst
40. Georg Cantor
41. Bell's Theorem
42. Schrödinger's cat
43. Observer effect
44. Participatory Anthropic Principle
45. Anthropic Principle
46. Many-worlds interpretation
47. Quantum suicide and immortality
48. Does the many-worlds interpretation of quantum mechanics imply immortality?
49. The Many Minds Approach
50. On the Problem of Hidden Variables in Quantum Mechanics
51. A suggested interpretation of the quantum theory in terms of hidden variables
52. Holography
53. Holonomic brain theory
54. Holonomic brain theory
55. Wholeness and The Implicate Order
56. The Sense of Being Stared At
57. Retro-vision: A new aspect of vision?
58. Wheeler's delayed choice experiment
59. Delayed choice experiments and the Bohm approach
60. Information theory
61. Entropy (information theory)
62. Active Information, Meaning and Form
63. Meaning and Information
64. The Alan Turing Internet Scrapbook
65. Intelligent design
66. Gödel's incompleteness theorems, free will and mathematical thought
67. Is logic empirical?
68. The logic of quantum mechanics
69. A new theory of the relationship of mind and matter
70. The principle of analogy
71. Correlations of continuous random data with major world events
72. Technological singularity
73. The Singularity is near
74. Kardashev scale
75. Paradigm shift
76. Moore's law

The origins of thought, Chris Tselentis, September 2013

Under an attribution, non-commercial, share alike license Email: christselentis@gmail.com