
A New Multi-Disciplined Approach to Electromedicine Research

ALEXIS GUY OBOLENSKY,1 PHILLIP SHINNICK, Ph.D., M.P.A.,2 JOSEPH P. MAIZE3

Abstract:
The purpose of this paper is to present a rational recommendation for new directions of integrated research within the field of electromedicine therapy. We prepare a foundation for our discussion of this new direction by interleaving a brief historical perspective of the development of the electromagnetic and thermodynamic disciplines. We springboard from this foundation as justification for presenting three primary tenets for directed study. These tenets suggest how trends in present and future research must include and integrate presently overlooked disciplines of modern scientific perspective in order to ultimately achieve a comprehensive understanding of Electric Field (EF), Magnetic Field (MF), and Pulsed Electromagnetic Field (PEMF) interactions within living tissue. The primary tenets suggested for implementation are: 1. the inclusion of modeling electromagnetic phenomena within the greater hierarchical mathematical framework of modern mathematical Topology; 2. the inclusion of the physics of the modern thermodynamics of non-linear far-from-equilibrium systems as related to the stability and instability of complex system behavior; and 3. the marriage of perspectives 1 and 2 into a new bio-interactive research model via the inclusion of the mathematical formalism of Continuous Topological Evolution, derived also from Cartan topology. The discipline of tenet 1 has provided a new mathematical formalism which to date has not only successfully verified known classical and modern structures of electromagnetism, but has also developed mathematical frameworks predicting as-yet untested structures and behaviors that hold great promise for discovery and practical realization.
The disciplines of tenets 2 and 3 have likewise demonstrated, through a mathematically rigorous and quantifiable science, that entropy flow within complex systems may be stymied or actually reversed through controlled processes for long periods of time, leading to the mathematically demonstrable existence of truly irreversible systems. This demonstration thereby circumvents the purely statistical interpretation of the Second Law of Thermodynamics, under which irreversibility is merely an illusion of macroscopic observation. These results begin to provide a mathematical bridge from the microscopic dynamics of particles to the macroscopic formation of complex behavior within physical systems (such as living organisms) through the processes of emergence and self-organization, whose thresholds of transition are describable by non-commuting mathematical Operators of Entropy and Time acting upon density distributions [which statistically describe the ensemble behavior]. Remarkably, these processes are also describable, with enhanced understanding, by non-statistical models generated through the application of the modern Cartan topology of exterior differential forms, with specific focus upon the analytic tools of Continuous Topological Evolution. Though still in its infancy, this multi-disciplined approach offers realistic perspectives into comprehending large-scale interactive dynamics within many fields as never before. The focus of applying these techniques within electromedicine stands as a bold yet exciting challenge to practitioners within the field, while requiring an unprecedented development and application of multi-disciplined understanding and skill sets.

Key Words:
Pulsed Electric Field, Pulsed Magnetic Field, Pulsed Electromagnetic Fields, PEMF, electromedicine, far-from-equilibrium, non-equilibrium, irreversible thermodynamics, non-linear mathematics, complexity, complex behavior, chaos, stability, instability, dissipative systems, emergence, self-organization, Cartan topology, Continuous Topological Evolution, exterior differential forms.


1. ALEXIS GUY OBOLENSKY, Natural Energy Institute, Inc.
2. PHILLIP SHINNICK, Ph.D., M.P.A., The Research Institute of Global Physiology Behavior and Treatment, Inc.
3. JOSEPH P. MAIZE, B.S.E.E., Scientist, Orbital Sciences Corp.
Vol. 4 No.14-15, Jan-April 2012

JSHO

Synopsis for Scientific Redirection:


The safety and efficacy of Electric Field (EF), Magnetic Field (MF), and Pulsed Electromagnetic Field (PEMF) therapies have been investigated in numerous clinical trials and written up in many expository studies. However, the true mechanisms by which PEMF field energy affects global system behavior, for healing or detriment, have not been comprehensively identified. Consequently, in the existing literature and research, sensitivity parameters have been based more on trial and error, clinical empirical evidence, or assumption than on insight, and should not be accepted with comprehensive confidence. To accurately define any PEMF field energy interaction process at work in electromedicine therapy, and to have confidence in reproducible, predictable results with comprehensive boundaries of safety understood, there is need for a total system model which: a) incorporates a complete representation of electromagnetic field structure, b) incorporates a model of the interaction medium which is not purely reductionist and is also representative of the macroscopic behavior of complex thermodynamic systems existing as far-from-equilibrium processes, and c) incorporates a practical physical/mathematical methodology that ties together the interaction process between these two models. These new high-level interactive processes may offer yet-to-be-discovered and presently incomprehensible Gestalt effects upon living tissues, such as bone, cartilage, muscle, tendon, cyst, microorganism, or organ, and their particular biochemistry. This is because the distribution of interactive energy fields in the tissue systems of the whole body is far too complex for today's perceived approach of employing Maxwell's Equations solely in a purely reductionist biological setting (more on this later).
Continued attempts to formulate a comprehensive understanding of the Electric Field (EF), Magnetic Field (MF), and ElectroMagnetic Field (EMF) interaction mechanisms within living tissue that are based upon this familiar but inadequate framework not only may lead to dead ends, but also may divert serious investigators down delusional paths of limited understanding. This paper does not by any means attempt to resolve these profound underlying theoretical issues at present, for the true mechanisms are still quite mysterious to the present authors, and the suggested techniques of employment are a new frontier. However, we make an empirical suggestion herein for a new redirection, based upon deductive reasoning, which we believe will prove fruitful in ultimately bringing clarity and rigor to extant theoretical conundrums. Painstaking clinical trial and error in Pulsed Electromagnetic Field (PEMF) electromedicine can reveal much in regard to which modes of stimulus yield which types of empirical results. Such comprehensive studies take decades, and much research must be performed upon animals rather than humans. This has in many circumstances been the only viable approach available. It must be supplemented in parallel with additional methods if we are to realize the hoped-for development, in the near future, of a comprehensive electromedicine science. It is a principal tenet of the authors of this paper that a newly developed model is necessary, providing a more complete and rigorous understanding of PEMF interactions within biological processes, which will lead to predictable theories and practices; and that this new model can only be accomplished through a more rigorous application of the modern topological science of electrodynamics, specifically as it applies to the interaction with complex entropy-reducing thermodynamic systems in their natural states, which are far from equilibrium.
For this new model to be of value, it must:


a. Encompass and be in harmony with existing observations and experiments, even if present observations and theories represent a limiting case of the broader theory,
b. Provide deeper understanding of known but unexplained processes, and
c. Lead to successful predictions of yet-to-be-discovered processes.

The following is a brief itemized summary of the three basic steps needed to achieve this improved understanding:

1. The application of a modern electrodynamic topology theory, derived and expanded from the true original structure of the electromagnetic equations of James Clerk Maxwell (1831–1879), which today can be freshly recast within the more robust mathematical structure of modern Cartan Topology,
2. The inclusion of the non-entropic thermodynamics of far-from-equilibrium complex systems, as studied through the new sciences of non-linear mathematics as they relate to instability, chaotic dynamics, attractors, distribution functions, and the self-organization of dissipative structures.1 (Much of this work was spearheaded and partially developed by the Nobel laureate Ilya Prigogine (1917–2003) and his associates and contemporaries.)
3. The integration of the above two forms of study into a cohesive discipline studying topological electromagnetic interactions within the topology of irreversible complex non-linear systems, which are themselves subject to the thermodynamic properties of self-organization. Such representations appear to be successfully modeled through the mathematics of Continuous Topological Evolution, as presented in modern Cartan Topology.

It is necessary to mention that the study of complex systems is not a specific discipline of research, but represents various communities of researchers united by their intersection of interests. Any appropriate inclusion of multi-disciplined fields for insight and understanding in electromedicine is not only valid but absolutely necessary.
To elucidate and justify our combining the previously mentioned areas of study for a comprehensive understanding of electromagnetic processes in living organisms, we give a brief history which weaves together the pertinent threads of our tapestry.

Historical Perspective of Electromagnetics

What exists today as our Classical legacy of 19th-century electrodynamics is a distilled and slightly distorted remnant of James Clerk Maxwell's (1831–1879) original masterpiece of electromagnetic theory. The evolution of events after Maxwell's death in 1879, relating to the alteration and final reduction of his work, is a topic worthy of an engaging novel. Suffice it to say that both altruistically practical and possibly nefarious purposes congealed to create an environment within which a major revamping of Maxwell's work could and did occur. We shall concentrate on the altruistically practical. The great British mathematical physicist of the 19th century, Oliver Heaviside (1850–1925), held a personal loathing for both Maxwell's mathematical notation employing potentials and his vector algebra of quaternions. He was not alone in his disdain, and had the support of distinguished British physicists of the 1880s–1890s.2 Concurrent in this period of history were the efforts of an Italian engineer and scientist, Giovanni Giorgi (1871–1950), who was determined to implement a dominantly new practical measurement system for daily use in the design of electrical products and motors. The meter-kilogram-second (MKS) system was proposed by Giorgi as a replacement for the previously well-
established absolute measurement system known as the centimeter-gram-second (CGS) system. Subsequently there were extended debates centered on whether absolute or practical measurement systems were to be dominant, with a variant of Giorgi's system ultimately prevailing. Absolute measurement implies employing units of measure solely defined in terms of length, mass, time, charge, etc., and the identification of relationships among these units that are not affected by scale, yet whose resolution accurately expresses the complete absolute value of the quantity being presented or measured. The CGS system had evolved at a time when no practical machinery of electromagnetics existed, and it represented the fundamental attempt at providing a self-consistent and rigorous dimensional analysis for the evolving theory. CGS was not the only extant system of the time, and competed with several others. A more practical measurement system was sought for industrial use: a system which employed larger-scale units (such as MKS) and relationships which were linear or log-linear, closer in scale to the magnitudes of practical applications, and as such more readily manipulated. A larger scale of rectangular linearity was sought, providing more convenience than the original circumferentially based CGS system. Giorgi also sought the counsel of Oliver Heaviside in these matters, though he had strong opinions of his own.3 If it were just a matter of scale, the change would have been innocuous, and the gain in convenience of a nature equal to that of finding it more convenient to express one's weight in pounds instead of ounces.
Of lesser but still significant concern, as a result of such a system having a coarser resolution, the tracking and analysis of finer-grained physical behavior tend to be lost in the context of large-scale significant figures.4 These historical consequences take intriguing directions themselves and would be worthy of an entire book; they are mentioned here only to elucidate some of the driving forces for change at the time. Maxwell understood the need for an absolute, scale-neutral system, which Gauss had developed decades earlier. He had also developed his equations on a rigorous mathematical theory, which incorporated all the known experimental results of electrical phenomena up until his time, as well as encompassing his own deductive theoretical mechanism based upon the axioms of the existence of a type of virtual fluid-flow model of electromagnetics, derivable, though not verifiable at the time, from Newtonian principles. His mathematical representation relied fundamentally on the concept of potentials, which were the cornerstone of his treatise. However, for Heaviside, Giorgi, and other early 20th-century pragmatists, the potential-based original form of Maxwell's equations, as well as the CGS system, was too cumbersome for elementary engineering applications. Oliver Heaviside himself often used the "too cumbersome" argument in criticizing Maxwell's original equations, and offered a reduced form of the original set which gradually became known as today's "Maxwell's equations." These were the driving forces seeking practical simplification in applied engineering during the late 1880s. To distinguish these two very different systems and minimize the chance for confusion, we will avoid labeling the simplified set of relationships as Maxwell's equations and instead call them Heaviside's reduction.
It was during an 1888 meeting of a group of respected physicists in Bath, England, that a consensus was achieved to proceed with the adoption of Heaviside's reduction, though the works and suggestions of G. FitzGerald and H. Hertz played a significant role as well. Maxwell's original work was structured in 20 distinct equations, which covered all of the known behavior of electromagnetism at the time; however, it was not without errors. Maxwell likewise promulgated the algebra of Sir William Rowan Hamilton's quaternions, a higher hierarchical, yet cumbersome, version of complex number theory and of vectors as we know them today. The consequential Heaviside reduction included several fundamental changes: it removed Maxwell's focus upon the magnetic vector potential A and the electric scalar potential, and centralized the core equations on the electric and magnetic force fields E and H; it converted the 20 equations into 4 compact
presentable equations employing the still-new vector analysis of the time (the algebra we are familiar with today); and it discarded as well the awkwardness of Hamilton's quaternion system in favor of the new field theory of vectors.5 At first scrutiny, and with the penetrating eyes of the times, this reduction was a great boost to field analysis of the day, allowing not only the meaning of Maxwell's work to diffuse as deep comprehension among the scientific community, but also allowing practical applications of the field equations to telegraphy and telephony, which were previously difficult to impossible with the original equations employing potentials and the difficult quaternion calculus. However, the reduction of the equations was to have more consequential effects that could not possibly have been forecast, consequences truly left for future insight to unravel. In 1901, at the Associazione Elettrotecnica Italiana conference, Giovanni Giorgi introduced his plan to supplant Maxwell's twenty potential-based equations with Oliver Heaviside's reduced equations. Following this 1901 genesis, Giorgi's proposed MKS system became the law of the scientific landscape in 1908, when the International Congress of Electricians adopted it in place of Maxwell's quaternion equations and Gauss's absolute CGS units.6 Segueing to the latter part of the twentieth century: in spite of tremendously successful technological developments within our modern society, we find under deeper examination a significant deficit in the physical theories of our electrodynamic science.
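For concreteness, the Heaviside reduction discussed above is the familiar set of four vector equations; the display below is our own modern summary in SI notation, not a quotation from the historical sources, and it includes the potential relations that Maxwell's original twenty-equation formulation kept in the foreground:

```latex
% Heaviside's reduction: the four "Maxwell's equations" in modern vector form
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0,
\\
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}.
\\
% The potentials, central to Maxwell's original formulation,
% survive in the reduction only implicitly:
\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}, \qquad
\mathbf{B} = \nabla \times \mathbf{A}.
```

In the reduced scheme the fields E and B are primary and the potentials φ and A are regarded as mere computational devices; in Maxwell's original structure the relationship is inverted.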
There are several anomalous phenomena of an electromagnetic nature that have emerged: phenomena that cannot be explained by the Maxwell version, by the Heaviside reduction equations, or by the quantum electrodynamics of our vaunted modern quantum physics.7 However, the recent application of some evolving mathematical structures of the early 21st century, in particular those focused upon non-linear equations and the topological structure of the vacuum, has shown great promise through precise structural integration in its classification of what we already know, as well as in postulating intriguing physical relationships and new entities that we have yet to discover. It so happens that Maxwell's original structure of 20 relationships offers a perfect foundation for development into these larger mathematical structures, while Heaviside's does not.8 Without delving into detail irrelevant to this paper, we note that the vector algebra of the Heaviside reduction belongs to a symmetry classification from mathematical Group Theory known as the U(1) group of transformations, whereas the quaternions of Maxwell's original equations carry the group symmetry structure of the SU(2) group of transformations, a higher classification with greater mathematical structure.9, 10 Such symmetry and higher equation structure open the doors for further development and refinement of the electromagnetic equations, and therefore provide solutions and explanations for complex behavior previously imponderable. As an example, solitons (non-linear wave packets demonstrating non-dispersive propagation) are entities normally associated with fluid flow, not electromagnetism. They are entities that propagate in media which allow compression and expansion (such as water, or air carrying sound waves), and take the form of longitudinal waves.
They do not attenuate or lose their shape as they travel, as conventionally transmitted electromagnetic waves from broadcasting antennas do. Solitons do not diverge and spread their energy, as all light beams and radio waves do. This arises from their inherently non-linear, frequency-interdependent structure, as opposed to linearly superposed waves, which always disperse. However, recent developments in non-linear differential equation studies and modern topological electromagnetism have tied solitons to practical manifestations within the realm of electromagnetism! Such results imply that there must be a physical structure to what we call the vacuum, and that presents tantalizing possibilities.
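The algebraic difference noted above between U(1) and SU(2) can be made concrete without any field theory: U(1) elements (complex phases) always commute, while Hamilton's quaternion units, whose unit elements realize SU(2), do not. The sketch below is purely illustrative and assumes nothing beyond the defining relations i² = j² = k² = ijk = −1:

```python
# Quaternions represented as 4-tuples (w, x, y, z); Hamilton product.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0.0, 1.0, 0.0, 0.0)
j = (0.0, 0.0, 1.0, 0.0)
k = (0.0, 0.0, 0.0, 1.0)

# Non-commutativity: i*j = k, but j*i = -k. U(1) phases, by contrast,
# always commute: e^{ia} e^{ib} = e^{ib} e^{ia}.
assert qmul(i, j) == k
assert qmul(j, i) == (0.0, 0.0, 0.0, -1.0)
```

It is precisely this failure of commutativity that gives the SU(2) formulation the "greater mathematical structure" referred to in the text.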

Physicist Terence Barrett states in one of his publications: "This recent extension of Soliton theory to linear equations of motion, together with recent demonstrations that the non-linear Schrödinger equation and the Korteweg–de Vries equation (equations with Soliton solutions) are reductions of the self-dual Yang-Mills (SDYM) equation, are pivotal in understanding the extension of Maxwell's U(1) theory [Heaviside's reduction] to higher order symmetry forms such as SU(2)"11 [Maxwell's original set of equations; authors' notes]. The reader's indulgence is requested here; it is not expected that a comprehensive understanding be derived from this excerpt. What is of import is that the reader glean that a greater structure of mathematical sophistication is present in the higher symmetry groups, and that as a consequence more physical structure is present in electromagnetism than was historically believed. These discovered relations and symmetry structures allow the existence of pseudo-particle solutions in electromagnetic interactions within the vacuum and matter media, solutions that are not yet well understood or investigated: twistor forms, magnetic monopole constructs, soliton forms, and much more. As a pertinent example, the existence of magnetic monopoles (as topological manifestations) appears to have been verified;12 these pseudo-particle structures, or "instantons" as they have been defined, do not exist as isolatable particles within the vacuum, but as topological superstructures within large-ensemble, complex systems. Such structural complexity can only adequately be described through topological behavior and, as we shall see, through the science of self-organizing structures of non-linear dynamics. It is too early to begin speaking of the topology of electromagnetism, as many additional concepts need to be presented and developed.
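Barrett's excerpt names the Korteweg–de Vries (KdV) equation among those with soliton solutions. As a self-contained numerical aside (our own illustrative sketch, not drawn from any cited source), the single-soliton solution u(x, t) = (c/2) sech²(√c (x − ct)/2) of the KdV equation u_t + 6uu_x + u_xxx = 0 can be checked by finite differences; the solution is a hump that travels at speed c without changing shape, exhibiting exactly the non-dispersive behavior described above:

```python
import math

def soliton(x, t, c):
    # Single-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0:
    # a hump of amplitude c/2 travelling at speed c without changing shape.
    return 0.5 * c / math.cosh(0.5 * math.sqrt(c) * (x - c * t)) ** 2

def kdv_residual(x, t, c, h=1e-3):
    # Central finite differences for u_t, u_x and u_xxx; the residual of
    # u_t + 6 u u_x + u_xxx should vanish up to discretisation error.
    u = lambda xx, tt: soliton(xx, tt, c)
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2 * h, t) - 2 * u(x + h, t)
             + 2 * u(x - h, t) - u(x - 2 * h, t)) / (2 * h ** 3)
    return u_t + 6 * u(x, t) * u_x + u_xxx

# The residual is tiny wherever we sample the travelling wave.
assert abs(kdv_residual(0.3, 0.2, 1.0)) < 1e-3
assert abs(kdv_residual(-0.5, 0.1, 4.0)) < 1e-3
```

The non-linear term 6uu_x is what balances the dispersive term u_xxx; drop either one and the shape-preserving hump ceases to be a solution.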
We defer this discussion to the latter part of our treatise, where we shall employ the plethora of yet-to-be-developed ideas to show how the higher mathematical structures of electromagnetics direct us to the discipline of Topology. Our present thread of discussion now leads into our next topic as we continue our tapestry.

Historical Perspectives on the Theory of Thermodynamics:


As a definitive branch of the physical sciences, the Thermodynamics of the late 19th century once stood as a cognitively independent discipline, as did the other major disciplines of the physical sciences. By the transition into the 20th century, thermodynamics was a rigorously derivable and self-consistent mathematical theory, experimentally verifiable within the confines of the controlled laboratory for a spectrum of gases and liquids, and a good approximate analysis tool for designing demonstrable engineering machines.13 However, as a discipline it was not directly derivable from the mechanistic first principles of Newtonian dynamics. Neither was it reconcilable with the then-recent mathematical developments of Maxwell's electrodynamics, which tied together the experimental phenomena of Faraday, Ampère, and other electromagnetic experimenters. Its origins lay in parental ties to the field of Chemistry, where the study of the empirical processes of temperature differences, heat exchange, and large-scale kinetics in chemical reactions gave birth to its principles. Classical thermodynamics (as it is now referred to) stood as a powerful tool for analysis and prediction within the fields of Chemistry and Machine Engineering, giving it precedence as a valid independent discipline within the realm of Physics. Implying directly in its name the study of the dynamics of heat exchange, its noteworthy highlights of that time were in the study and prediction of energy exchange in the form of heat within chemical interactions and low-density gases, as well as material or mechanical systems, especially thermal engines. It is important to note also that early classical thermodynamics was the study of macroscopic system behavior, without specific regard to the underlying dynamics of individual particles.
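The macroscopic bookkeeping just described is summarized by the first law of classical thermodynamics; the display below uses standard textbook notation (not notation drawn from this paper) purely for orientation:

```latex
% First law of thermodynamics: energy bookkeeping over the system boundary.
% dU is the change in internal energy, \delta Q the heat supplied to the
% system, and \delta W the work done by the system on its surroundings.
dU = \delta Q - \delta W
```

The law is stated entirely in terms of quantities crossing the system boundary, with no reference to the motion of any individual particle, which is precisely the macroscopic character noted above.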

The extraction and development of the significantly fundamental concepts of temperature, heat, and entropy from the field of thermodynamics were great conceptual achievements, and were to have an importance incomprehensible at the time. The eventual acceptance of heat as the exchange of expended energy (work) over system boundaries, and not as another type of ethereal fluid flowing within matter, represented great progress in the physical sciences. Both concepts of heat and energy had been around for centuries before being married into a single phenomenological connection. Distinctively, the introduction of the concept of entropy in 1865 by Rudolf Clausius, one of the founders of the classical science of thermodynamics, was a fresh intuitive leap, having little hold upon the practical experience of scientist or average citizen.14 As a clear narrative on the intuitive meaning of entropy, an excerpt from biographer David Lindley is presented:15 "Physicists distinguish two types of changes: reversible ones, in which the system [thermodynamic state] would be exactly restored to its starting point, and irreversible ones in which it could not (not, that is to say, without the application of further external energy). In reversible changes, something stayed the same. In irreversible ones, something did not. That something, Clausius said in 1865, was entropy" [emphasis added]. The property of entropy was mathematically quantified and grammatically labeled by Clausius in his paper of 1865. The mathematical term was in the tradition of a non-descript lumped quantity, whose changes were directly correlated to established concepts: changes of heat in ratio-proportion to temperature. This entropy was eventually included as an additional parameter alongside the total system energy, but of itself was not an observable quantity.
It was derived from empirical experimental observations, and the reader was forced to expand their consciousness to include an abstract conception posing in formulae as a quantifiable, yet unobservable, term. Only its consequences could be observed and its existence inferred. With the additional contributions of James Clerk Maxwell (classical EM field theory's patriarch), Ludwig Boltzmann (Austrian physicist, 1844–1906), and Josiah Gibbs (American physicist, 1839–1903) in the latter 19th century, the association with the probabilistic statistical behavior of large ensembles of particles (as derived from the random distributions of particle velocities and collisions) began taking firm root as a mathematical framework for the concept of entropy. How kinetic energy was distributed amongst the theorized particulate constituents of matter, whether clumped in certain regions with directionality or evenly smoothed in random directions, became associated with the concepts of order (less entropy) or disorder (greater entropy). Isolated systems, where no energy or material transfer could occur over the boundaries, were mathematically predicted and experimentally proven to lead to higher states of entropy with the passage of time, until a maximum entropy of system equilibrium was attained. Non-isolated systems, where matter and/or energy could cross the boundary, were theoretically describable by expanding the boundaries of the system until the model contained a larger perspective that was itself effectively isolated, and therefore congruent with the ever-increasing entropy law. Obviously this boundary expansion can be extended ad infinitum to contain the universe itself. During this early phase, such systems were assumed to be synonymous with the irreversibility of processes in nature, and ever-increasing entropy became the foundation of the 2nd Law of Thermodynamics (also attributed to William Thomson and Clausius15, 16).
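Clausius's quantification, described above as "changes of heat in ratio-proportion to temperature," reads in modern textbook notation as follows (our summary, for reference only):

```latex
% Clausius (1865): entropy change along a reversible path between states A, B
dS = \frac{\delta Q_{\mathrm{rev}}}{T},
\qquad
\Delta S = \int_{A}^{B} \frac{\delta Q_{\mathrm{rev}}}{T}
\\
% Second law for an isolated system: entropy never decreases
\Delta S_{\mathrm{isolated}} \ge 0
```

Note that S appears only through its changes, computed from the observables heat and temperature, which is exactly why the text calls entropy a quantifiable yet unobservable term.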
Boltzmann's mathematical derivations held a fundamental contribution which applied the first mathematical formalism to the concept of irreversibility: a contribution that accounted for both particle flow (translational motion) and particle collisions (interactions which added unpredictability). His principal derived function, called the H function, was synonymous with the concept of entropy, derived
from individual statistical grounds. (This important contribution will be returned to later, as its projection even into modern quantum theory would not find resolution on issues of irreversibility until recently.) Though these concepts held great usefulness, there still existed a lack of connectivity between Newtonian dynamics and the derivation of entropy. The first insightful challenge to Boltzmann's work was the famous argument posed by J. J. Loschmidt, known as the reversal paradox.17, 18 The equations of Newton applied to each particle individually (no matter how many particles were involved), and these equations always worked in perfect time-reversal. If time were reversed in the equations of motion (by inserting −t where t had been) for all individual particles, then all particle motion was allowed to follow its previous trajectories backward in time, and the laws of nature, relative to Newtonian dynamics, still held true! Nothing in theory prevented this from occurring. This implied that all processes could reverse themselves along identically opposite pathways, no matter how complex, and undo any state that had evolved. In contrast, Boltzmann's H function held the distinction of being unidirectional in time, exactly the mathematical form necessary for representing true irreversibility, yet now in perceptual conflict with elementary ideas in classical dynamics! Adding to the difficulties, Henri Poincaré (French mathematician/physicist, 1854–1912) further developed the mathematical proof that any closed system derived from Newtonian trajectories would eventually revisit arbitrarily close to any previous state over the time evolution of the system (known as Poincaré's Recurrence Theorem). This implied that if observations were continued for an immeasurably long time, all systems would eventually appear reversible! This held true for any system based upon individual particle trajectories.
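Loschmidt's reversal argument can be demonstrated in a few lines. The sketch below is an illustrative toy of our own, not drawn from the paper: it evolves non-interacting particles in a one-dimensional box with elastic walls, reverses every velocity, and evolves again for the same duration. Each particle retraces its path exactly, just as Newtonian time-reversal predicts:

```python
import math

def evolve(x, v, t):
    # Exact free flight in the box [0, 1] with elastic wall reflections,
    # computed by "unfolding" the bouncing trajectory onto a straight line.
    y = (x + v * t) % 2.0
    if y <= 1.0:
        return y, v        # even number of reflections: velocity unchanged
    return 2.0 - y, -v     # odd number of reflections: velocity flipped

# A tiny "gas": arbitrary initial positions in [0, 1] and velocities.
state0 = [(0.2, 0.7), (0.9, 0.5), (0.45, -1.3), (0.61, 2.2)]
T = 3.0

# Forward evolution, then Loschmidt's velocity reversal, then forward again.
forward = [evolve(x, v, T) for x, v in state0]
returned = [evolve(x, -v, T) for x, v in forward]

# Every particle returns to its starting position, with velocity negated.
for (x0, v0), (x1, v1) in zip(state0, returned):
    assert math.isclose(x0, x1, abs_tol=1e-9) and math.isclose(v0, -v1)
```

Nothing in the dynamics forbids this reversed motion; the statistical improbability of all velocities reversing spontaneously in a real gas is the entire content of Boltzmann's defense discussed next.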
Through his recurrence theorem, Poincaré also demonstrated that the concept of a trajectory-based system was completely incompatible with irreversibility: if you had trajectories, you could not have true irreversibility. However, the 2nd Law of Thermodynamics stated irreversibility toward maximum entropy as a Law. A Law must be true for all situations and frames of reference, and reversible behavior took away that status from the 2nd Law, as well as casting doubt upon the total accuracy of Boltzmann's work. This seeming contradiction eventually demanded deep scrutiny by Boltzmann and led him to offer the following explanation to critics: though the Newtonian perspective fundamentally allowed time reversal (and therefore motion reversal) at the individual particle level, he argued that the chance for a large ensemble of particles to reverse backward along all their previous paths simultaneously was so small as to be probabilistically unlikely within the tiny life spans of humans, and therefore unobservable in practice. This explanation, invoked by Boltzmann himself, stood as a philosophical gloss upon the results of his mathematically statistical derivations.19 Boltzmann's lucid explanation was eventually to condemn his own theory. Summarizing the two primary consequences of these controversies: 1. Although the philosophical explanation of irreversibility logically followed from Boltzmann's statistical mathematics, it also mathematically violated the essence of his explanation. Though logically implying that reversals in ensemble dynamic behavior were highly improbable, and that the likely tendency was toward greater entropy and more disorder, the statistical probability implied that reversal was still possible, no matter how remote.
This possibility contradicted the interpretation of the 2nd Law as an absolute law of physics, and morphed it into a special case of complex behavior, indeed into an illusion of irreversibility at the macroscopic scale, due to the remote chance of ever viewing a reversal; and 2. It illuminated what seemed to be an inherent flaw in Boltzmann's derivation: the H function could not be truly unidirectional in time, casting doubt upon any additional conclusions that could be drawn from the work. Time reversal was a crux of Newton's laws, and Boltzmann's work stood disconnected from classical dynamics: there still was no causal mathematical derivation of large-ensemble behavior derived directly from Newtonian trajectory dynamics, the most established and trusted theory of the day, and everyone at the time knew that individual trajectories were the physics driving ensemble behavior in the collective! These potential contradictions did not sit well with many scientists, especially those who still questioned the

JSHO

Vol. 4 No.14-15, Jan-April 2012

9
atomistic theory of matter (recall that the electron and the atomic constituency of matter had not yet been discovered at this time), and the atomistic theories of matter to which Maxwell and Boltzmann intuitively and mathematically adhered were not yet widely accepted by all in the physical sciences. As the turn of the century unfolded, the advent of early quantum mechanics and the evolving discoveries of the atom thrust a more complex perspective on the way energy related to atomic and molecular matter. There began a more conclusive interweaving of the macroscopic and microscopic worlds of physical behavior. The quantum mathematical solutions at the turn of the century of both the Black Body Radiation problem by Max Planck and the Photo-Electric effect by Albert Einstein acted as springboards, propelling the development of quantum mechanics into the realms of electromagnetism, relativity, and thermodynamics. The early quantum theory of Niels Bohr (1885-1962) explained the energy emission spectra of simple atomic structures, such as the Hydrogen atom, in particular its Balmer Series of emission. The more advanced formulation by Erwin Schrödinger (1887-1961), involving the quantization of the advanced Newtonian analysis tool of the time, Hamilton-Jacobi theory, was able to explain the added emission spectra of the Zeeman, Stark, and hyperfine structures, as well as interesting single-particle boundary value problems. Later the Dirac equation added relativistic insights into some aspects of electrodynamics, as well as predicting the existence of anti-matter. As an additional example of its effectiveness, the Quantum Mechanics of the late 1920s and early 1930s was able to re-derive the general lumped principles of 19th century thermodynamics as a limiting case, directly from quantized statistical principles: a major achievement of the theory.
Quantum Mechanics demonstrated that what we witnessed in the everyday world of large-ensemble phenomena as the Newtonian classical trajectory was actually the averaged illusion of combinations of quantum probabilistic effects summed over vast numbers of particles. This inherent probabilistic nature of atoms was hidden in the large ensemble. Quantum probability implied that there were many possible particle states to simultaneously select from in any parameter measurement (the parameters of measurement being momentum, coordinates, energy, or time), whereas in the Newtonian view only one state could result from a dynamical equation at any instant of measurement. A measurement could still be made accurately of a single parameter on a single particle, but prediction of the exact state that would present itself for measurement became inherently impossible. Likewise, simultaneous accuracy of certain parameters was ruled out if such parameters were connected by quantum complementarity.20 The new microscopic physics replaced the trajectory concept of a particle with the wave function conception of a particle, adding as well the ambiguous wave/particle duality of matter. As an important aside, the wave function description (being a solution to the quantum mechanical wave equation) is itself a trajectory representation within the phase space describing it. It carries the distinction of being deterministic in its own way (like its classical counterpart) with respect to its derivation of possible system states in a given boundary value problem; but it introduces probabilistic outcomes in the measurement process, there being no way to know which system state will present itself for measurement.
Additionally, the wave equation is mathematically reversible in time, and so offers a similar dilemma in failing to bridge the domain of the reversible microscopic to the irreversible macroscopic!21 Moreover, entropy's statistical interpretation was, over the first 50-60 years of the 20th century, well reinforced by further developments in quantum mechanics through the successful practical demonstration of quantum principles: from the atomic bomb and the semi-conductor physics of applied electronics on one side of the scientific pendulum, to the purely theoretical cosmology of the Big Bang-originated Universe on the other: a Universe ultimately destined to run down to a higher state of disorder throughout time, being the ultimate closed system, unless a re-collapse could occur to regenerate its explosive origins.

In spite of all this success, nagging problems have been ever apparent for decades to open-minded scientific investigators in many cross-disciplines regarding what appeared to be irreconcilable problems: the definitive chasms between the available mathematics of the physical sciences and the incomprehensible order and structure (low entropy) observed in large-scale complexity, such as biological systems - not excluding hydrodynamic flows, chemical processes, and weather systems. We now proceed to our final discussion, as we seam together the tapestry we seek to display.

Non-Equilibrium Dynamics of Self-Organization


In the previous section of this paper, entitled Synopsis for Scientific Redirection, the necessity for deliberate avoidance of purely reductionist approaches to modeling and investigation was mentioned when specifying the newly recommended approaches to electromedicine. From the physical sciences to the life sciences, reductive investigation has been a driving and profitable approach to understanding nearly every aspect of scientific inquiry: the continued process of reducing hierarchal understanding of systems via increasingly smaller subsystems, to the point of finding what is irreducible at any current level of our capacity of scientific inquiry. This floor of irreducibility changes with time and, as a historical rule, the hierarchy of understanding becomes greater in depth, with more relevant knowledge of the systems in question, concomitant with the increasing sophistication of our measurement technology. The process of reductionist science is, in and of itself, an indispensable and valid approach to the goals of discerning scientific knowledge. This approach can and does reveal insight into large-scale process behavior. However, it should not, and cannot, be the sole approach to understanding the large-system behavior of our world. One must be very careful of subsequent extrapolations projected onto the Gestalt macroscopic dynamics which are principally derived from the reductionist microscopic knowledge-base. Such extrapolated projections by practitioners of scientific inquiry may well be beyond the rational, and far from actual truth. In essence, such extrapolations are based upon processes of inductive reasoning, not deductive reasoning, and the inductive process guarantees no uniqueness to the descriptive bridge being attempted between the microscopic and macroscopic realms. Irrational extrapolation is by no means unique to any one branch of the sciences.
However, it does appear that the biological life sciences are particularly plagued by irrational extrapolative conclusions from the micro to the macro worlds, specifically within the pharmacological applications to biochemical processes. It has been witnessed time and again how reductionist perspectives in synthetic drug science have overlooked large-scale system consequences by simply designing toward a specific operational chemical pathway. Serious illness, birth deformity, compromise of specific biological processes, and death are the worst of the consequences of such tunnel vision. Within the physical sciences of the recent past, microscopic extrapolation toward understanding large-scale system behavior has been hampered by the lack of specific analytic tools to bridge the gap. Focus upon irreducible elements, with efforts to engineer larger-scale systems from these elements, has been the main thrust of our technological base. For quite some time now, our complete understanding of large-scale systems has been limited to this bottom-up perspective, which represents the essence of human contributions to non-natural machinations. However, Nature itself has stood in mocking challenge to our knowledge-base, as far-from-equilibrium systems have still defied connective explanation through the physical sciences, with respect to hydrodynamics, chemistry, and weather, let alone biological life! This lack of specific tools to bridge the missing connectivity has manifested in three basic approaches to large-system behavior within the physical sciences: 1. the narrowing of analysis to equilibrium systems alone, or to the approach toward equilibrium; 2. the implementation of purely statistical approaches to large

ensemble behavior, which again focused on approaches to final states of equilibrium; and 3. deliberately ignoring the importance of the non-equilibrium transition regions to begin with.22 For clarification on the distinct separation of germane process functionality between the micro and macro, examples from electronics are presented. The basic operation of a transistor is by no means a simple representation in semi-conductor physics. Its applications in electronics are varied, and the transistor is completely useful in its isolated condition. However, let us consider its constituency in the design of a system such as an operational amplifier, a common yet sophisticated component of integrated circuitry in electronic engineering. The general transistor (say NPN or PNP bipolar for this example) has both linear and non-linear characteristics, following relations for explicit performance at all three of its terminals. See Figure 1.A through Figure 1.C for symbolic representations and the equivalent circuit model from Network Theory. An integrated operational amplifier circuit such as the one seen in Figure 2.A is composed of many transistors, as well as capacitors and resistors. The ultimate design goal of the practical operational amplifier is to amplify without distortion a voltage across its input terminals by a specific and accurate gain relation. However, the complexity of the design has taken into account performance over temperature, range of input and output voltage, power supply limitations, operational frequency range before distortion, rate of reaction to changes in the input, and more. The large-scale system performance characteristics of the operational amplifier (here a macro-unit to the sub-units of its constituents) are well beyond the functional behavior of any single transistor circuit, or beyond any functional performance of any single component within the system.
The next step beyond this macro-unit is to include many sub-macro units (i.e. units at the scale of the operational amplifier) into an even larger system, such as the non-linear control system seen in Figure 2.C. Now the structural hierarchy is even greater, and the large-scale performance is far more removed from the individual performance of the original sub-units of transistors, resistors, etcetera, with which we started. It is important to state that in the design process specific parameters of the subunits, such as a transistor's base current or collector-emitter voltage, were constrained to be implemented over specific ranges, while other characteristics were avoided: for example, using the linear range of a transistor's collector-emitter curve while avoiding the clamping non-linear range near what is called saturation; or employing matched temperature-compensation devices at critical stages to obtain stable performance over wide temperature deviations. Selective implementation of specific properties of individual subunits, applied under well-defined constraints, adds up to the desired large-scale system performance. These samples of the subunit characteristics are key to understanding how critical sub-unit performance is to the overall system outcome! It is also critical to point out that, in this example, the sum of the parts is equal to the whole because all of the characteristic subunit behavior was specifically engineered toward the large-scale outcome, into a symphonic, predetermined dependency of integrated operations. However, the whole's resultant outcome is far too complicated to predict without a rigorous set of repeatable analytic laws and tools which may be applied to each of the sub-units, and which may derive, in a rigorous hierarchal development, the large-scale behavior of the whole! This is also a classic example of the bottom-up engineering approach mentioned earlier.
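The contrast between sub-unit behavior and the engineered whole can be made concrete with a small numerical sketch (ours, not the authors'): for an ideal inverting op-amp stage, the closed-loop gain is fixed by the external resistor network alone, while the enormous open-loop gain of the many-transistor interior all but disappears from the result. The function names and component values below are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): large-scale behavior of an
# op-amp stage is set by the surrounding network, not by any single
# transistor inside the amplifier.
def inverting_gain(r_feedback_ohms, r_input_ohms):
    """Ideal closed-loop voltage gain of an inverting stage: Vout/Vin = -Rf/Rin."""
    return -r_feedback_ohms / r_input_ohms

def closed_loop_gain(open_loop_gain, r_feedback_ohms, r_input_ohms):
    """Finite-gain correction: the huge open-loop gain A of the
    many-transistor system collapses to a value fixed by the network."""
    beta = r_input_ohms / (r_input_ohms + r_feedback_ohms)  # feedback factor
    return -(r_feedback_ohms / r_input_ohms) / (1 + 1 / (open_loop_gain * beta))

ideal = inverting_gain(100_000, 10_000)        # -10.0
real = closed_loop_gain(2e5, 100_000, 10_000)  # approx. -9.9995
print(ideal, real)
```

Note how an open-loop gain of 200,000 contributes only a 0.005% correction: the emergent unit behaves by the network's rule, not its constituents' rules.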


FIGURE 1.A: A simple PNP Transistor substrate and its associated schematic symbol (batteries show forward-biased junctions). Wikipedia: http://en.wikipedia.org/wiki/PNP_transistor#PNP

FIGURE 1.B: A simple NPN Transistor substrate and its associated schematic symbol (batteries show forward-biased junctions)

FIGURE 1.C: An equivalent Network Theory model of transistor operation: an h-parameter model

FIGURE 2.A: A circuit schematic for a 741 Operational Amplifier showing PNP/NPN transistors, resistors, and capacitors. Wikipedia: http://en.wikipedia.org/wiki/Operational_amplifier

FIGURE 2.B: The block integrated symbol for the near-ideal Operational Amplifier


FIGURE 2.C: Large Operational Amplifier System. The circuit shown models a set of non-linear equations for executing a greater complex behavior; a sub-circuit simulates a large inductance and replaces L1 in the main circuit. The larger system behavior is not reflected within its elements!


Basically, a significant failure of singular reductionist perspectives in providing a larger global understanding is that they bias the investigator to be unaware of the very existence of the true operational order of larger system behavior, and thereby to mistakenly classify it, mis-model it, or miss it entirely. The process is analogous to a hiker walking in a forest looking ever downward, incessantly following a trail of crumbs for guidance to some new destination, when an aerial view from above the forest trees would have revealed a larger pattern of crumbs forming an enclosed figure-8! Organized large-scale system behavior may be entirely masked by reductionist focus, and forever out of our awareness and grasp. A modern saga of this very situation is illuminated in the new sciences of Chaos and Complexity. They are intimately tied together, while being distinctive in their properties. To encapsulate the thrust of these ideas, let us start with the concept of Chaos. We state directly that grand, repeatable symmetry was found to exist in mathematical processes which were once thought to be purely random, or Chaotic (the adjective form of the noun Chaos). Chaos in mathematical theory does not have the same meaning as in literary discourse. Idiomatically, Chaos is characterized by unpredictability and perhaps random complexity. This colloquial definition 23 of Chaos can be stated as: 1. a state of things in which chance is supreme (i.e. complete unpredictability); 2. a state of utter confusion (disorder with no discernable pattern). These appear to be the opposite of the qualities of a dynamical system which is deterministic in its nature. And since we are interested in developing tools which more accurately model nature, we are definitely interested in those which represent non-linear behavior, over the idealisms we are trying to circumvent.
As a matter of fact, a Non-linear Deterministic System is a system from which future system states can be determined from initial conditions, and the system contains no random elements. This lack of random elements would seem to imply that such systems are predictable. However, the deterministic nature of such systems does not make them so: non-linear system states may be highly sensitive to small changes in initial conditions. Such sensitivity yields widely diverging outcomes in the trajectories of the dynamical variables in Phase Space [simply the n-dimensional state space through which the system's dynamical variables change]. As a consequence, round-off error in computation or inaccuracies in measurement [which are always present] can create rapid divergence away from predicted trajectories: non-random unpredictability! Some of these regions of Phase Space demonstrate Chaotic system states. Chaotic, by definition, means gross unpredictability resulting from small changes in initial conditions. This renders long-term prediction of future states impossible in general. Such characteristic behavior is then termed Deterministic Chaos, or simply Chaos. Most studies of Chaotic systems relate to the analysis of a small number of non-linear relationships. Complexity, on the other hand, is intrinsically indeterministic, due to the vast number of constituents, the forces upon the constituents, and the inability to know anything about modeling the elemental details. The study of Complexity then deals with how vast numbers of dynamical relationships can generate simple behavioral patterns, which can somehow be trended using the analytic tools of distribution theory, stability-instability theory, non-linear dynamics, and even Chaos theory.
As a matter of recourse, as simple, ultimately unpredictable, and somewhat complex as Chaotic systems are, they are used to study and understand the behavior and trending of large Complex systems, which are of an intrinsically indeterministic and unwieldy nature. Because of the above descriptions, it has been said that the study of Chaos and the study of Complexity are actually near opposites of one another.
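As an illustrative aside (ours, not drawn from the paper), the sensitivity to initial conditions that defines Deterministic Chaos can be sketched in a few lines with the logistic map, a standard one-variable chaotic recursion: two trajectories whose starting points differ by one part in ten billion become completely dissimilar within a few dozen iterations.

```python
# Illustrative sketch: deterministic chaos in the logistic map
# x_{n+1} = r * x_n * (1 - x_n).  At r = 4.0 the map is chaotic, and
# an initial difference of 1e-10 grows to order-one separation.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)
separation = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {separation[0]:.1e}, largest gap: {max(separation):.3f}")
```

The equations are fully deterministic, yet any round-off in the starting value makes long-term prediction impossible: exactly the behavior described above.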

Prior to the 1960s, our lack of computing power prevented us from having the eyes to see what was before us. As computing power became accessible to universities and research institutes, and the curiosity of the human mind began to sojourn in the direction of large computational recursive mathematical behavior, only then did sophisticated patterns of order in Chaos begin to emerge which were previously hidden to us in both time and space. Systems may find set points of dynamic behavior which migrate around or between the states of instability and unpredictability, demonstrating greater order and pattern in space and time than ever previously conceived. These set points are defined as states of the dynamical variables which are revisited with some relative frequency or pattern. This revisitation of system states is the property of system Attractors, or Chaotic Attractors. Attractors may be observable in our own realm of space and time (such as turbulence patterns in fluid pipes or air flow in wind tunnels), or they may only be visible by plotting the key dynamical variables of the changing system within a coordinate representation called the Phase Space. Chaos theory then becomes a study of these regions and of the boundaries separating ordered behavior and unpredictability. New tools had to be developed which included the detailed study and interweaving of the disciplines of stability and instability theory, non-linear dynamics, and recursive mathematical behaviors. An example of a Chaotic Attractor plotted within a Phase Space is shown in Figure 3. The Lorenz Attractor was derived by Edward N. Lorenz (1917-2008) in 1963 from a simple condensation of more complex equations used to model swirling turbulent patterns within the atmosphere. It held significant import for studying weather patterns, and supported the conception that the weather is inherently unpredictable and intrinsically sensitive to initial conditions.
The Phase Space here is just the spatial displacement x-y-z where a turbulent pattern may form. The plot is a 2-dimensional projection of a more complex 3-dimensional pattern, and represents the physical space within which the air currents revisit in their swirling pattern. To gain an understanding of Phase Space, one simply constructs an orthogonal (perpendicular) coordinate representation from the dynamical variables of a dynamical system. If the dynamical variables were the pressure, temperature, and volume of, say, an ideal gas contained in a box, the Phase Space would be made up of individual axes scaled for each of these parameters. A line within this 3-dimensional space would represent a trajectory through several contiguous ordered triples of (P, T, V), which represent the gas changing system states as energy crosses the boundaries, or the box expands or contracts, etc. Just as a high-altitude jet leaves a contrail, or an aerobatic plane blowing smoke leaves a trail of where it has traversed, so do ensembles of particles leave a mathematical trajectory or trail in the respective phase space of position, momentum, temperature, volume, pressure, etc., as time evolves for the dynamic system. Figure 3 is an example of Chaotic behavior which demonstrates patterns of dynamics observable only through the broader picture of recursive mathematical behavior. The figure demonstrates both order and unpredictability, each within its own respective recursive algorithm. As a practical adjunct to this theoretical example, a demonstration follows of how Chaotic behavior may be actually experienced and measured within systems. Electronic non-linear systems are subject to possible Chaotic system states. In order for Chaotic behavior to be observed, a non-linear circuit well established within the literature is employed as a template. Reproduced by one of the authors is a simple Tank Resonator with a non-linear element, known as the Chua Circuit [introduced by Dr.
Leon Chua in 1983]. The resonator consists of an inductor, 2 capacitors, one variable resistor, and a non-linear element whose resistance changes with the amount of voltage across it. See Figure 4.


FIGURE 3: The Lorenz Attractor, plotted for ρ = 28, σ = 10, β = 8/3, where σ is the Prandtl number and ρ is the Rayleigh number. It is derived from the simplified equations of convection rolls arising in the equations of the atmosphere:

dx/dt = σ(y − x)
dy/dt = x(ρ − z) − y
dz/dt = xy − βz
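The Lorenz system can be integrated numerically in a few lines. This sketch (our illustration, using plain Euler steps rather than any method of Lorenz's) shows the hallmark of an attractor: the trajectory never settles to a constant or repeats exactly, yet remains confined to a bounded region of phase space.

```python
# Illustrative sketch: Euler integration of the Lorenz system with the
# parameters of Figure 3 (sigma = 10, rho = 28, beta = 8/3).
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

state = (1.0, 1.0, 1.0)
points = []
for _ in range(20000):           # integrate to t = 100
    state = lorenz_step(state)
    points.append(state)

# Bounded but never repeating: every point stays in a finite region.
print(max(abs(x) for x, _, _ in points))
```

Plotting x against z for these points reproduces the familiar two-lobed butterfly pattern of the attractor.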
This circuit happens to be the same Large Operational Amplifier system shown in Figure 2.C, except that Figure 4 is in a simplified form. The non-linear element g(V1) is a lumped component which is simulated and replaced in the actual circuit by the High and Low Gain stages of the more complex system in Figure 2.C. Likewise, the inductor is modeled and replaced by the upper High Gain circuit, to replace an impractically large inductance. Circuits demonstrating Chaotic behavior must have:
- at least three operational state variables,
- at least three unique equations, and
- non-linear terms.
The non-linear equations for the Tank Resonator are as follows:

C1 dV1/dt = (V2 − V1)/R − g(V1)
C2 dV2/dt = (V1 − V2)/R + I
L dI/dt = −V2

The State Variables are voltages V1 and V2 and current I. C1 & C2 are capacitor values in micro-Farads; L is an inductance in micro-Henries. The circuit parameters which have a non-linear interaction with the non-linear element are first defined:
o The non-linear element is a variable resistance g(V1), which is a function of the State Variable voltage V1 across the element.
o Small r is just the inverse of g, i.e. r = 1/g.


FIGURE 4: A resonant tank circuit interacting with a non-linear element g(V1), consisting of the inductor L1, capacitors C1 and C2, the variable resistor VR1, and the non-linear element.

Selecting as the system dynamical variables:
V1: the voltage across capacitor C1 (and the non-linear resistance)
V2: the voltage across capacitor C2 (and the voltage across the inductor)
I: the current through the inductor
g(V1) is the non-linear current-voltage characteristic; this becomes a negative resistance.

FIGURE 5.A (R = 2.1 K Ohms): Single State Stability Point Attractor
FIGURE 5.B (R = 1.82 K Ohms): Fixed Oscillation Between V1 & V2
FIGURE 5.C (R = 1.60 K Ohms): Chaotic Tornado Strange Attractor
FIGURE 5.D (R = 1.52 K Ohms): Double Scroll Strange Attractor
FIGURE 5.E (R <= 1.50 K Ohms): Limit Cycle Boundary Behavior


Results:

The so-defined interactive parameters become the adjustable system elements, yielding sensitivity of the resultant electronic system states to the initial values of the parameters, i.e. sensitivity to the initial conditions.
o Resistor R is the practical parameter to adjust the interaction of g and V1.
The number of system states available in any configuration is a result of the number of possible solutions to the mathematical equations. The Phase Space plot is 2-dimensional for practical measurement and is measured on an oscilloscope.
o The Phase Space consists of state variable V2 (voltage across C2) as the vertical axis and state variable V1 (voltage across C1) as the horizontal axis.
The number of solutions Bifurcates as the system parameter R is adjusted.
o See Figure 6 as an example of Bifurcating solutions of the values for a state variable.

As R is varied from an initial value of 2.1 Kilo-Ohms to a final value of less than 1.5 Kilo-Ohms (simply a 600 Ohm range), massive system state changes are observed.
At R = 2.1 K Ohms, the system finds a stable operational state, with V1, V2, and current I all reaching constant values.
At R = 1.82 K Ohms, the system goes into an oscillation between voltages V1 and V2.
o This represents a Bifurcation of solutions, and the crossing of an instability point.
At R = 1.60 K Ohms, the system states pass through several multiple compounding oscillation frequencies until a Tornado Attractor is witnessed.
o The Tornado traces on the oscilloscope represent trajectories through the Phase Space of V1 and V2 magnitudes, which are completely unpredictable as to when any coordinate point on the screen will be manifest.
o The only predictable aspect of the system behavior is that a Tornado pattern will appear!
o The pattern is overwritten upon the screen thousands of times a second, demonstrating the recursive nature of the states, and that the same ordered pairs of states (V2, V1) are revisited cyclically in an anharmonic manner; in this global perspective, the states which will be visited are deterministic.
o The Bifurcation states have reached the Chaotic region, which by the example of Figure 6 are the gray bands of multiple solutions.
At R = 1.52 K Ohms, the single Tornado now branches, or bifurcates, into the dual regions known as the Double Scroll Attractor, in which again the trajectories demonstrate that the system states are revisited perpetually to maintain the pattern.
o The left and right vortices represent where V1 oscillates between negative and positive, respectively.
At R < 1.50 K Ohms, the Double Scroll Attractor vanishes with a further change of only some 20 Ohms, and the Limit Cycle is reached, in which the Boundary values of the system variables (the maximum values which can be achieved in ordered pairs) are reached in a cyclic pattern.
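For readers who wish to explore behavior of this kind without building hardware, the Chua dynamics can be sketched numerically. The block below uses the standard dimensionless form of the Chua equations with textbook parameter values; alpha, beta, and the diode slopes m0 and m1 are illustrative assumptions from the general literature, not the component values of the circuit described above.

```python
# Illustrative sketch: dimensionless Chua equations
#   dx/dt = alpha*(y - x - f(x)),  dy/dt = x - y + z,  dz/dt = -beta*y,
# where f(x) is the piecewise-linear Chua-diode characteristic (the
# role played by g(V1) above).  alpha = 15.6, beta = 28 are textbook
# values associated with the double-scroll regime.
def chua_diode(x, m0=-1.143, m1=-0.714):
    """Piecewise-linear current-voltage characteristic of the non-linear element."""
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))

def chua_step(state, dt=0.001, alpha=15.6, beta=28.0):
    x, y, z = state
    return (x + dt * alpha * (y - x - chua_diode(x)),
            y + dt * (x - y + z),
            z + dt * (-beta * y))

state = (0.7, 0.0, 0.0)
states = [state]
for _ in range(5000):            # integrate to t = 5
    state = chua_step(state)
    states.append(state)

# The trajectory wanders but stays bounded; longer runs plotted as x
# versus y trace out a double-scroll pattern like Figure 5.D.
print(states[-1])
```

Sweeping a parameter here plays the role of turning resistor R in the physical circuit: the sequence of attractors appears as the parameter crosses successive instability points.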

We have demonstrated that a very simple system with only 3 state variables, only 2 of which were mapped, has revealed surprisingly complex behavior between state variable parameters, which not only was unpredictable with regard to which pairs of system states were being manifest at a given instant, but also demonstrated extreme order and symmetry in the patterns of energy exchange between those state variables. We have also witnessed how deterministic equations have led to unpredictable results, so defining the concept of Deterministic Chaos.

Now what does the imagination conjure up when one attempts to consider a large ensemble of state variables, of the order of Avogadro's number, being operated upon by internal and external forces, with energy and mass crossing the system boundaries, and complex non-linear laws at work, driving areas of the system between stability and instability, far-from-equilibrium?
THE NUMBER OF SOLUTIONS CONTINUES TO BIFURCATE AT KEY POINTS OF A PARAMETER'S VALUES. THESE ARE POINTS OF INSTABILITY. HERE, SOLUTIONS OF X BIFURCATE AT KEY VALUES OF THE VARIABLE PARAMETER r. THE PHASE SPACE REGIONS IN WHICH THE NUMBER OF SOLUTIONS EXPLODES ARE REGIONS OF CHAOTIC BEHAVIOR.

FIGURE 6 Arbitrary Example of a State Variable Bifurcation
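A bifurcation diagram of the kind shown in Figure 6 can be computed directly. The sketch below uses the logistic map as a hypothetical stand-in recursion (the paper does not specify which system Figure 6 plots): for each value of the parameter r it discards the transient and counts the distinct states the system keeps revisiting, reproducing the doubling cascade 1, 2, 4, ... into chaotic bands.

```python
# Illustrative sketch: counting revisited states of the logistic map
# x_{n+1} = r * x * (1 - x) as the parameter r is varied, in the
# spirit of Figure 6.
def attractor_states(r, x0=0.5, transient=1000, sample=64, tol=1e-6):
    x = x0
    for _ in range(transient):            # discard transient behavior
        x = r * x * (1 - x)
    seen = []
    for _ in range(sample):               # collect the revisited states
        x = r * x * (1 - x)
        if not any(abs(x - s) < tol for s in seen):
            seen.append(x)
    return sorted(seen)

print(len(attractor_states(2.8)))   # 1 solution: stable fixed point
print(len(attractor_states(3.2)))   # 2 solutions: first bifurcation
print(len(attractor_states(3.5)))   # 4 solutions: second bifurcation
print(len(attractor_states(3.9)))   # many solutions: chaotic band
```

Plotting the returned states against r over a fine grid of parameter values yields the full branching figure, with the gray chaotic bands appearing where the count explodes.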

The New Science


Recall that our previous discussions of thermodynamics, up through the first half of the 20th century, focused upon developmental incongruencies between the single-particle, or microscopic, dynamics of the classical mechanical trajectory description and the ensemble behavior of many-particle, or macroscopic large-scale, system dynamics. To enumerate some of the inadequacies in bridging these areas of study, we note a few pertinent to the spirit of this dissertation:
A. The time-reversibility of both the classical Newtonian dynamical equations and the quantum mechanical dynamics of the semi-classical Schrödinger wave equation is in fundamental conflict with the appearance of irreversibility (as is that of the relativistic Dirac equation, which is mentioned only for comprehensiveness).
B. Recall that the conclusions of the Poincaré Recurrence Theorem implied that if observations were continued for an immeasurably long time, all systems based upon Newtonian trajectories would eventually appear reversible! This held true for any system based upon individual particle trajectories, and the concept of a trajectory-based system was fundamentally incompatible with irreversibility from mathematical principles.
C. The extant theory and tools developed through the first two-thirds of the 20th century were inadequate to bridge the transitional development of dynamical systems from equilibrium to non-equilibrium, or from reversible to non-reversible, while maintaining a theory self-consistent across the range of scale (micro through macro) of observable phenomena.
The consequences of these problems forced some inquisitors of the physical sciences toward integration of known observations with some dramatic conclusions. As a first step:


A. A new science had to be developed in light of the undeniable presence of many observable processes, such as Life forms, which appear to demonstrate a true irreversibility in complex processes, at least for finite lifetimes, within both the microscopic and macroscopic realms. This leads to the acceptance that irreversible processes are as real as reversible ones, dismissing the model that they are simply macroscopic illusions of short-term probability statistics;
B. Irreversible processes play the fundamental role in the physical world, and traditional trajectories (whether of classical or quantum origin) are simple idealizations only, not representative of the fundamental processes of matter behavior. This approach appears much more valid, as almost everything in our conscious world is composed of irreversible complex ensembles;24
C. With irreversibility as a fundamental physical principle, classical/quantum dynamics must be somehow embedded within a greater formalism. Within this formalism, irreversibility begins where the simplistic idealizations cease to be observable.
D. In the evolution from the idealizations to the irreversible, one evolves from equilibrium to non-equilibrium. This transformation requires the injection of instability, and through the new formalism the instability must break the time symmetry of the mathematics (recall inserting -t where t had been) and clearly force the equations into a time-unidirectional form.
The roots of this new formalism are less obvious, and represent the core of the contributions of Prigogine and his contemporaries in Austin, Texas and Brussels, Belgium. In order to realize irreversible processes as fundamental, it was necessary to supplant the trajectory/wave-function descriptions of idealized constituents with Distributions as the fundamental representation of matter, and to employ this philosophy in developing the foundations of the new sought-for science.
Distribution theory, developed by Gibbs, was an insightfully intuitive mathematical way of representing population dynamics in physics by using an ensemble approach. Without dwelling on the unique conceptual distinctions, it is noteworthy to present the picture of a distribution as a cloud or density of points in a phase space (the space of positional coordinates, momentum, mass, temperature, pressure, time, etc.) instead of just the Cartesian space of position (i.e. x-y-z).25 It must be noted that the ensembles we are speaking of contain huge numbers of particles, for which it would be impractical to employ individual dynamical equations: on the order of magnitude of Avogadro's number (> 6.023 x 10^23 particles!). The successes of ensemble methods of analysis have been striking for classical thermodynamics.26 Distribution theory had for decades been seen as simply another useful representation of statistical theories, and such processes were even extended into quantum mechanics. However, their usage was viewed only as another useful perspective: a tool which aided in solution approaches to some types of physical problems. The Thermodynamics of Complex Far-from-Equilibrium Systems, as spawned by the groundbreaking work of Nobel Laureate Ilya Prigogine and his contemporaries, ultimately provided mathematical links between microscopic process dynamics and macroscopic irreversibility. However, a major key to the successes of this mathematical formulation, which spanned several decades, was the treatment of systems not fundamentally as trajectories or wave functions, but fundamentally as distributions! Likewise, the inclusion of Stability theory into the nature and evolution of distributions became critical to understanding the thresholds at which systems begin to pass from the reversible to the irreversible.
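The distinction between tracking individual trajectories and tracking a distribution can be sketched in a few lines of code. The following is a minimal toy construction of our own (not drawn from the cited literature), assuming a cloud of identical unit-mass, unit-frequency harmonic oscillators whose phase points are sampled from a Gaussian density:

```python
import numpy as np

# A toy ensemble: N harmonic oscillators (unit mass and frequency),
# represented not as N individual trajectories but as a "cloud" of
# points in (x, p) phase space, in the spirit of the Gibbs picture.
rng = np.random.default_rng(seed=0)
N = 100_000
x = rng.normal(0.0, 1.0, N)   # positions
p = rng.normal(0.0, 1.0, N)   # momenta

def evolve(x, p, t):
    """Exact harmonic-oscillator flow: each phase point rotates by angle t."""
    return (x * np.cos(t) + p * np.sin(t),
            -x * np.sin(t) + p * np.cos(t))

# Macroscopic observables are averages over the distribution,
# not properties of any single trajectory.
for t in (0.0, 1.0, 5.0):
    xt, pt = evolve(x, p, t)
    energy = np.mean(0.5 * pt**2 + 0.5 * xt**2)
    print(f"t = {t:3.1f}  <E> = {energy:.3f}")
```

Every individual phase point moves continuously, yet the macroscopic observable (the ensemble-average energy) is a property of the distribution as a whole and remains steady: this is the sense in which the distribution, not the trajectory, carries the macroscopic physics.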
Introducing system instability through the energetic injection of specific stimuli was seen to drive stable systems (or systems in or near equilibrium) into new coherent states which demonstrate great symmetry and sustained organized integrity.


Through the stimuli of the injected energies which are of interest here, called Poincaré Resonances, the system acts as an amplifier in its migration away from stability toward instability. The instability region is the non-linear transition region, in which the system leaves behind its source state (perhaps near-equilibrium, or another non-equilibrium state) and evolves into a new system state of possibly even higher structure and symmetry, and thereby a state of lower entropy! This resulting structural integrity of lower entropy was labeled by Prigogine as the creation of Dissipative Structures, so labeled as he believed them to be organizations of finite lifetime constantly dissipating energy in the maintenance of their existence. Once created, the dissipative structure and its surroundings will then demonstrate overall system control of the many external constraints and ongoing internal fluctuations in order to maintain structural integrity for as long as the hierarchical processes remain intact. As a result of this research, new quantum-mechanical-like Operators were derived which were applicable to both the classical domain and the quantum realm. These Operators were able to describe the evolution of entropy from the microscopic level (the Entropy Operator), and to define distinctive parametric properties under conditions of stability and instability, which ultimately track the threshold occurrence of macroscopic irreversibility!
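The claim that an organized state carries lower entropy than a homogeneous one can be made concrete with the discrete Gibbs/Shannon entropy of a toy distribution. The sketch below is only an illustrative analogy (it does not implement Prigogine's Entropy Operator), assuming a phase space crudely discretized into 100 cells:

```python
import numpy as np

def shannon_entropy(p):
    """Discrete Gibbs/Shannon entropy H = -sum(p_i * ln p_i), natural log."""
    p = p[p > 0]  # cells with zero probability contribute nothing
    return -np.sum(p * np.log(p))

n = 100  # number of cells in the discretized phase space

# Near-equilibrium: probability spread uniformly over all cells.
uniform = np.full(n, 1.0 / n)

# "Organized" state: probability concentrated on a few cells,
# a crude stand-in for a dissipative structure's ordered pattern.
organized = np.zeros(n)
organized[:5] = 1.0 / 5

print(shannon_entropy(uniform))    # ln(100), the maximum possible
print(shannon_entropy(organized))  # ln(5), much lower
```

The uniform distribution maximizes the entropy; concentrating probability into structure lowers it. Sustaining such a low-entropy state in an open system requires continuous energy throughput, which is precisely the point of the dissipative-structure label.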
Through a complex series of operator transformations, and through a mathematically rigorous and quantifiable science, the research has demonstrated a threshold evolutionary boundary between the reversible and the irreversible. It thereby derives terms whose properties place the mathematics outside the Hilbert space of traditional quantum mechanics,27 and which bear a remarkable resemblance to the Boltzmann equation, having a corresponding mathematical symmetry, term by term, to its flow and collision terms!28 The new equation is, however, distinct in having its structure based in distribution theory, stability theory, and operator mathematics, and signifies a richer base of interwoven sophistication from several fields of study. This remarkable similitude nevertheless vindicates Boltzmann's great insight, over a century ago, into the elements necessary to describe irreversibility. The new theory demonstrates the actual existence of true irreversibility as a viable state of large-ensemble matter, in direct contradiction to the more entrenched interpretation of irreversibility as an illusion of ensemble probability statistics. These derivations connect the mathematical physics of our day to long-known and established observations (such as the existence of Life!), and offer foundations in analytic tools for the description and prediction of complex phenomena. Likewise, the view of Time necessarily had to change in this new perspective, as irreversibility became an established mathematical consequence of ensemble matter behavior, and the Arrow of Time took on a unidirectional nature. A new quantum-mechanical-like operator (the Time Operator) was developed, making a distinction between microscopic Time and macroscopic Time!
The time we are familiar with in our macroscopic world is dependent upon a large-ensemble average of microscopic times, all of which distribute over irreversible processes to create the phenomenon of aging.29 In this view, the phenomenon of aging correlates directly with the aging we understand in living organisms, but its more general extension in physics is the aging of complex structures. This correlates with the apparent Arrow of Time, which we all recognize as unidirectional and irreversible in our everyday lives. Thus, there is a finite lifetime to irreversible complex structures, whether they be of the nature of a chemical reaction process, a hydrodynamic pattern, a structured weather pattern, or a living micro/macro organism!30 In the words of Ilya Prigogine: "Indeed ensemble theory was considered to be an approximation, whereas the basic [fundamental] theory was in terms of trajectories or wave functions [i.e. Schrödinger or Dirac representations]. It is the description in terms of bundles of trajectories, [i.e.] in distribution functions, that becomes basic [fundamental]; no further reduction to individual trajectories or wave functions can be performed."31

Concept Integration
To summarize the results and consequences from another interesting perspective, thereby enhancing the reader's appreciation: in the Old paradigm, trajectories/wave functions were primary; the trajectory approach added unpredictable randomness when large ensembles of trajectories were collected, thereby also yielding the illusion of irreversibility. Now, in the New paradigm, Distributions are fundamental, and trajectories are unrealistic idealizations; trajectories can still be simulated within distributions (as inherently unstable, and thereby unlikely, states) when all phase points collapse to one ordered N-tuple of the degrees of freedom within the distribution. Irreversibility is no longer derived from randomness; they are only cousins.32

Now we attempt to explain the inherent processes which led to the above developments. On a subunit level, particle interactions, such as charged-particle interaction through adjacent potential fields or classical collisions of neutral particles, all demonstrate short-range interactions which ultimately create divergences from simple trajectories, and this is how the apparent randomness (unpredictability) of any given particle ensues. But in large-ensemble systems that are driven far from equilibrium, a coherence of behavior may occur among particles which are not immediately adjacent, but actually separated by large distances within the system, such that short-range interactions cannot causally explain the observed coherent behavior! This is referred to as long-range coherence. Long-range coherence in large-ensemble systems on a microscopic scale is analogous to (but not necessarily equal to) the microscopic alignment of spin orbits in a ferromagnetic material upon cooling below the Curie temperature: a coherence of microscopic phenomena sums to yield a macroscopically detectable magnetism. Another analogy is the lasing of light in a structured material.
The continuous self-stimulated excitation and re-emission of photons throughout the whole medium at a fundamental optical frequency creates a global resonant condition that produces a macroscopic coherent light source. Intriguingly, this long-range coherence represents the existence of collective decisions being made within the system, based upon local information available to each agent; yet one notes the distinct absence of any form of locally definable central control which could be responsible for propagating a linear sequence of commands from a centralized controller to its wide-spread constituents! Within a complex system that is composed of many subunits, global traits embracing the entire system begin to observably form distinctive patterns with distinctive behavior, which are in no way reducible to the properties of the constituent components. This phenomenon is referred to as emergence.33 We have employed the terms complexity and complex systems throughout this paper. It is now timely to describe complexity through the lens of the developed concepts: Complexity, as related to dynamical processes of physical systems, expresses the extent to which the system constituents engage in organized, structured interactions. Such systems demonstrate mixtures of both order and disorder, and also demonstrate a great capacity for generating emergent phenomena. Emergence demonstrates a completely non-reductionist character. This trait is dependent upon the creation and sustainability of hierarchical structures in which the disorder and randomness existing at the local level are controlled by new structured processes. These structured processes result in higher states of order and long-range coherence. This entire phenomenon is referred to as Self-Organization.34
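A standard toy model of such leaderless long-range coherence is the Kuramoto model of coupled oscillators; it is introduced here as our own illustration (it is not one of the systems analyzed in the cited works). Each oscillator responds only to the instantaneous mean field, yet above a critical coupling a macroscopic coherent fraction emerges, in analogy to the ferromagnetic and lasing examples above:

```python
import numpy as np

def kuramoto_r(K, N=500, steps=4000, dt=0.01, seed=1):
    """Return the final order parameter r = |<exp(i*theta)>| for coupling K.

    Each oscillator follows d(theta_i)/dt = omega_i + K*r*sin(psi - theta_i):
    a purely local response to the mean field, with no central controller.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, N)           # spread of natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)      # random initial phases
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))       # mean field of the ensemble
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

print(f"weak coupling   K=0.2: r = {kuramoto_r(0.2):.2f}")  # incoherent
print(f"strong coupling K=3.0: r = {kuramoto_r(3.0):.2f}")  # coherent
```

Below the critical coupling the order parameter r stays near zero (incoherence); above it, r approaches 1: a global pattern formed entirely from local responses, with no centralized command structure anywhere in the system.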

Self-Organization is synonymous with Prigogine's Dissipative Structure, and is now believed to be the primal process phenomenon in organizing biological systems, as well as in many other realms of study. These stable or stationary system states may last seconds, minutes, hours, days, or years before decaying into thermodynamic disorder. This gradual or eventual decay is the phenomenon of aging we previously touched upon! These self-organized structures are far-from-equilibrium processes, which have resulted from transformations of previous states in their recent past (a relative term) which demonstrated greater entropy and greater homogeneity of state, and whose remote past (again a relative term) could always theoretically be postulated to originate in a fundamental equilibrium state. Something happens to these previous origination states via the introduction of new matter crossing the system boundaries, the injection of specific energies into the system, and/or a change of surrounding environmental conditions, which together create an evolution of transforming processes. The thresholds of these transformation processes have been analyzed through the motive dynamics of what creates instabilities in the system, pulling the system, or localized regions of the system, farther from equilibrium. The instabilities may throw the system into a non-linear region, in which the degrees of freedom are no longer independent of one another and demonstrate a coupling, mathematically representable as a denominator containing the difference between two terms derived from the degrees of freedom; when these terms become too close in value, the result is a very large inverse numerical value, corresponding to physical disruption of a resonant nature. Resonances such as these disrupt the simplicity of a dynamic motion, and correspond to the transfer of large amounts of energy or momentum from one degree of freedom to another.
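The small-denominator mechanism described above can be seen in the most elementary setting: the textbook steady-state amplitude of an undamped driven oscillator, which contains exactly such a difference term in its denominator. The numbers below are purely illustrative:

```python
# Steady-state amplitude of an undamped driven oscillator,
#   x'' + w0^2 * x = F * cos(w * t)  =>  amplitude = F / (w0^2 - w^2),
# as a minimal illustration of the small-denominator (resonance) effect.
w0 = 1.0   # natural frequency of one degree of freedom
F = 1.0    # strength of the coupling/drive

for w in (0.50, 0.90, 0.99, 0.999):
    amplitude = F / (w0**2 - w**2)
    print(f"drive w = {w:5.3f}  response amplitude = {amplitude:10.1f}")
```

As the difference in the denominator shrinks toward zero, the response grows without bound (in this undamped idealization), mirroring how a near-resonant coupling between two degrees of freedom can transfer arbitrarily large amounts of energy or momentum between them.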
This transfer process occurs through collisions and correlations within the system itself, and has no analogue in trajectory theory. These new processes break the symmetry of time, and introduce the evolution toward the macroscopic behavior of system irreversibility.35 In this irreversibility, the transference of energy to localized degrees of freedom within these open systems, along with the possible additional introduction of new matter and the presence of guiding structured forces (usually electromagnetic in nature), provides the components necessary for the formation of dissipative structures through self-organization. Self-organization may be enhanced through the presence of chaotic attractors in the dynamics, which allow the simultaneous presence of regions of order and disorder. Across the boundaries of these regions, constituents of the system may cross back and forth, becoming energized through resonances as they migrate into the realms of stability and order of the attractors, or giving up their broad coherence as they pass into regions of randomness and disorder, perhaps by the same attractors. These new states demonstrate a level of inherent stability as stationary states, and will maintain their structure and structured processes as long as the surrounding system conditions allow the perpetuity of that structure. We now begin to see how all of the previously discussed process events are absolutely necessary ingredients in the recipe of complex irreversible system formation, at states far from equilibrium.

We now touch on the nature of Topology, and discuss its intrinsic value to the studies herein. A simplistic and non-rigorous presentation is made, with the elements being representative, if not complete. Topology is not equivalent to geometry, though geometric structures can be contained within a topology.
Geometry is concerned with relationships involving the size, shape, and relative position of figures and, in some more sophisticated applications, the properties of spaces. Geometries, however, are not invariant under the types of deformation that may occur upon the space within which they are inscribed. For example, one can picture triangles or polygons inscribed onto a 2-dimensional fabric. But if the fabric were stretched, the geometric figures would distort, and the relationships between vertices, lines, and angles would change. The properties of the objects would not be invariant under the deformation. Topology is an area of mathematics concerned with spatial properties that are preserved (invariant) under continuous deformations of regions of spaces, i.e. the previous example of stretching. As a discipline, it

derives from conceptual developments in geometry and set theory, such as space, dimension, and transformations. The principles underlying a topology may be considered more primitive, or better yet more fundamental, than geometric descriptions, which actually represent additional specific local constraints upon a region of the topology. Unlike geometry, topology is a global, and therefore non-local, description of the space under study. A topological description of a space relates how its global regions behave synergistically within itself, with its environment, and with its boundaries. Interestingly, the concept of coherence is inherently a topological concept, in that it corresponds to interactive engagement between constituents not at the same point.36 We are specifically interested in the application of modern Cartan Topology. Let us consider Electric Field (EF), Magnetic Field (MF), and Pulsed Electromagnetic Field (PEMF) phenomena superposed within biological media. The field patterns of electromagnetics have a mathematical description which occupies all points within the region. Likewise, currents formed within the bio-plasma, whether of ionic or singular-charge nature, can be seen as continuous charge plasmas (here used in the electromagnetic sense) and, in the thermodynamic limit,37 represent a contiguous phenomenon that fills the region of space within which they propagate. Such elements are inherently suited to description as a topology, or in this case as a topological space of electromagnetic origin. Likewise, thermodynamics being the study of macroscopic system behavior, without specific regard to the underlying dynamics of individual particles, the contiguous proximity of a system's internal constituents in vast numbers led to the use of distributions for their description.
This thereby opens the door for a direct correspondence to a topological description, which remarkably represents a point of departure: starting with a topological (not statistical) formulation of dynamics can furnish a universal foundation for the Partial Differential Equations of non-equilibrium hydrodynamics and electrodynamics.38 As a surprising adjunct, the application of Cartan Topology to thermodynamic analysis has been very successful as well. Recent advances in the mathematics of Cartan Topology have manifested rigorous representations of thermodynamic states without resort to statistical models, and have also modeled the dynamic transitions from system equilibrium/reversibility to non-equilibrium/irreversibility through the additional analysis tool of Continuous Topological Evolution. Continuous Topological Evolution is the process of employing the mathematics of Cartan's exterior differential forms39 in analyzing the transformation of one topological structure into another, subsequent to the introduction of energetic stimuli which transform the topology in a specific way. Such transformations are therefore of interest in the specific way the transformation process occurs, which thereby determines the resultant new topologies. These newly resultant topologies may become inherently stable, with their own topological properties, existing for as long as the surrounding system conditions sustain them. The energetic stimuli create regions of a topology which are non-homogeneous with respect to the whole system. These regions are referred to as topological defects, contributing to the deformation of the local topology. Under proper conditions, these defects become synonymous with the resonances, in our earlier discussions, associated with coupling the degrees of freedom.
It is through these defects that the invocation of, and evolution into, dissipative structures may occur, manifesting self-organization upon a region of the topology. This is a simple overview of a very complex mathematical representation, yet it retains the essentials of crucial importance. In the following discussion, a precise topological property called the Pfaff (pronounced "faf") dimension is introduced as a descriptive tool that will be employed as a rule of thumb. Though rigorous in its mathematical formalism, assume it presently to be an empirical indicator of the structural complexity of a topological property upon a topological region, assigned an integer value: the greater the integer value, the higher the structural complexity.
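For readers comfortable with exterior calculus, this rule of thumb can be made computational. The Pfaff dimension (or class) of a 1-form A is the number of the highest non-vanishing element of the sequence {A, dA, A^dA, dA^dA}. The sketch below implements a minimal exterior algebra with sympy and evaluates some standard example forms; the example forms are our own illustrative choices, not taken from the cited topology literature:

```python
import sympy as sp

# Coordinates of a 4-dimensional variety (t, x, y, z).
t, x, y, z = coords = sp.symbols('t x y z')

# A p-form is encoded as a dict {sorted index tuple: coefficient},
# e.g. {(1, 2): x} means x * dx^dy (indices point into `coords`).

def parity(seq):
    """Sign of the permutation that sorts `seq` (distinct integers)."""
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def wedge(a, b):
    """Exterior (wedge) product of two forms."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue  # repeated basis 1-form => zero
            merged = ia + ib
            key = tuple(sorted(merged))
            out[key] = out.get(key, 0) + parity(merged) * ca * cb
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

def d(a):
    """Exterior derivative of a form."""
    out = {}
    for idx, coeff in a.items():
        for i, ci in enumerate(coords):
            if i in idx:
                continue
            key = tuple(sorted((i,) + idx))
            out[key] = out.get(key, 0) + parity((i,) + idx) * sp.diff(coeff, ci)
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

def pfaff_dim(A):
    """Index of the top non-vanishing element of {A, dA, A^dA, dA^dA}."""
    dA = d(A)
    seq = [A, dA, wedge(A, dA), wedge(dA, dA)]
    return max(i + 1 for i, f in enumerate(seq) if f)

# Example 1-forms (chosen here purely for illustration):
print(pfaff_dim({(2,): sp.Integer(1)}))           # A = dy          -> 1
print(pfaff_dim({(2,): x}))                       # A = x dy        -> 2
print(pfaff_dim({(3,): sp.Integer(1), (2,): x}))  # A = dz + x dy   -> 3
print(pfaff_dim({(0,): z, (2,): x}))              # A = z dt + x dy -> 4
```

The exact form dy has Pfaff dimension 1; x dy is integrable (dimension 2); the contact form dz + x dy reaches dimension 3; and z dt + x dy, whose dA^dA is a non-zero 4-form, reaches dimension 4.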

To give an idea of this unfolding process, envision a topology at or near equilibrium and assign this system structure a Pfaff dimension of 2. This system may not be at equilibrium; however, it is still reversible. A topological defect is then manifest through external stimuli crossing the boundaries of a local region. This topology is now defined by a Pfaff dimension of 4, and is considered turbulent. As conditions allow, attractors and limit cycles may manifest self-organization, and a sub-region settles into a stationary state of Pfaff dimension 3, yet far from equilibrium and irreversible. Now a dissipative-structure topology, a stationary state (of Pfaff dimension 3), resides upon the turbulent topology which created it (of Pfaff dimension 4), and both topologies may lie within a larger surrounding environment near equilibrium (of Pfaff dimension 2). The dissipative-structure topology (of Pfaff dimension 3) will have a finite lifetime, and will eventually decay back to turbulence, provided the turbulent topology still exists; or it may eventually evolve back to equilibrium when the sustaining conditions of the turbulence are removed. Topologies may be numerous within a region, overlaid, and of different properties. An illustration of this process is given in Figure 7. The Topology of Cartan, through exterior differential forms, also brings to light useful concepts that are recast into a higher formalism or were heretofore absent from previous mathematical descriptions. Two of these important concepts are Spinors and Torsion. We cannot go deeply into these structures here, so it shall suffice to mention only their skeletal outlines. Spinors were originated by Élie Joseph Cartan (1869-1951), and had been employed previously in matrix form by Dirac and Wolfgang Pauli (1900-1958) in defining the famous Dirac and Pauli Matrices found in quantum mechanics. Such Spinors were symmetric matrices.
However their structure is expanded in an anti-symmetric form for topology purposes, and without such Spinors there would be no irreversible turbulence.

FIGURE 7 The Emergence of Non-equilibrium and Equilibrium States by a Continuous Topological Evolution43


Torsion is a concept that derives from topology itself. Torsion has its roots in the hydrodynamic description of velocity and vorticity. Torsion is a higher structured concept that contains more information, and is necessary for the understanding of turbulence and chaotic flow. Interweaving electromagnetism into the present discussion: when applied to electromagnetism, the thermodynamic concepts of stability, irreversibility, and equilibrium begin to take on specific meaning. For example, an atom in its ground state is considered to be in its stable state; excitation implies energy crossing the boundaries, and may on a grand scale represent a topological defect which stabilizes into a coherent, locally structured topology of an excited atom; subsequent emission by an individual excited atom is an irreversible process, and may trigger other local topologies of excited atoms to emit photons as they return to their ground state; such triggered emission is lasing; the lasing effect now demonstrates a greater global topology which, with the influx of power across the boundaries, sustains the global topology of lasing. The practical marriage of all of these process tools is nothing less than exciting. It would be easy to digress into philosophical divergences on the grand potentialities of these approaches. However, simply observing the success and descriptive power of the work presented so far - from the approaches of Prigogine and his contemporaries to modern topology in electromagnetism and thermodynamics - it is quite apparent that new frontiers await. As Dr. Robert Kiehn mentions in his series Adventures in Applied Topology, few physicists understand these techniques, and practically no engineers. The authors of this paper would venture to say even fewer bio-physicists or electromedicine investigators do.
These techniques and discoveries are still in their infancy and require a multi-disciplined approach for their successful implementation. A summarizing quotation of these ideas comes from Cartan topologist Dr. Robert Kiehn:40 "These topologically coherent 'stationary states' far from equilibrium ultimately will decay, but only after a substantial 'lifetime'. Analytic solutions and examples of these processes of Continuous Topological Evolution give credence, and a deeper understanding, to the general theory of self-organized states far from equilibrium, as conjectured by I. Prigogine. Based upon Cartan's theory of exterior differential forms, many of the mysteries of non-equilibrium thermodynamics, irreversible processes, and turbulent flows can be resolved. In addition, the non-equilibrium methods can lead to many new processes and patentable devices and concepts [emphasis added by authors]. Such non-equilibrium systems may be biological systems, political systems, economic systems, as well as mechanical systems. They all have common characteristics of emergence (conception and growth to maturity), and irreversible non-equilibrium interactions with their Open environment. Maturity implies that the system has reached a slowly decaying, almost stationary, Closed state of relatively long lifetime, often followed by a rapid decay to the Equilibrium state, or death. The topological methods give a formal basis to Prigogine's conjectures of self-organization by means of dissipative processes." Applications of Prigogine's work, and of non-linear chaotic approaches to understanding biological processes, particularly brain neurological development, are ongoing. Applied techniques for understanding neural development and coherent function employ stability/instability analysis, topological models, logistic maps, and non-linear dynamics.41 The ever-increasing demand for applying a multi-disciplined approach in anatomical performance research is summarized clearly in a quotation from a major researcher in brain dynamics.

Quoting Dr. Walter Freeman, renowned brain researcher with 50 years of pioneering research behind him:42 "It is a truism to say that no one understands how brains work. The solution to the problem of brain function will be stranger than anyone now thinks or can imagine. Success in the future of brain research will hinge on the introduction of new techniques and concepts from other fields, principally chemistry, physics and mathematics. We neuroscientists cannot and should not expect mathematicians, physicists and chemists to solve the problem for us. Their models will not apply directly to neuropil [the grey matter of the brain]. Instead, their solutions to their problems can provide us with the analogies, metaphors, and 'what-ifs' that can open for us new avenues of thought."

References
1. [Authors' Note]: A dissipative structure can best be defined as a sustainable pattern within a complex medium that can be formed and stably maintained as long as the system constraints supporting that formation remain present. In essence, large-ensemble systems are sometimes able to deviate from thermodynamic disorder, and to transform some of the energy from the surrounding environmental reservoir into an ordered behavioral structure or pattern. Such structures have the properties of attractors in Chaos theory. (See G. Nicolis & I. Prigogine, Exploring Complexity, W. H. Freeman & Co., New York, 1989, for a broad presentation on dissipative systems and their properties.) As will be stated later, new research indicates that these self-organizing structures, originally labeled by Prigogine, may actually be non-dissipative as far as energy is concerned, as definable within topology.
2. B. J. Hunt, The Maxwellians, Cornell University Press. An engrossing exposition of the development of Maxwell's theories in the two decades following his untimely death. The primary physicists who could be considered Maxwellians were George F. Fitzgerald in Ireland, Oliver Lodge of England, and Oliver Heaviside in London. Intermittent and ancillary support over the decades also came from Lord Kelvin (William Thomson), J. J. Thomson, and Joseph Larmor in England, Heinrich Hertz in Germany, and others, though these last mentioned were not completely of the Maxwellian mind-set.
3. International Electrotechnical Commission, web site posting of letters of Giovanni Giorgi to Oliver Heaviside; letters dated 25th Feb. 1901, 9th Feb. 1902, 11th March 1902, 29th March 1902, and two undated letters. http://iec.ch/about/history/documents/documents_giovanni.htm.
4. See Stopes-Roe, H. V., "Essential Failure of Giorgi systems of Electromagnetism, and a Basic Error by Sommerfeld," Nature, Vol. 224, pp. 579-581, 1969. "There are also systems which have practical significance. There are also systems of equations, which are not of the Giorgi form, but which are appropriate for use with SI units, namely the SI-Gaussian and SI-Electric. It is the purpose of this communication to show that the Giorgi conditions lead essentially to confusion, and in practice to error." Stopes-Roe's reference is to Sommerfeld, A., Lectures on Theoretical Physics: III Electrodynamics, Academic Press, N.Y., 1952.
5. The great American physicist and thermodynamicist Josiah Gibbs (1839-1903), who helped create and establish the new Vector Analysis of the time (1890s), was a sharp debater against the proponents of quaternions. See The Scientific Papers of J. Willard Gibbs, Vol. II, Longmans, Green, & Co., 1906, Chapters V thru X, pp. 118-181.
6. See International Electrotechnical Commission, 1901-2001 Celebrating the Centenary of SI - Giovanni Giorgi's Contribution and the Role of the IEC, 2001, October 1.
7. [Authors' note] For those interested in a general overview of basic discrepancies, see T. W. Barrett, Topological Foundations of Electromagnetism, World Scientific Publishing Co., 2008, Chapter 1: "Electromagnetic Phenomena Not Explained by Maxwell's Equations"; for a straightforward, yet illuminating list with additional references, see T. E. Bearden, Lt. Col., Retired, "Flaws in Classical EM Theory," http://www.cheniere.org/misc/flaws_in_classical_em_theory.htm, 1999, Cheniere website; for specific failures in common analysis, see Eberly-Moon & Spencer, "A New Electrodynamics," Journal of the Franklin Institute, May 1954, Vol. 257, Issue 5, pp. 369-382.
8. T. W. Barrett, Topological Foundations of Electromagnetism, World Scientific Publishing Co., 2008, p. 10.
9. Ibid., p. 9.
10. J. B. Kuipers, Quaternions and Rotation Sequences, Princeton University Press, 1999, p. 41.
11. T. W. Barrett, Topological Foundations of Electromagnetism, World Scientific Publishing Co., 2008, pp. 2-3.
12. D. J. P. Morris, D. A. Tennant, S. A. Grigera, et al., "Dirac Strings and Magnetic Monopoles in the Spin Ice Dy2Ti2O7," Science, 16 October 2009, 326: 411-414; published online 3 September 2009, http://www.sciencedaily.com/releases/2009/09/090903163725.htm; and S. T. Bramwell, S. R. Giblin, S. Calder, R. Aldus, D. Prabhakaran & T. Fennell, "Measurement of the charge and current of magnetic monopoles in spin ice," Nature, 2009, 461 (7266): 956; published online Oct. 15, 2009, www.sciencedaily.com/releases/2009/10/091015085916.htm. The existence of magnetic monopoles as instantons, or emergent states of matter, appears in these preliminary experiments, i.e. topological constructs that exist within the material complexity, but not outside of it.


13 [Authors note] The development of the steam engine and combustion engines significantly preceded the formal development of a thermodynamic science that could theoretically reconcile and predict the behavior and successful design of such machines. A similar precedent was set by the development of the light bulb: it was developed long before the atomic theory of atoms was fully developed, and even before the discovery of electrons as the basic elemental medium of electric currents !. 14 Clausius, R. (1865), "ber die Wrmeleitung gasfrmiger Krper", Annalen der Physik 125: 353400 Clausius, R. (1865). The Mechanical Theory of Heat with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII, as referenced in D. Lindley, Boltzmanns Atom, The Free Press, 2001, p73. [Authors Note]: Though Clausisus is attributed with the first mathematical treatment of entropy and the author of its name, 19th century physicists William Thomson and William Rankin (to name only a few) were well aware that another principle had to be present to describe the behavior of reversible/non-reversible systems. See Reference 20. 15 D. Lindley, Boltzmanns Atom, The Free Press, 2001, p73 16 Ilya Prigogine, From Being To Becoming, W. H. Freeman & Co., 1980, p77-78 17 Johan Josef Loschmidt (1821-1895), an Austrian scientist who performed groundbreaking work in chemistry, physics. 18 D. Lindley, Boltzmanns Atom, p84 19 Ibid 20 Example: C. Cohen-Tanoudji, B. Diu, F. Lalo, Quantum Mechanics, Volume I, p150, p222. Any good reference on quantum mechanics will do as a superficial treatment. Total understanding does not come so easily. The quantum complimentarity was actually first presented in a paper by Werner Heisenberg (1901 - 1976) as a strange empirical relationship between position and momentum, without any fundamental understanding. 
Its deeper roots were soon discovered by P.A.M. Dirac (1902-1984) to be a derivative of the classical commutation relations called Poisson Brackets. Candidates for complementarity of parameters in QM are the canonically conjugate variables of classical theory, position x and momentum p {see R. C. Tolman, Principles of Statistical Mechanics, p27, p193}. Complementary parameters' inability to be simultaneously measured with precision is expressed in the non-zero result of their commutator relation [x_m, p_k] = iℏ when m = k {where ℏ is Planck's constant and i = √(-1)}. The non-zero result is also indicative of an irreversible process!
21 Richard C. Tolman, The Principles of Statistical Mechanics, The International Series of Monographs in Physics, Oxford University Press, Amen House, London E.C.4, 1959 reprint of 1938 First Edition, see Introduction p.8. Quoting: "It is first shown that the principle of dynamic reversibility holds also in the quantum mechanics in appropriate form, indicating that quantum theory supplies no new kind of element for understanding the actual irreversibility in the macroscopic behavior of physical systems."
22 See Ilya Prigogine, The End of Certainty, The Free Press, NY, 1997, pp61-62. Prigogine discusses the hostility he encountered in 1946 when he organized the first Conference on Statistical Mechanics and Thermodynamics, under the auspices of the International Union for Pure and Applied Physics (IUPAP). He had presented his own lecture on irreversible thermodynamics, and as Prigogine stated, "...the greatest expert in the field of thermodynamics made the following comment: 'I am astonished that this young man is so interested in non-equilibrium physics. Irreversible processes are transient. Why not wait and study equilibrium as everyone else does?' I was so amazed at this response that I did not have the presence of mind to answer: 'But we are all transient. Is it not natural to be interested in our common human condition?'"
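The non-commutation discussed in endnote 20 can be made concrete with a minimal numerical sketch, under the illustrative assumption of a finite matrix truncation of the harmonic-oscillator ladder operators (units chosen so that ℏ = 1); this is not taken from the paper, but reproduces the canonical relation [x, p] = iℏ away from the truncation edge:

```python
import numpy as np

# Finite N-dimensional truncation of the quantum harmonic oscillator.
N = 50
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)            # position operator
p = 1j * (a.conj().T - a) / np.sqrt(2)       # momentum operator

comm = x @ p - p @ x                          # the commutator [x, p]

# Away from the truncation edge, the diagonal of [x, p] equals i (i.e. i*hbar
# with hbar = 1), the non-zero result the endnote associates with
# irreversibility; only the last diagonal entry is an artifact of truncation.
print(comm[0, 0])
print(comm[1, 1])
```

Both printed values are (numerically) the pure imaginary unit i, confirming that x and p cannot commute.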
23 From Merriam-Webster's New Collegiate Dictionary, 1979, G. & C. Merriam Company, Springfield, Massachusetts.
24 Léon Rosenfeld's quotation: "Every theory is based on physical concepts expressed through mathematical idealizations. They are introduced to give an adequate representation of physical phenomena. No physical concept is sufficiently defined without the knowledge of the domain of its validity." Quoted by Prigogine in The End of Certainty, The Free Press, NY, 1997, p29. From L. Rosenfeld, "Unphilosophical Considerations on Causality in Physics", in Selected Papers of Léon Rosenfeld, ed. R. S. Cohen & J. J. Stachel, Boston Studies in the Philosophy of Science, Vol 21, Dordrecht: Reidel, 1979, pp666-690.


25 Denoting this distribution function by the Greek letter rho, ρ(q,p,t), its meaning becomes the probability of finding the System in the nth State, denoted by (q_n, p_n, t_n). The distribution is a function for the entire ensemble of particles in the System. If we had 1 Mole of like molecules in the System (6.023 x 10^23 particles), and we wished the probability of finding the System in a region of phase space that represented an Area of the phase space, then we would mathematically integrate the density distribution over the area: Probability = ∫∫ ρ(q,p,t) dq dp, to obtain the probability of finding the System in that region of phase space and time. If, say, one obtained a Probability = 0.0002 from this integral, then 1 Mole x 0.0002 would also indicate the fraction of molecules which would be expected to be found in that region.
26 See statements attributed to Poincaré in the book by Ilya Prigogine, The End of Certainty, The Free Press, New York, 1997, p35, as referenced to H. Poincaré, The Value of Science, Dover Publishing, New York, 1958.
27 Ilya Prigogine, From Being To Becoming, W. H. Freeman & Co., 1980, pp184-185. The Star-Unitary transformation lies outside Hilbert space.
28 For mathematical curiosity's sake, see Ilya Prigogine, From Being To Becoming, pp181-187, in the section "A New Transformation Theory".
29 Ilya Prigogine, "The Arrow of Time", Inaugural Lecture of the Workshop on The Chaotic Universe, delivered at Pescara, Italy, in honorarium of citizenship bestowed upon Prigogine. Prigogine recollected the refusal by some legends within Physics to acknowledge that there was anything more to understand about Time and its role in Physics: "On the other hand, when I asked physicists when I was young [regarding the meaning of time in physics], I asked Pauli, I asked Bohr, they smiled and said, 'The problem of time has been solved by Newton, with some changes introduced by Einstein. There is no point for a young man to enter into the study of time.'"
30 Ilya Prigogine, From Being To Becoming, W. H. Freeman & Co., 1980, pp188-191; pp206-210. Both of these new operators, the Entropy and Time Operators, demonstrate quantum complementarity with the Liouville Operator, which is the operator of distribution function dynamics; it replaces the role of the Hamiltonian in wave function dynamics. Germane to the present discussion is the Liouville operator L in commutation with either the entropy operator M { i(LM - ML) = D, where D is the time-rate-of-change of M } or the microscopic time operator T { i(LT - TL) = 1, where 1 is the identity operator and again i = √(-1) }.
In each case the M and T operators are complementary with the Liouville operator L, and in these examples demonstrate the presence of an irreversible process at work when the operators do not commute.
31 Ibid, p207.
32 Ibid, pp176-177.
33 G. Nicolis, C. Nicolis, Foundations of Complex Systems: Non-Linear Dynamics, Statistical Physics, Information & Prediction, World Scientific Publishing, 2007, Chapter 1, Section 1.2.
34 Ibid.
35 See Ilya Prigogine, The End of Certainty, The Free Press, NY, 1997, pp123-124.
36 See Dr. R. M. Kiehn, Non-Equilibrium Systems and Irreversible Processes: Adventures in Applied Topology, Vol 1, Non-Equilibrium Thermodynamics, Lulu Enterprises Inc., p406 for discussion.
37 See Ilya Prigogine, The End of Certainty, Glossary, p205; the Thermodynamic Limit is defined as the procedure of considering the number N of particles and the size V of a system becoming arbitrarily large, while the concentration N/V remains finite or constant.
38 See Dr. R. M. Kiehn, Non-Equilibrium Systems and Irreversible Processes: Adventures in Applied Topology, Vol 4, Plasmas and Non-Equilibrium Electrodynamics, Lulu Enterprises Inc., p22 for a description of the properties of a thermodynamic system and the correlations to being a topology.
39 Paraphrasing (accurately) the language of Dr. Robert Kiehn: "Non-equilibrium Thermodynamics and Continuous Topological Evolution, which are based upon Cartan's theory of exterior differential forms... [are methods that assume] physical systems can be encoded in terms of exterior differential systems, and that irreversible processes can be studied in terms of contravariant vector direction fields, V." Dr. R. M. Kiehn, Non-Equilibrium Systems and Irreversible Processes: Adventures in Applied Topology, Vol 4, Plasmas and Non-Equilibrium Electrodynamics, Lulu Enterprises Inc., p12.
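The phase-space integral of endnote 25 can be sketched numerically. The density ρ below is an illustrative assumption (a normalized Gaussian on a single (q, p) plane, not the paper's model); the grid sum approximates Probability = ∫∫ ρ(q,p,t) dq dp over a chosen region, and multiplying by the mole count gives the expected fraction of molecules there:

```python
import numpy as np

# Illustrative phase-space density rho(q, p): a unit Gaussian, normalized
# so that its integral over all of phase space is ~1.
q = np.linspace(-5, 5, 1001)
p = np.linspace(-5, 5, 1001)
Q, P = np.meshgrid(q, p)
rho = np.exp(-(Q**2 + P**2) / 2) / (2 * np.pi)

dq = q[1] - q[0]
dp = p[1] - p[0]

# Integrate rho over an arbitrarily chosen region of phase space (q > 1, p > 1)
# by summing grid cells: Probability = integral of rho dq dp over the region.
region = (Q > 1.0) & (P > 1.0)
prob = np.sum(rho[region]) * dq * dp

# As in endnote 25: probability times 1 Mole gives the expected number of
# molecules found in that region.
N_mole = 6.023e23
print(prob, prob * N_mole)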


40 Dr. R. M. Kiehn, "Prigogine's Thermodynamic Emergence and Continuous Topological Evolution", Professor Emeritus, University of Houston, white paper downloadable from the web site www.cartan.pair.com (date of origin unknown).
41 The work of Walter Freeman in the area of neurological study through EEG. See Mass Action In The Nervous System: Examination of the Neurophysiological Basis of Adaptive Behavior through the EEG, Academic Press, NY, 1st printing 1975, 2nd printing 2004. A comprehensive treatment spanning decades of personal research and full of references for ancillary research. Likewise, see also Dr. Walter Freeman, Mesoscopic Brain Dynamics, Springer-Verlag London Limited, 2000.
42 Dr. Walter Freeman, Keynote Lecture, Conference on "Brain Network Dynamics", 26-27 January 2007, University of California at Berkeley.
43 A Symplectic domain, or Symplectic Manifold, is a topological space which is differentiable and structured with a closed non-degenerate 2-form (a rank 2 skew-symmetric covariant tensor field). The reader's indulgence is again requested: full comprehension is not expected, but the author's desire is to be true to the reproduction and to stimulate the reader's sense of a greater extant set of tools.
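The two defining properties of the 2-form in endnote 43 can be checked in the simplest case, a minimal sketch assuming the standard symplectic form on R^(2n) represented by the constant block matrix J (closedness, dω = 0, is automatic for constant coefficients, so only skew-symmetry and non-degeneracy are verified):

```python
import numpy as np

# Standard symplectic matrix J on R^(2n): the coordinate representation of
# the 2-form omega = sum_i dq_i ^ dp_i.
n = 3
Z = np.zeros((n, n))
I = np.eye(n)
J = np.block([[Z, I], [-I, Z]])

# Skew-symmetric (rank-2 antisymmetric covariant tensor): J = -J^T.
print(np.allclose(J, -J.T))

# Non-degenerate: det J is non-zero (in fact 1), so omega pairs every
# non-zero vector with some other vector.
print(np.linalg.det(J))
```

This is only the flat local model; on a general symplectic manifold these properties hold chart by chart.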
