
Compilation of degeneracy publications from 2009 to 2011 by James M. Whitacre

1. Whitacre J.M., Atamas S. The Diversity Paradox: Resolving Darwin's Unspoken Dilemma. Biology Direct (in press).
2. Whitacre J.M., Rohlfshagen P., Bender A., and Yao X. Evolutionary Mechanics: new engineering principles for the emergence of flexibility in a dynamic and uncertain world. Journal of Natural Computing, Special Issue on Emergent Engineering (in press).
3. Whitacre J.M. and Bender A. Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems. Theoretical Biology and Medical Modelling, vol. 7(20), 2010. http://www.tbiomed.com/content/7/1/20
4. Whitacre J.M. Degeneracy: a link between evolvability, robustness and complexity in biological systems. Theoretical Biology and Medical Modelling, vol. 7(6), 2010. http://www.tbiomed.com/content/7/1/6
5. Whitacre J.M. and Bender A. Degeneracy: a design principle for robustness and evolvability. Journal of Theoretical Biology, 263(1): 143-153, 2010. http://arxiv.org/ftp/arxiv/papers/0907/0907.0510.pdf
6. Whitacre J.M. Evolution-Inspired Approaches for Engineering Emergent Robustness in an Uncertain Dynamic World. In ALIFE XII, 2010, pp. 559-561 (Odense, Denmark, Aug 19-23).
7. Whitacre J.M. Genetic and Environment-Induced Innovation: Complementary Pathways to Adaptive Change that are Facilitated by Degeneracy in Multi-Agent Systems. In ALIFE XII, 2010, pp. 431-433 (Odense, Denmark, Aug 19-23).
8. Whitacre J.M. and Bender A. Degenerate neutrality creates evolvable fitness landscapes. In WorldComp 2009 (Las Vegas, Nevada, USA, July 13-16).
9. Whitacre J.M., Rohlfshagen P., Yao X. and Bender A. The Role of Degenerate Robustness in the Evolvability of Multi-agent Systems in Dynamic Environments. In R. Schaefer et al. (Eds.): PPSN XI, Part I, LNCS 6238, pp. 284-293, 2010 (Krakow, Poland, Sept 11-15).
10. Frei R. and Whitacre J.M. Degeneracy and Networked Buffering: principles for supporting emergent evolvability in agile manufacturing systems. Journal of Natural Computing, Special Issue on Emergent Engineering (in press).
11. Tian T., Olson S., Whitacre J.M., and Harding A. The Origins of Cancer Robustness and Evolvability. Integrative Biology, 3: 17-30, 2011. http://pubs.rsc.org/en/content/articlelanding/2011/ib/c0ib00046a

The Diversity Paradox: How Nature Resolves an Evolutionary Dilemma


James M. Whitacre* and Sergei P. Atamas

From the *CERCIA Computational Intelligence Lab, University of Birmingham, U.K., and Departments of Medicine and Microbiology & Immunology, University of Maryland School of Medicine, and Baltimore VA Medical Center, Baltimore, MD, U.S.A.

Corresponding author

James M. Whitacre, PhD, School of Computer Science, University of Birmingham, Edgbaston, B15 2TT, United Kingdom; E-mail: jwhitacre79@gmail.com
Sergei P. Atamas, MD, PhD, University of Maryland School of Medicine, 10 South Pine St., MSTF 834, Baltimore, MD 21201, U.S.A.; E-mail: satamas@umaryland.edu

1. Abstract
Background: Adaptation, through selection, to changing environments is a hallmark of biological systems. Diversity in traits is necessary for adaptation and can influence the survival of a population faced with environmental novelty. In habitats that remain stable over many generations, effective stabilizing selection removes trait diversity from populations and should limit adaptability to environmental change. Paradoxically, field studies have documented numerous populations that, after long periods of stabilizing selection and evolutionary stasis, can rapidly evolve within new environments.

Presentation of the hypothesis: Recent studies have indicated that cryptic genetic variation (CGV) may facilitate rapid evolution in response to environmental novelty. Here we propose that CGV also resolves the diversity paradox by allowing populations under stabilizing selection to gradually accumulate hidden genetic diversity that reveals itself through trait differences as environments change. CGV is an instance of a broader phenomenon, degeneracy, known to support genetic and non-genetic adaptation at many levels of biological organization. By integrating these paradigms, we propose that degeneracy fundamentally underpins evolvability at multiple levels of biological organization.

Implications of the hypothesis: The conditions that facilitate environment-induced adaptation are of great importance to the preservation of species and the eradication of evolvable pathogens. In conservation biology, for instance, CGV revealed under predicted stress conditions may provide a more adaptively significant measure of intraspecific biodiversity for conservation efforts. Similar exploration-exploitation conflicts arise throughout sociotechnical systems, and we consider how degeneracy principles can be applied in these contexts to improve flexibility and resilience.
Testing the hypothesis: Rather than being in conflict with adaptability, environmental stasis, under this hypothesis, supports CGV accumulation and actually enables adaptation in novel environments. Moreover, degeneracy theory describes a more general resolution of the conflicting forces of short-term exploitation and longer-term exploration. We discuss molecular-based experimental systems, simulation studies, principles from population genetics, and field experiments for exploring the validity of these claims.

2. Background
Various factors can contribute to the maintenance of heritable trait differences in a population, including balancing selection (e.g., frequency-dependent selection), gene flow between populations, assortative mating, and mutation-selection balance. Despite the contributions from these factors, most populations remain relatively homogeneous in most traits, i.e., the variance in quantitative traits is small compared to the change in mean trait values that can occur following periods of rapid adaptation. In habitats that remain stable over many generations, stabilizing selection drives many populations to converge towards the most adaptive trait values. Such phenotypic homogeneity should limit the ability of these populations to successfully adapt to novel environmental stresses. While many habitats appear stable, all habitats eventually change.1 There is an inherent conflict between the stable habitat conditions that drive a population to converge to the most adaptive traits on the one hand, and the need to maintain diversity and display heritable trait differences when faced with environmental novelty on the other. Stated more abstractly, there is an evolutionary conflict between the immediate need to exploit current conditions and the longer-term need to diversify (bet-hedging) in order to adapt to future unexpected events. Similar exploration-exploitation conflicts arise within different systems of social and economic relevance that are subjected to modification and selection in a dynamic environment. For instance, analogous challenges are studied within operations research [1], strategic planning [2], systems engineering [2], and peer review [3]. Exploration-exploitation conflicts are perhaps best illustrated within population-based dynamic optimization research [4].
Within this sub-field of operations research, evolutionary algorithms are employed in which solution parameters to an optimization problem represent a genotype and the corresponding solution performance on an objective function represents fitness. Using a population of these solutions, the more fit solutions are preferentially mated (recombination of solution vectors) and mutated (perturbation of a solution vector within the solution space) to generate new offspring solutions that are selectively bred in the next generation [5]. With a static objective function, populations consistently converge over many generations to eventually display low variance in population fitness and low genetic diversity [1]. However, the more the population converges, the slower it adapts to objective function changes, thereby limiting algorithm performance on dynamic optimization problems. This conflict between exploiting current problem conditions and maintaining diversity in preparation for future problem changes has been addressed in this field using numerous tools that subvert fitness-biased selection or otherwise encourage trait diversity to be maintained. Enforced diversity maintenance, however, limits the speed and extent to which the simulated population can adapt to the current problem definition, thus revealing a fundamental tradeoff between short-term and long-term algorithm performance. Understanding the mechanisms by which natural populations are able to resolve this diversity paradox could provide insights for reconciling similar conflicts in operations research [1]. Moreover, as we elaborate later in the article, these issues are widely relevant to analogous challenges that arise in many disciplines, including ecological conservation efforts, the eradication of evolvable pathogens, and potentially even the design of flexible engineered systems and adaptable sociotechnical systems.
1 Factors such as migration and habitat tracking reduce the frequency, but not the inevitability, of experiencing this change.
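The convergence-adaptability tradeoff described above can be made concrete with a toy evolutionary algorithm. The following is an illustrative sketch only (a one-dimensional genotype, truncation selection, and an assumed Gaussian mutation step size), not the algorithms studied in [1, 4]: after converging on a static objective, the population has lost nearly all diversity and its best fitness on a shifted objective ("environmental change") is poor.

```python
import random

def fitness(x, target):
    # Higher is better: negative squared distance to the optimum.
    return -(x - target) ** 2

def evolve(pop, target, generations, sigma, rng):
    # Minimal truncation-selection EA on a 1-D real-valued genotype.
    for _ in range(generations):
        pop = sorted(pop, key=lambda x: fitness(x, target), reverse=True)
        parents = pop[: len(pop) // 2]                         # exploit: keep the best half
        children = [p + rng.gauss(0, sigma) for p in parents]  # explore: small mutations
        pop = parents + children
    return pop

def diversity(pop):
    # Variance of the population as a crude diversity measure.
    mean = sum(pop) / len(pop)
    return sum((x - mean) ** 2 for x in pop) / len(pop)

rng = random.Random(1)
pop = [rng.uniform(-10, 10) for _ in range(40)]

# Converge on a static objective (optimum at 0).
pop = evolve(pop, target=0.0, generations=60, sigma=0.05, rng=rng)
div_converged = diversity(pop)

# Now shift the optimum to 5.0 and ask how fit the converged population is.
best_after_change = max(fitness(x, 5.0) for x in pop)
print(div_converged, best_after_change)
```

With these assumed settings, the converged population's variance is tiny and its best fitness on the shifted objective is far from optimal, which is precisely why the diversity-maintenance tools mentioned above are introduced, at the cost of slower short-term convergence.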

We propose a hypothesis that cryptic genetic variation is a case of a broader adaptive strategy known as degeneracy, and that it plays a key role in resolving the conflict between the opposing needs to exploit stable conditions and become phenotypically homogeneous while remaining prepared for rapid heritable phenotypic diversification should environmental changes occur.

3. Presentation of the hypothesis


Cryptic Genetic Variation: Natural populations appear to resolve the diversity paradox through a phenomenon known as cryptic genetic variation (CGV). In a stable environment, CGV is hidden, or cryptic, due to the ability of organisms' phenotypes to remain unchanged despite genetic mutation [16]. Such mutational robustness (the stability of the phenotype to mutations) was originally predicted to impede evolution [6, 7] because it lowers the number of distinct heritable phenotypes that are mutationally accessible from a single genotype and it reduces selective differences within a genetically diverse population [7]. There is still confusion about the role of CGV in evolution, due to long-standing difficulties in understanding relationships between robustness and adaptation of traits. Only in the last decade have arguments been put forth to explain how evolution can be supported by mutational [7-9] and environmental [10] forms of robustness. Mutational robustness is believed to support evolution in two ways: (i) in a stable environment it establishes fitness-neutral regions in fitness landscapes from which large numbers of distinct heritable phenotypes can be sampled (via genetic mutations that lead to genotypes that are not members of the neutral set) [7, 9] and (ii) it allows cryptic genetic differences to accumulate in a population, with subsequent trait differences revealed in an environment-dependent manner [10]. Under stabilizing selection, individuals become phenotypically similar, yet can accumulate (through selectively neutral mutations) genetic differences that have the potential to be revealed as trait diversity should the environment change and directional selection emerge. CGV thus resolves the diversity paradox by providing the heritable trait diversity that is necessary for adaptation under new stressful environments, while bypassing the negative selection that limits phenotypic variability under stable conditions.
Although environment-induced CGV has long been implicated as a pathway for adaptation [11-14], its role in resolving tensions between stabilizing and directional selection is rarely discussed. Moreover, by enabling the accumulation of CGV, stable environments may actually support a population's ability to rapidly adapt in new environments. For evolution in a meta-stable environment, CGV is not simply an option for how heritable adaptive change might occur but instead is an essential precondition for evolvability and the persistence of life in a changing environment. While direct experimental evidence bearing on these ideas remains scarce, a recent study of ribozyme evolution has demonstrated for the first time that CGV can support rapid evolution [13].

Degeneracy: CGV is an example of a more general phenomenon known as degeneracy. Degenerate, as opposed to diverse or redundant, ensembles appear functionally redundant in certain environmental contexts but functionally diverse in others [15, 16]. Such context-dependent similarity in functions/traits among diverse units of an ensemble, and, reciprocally, context-dependent dissimilarity of redundant units, is common in biology. It is well documented at the molecular and cellular levels in gene regulation, proteins of every functional class (e.g. enzymatic, structural, or regulatory) [17], and protein complex assemblies [18]; also in ontogenesis (see page 14 in [19]), the nervous system [20], metabolic pathways [21], and cell signaling [22]. Defined more precisely, degeneracy is a property of ensembles of components/units/individuals. Degeneracy can be defined relationally to diversity and redundancy, by comparing the behavior of units under various environmental contexts. For degeneracy to exist, the units being studied must exhibit functional versatility which, depending on the context, may manifest as enzymatic substrate ambiguity, multiple ligand-receptor cross-reactivity, protein moonlighting, or multiple use of organs, e.g., use of fins for swimming and crawling. Functional versatility implies that the units can change their behavior in a manner that is revealed by changes in the local environment. Then, if degeneracy exists between two functionally versatile units, there will be contexts in which the two units display behaviors that appear to be functionally the same and other contexts in which the two appear to be functionally distinct [23]. In other words, individual units must be able to manifest diverse, yet functionally overlapping, behavior, some of which may become beneficial in the right environment. As we highlight through selected examples, functional versatility fundamentally underpins degeneracy and evolvability in complex biological systems.

CGV is a case of adaptive degeneracy: Degeneracy is also observed in natural populations or demes. Physiological, immunological, cognitive, behavioral, and even morphological traits demonstrate performance versatility over numerous environmental backgrounds. Furthermore, these traits can appear very similar across a population that is placed in its native environment, and yet selectively relevant differences in these traits can be revealed when the population is presented with novel stresses. The heritable component of such cryptic trait differences is aptly referred to as cryptic genetic variation (CGV).
Gibson defines CGV as "standing genetic variation that does not contribute to the normal range of phenotypes observed in a population, but that is available to modify a phenotype that arises after environmental change" [24]. Thus, by the definitions of CGV and degeneracy, CGV represents a heritable form of phenotypic degeneracy in populations and, as proposed here, exemplifies a more general phenomenon for resolving exploitation-exploration conflicts in natural, social and technological systems. It is important to note that CGV describes a relationship between genotypic and phenotypic variation and is not solely a genetic property such as genetic diversity. In populations with CGV, the interactions between individual genotypes and the environment (GxE interactions) are such that organisms can appear phenotypically similar in some environments but phenotypically distinct in others. Importantly, GxE and epistatic interactions provide statistical descriptions of the relationship between genotype, phenotype and environment. In contrast, degeneracy describes an innate versatility in the characters that make up a phenotype and whose selective value can be revealed in the right environment. In this way, degeneracy and CGV provide bottom-up and top-down (respectively) descriptions of the same biological phenomenon.

Degeneracy and evolution: Degeneracy facilitates adaptation at many levels of biological organization [16, 17, 20, 25]. In some cases, degeneracy facilitates adaptive robustness through the provision of functional redundancy [22, 26-28]. In many other cases, degeneracy provides functional diversity and enables adaptive phenotypic change, as seen with CGV in natural populations. Another illuminating example is found in the adaptive immune response of T cells [29, 30].
Naïve T cells display similar (inactive) phenotypes under normal conditions (devoid of antigens) but reveal large phenotypic differences when presented with novel (antigen) environments that drive rapid clonal expansion. Because environment-dependent trait differences are heritable through genetic differences in the alpha/beta chain segments of the T cell receptor (TCR), this cell population is thereby poised to rapidly evolve using cryptic genetic variation. It would seem then that CGV adequately explains this adaptive response and there is no need to confuse the discussion by mentioning degeneracy. However, proper functioning of the adaptive immune system is vitally dependent on binding ambiguity between TCRs and MHC-antigen complexes found on the surfaces of professional antigen-presenting cells (APCs). This promiscuity leaves TCRs with sufficient affinity to drive T cell activation in response to numerous distinct antigens. Without this promiscuity, impossibly large TCR repertoires (and T cell populations) would be needed in order to effectively cover the antigenic space [31]. On the other hand, partial overlap in TCR affinity allows the T cell repertoire to invoke very similar responses under conditions devoid of antigens. In particular, these similarities facilitate non-trivial cell-cell adhesion events between T cells and APCs that enable scanning for cell-surface antigens while remaining inactive under normal conditions. In short, it is the conditional similarities and differences (degeneracy) in the TCR repertoire that enable rapid adaptation within a changing antigenic environment. More generally, degeneracy facilitates rapid adaptation because some of the elements in the repertoire are already pre-adapted to some degree and can be elaborated upon over many generations to enhance functionality and fitness. If elements in the repertoire exhibited extreme functional specificity that relied on precise environmental conditions, then adaptation would be highly unlikely and could only occur by chance.
These basic relationships between versatility and exaptation (cf. [32]) are not limited to the adaptive immune response but are fundamental attributes of adaptive responses throughout biological [20] and technological systems [33]. We argue that degeneracy is thus an important evolutionary principle and should be recognized as an integral part of the Darwinian principles of heritable variation and selection [16, 20, 34].

4. Testing the hypothesis


Instead of being in conflict, our hypothesis predicts that evolution in stable environments enables the accumulation of cryptic genetic variants and supports rapid evolution in populations exposed to novel environments. This hypothesis would be proven false if, for instance, the adaptive diversification of populations was shown to be initiated solely by the discovery and fixation of novel alleles, or if populations presented with stable habitat conditions were shown to be less evolvable than populations under fluctuating selection. Although several confounding factors make it challenging to determine whether CGV helps to resolve conflicts between stabilizing and directional selection, experiments are uncovering support for CGV-driven evolution [13, 14], with some indications that environment-exposed CGV is fixed in populations at least as often as adaptations initiated by novel alleles [35]. The most straightforward evidence to support our hypothesis can be found in [14]. In that study, the authors selected a species that underwent rapid speciation when local populations were introduced to a new habitat. These adaptations were associated with readily observable trait changes that could be directly linked to survival and reproductive success within the new habitat. In their experiments, individuals from the original species were bred under original and new habitat conditions. When raised in the original habitat, offspring developed with similar traits, while in the new habitat trait variations were observed that corresponded with those that are beneficial to the new species. Because both populations were bred in artificially controlled homogeneous environments, the observed trait variations could readily be attributed to an environment-exposed release of CGV. In other words, a species in evolutionary stasis with few observable trait differences was shown to undergo rapid evolution using CGV.

There are also experimental and analytical approaches that can test for evolution from standing genetic variation and provide tools that are relevant to the testing of this hypothesis (reviewed in [36]). For instance, alleles fixed from standing genetic variation should have a different signature of selection compared to de novo mutations. In particular, since multiple copies of a selectively neutral allele can grow in a population through genetic drift, recombination will reduce the strength of genetic hitchhiking (lower linkage disequilibrium) at all but the nearest loci, creating a narrower valley of low polymorphism surrounding a CGV locus in comparison to selective sweeps on de novo mutations (see [36]). Confounding factors, such as population growth and non-random mating, can, however, provide alternative explanations for these selection signatures [36]. An alternative approach is to infer CGV-driven adaptation by looking for beneficial alleles in a new environment that are present as standing variation in the ancestral population. The phylogenetic history of alleles provides similar evidence for CGV-driven adaptation events [36], and other approaches may also be possible [37]. Although these approaches may determine whether standing genetic variation was a primary contributor to adaptation, they do not address the hiding and release of the corresponding traits. To demonstrate a specific role for CGV, time-intensive experiments in which environmental manipulations reveal previously hidden adaptive phenotypes appear to remain an important experimental tool, e.g. see [14].

Degeneracy as a precondition for rapid evolution: Degeneracy arises amongst versatile components that modify their function in response to a variety of contexts. Unlike high-specificity components whose functionality is limited to a very specific context, degenerate components are likely to exhibit a change in function that is differentially expressed across a degenerate repertoire.
Many of these changes in function are likely to be maladaptive; however, those that are not provide useful information about where additional beneficial adaptations are likely to be found. This is best described at the level of protein evolution, where proteins can maintain one functionally relevant conformation while occasionally sampling other conformational structures that can be co-opted for useful functions in new environments [38, 39]. Functional versatility and degeneracy principles are well characterized at the molecular level; however, these same principles are more generally applicable and can, for instance, equally describe environment-revealed differences in high-level traits involving adaptive foraging, nest-building, and predator avoidance. Hypothetical populations with high trait diversity but no degeneracy will not harbor characters that flexibly respond to environmental change, and thus any beneficial trait discovered must constitute a chance encounter between a highly specific genotype-environment pairing. In theoretical biology, these differences can be related to rugged and smooth fitness landscapes. In rugged landscapes, adaptations occur by chance alone, while in smooth landscapes genotypic and fitness changes are correlated such that the benefit of a genotype suggests a non-negligible likelihood of more beneficial genotypes nearby. Confirmation of these ideas in situ or in vivo can be difficult; however, it should be possible to test some of these ideas using artificial molecular systems such as modified polymerase chain reaction (PCR). In previous work we have shown that PCR primers bind to the target templates degenerately and compete for targets [40]. Primers could be designed and tested that are different in DNA sequence but bind to similar targets, modeling CGV. Subsequent experiments could test amplification of new targets by these primers, thereby modeling a previously static population placed in a new environment. If the hypothesis presented here is correct, primers with broader degeneracy in annealing to diverse templates would amplify targets faster compared to more specific primers, unless primers with narrow specificities happen, by chance, to be highly specific for the new templates. In the latter case, adaptation of primers to new templates would happen due to diversity, not degeneracy, of the primer population. The main obstacle for such experiments is posed by the technical difficulty of precise quantitative tracking of the offspring (amplified targets) of numerous primers.
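The intuition behind the proposed primer experiment can be caricatured in silico. In this hedged sketch, a primer is reduced to a Gaussian "affinity profile" over a one-dimensional target space; the centers, widths, and target positions are all modeling assumptions rather than measured values. Broad profiles stand in for degenerate binders and narrow profiles for highly specific ones, both tuned to the same original target.

```python
import math
import random

def affinity(center, width, target):
    # Toy Gaussian "binding affinity" of a primer for a target position.
    return math.exp(-((center - target) ** 2) / (2 * width ** 2))

rng = random.Random(7)
t_old, t_new = 0.0, 3.0  # original and novel targets (assumed positions)

# Two primer populations tuned to the old target: broad binders (degenerate-like,
# width 2.0) versus narrow binders (highly specific, width 0.2).
broad = [(rng.gauss(t_old, 0.5), 2.0) for _ in range(200)]
narrow = [(rng.gauss(t_old, 0.5), 0.2) for _ in range(200)]

# Mean affinity of each population for the novel target.
mean_broad = sum(affinity(c, w, t_new) for c, w in broad) / 200
mean_narrow = sum(affinity(c, w, t_new) for c, w in narrow) / 200
print(mean_broad, mean_narrow)
```

Under these assumptions, the broad population retains appreciable affinity for the new target and could therefore be amplified and refined by selection, while narrow binders succeed only if one happens, by chance, to sit near the new target, mirroring the distinction drawn above between adaptation via degeneracy and adaptation via diversity alone.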

5. Implications of the hypothesis


The diversity paradox exists because natural selection in stable environments appears to continually remove the very heritable diversity in traits that is later needed when changes in selection take place. The phenotypic degeneracy of populations might resolve this paradox because diverse cryptic changes can accumulate in a population without noticeable phenotypic effects yet can ultimately provide the trait differences necessary for adapting to future environmental novelty. Since degeneracy is ubiquitous at all levels of biological organization [20], and because of its heritability through CGV, we argue that degeneracy is likely to be a major contributor to, and universal facilitator of, evolution in a meta-stable environment. These basic principles may afford new insights for disciplines where evolution is important. For instance, the relevance of degeneracy to evolutionary ecology is not widely appreciated. In conservation biology, intraspecific genetic diversity is a well-known factor in extinction risk. However, only genetic variants that reveal trait differences under the environmental stresses encountered can mitigate extinction risks. It should be possible to quantify GxE interactions for environmental stresses that endangered species are expected to encounter with greater frequency and magnitude in the future, e.g. by selecting stresses based on predictive models of regional climate change. Degeneracy that is revealed under such conditions should provide a more adaptively significant measurement of biodiversity than existing measures of intraspecific genetic variation, and thus a more valuable metric for determining which individuals are important to species conservation efforts. With limited conservation budgets, preserving this much smaller CGV footprint should provide a more realistic and realizable conservation goal.
Similar principles can be extended to transform existing measures of species richness into more contextual measures of species response diversity [41] that are anticipated to be most relevant to future ecological resilience. The same principles for preserving populations can also be used in their eradication. In highly evolvable infections and diseases such as aggressive cancers, the accumulation of CGV is likely to play an important role in the rapid evolution of therapy resistance that is seen in the latest generation of targeted therapies [32]. Importantly, oncologists have yet to consider strategies for reducing tumor evolvability by eliminating CGV. Our hypothesis suggests that environmental stability facilitates CGV accumulation in tumors and plays an important role in the evolution of therapy resistance. Therapeutic strategies that rapidly change the administered therapy over time should help to eliminate CGV and thereby provide a novel strategy for reducing the evolutionary potential of cancers that contain high levels of genetic diversity [42]. The persistence and spreading of technological and cultural artifacts through time with variation is analogous to variation with heritability, in that both are examples of what Darwin called "descent with modification" [43]. Biased (non-random) selection can be imposed on both types of processes and consequently both can undergo a form of evolution. While nature appears to have largely resolved conflicts between short-term (stabilizing) and long-term (directional) selection, this cannot be said of the short- and long-term objectives that arise in the planning and operations of evolving sociotechnical systems or in systems engineering. While CGV is a purely biological phenomenon, the broader concept of degeneracy captures system properties that can be clearly articulated and defined for any system comprised of functionally versatile elements. Currently, several research programs are exploring how the degeneracy concept can be translated into design principles for the realization of more flexible and resilient systems in several disciplines [1, 2, 44, 45]. For instance, in defense capability studies, we have shown using simulations that fleets of land field vehicles with high degeneracy in task capabilities can improve operational robustness within anticipated mission scenarios and, at the strategic level, provide exceptional design and organizational adaptability for responding to unforeseen future challenges [2]. More than just an academic exercise, this role of degeneracy in adaptation has attracted the interest and financial support of Australian Defense. Consequently, these principles will be presented this year at NATO's strategic planning conference as a proposal for mitigating strategic uncertainty [46]. Because the degeneracy concept is itself very versatile, we are looking at how this concept can also be translated into the design of more flexible manufacturing and assembly systems [44], and into better performance in population-based dynamic optimization [1]. Still others are using these concepts to understand some of the weaknesses of contemporary peer review processes [3] and the requisite conditions for embodied [47] and simulated artificial life [48-50].
As the similarities between degeneracy and CGV become better appreciated and their fundamental role in evolution becomes more widely accepted, we anticipate that these principles will transform how we think about evolutionary processes and the origins of innovation.

6. List of abbreviations
CGV, cryptic genetic variation; GxE, genotype-environment interactions.

7. Competing interests
The authors declare that they have no competing interests.

8. Acknowledgements
We are grateful for insightful comments and suggestions from Andreas Wagner, Axel Bender, Angus Harding, and the reviewers from the Biology Direct editorial board.

9. Authors' contributions
Both authors contributed to the concept and manuscript preparation.

10. Funding

Dr. Atamas' research is funded by NIH grant R21 HL106196 and a VA Merit Review Award. Dr. Whitacre's research is funded by an Australian DSTO grant.

11. References

1. Whitacre JM, Rohlfshagen P, Yao X, Bender A: The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. In PPSN XI; 11-15 September; Krakow, Poland. 2010: 284-293.
2. Whitacre JM, Rohlfshagen P, Bender A, Yao X: Evolutionary Mechanics: new engineering principles for the emergence of flexibility in a dynamic and uncertain world (http://arxiv.org/pdf/1101.4103). Nat Computing (in press).
3. Lehky S: Peer Evaluation and Selection Systems: Adaptation and Maladaptation of Individuals and Groups through Peer Review. BioBitField Press; 2011.
4. Branke J: Evolutionary optimization in dynamic environments. Kluwer Academic Pub; 2002.
5. Whitacre JM: Adaptation and Self-Organization in Evolutionary Algorithms. University of New South Wales, 2007.
6. Frank S: Maladaptation and the paradox of robustness in evolution. PLoS One 2007, 2:1021.
7. Wagner A: Robustness and evolvability: a paradox resolved. Proc R Soc Lond, Ser B: Biol Sci 2008, 275:91-100.
8. Ciliberti S, Martin OC, Wagner A: Innovation and robustness in complex regulatory gene networks. Proc Natl Acad Sci USA 2007, 104:13591-13596.
9. Whitacre JM, Bender A: Degeneracy: a design principle for achieving robustness and evolvability. J Theor Biol 2010, 263:143-153.
10. Whitacre JM: Genetic and environment-induced pathways to innovation: on the possibility of a universal relationship between robustness and adaptation in complex biological systems (http://www.springerlink.com/content/0577136586542281/). Evol Ecol (in press).
11. Waddington CH: Canalization of Development and the Inheritance of Acquired Characters. Nature 1942, 150:563.
12. Waddington CH: The Strategy of the Genes: A Discussion of Some Aspects of Theoretical Biology. Allen & Unwin; 1957.
13. Hayden EJ, Ferrada E, Wagner A: Cryptic genetic variation promotes rapid evolutionary adaptation in an RNA enzyme. Nature 2011, 474:92-97.
14. McGuigan K, Nishimura N, Currey M, Hurwit D, Cresko WA: Cryptic Genetic Variation and Body Size Evolution in Threespine Stickleback. Evolution 2011, 65:1203-1211.
15. Atamas SP: Self-organization in computer simulated selective systems. BioSystems 1996, 39:143-151.
16. Whitacre JM: Degeneracy: a link between evolvability, robustness and complexity in biological systems. Theoretical Biology and Medical Modelling 2010, 7.
17. Atamas S: Les affinités électives. Pour la Science 2005, 46:39-43.
18. Kurakin A: Scale-free flow of life: on the biology, economics, and physics of the cell. Theoretical Biology and Medical Modelling 2009, 6.
19. Newman SA: Generic physical mechanisms of tissue morphogenesis: A common basis for development and evolution. Journal of Evolutionary Biology 1994, 7:467-488.
20. Edelman GM, Gally JA: Degeneracy and complexity in biological systems. Proc Natl Acad Sci USA 2001, 98:13763-13768.
21. Csete M, Doyle J: Bow ties, metabolism and disease. Trends Biotechnol 2004, 22:446-450.
22. Ozaki K, Leonard WJ: Cytokine and cytokine receptor pleiotropy and redundancy. J Biol Chem 2002, 277:29355.
23. Atamas S, Bell J: Degeneracy-Driven Self-Structuring Dynamics in Selective Repertoires. Bulletin of Mathematical Biology 2009, 71:1349-1365.
24. Gibson G, Dworkin I: Uncovering cryptic genetic variation. Nature Reviews Genetics 2004, 5:681-690.
25. Atamas SP: Hasard et dégénérescence. Sciences et avenir, Hors-série, Octobre/Novembre 2003.
26. Beverly M, Anbil S, Sengupta P: Degeneracy and Neuromodulation among Thermosensory Neurons Contribute to Robust Thermosensory Behaviors in Caenorhabditis elegans. The Journal of Neuroscience 2011, 31:11718.
27. Mellen NM: Degeneracy as a substrate for respiratory regulation. Respiratory Physiology & Neurobiology 2010, 172:1-7.
28. Nelson MD, Zhou E, Kiontke K, Fradin H, Maldonado G, Martin D, Shah K, Fitch DHA: A Bow-Tie Genetic Architecture for Morphogenesis Suggested by a Genome-Wide RNAi Screen in Caenorhabditis elegans. PLoS Genet 2011, 7:e1002010.
29. Tieri P, Castellani G, Remondini D, Valensin S, Loroni J, Salvioli S, Franceschi C: Capturing degeneracy in the immune system. In Silico Immunology 2007:109-118.
30. Tieri P, Grignolio A, Zaikin A, Mishto M, Remondini D, Castellani GC, Franceschi C: Network, degeneracy and bow tie integrating paradigms and architectures to grasp the complexity of the immune system. Theor Biol Med Model 2010, 7:32.
31. Cohn M: Degeneracy, mimicry and crossreactivity in immune recognition. Mol Immunol 2005, 42:651-655.
32. Gould SJ, Vrba ES: Exaptation - a missing term in the science of form. Paleobiology 1982:4-15.
33. Bonifati G: Exaptation, Degeneracy and Innovation. Department of Economics 2010.
34. Solé RV, Ferrer-Cancho R, Montoya JM, Valverde S: Selection, tinkering, and emergence in complex networks. Complexity 2002, 8:20-33.
35. Palmer A: Symmetry breaking and the evolution of development. Science 2004, 306:828.
36. Barrett RDH, Schluter D: Adaptation from standing genetic variation. Trends Ecol Evol 2008, 23:38-44.
37. Gibson G: The environmental contribution to gene expression profiles. Nature Reviews Genetics 2008, 9:575-581.
38. Aharoni A, Gaidukov L, Khersonsky O, Gould SMQ, Roodveldt C, Tawfik DS: The 'evolvability' of promiscuous protein functions. Nat Genet 2005, 37:73-76.
39. Tokuriki N, Tawfik DS: Protein dynamism and evolvability. Science 2009, 324:203.
40. Atamas SP, Luzina IG, Handwerger BS, B. W: 5'-degenerate 3'-dideoxy-terminated competitors of PCR primers increase specificity of amplification. BioTechniques 1998, 24:445-450.
41. Elmqvist T, Folke C, Nystrom M, Peterson G, Bengtsson J, Walker B, Norberg J: Response diversity, ecosystem change, and resilience. Front Ecol Environ 2003, 1:488-494.
42. Tian T, Olson S, Whitacre JM, Harding A: The origins of cancer robustness and evolvability. Integrative Biology 2011, 3:17-30.
43. Nehaniv CL, Hewitt J, Christianson B, Wernick P: What Software Evolution and Biological Evolution Don't Have in Common. 2006.
44. Frei R, Whitacre JM: Degeneracy and Networked Buffering: principles for supporting emergent evolvability in agile manufacturing systems. Journal of Natural Computing - Special Issue on Emergent Engineering (in press).
45. Randles M, Lamb D, Odat E, Taleb-Bendiab A: Distributed redundancy and robustness in complex systems. Journal of Computer and System Sciences 2010, 77:293-304.
46. Bender A, Whitacre JM: Emergent Flexibility as a Strategy for Addressing Long-Term Planning Uncertainty. In NATO Risk-Based Planning Conference; 3-5 October; Salisbury, UK. 2011.
47. Fernandez-Leon JA: Behavioural robustness and the distributed mechanisms hypothesis. PhD Thesis. University of Sussex, 2011.
48. Clark E, Nellis A, Hickinbotham S, Stepney S, Clarke T, Pay M, Young P: Degeneracy Enriches Artificial Chemistry Binding Systems. ECAL, Paris 2011.
49. Kerkstra S, Scha IR: Evolution and the Genotype-Phenotype map. Masters Thesis. University of Amsterdam, 2008.
50. Mendao M, Timmis J, Andrews PS, Davies M: The Immune System in Pieces: Computational Lessons from Degeneracy in the Immune System. In Foundations of Computational Intelligence (FOCI 2007), IEEE Symposium on; 2007:394-400.

Evolutionary Mechanics: new engineering principles for the emergence of flexibility in a dynamic and uncertain world
James M. Whitacre1, Philipp Rohlfshagen2, Axel Bender3, and Xin Yao1
1 CERCIA, School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom, {j.m.whitacre, x.yao}@cs.bham.ac.uk
2 School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, United Kingdom, prohlf@essex.ac.uk
3 Land Operations Division, Defence Science and Technology Organisation, Edinburgh SA 5111, Australia, axel.bender@dsto.defence.gov.au

12. Abstract
Engineered systems are designed to deftly operate under predetermined conditions yet are notoriously fragile when unexpected perturbations arise. In contrast, biological systems operate in a highly flexible manner: they quickly learn adequate responses to novel conditions and evolve new routines and traits to remain competitive under persistent environmental change. A recent theory on the origins of biological flexibility has proposed that degeneracy, the existence of multi-functional components with partially overlapping functions, is a primary determinant of the robustness and adaptability found in evolved systems. While degeneracy's contribution to biological flexibility is well documented, there has been little investigation of degeneracy design principles for achieving flexibility in systems engineering. Indeed, the conditions that can lead to degeneracy are routinely eliminated in engineering design. With the planning of transportation vehicle fleets taken as a case study, this paper reports evidence that degeneracy improves the robustness and adaptability of a simulated fleet towards unpredicted changes in task requirements without incurring costs to fleet efficiency. We find that degeneracy supports faster rates of design adaptation and ultimately leads to better fleet designs. In investigating the limitations of degeneracy as a design principle, we consider decision-making difficulties that arise from degeneracy's influence on fleet complexity. While global decision-making becomes more challenging, we also find that degeneracy accommodates rapid distributed decision-making, leading to (near-optimal) robust system performance. Given the range of conditions where favorable short-term and long-term performance outcomes are observed, we propose that degeneracy may fundamentally alter the propensity for adaptation and is useful within different engineering and planning contexts.
Keywords: degeneracy, evolvability, robustness, redundancy, dynamic optimization, complex systems engineering, dynamic capabilities, strategic planning
Running Title: Evolutionary Mechanics

1. Introduction
Engineering involves the design and assemblage of elements that work in specific ways to achieve a predictable purpose and function [1] [2]. Engineering, planning, and science in general have historically taken a reductionist approach to problem solving, aiming to decompose a complicated problem into more manageable and well-defined sub-problems that are largely separable or modular (cf. Section 3.1; [3], [4]). A reductionist problem decomposition reduces the degrees of freedom that are considered at any one time. It is methodical, conceptually intuitive, and it can help individuals understand the most relevant determinants of each sub-system's behaviour. If sub-systems truly represent modular building blocks of the larger system, then there are (by definition) relatively few ways in which the sub-system will interact with its surroundings. This often permits the operating environment to be defined with precision and accuracy. When these conditions are met, engineers and planners have historically been able to systematically design components/subsystems with reliable functioning that translates into reliable performance at the system level [5]. The application of reductionist principles also results in a hierarchical decomposition of a system that contributes to system-level transparency and benefits global forms of decision-making, trouble-shooting, planning, and control. While the reductionist paradigm is logically sound, it is only loosely followed in practice because many of the systems within which engineering (or re-engineering) takes place cannot be neatly decomposed. This is partly due to the factors that have shaped these systems over time, including distributed decision-making during development, multiple conflicting objectives, bounded rationality, historical contingency (path dependency), and environmental volatility. However, purist views of planning and engineering often prevail when attempting to understand failure in complex systems.
When failure occurs, it is common and logical to highlight the precise points where a system has failed, with the implicit assumption that performance could have been sustained had a relatively small set of modifications been made during the design stage. Because a narrative description of failure can be achieved through careful investigation, it is often assumed that the precise contingency of failure sufficiently captures the origins of a system's design flaws. That these failures might be symptomatic of wider issues related to innate system properties is sometimes suggested but has been notoriously difficult to address in practice [6] [7] [8]. A further limitation of the classic reductionist paradigm is that it assumes the capacity to anticipate beforehand the conditions a system will experience and the precise manner in which the system should respond to those conditions [3], i.e. it assumes a stable environment or it requires precognition [9]. While predicting plausible future conditions is often a useful exercise, several factors can limit prediction accuracy and lead to uncertainty [5]. The origins of this uncertainty are varied; however, it is a general rule of thumb that for complex systems operating in a dynamic environment, we are limited in our ability to develop standard operating procedures or contingency plans that can accurately account for the various plausible conditions that we might encounter [10] [11] [12]. Thus, it becomes important that a system be adaptable to conditions that were not anticipated in its design stage. From a classic engineering and planning perspective, such a design goal is ambiguous and vague, and it may seem that planning for the unexpected is an oxymoron. In contrast to the traditional engineering approach to problem solving, the creation and maintenance of biological functions occurs in a non-reductionist manner that is exceptionally effective at accommodating and exploiting novel conditions.
There are a number of systems engineering and organization science studies that have proposed ways to exploit properties associated with biological
robustness and adaptability [13] [14] [15] [16]. While a better appreciation of these biological properties has been useful to decision makers and planners, many of these properties, including loose coupling, distributed robustness, and adaptability, are hard to apply because there is little understanding of their origins or mechanistic basis and few guidelines exist for their realization. One of the key contributions of the work presented in this paper is the description and validation of design principles that can be defined at a component level and that can lead to the emergence of system properties such as flexibility, distributed robustness and adaptability. By distilling out basic working principles that lead to biological robustness and adaptability, we will gain useful insights that can be used within engineering and planning contexts. In Section 13 we focus particularly on one design principle, degeneracy, that is suspected to play a fundamental role in assisting complex biological systems to cope with environmental novelty [17] [18] [19]. Because degeneracy design principles correspond with measurable properties that can be realized in an engineering context, we are able to validate our most important claims through simulations involving engineered systems that undergo design optimization. In Section 14, we motivate a strategic planning problem involving the development of a military vehicle fleet capability, and introduce a simulation environment for evaluating fleet robustness across a range of plausible future operating scenarios. We use evolutionary computation to simulate incremental adaptations that are aimed at improving the robustness of a fleet's design.
In Section 15, we explore several robustness and adaptability properties of the fleets as they are exposed to different classes of environmental change, and we find evidence supporting the hypothesis that degeneracy provides broad advantages within some classes of environments; most notably environments that are complex and occasionally unpredictable. Section 16 comments on the potential relevance of these findings and we make concluding remarks in Section 17. Given the multi-disciplinary nature of our topic, the introduction in Section 13 relies on abstract terminology that can be universally understood by biologists, engineers, planners, and decision-makers. Later on, in the Case Study and Discussion sections (Sections 14 and 16), we establish links between these abstract concepts and their counterparts within particular disciplines.

13. Lessons from Biology


Robustness
Functional Redundancy: Engineered systems typically comprise elements designed for a single and well-specified purpose. However in biological systems there is no pre-assignment of a one-to-one mapping between elements and traits. Instead, at almost every scale in biology, structurally distinct elements (e.g. genes, proteins, complexes, pathways, cells) can be found that are interchangeable with one another or are otherwise compensatory in their contributions to system functions [20] [18] [17] [21]. This many-to-one mapping between components and functions is referred to as functional redundancy. Pure redundancy is a special case of many-to-one mapping and it is a commonly utilized design tool for improving the robustness of engineered systems. In particular, if there are many copies of an element that perform a particular service then the loss of one element can be compensated for by others; as can variations in the demands for that service. However, maintaining diversity amongst functionally similar elements can lead to additional types of stability. If elements are somewhat different but partially
overlap in the functions they perform, they are likely to exhibit different vulnerabilities: a perturbation or attack on the system is less likely to present a threat to all elements at once. Alternatively, we might say that a system gains versatility in how a function can be performed because functionally redundant elements enhance the diversity of conditions under which a particular demand can be satisfied [17] [22] [23]. Functional Plasticity: Whether elements are identical (i.e. purely redundant) or only functionally redundant, the buffering just described always requires an excess of resources, and this is often viewed in engineering as a necessary but costly source of inefficiency [24] [25] [7]. What is less appreciated, however, is that simple trade-offs between robustness and efficiency do not necessarily arise in biological systems. Not only are different components able to perform the same function (a many-to-one mapping), many of these components are also multi-functional (a one-to-many mapping), with the function performed depending on the context; a behavior known as functional plasticity [18] [26] [27] [28]. Functionally plastic elements that are excluded from participating in a particular function (e.g. due to demands for that service already being met) will switch to other functions [29]. Functional plasticity thus alters the trade-off between efficiency and robustness because excess resources are shared across multiple tasks. Degeneracy: Functional plasticity and functional redundancy are observed at all scales in biology, and we have described simple and generic situations where these properties contribute to trait stability through local compensatory effects. In biological systems, it is common to observe functionally plastic components that appear functionally redundant in some contexts but functionally distinct in other contexts.
The observation of both functional redundancy and functional diversity within the same components is referred to as degeneracy. In the literature, there is an extensive list of documented cases where degeneracy has been found to promote trait stability [20] [17] [30] [18]. Degeneracy has also been shown to create emergent system properties that further enhance trait stability through distributed compensatory effects. The origins of this additional and less intuitive form of robustness have been described in the networked buffering hypothesis [17]. Using genome:proteome simulations, Whitacre and Bender found evidence that networked buffering can roughly double the overall robustness potential of a system for each of the perturbation classes tested. An important conclusion from those studies is that even small amounts of excess functional resources can have a multiplicative effect on system-level flexibility and robustness when degeneracy is prevalent in a system [30] [17]. This is of considerable relevance to this study because these distributed forms of robustness were also found to substantially enhance a system's ability to adapt to novel conditions.

Adaptation
Accessing novelty: In both biology and engineering, the discovery of an improved component design necessitates the exploration of new design variants. Theoretically, degeneracy should enhance a system's access to design novelty because functionally redundant elements retain unique structural characteristics. Structural differences afford a multiplicity of design change options that can be tested, and thus provide more opportunities for new innovative designs to be discovered [18] [30] [31] [32]. Transforming novelty into innovation: The availability of distinct design options is an important prerequisite for innovation; however, new opportunities often come with new challenges. To transform a local novelty into an exploited innovation, a system must be flexible enough (e.g. structurally, behaviorally) to accommodate and utilize a modified component effectively. For instance, design changes in a device
sometimes require new specifications for interaction, communication, operating conditions, etc. However, a system must accommodate these new conditions without losing other important capabilities or sacrificing the performance of other core system processes. In other words, the propensity to innovate is enhanced in systems that are robust in many of their core functions yet flexible in how those functions are carried out [32]. Facilitating unexpected opportunities: Because design novelty is not predictable, we propose that the flexibility needed to exploit design novelty cannot be entirely pre-specified based on the anticipation of future design changes. To support innovation, it appears that this robust yet flexible behavior would need to be a property that is pervasive throughout the system. Yet within the wider context of a system's development, where each incremental design change involves a boundedly rational and ultimately myopic decision, it also seems that this flexibility must be a property that, while intimately tied to the history of the system's development, can readily emerge without foresight. In contrast, if flexibility only arises in the places where we perceive future need, then a system's ability to accommodate novel design conditions will be limited by our foresight, e.g. our ability to predict plausible future environments.

We have established a theoretical basis explaining how degeneracy can support these prerequisites for adaptation in an efficient manner and we have recently accumulated some evidence from simulation studies that supports these conjectures. For instance, we have found evidence that degeneracy considerably enhances access to design novelty [30] [33]. We have also demonstrated that these novelties can be utilized as positive adaptations [34] [35] and can sometimes afford further opportunities when presented with new environments [36]. In attempting to understand how novelties are transformed into adaptations, we have shown in [17] that high levels of degeneracy lead to the emergence of pervasive flexibility in how a system can organize its resources and thus allows for a decoupling between the preservation of important functions and the accommodation of new ones. These experimental findings support each of our stated conjectures regarding how degeneracy provides a mechanistic basis for robustness and adaptability in biological systems [20] [30] [18].

Conflicts between robustness and adaptability


The requisite conditions that we have outlined above suggest a relationship between robustness and adaptability that we argue is rarely observed in engineered systems and may even be entirely absent in systems that are designed entirely using classic reductionist design principles. Some evidence supporting this view is found by comparing the relationship between robustness and adaptability in human-designed evolutionary optimization algorithms with that observed in natural evolution. In agreement with theories on neutral evolution [37] [38] [39], simulations of gene regulatory networks and models of other biological systems have indicated that increasing mutational robustness may increase a system's propensity to adapt [40] [30]. Taking cues from the original theories, research into nature-inspired optimization has looked at designing mutational robustness into an optimization problem's representation and has almost exclusively done so through the introduction of gene redundancy, e.g. polyploidy. The result has been a negligible or sometimes negative influence on the adaptive capabilities of individuals and populations [41] [42] [43] [44] [45] [46] [47]. More recently, we have investigated simulations where evolution is given the option to create robustness through degeneracy. We have found evidence that this leads to the selective growth of degeneracy, substantial improvements in evolved robustness towards environmental volatility, and better adaptive capabilities in responding to environmental novelty [34] [35]. Thus, positive relationships between robustness and adaptability have
historically been absent in evolutionary simulations but can be established when designed redundancy is replaced by the evolution of degeneracy.

14. Case Study


In this paper, we consider a simple but well-defined framework that allows us to fully study the mechanistic basis for robustness and adaptability in engineered systems. The case study that we investigate falls into the category of strategic planning problems.

Motivations for a Strategic Planning Case Study


In strategic planning problems, uncertainty arises from the long time horizons in which planning goals are defined and from the properties of the systems and environments in which planning takes place. In particular, strategic planning almost invariably deals with the manipulation of multi-scaled complex systems, i.e. systems comprising many heterogeneous elements whose actions and interactions translate into emergent system functions or capabilities that are observable over several distinct timescales. A number of well-known contemporary issues are exemplars of strategic planning problems, including financial market regulation, strategies for responding to climate change and natural disasters, assistance to developing countries, defence planning, and strategic planning for nations and large organizations. Uncertainty within strategic planning can be characterized in different ways [48] [49] [50] [51]. For instance, uncertainty can be classified into "known unknowns" (conscious ignorance), "unknown knowns" (tacit knowledge) and "unknown unknowns" (meta-ignorance) [52]. Different classes of uncertainty pose different challenges, and traditional approaches to planning have mostly been developed to address conscious ignorance, e.g. through prediction modelling. Tacit knowledge and meta-ignorance cannot be explicitly planned for and require the development of system properties that facilitate adaptation at different levels within a system. For instance, tacit knowledge is predominantly revealed during plan execution, and exploiting this knowledge requires plans to be responsive (behaviourally adaptive). On the other hand, meta-ignorance often gives rise to shocks or surprises, and knowledge about the unknown unknowns only reveals itself after the fact. Dealing with meta-ignorance therefore requires adaptable design and behaviour that allow for innovation to germinate. Past experiences only provide very limited useful insights.
The existence of meta-ignorance means that we are never entirely certain what future events might be encountered or how current decisions will shape future events or ourselves [53]. When we don't know what we don't know, we are prevented from formulating our uncertainty based on the likelihood of possible future states; a common assumption used in robust optimization and control. Even the articulation of plausible states can be difficult due to the emergence of new phenomena. While many aspects of our world remain conserved over time (described in the literature as historical contingency, path dependency, or frozen accidents [54] [55] [56] [8], [32]), other aspects are not predictable due to, for instance, unanticipated regime shifts in the physical environment, paradigm shifts in knowledge and culture, and disruptive technological innovations that introduce new opportunities as well as new challenges. Under these circumstances, desirable solutions or strategies should not only be robust to expected variability; they should also have an innate propensity to adapt to novel conditions. More generally, it is important to understand what design principles, routines, and system properties will determine a system's capacity to function under high variability and facilitate adaptations to unexpected novelty. Because many long-term planning problems are appropriately modeled using semi-autonomous
and multi-functional agents (each of which are also complex systems), this class of problems provides a suitable domain for exploring the biologically inspired principles we presented in the previous section.

Model Overview
We now introduce a strategic planning problem involving investment decisions in a fleet of military field vehicles. Over short (operational) timescales, fleets must operate within a volatile environment; having the capacity to rapidly deal with both anticipated and unanticipated missions. Over longer timescales, environments change more dramatically and it is important that new fleet architectures can be developed that can adequately respond to these new realities. Our model is a realistic representation of a strategic resource planning problem. It captures the most important dynamics, namely: changes in the assignment of vehicles to tasks, changes in the composition of vehicles that make up a fleet, changes in the tasks that must be executed during fleet operations, and changes in the composition of tasks at a timescale comparable to the timescale for changes in fleet composition. The model consists of a fleet of vehicles and each vehicle corresponds to a specific vehicle type that is capable of carrying out two specific task types. Each vehicle may devote its resources (e.g., time) to either or both tasks: a vehicle that is capable of performing tasks A and B, for instance, could devote 30% and 70% of its time to task A and B respectively. The fleet is exposed to a number of environmental conditions (referred to as scenarios), each of which specifies a set of demands for the specific task types. It responds to each environment by re-distributing its resources using a local decision making process. In particular, each vehicle may distribute its own resources amongst the two tasks it can perform, based on global feedback about the current priority level for fulfilling each task. The problem can thus be stated as finding a suitable fleet composition that allows the fleet to respond to a volatile environment through the redistribution of its resources. 
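The local decision-making process just described can be sketched in code. The following is a minimal illustration, not the paper's exact procedure: the function name `redistribute`, the even initial split, and the greedy one-unit-shift rule are our own assumptions, with unmet demand standing in for the global task-priority feedback signal.

```python
def redistribute(V, demands, units=10, rounds=50):
    """Greedy local re-allocation: each vehicle repeatedly shifts one unit of
    its resources toward whichever of its two task types currently has the
    larger unmet demand (a global task-priority signal).
    V[i][j] == 1 iff vehicle i can perform task type j; units is assumed even."""
    n, m = len(V), len(V[0])
    # Start with an even split of each vehicle's units across its two tasks.
    alloc = [{j: units // 2 for j, x in enumerate(row) if x == 1} for row in V]
    for _ in range(rounds):
        supply = [sum(a.get(j, 0) for a in alloc) for j in range(m)]
        moved = False
        for a in alloc:
            t1, t2 = sorted(a)
            gap1 = demands[t1] - supply[t1]  # unmet demand = priority
            gap2 = demands[t2] - supply[t2]
            if gap1 > gap2 + 1 and a[t2] > 0:      # shift one unit t2 -> t1
                a[t2] -= 1; a[t1] += 1
                supply[t1] += 1; supply[t2] -= 1
                moved = True
            elif gap2 > gap1 + 1 and a[t1] > 0:    # shift one unit t1 -> t2
                a[t1] -= 1; a[t2] += 1
                supply[t2] += 1; supply[t1] -= 1
                moved = True
        if not moved:  # no vehicle can improve the balance further
            break
    return alloc

# Three vehicles over three task types; demand is concentrated on task 0.
V = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
alloc = redistribute(V, demands=[20, 5, 5])
```

Each vehicle acts only on its own two tasks, yet the fleet as a whole drifts toward the system-level demand profile; no central planner assigns the final allocation.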
In our experiments, this fleet composition (or architecture) is evolved, using a genetic algorithm, to maximize fleet robustness towards this environmental volatility. In order to compare and evaluate different fleet architectures, constraints are imposed on the evolutionary process so that degeneracy is allowed to emerge in some but not all cases.
Vehicle Fleet & Task Types
The problem is formally defined as follows: there are a total of m task types and vehicles are constrained in what types of tasks they can perform. This constraint emulates restrictions that may arise naturally in real-world scenarios where specific tasks have specific hardware requirements that may either be in conflict with one another or simply too expensive to be combined within a single vehicle. In particular, we constrain task-type combinations by defining a symmetric neighborhood for any task type j, 0 < j ≤ m, as N(j) = {k : |j − k| ≤ r}, where r is a predefined radius, set as r = 2 in all experiments. We assume the set of task types to have periodic boundaries such that the neighborhood wraps around the extreme values. The design of a fleet of n vehicles is characterized by the matrix V = {v_ij ∈ {0, 1}}: if vehicle i can carry out task type j, then v_ij = 1; otherwise, the matrix entry is 0. As any vehicle is capable of exactly two types of tasks, it follows that Σ_j v_ij = 2 for all i. Furthermore, task combinations in any vehicle are constrained such that for any j and k for which v_ij = v_ik = 1, it follows that k ∈ N(j). The matrix V thus fully specifies a valid fleet architecture where each vehicle is constrained in (a) the number of task types it may execute and (b) the relationship between the task types (i.e., the radius). While this definition of a vehicle fleet's design may seem highly abstract, it actually is a realistic description of

how vehicles are assigned to tasks, e.g. in military vehicle schedule planning problems. The only contrived aspect of our model is the constraint that vehicle types are capable of performing exactly two tasks. As our current research shows, weakening this constraint does not alter the findings and insights presented in the next section. A second matrix U = {u_ij : 0 ≤ u_ij ≤ 10} is used to indicate the degree to which each vehicle distributes its resources. Each element u_ij specifies how much of its resources vehicle i has assigned to task type j. Any entry u_ij may be non-zero only if vehicle i is able to perform task j (i.e., v_ij = 1). We assume that a vehicle allocates all of its resources such that Σ_j u_ij = 10 for all i. The alteration of elements in U thus specifies the distribution of tasks across the fleet of vehicles.
Redundancy and Degeneracy
The matrix V specifies the task types each vehicle in the fleet can carry out. We may thus use V to compute a degree of redundancy or degeneracy found in the fleet. Two vehicles are redundant with respect to each other if they are able to carry out precisely the same two task types. If two vehicles have exactly one task type in common, they are considered degenerate. In order to calculate redundancy and degeneracy in a fleet, we consider all n(n − 1)/2 unique pair-wise comparisons of vehicles and count the number of vehicle pairs that have identical task capabilities, partially similar task capabilities and unique capabilities. We then normalize these measurements by the total number of pair-wise comparisons made. More specifically, vehicles i and k are redundant if Σ_j v_ij v_kj = 2, degenerate if Σ_j v_ij v_kj = 1 and unique with respect to one another if Σ_j v_ij v_kj = 0. The fleet's degree of degeneracy is then given by:
D = (2 / (n(n − 1))) Σ_{i=1}^{n−1} Σ_{k=i+1}^{n} I( Σ_{j=1}^{m} v_ij v_kj = 1 )    (1)

where I[·] returns 1 if the contained statement is true and 0 otherwise.

Environments

At any moment in time, t, the fleet is exposed to a set of N = 10 environmental scenarios E_t = {e_1, e_2, ..., e_N}, where e_ij ≥ 0 specifies the number of times task type j needs to be executed in scenario i at time t. For each scenario e_i, the complete satisfaction of task demands requires a fully utilized fleet², i.e. Σ_j e_ij = 10n for all i and t. The scenarios are generated as follows: the first scenario in the set, the seed scenario e_1, is created by randomly sampling, with replacement, 10n tasks from the m task types. The remaining N − 1 scenarios are generated from the seed using two methods as described below. We distinguish between a decomposable case, where correlations are restricted to occur between predetermined pairs of tasks, making the problem separable, and a non-decomposable case where correlations between the variations of any two task types are avoided (Figure 1c).

Decomposable set of scenarios

² Assuming the architecture of the fleet is optimal with respect to the environment (i.e., the fleet can satisfy all demands given an optimal distribution of resources).

In order to generate the decomposable scenarios, the task types are randomly grouped into m/2 pairings which are placed into a set S. A random walk is then used to create new scenarios based on e_1. At each step of the random walk, a single task type j is selected and its demand incremented by 1. At the same time, the demand for the corresponding task k, i.e. the task type for which (j, k) ∈ S, is decremented by 1, ensuring that the total number of tasks required by the environment is always constant and equal to 10n (Figure 1d). The correlated increase/decrease is allowed only as long as each task demand remains within its bounds (0 at minimum). The volatility of e_i is subsequently defined as the length of the random walk, which always starts from e_1.

Non-decomposable set of scenarios

In the non-decomposable case, each step of the random walk involves an entirely new and randomly selected pair of task types (i.e., there is no predetermined set S). This ensures that the number of correlated task types is minimised, making the problem non-decomposable.

Validation scenarios

For selected experiments, fleet robustness evolves for one set of environmental scenarios (training scenarios) and is then further evaluated on a new set of validation scenarios. The validation sets are generated in the same manner as the scenarios used during evolution but with the following additional consideration: in each validation set, one scenario is selected at random from the set used during evolution and is then used as the seed to generate the new set of scenarios. For validation set 1 (Validation I), the generation of the remaining scenarios is further constrained such that variations between the original and validation set remain decomposable, i.e. the same task-type correlations (those of set S) are used. For validation set 2 (Validation II), the remaining scenarios are generated without such restriction, i.e. they are non-decomposable.
This implies that the scenarios within the second validation set are not only non-decomposable with respect to one another but they are also non-decomposable with respect to the training scenarios.

Evaluating fitness

The fleet of vehicles is specified by the matrix V and the distribution of resources is given by the matrix R. The fitness of a fleet at time t is determined by how well the fleet satisfies the demands imposed by the set of environments E_t. For every scenario e_i, the fleet may compute a new matrix R_i. We denote by u_i(j) the sum of elements across column j in matrix R_i, which corresponds to the total amount of resources the fleet devotes to task type j. The fitness of the fleet with respect to environment e_i is then calculated as follows:

f_i = Σ_{j=1}^{m} max(0, e_ij − u_i(j))    (2)

Fleets are exposed to N scenarios at any one moment in time and the fitness of a fleet is simply the average over the individual fitness values:

f = (1/N) Σ_{i=1}^{N} f_i    (3)

It should be noted that fleet A is considered fitter than fleet B at time t when f(A) < f(B).
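Under these definitions, fitness evaluation amounts to summing the unmet demand per scenario (Eq. 2) and averaging over scenarios (Eq. 3). A minimal sketch in Python; the function name and array shapes are our own illustration, not the original implementation:

```python
import numpy as np

def fleet_fitness(R_per_scenario, E):
    """Fleet fitness against a scenario set (Eqs. 2-3); lower is fitter.

    R_per_scenario : list of (n x m) allocation matrices, one per scenario.
    E              : (N x m) array of task demands, E[i, j] = e_ij.
    """
    per_scenario = []
    for R_i, e_i in zip(R_per_scenario, E):
        u = R_i.sum(axis=0)                                 # u_i(j): resources on task j
        per_scenario.append(np.maximum(0.0, e_i - u).sum()) # Eq. 2: unmet demand
    return float(np.mean(per_scenario))                     # Eq. 3: average over scenarios
```

A fleet that exactly matches every scenario's demands achieves the optimal fitness of 0.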

The fitness of a fleet depends on its composition V and the subsequent construction of R. The resources are allocated using a local asynchronous decision-making process that aims to optimize fitness: all vehicles are considered in turn and, for each vehicle, its resource allocation is adjusted (using ordered asynchronous updating). We increment/decrement the values r_ij according to how this affects the fleet's fitness (global feedback). Once no further improvements are possible, the next vehicle is considered. This process is repeated until no further improvements can be made.
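This local allocation process can be sketched as a greedy hill-climber over single-unit resource shifts. The even initial split and the unit step size are assumptions made for illustration; the text does not specify them:

```python
import numpy as np

def allocate(V, e, total=10):
    """Greedy local allocation sketch for a single scenario.

    V : (n x m) 0/1 capability matrix; e : length-m demand vector.
    Each vehicle starts with its `total` resources split evenly over its
    capable tasks, then shifts one unit at a time between its tasks
    whenever the move lowers the fleet's unmet demand (Eq. 2).
    """
    n, m = V.shape
    R = V * (total / V.sum(axis=1, keepdims=True))  # even initial split

    def unmet():
        return np.maximum(0.0, e - R.sum(axis=0)).sum()

    improved = True
    while improved:
        improved = False
        for v in range(n):                          # ordered asynchronous update
            tasks = np.flatnonzero(V[v])
            for a in tasks:
                for b in tasks:
                    if a != b and R[v, a] >= 1:
                        before = unmet()
                        R[v, a] -= 1
                        R[v, b] += 1                # try shifting one unit
                        if unmet() < before:
                            improved = True         # keep the improving move
                        else:
                            R[v, a] += 1
                            R[v, b] -= 1            # revert
    return R
```

Because each accepted move strictly reduces unmet demand, the loop terminates once no single-unit shift helps, mirroring the "repeat until no further improvements" rule above.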

Genetic Algorithm
To evaluate the merits and limitations of reductionist versus degenerate design principles, we define specific design options for a fleet such that certain structural properties can be maintained within the fleet architecture. The resource allocation R of the fleet is computed on the fly (as tasks are introduced) but the fleet's composition (i.e., its vehicle types) needs to be optimized towards a particular set of scenarios E_t. In order to do so, we employ a genetic algorithm based on deterministic crowding. The fleet is represented as a vector of vehicle types (Figure 1a). During the evolutionary process, two parents are randomly selected from a population of 30 fleets (without replacement) and subjected to uniform crossover with probability 1: the crossover operator selects, with probability 0.5, for each locus along the chromosome, whether the corresponding gene (vehicle type) will be allocated to offspring 1 or 2. Each offspring is mutated (see below) and then replaces the genotypically more similar parent if its fitness is at least as good. The design options available to a fleet (i.e., redundancy or degeneracy) are controlled exclusively by the mutation operator: when reductionist design principles are enforced by the mutation operator, the fleet always maintains a fully decomposable architecture. This is illustrated in Figure 1b, where fleets comprise vehicles that are either identical (purely redundant) or entirely dissimilar in their functionality. Unsurprisingly, fleets with this architecture acquire much of their robustness through vehicle redundancy and consequently are referred to as redundant fleets. When reductionism is not enforced by the mutation operator, some changes to vehicle designs may result in a partial overlap between the capabilities of vehicles (see right panel of Figure 1b). Fleets evolved under these conditions are referred to as degenerate fleets. In our experiments, the initial fleet is always fully redundant, independent of the mutation operator used.
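The deterministic crowding scheme just described can be sketched as follows. The helper signatures (`fitness`, `mutate`) and the use of Hamming distance as the genotypic-similarity measure are illustrative assumptions rather than details taken from the original implementation:

```python
import random

def deterministic_crowding(pop, fitness, mutate, generations=100):
    """Deterministic crowding sketch (updates `pop` in place and returns it).

    pop     : list of fleets, each a list of vehicle-type indices.
    fitness : fleet -> float, lower is better (Eq. 3).
    mutate  : fleet -> fleet, the class-specific mutation operator.
    """
    def distance(a, b):                 # Hamming distance as genotypic similarity
        return sum(x != y for x, y in zip(a, b))

    for _ in range(generations):
        i, j = random.sample(range(len(pop)), 2)    # two parents, no replacement
        p1, p2 = pop[i], pop[j]
        c1, c2 = [], []
        for g1, g2 in zip(p1, p2):                  # uniform crossover, p = 0.5
            if random.random() < 0.5:
                c1.append(g1); c2.append(g2)
            else:
                c1.append(g2); c2.append(g1)
        c1, c2 = mutate(c1), mutate(c2)
        # pair each offspring with its genotypically more similar parent
        if distance(c1, p1) + distance(c2, p2) <= distance(c1, p2) + distance(c2, p1):
            pairs = [(i, c1), (j, c2)]
        else:
            pairs = [(i, c2), (j, c1)]
        for idx, child in pairs:                    # replace if at least as good
            if fitness(child) <= fitness(pop[idx]):
                pop[idx] = child
    return pop
```

Because replacement requires the offspring to be at least as fit as the parent it displaces, the best fitness in the population can never worsen over generations.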
The mutation operator plays a central role in the process of finding the best fleet design in a strategic planning problem in which possible future scenarios display different uncertainty characteristics (decomposable versus non-decomposable, see Section 2.2.3 above): it is used to restrict the fleet compositions that may evolve. Its design is thus crucial, especially as it is important to ensure that both fleet compositions, redundant and degenerate, are obtained due to selective pressure alone and not auxiliary effects such as differences in solution space size. The mutation operator has thus been designed with the following considerations in mind: (a) the search space is to be of the same size in all experiments; (b) in some experiments both redundancy and degeneracy can be selected for during the evolutionary process (as opposed to degeneracy emerging as a consequence of the specifications of the model). The mutation operator replaces exactly one randomly chosen vehicle in the fleet with a new vehicle type. For each locus in the chromosome, replacement options are predetermined and limited to m/2 unique vehicle types. For experiments in which the fleet cannot evolve degenerate architectures, alleles for all loci are drawn from set S. It follows that a purely redundant fleet remains redundant after mutation. For experiments in which the fleet can evolve degenerate architectures, allele options are

almost the same as before, except that for half the alleles available at each locus, one task is changed to a new task type, thus allowing a partial overlap in functions between genes in the chromosome. In some of our experiments we compare the results obtained from evolving fleets (using the genetic algorithm) with un-evolved fleets that have the same composition characteristics as the evolved fleets. In constructing an un-evolved fleet, we proceed with an initial fleet as defined at the beginning of our evolutionary simulations, i.e., a randomly generated redundant fleet. We then iteratively apply the mutation operator associated with each fleet class (redundant, degenerate), with the mutated fleet replacing the original if its redundancy or degeneracy measurement becomes more similar to that of the evolved fleets. This mutation/selection procedure is iterated until redundancy or degeneracy differences are less than 5%.
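The redundancy/degeneracy measurement used in this procedure is the pairwise overlap count of Eq. 1. A minimal sketch (the function name is ours):

```python
import numpy as np
from itertools import combinations

def overlap_fractions(V):
    """Fractions of redundant, degenerate and unique vehicle pairs (Eq. 1).

    For each of the n(n-1)/2 vehicle pairs, count shared task types
    (2 = redundant, 1 = degenerate, 0 = unique) and normalise by the
    total number of pairwise comparisons.
    """
    n = V.shape[0]
    counts = {0: 0, 1: 0, 2: 0}
    for i, j in combinations(range(n), 2):
        counts[int(V[i] @ V[j])] += 1       # shared task types of the pair
    pairs = n * (n - 1) / 2
    return {k: c / pairs for k, c in counts.items()}
```

The fleet's degeneracy D of Eq. 1 is the returned fraction for key 1; the redundancy fraction is the value for key 2.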

Evaluating Robustness
Possibly the clearest definition of robustness is put forth in [57], which states: a [property] of a [system] is robust if it is [invariant] with respect to a [set of perturbations]. The square brackets are used to emphasize that measurements of robustness require one to specify the system, the property, the set of perturbations, and some measure of invariance (e.g., relative to some norm). Within the context of the proposed case study, robustness is related to system fitness and is characterized as the maintenance of good fitness values within a set of distinct scenarios, e.g. an aggregation function of fitness values (Eq. 3) with f < θ, where θ describes some satisfactory fitness threshold. One drawback to such measurements arises when the threshold is high: complete and immediate failure is then unlikely. While many fleets will survive or appear robust to most conditions, they may still exhibit substantial differences in their ability to maintain competitive performance, with important subsequent effects on their long-term survival or growth prospects, i.e. there is a decoupling between robustness over short and long timescales. On the other hand, with tolerance thresholds too low, all fleets will fail to meet the fitness requirements and will appear equally poor. This poses a technical challenge to fleet optimization as it eliminates selectable differences if corrective steps are not taken, e.g. thresholds are continually changed over time to ensure distinctions are possible in comparing fleets that would otherwise be consistently below or above satisfactory levels. One alternative is to create a two-tiered selection criterion whereby fleets are first compared based on satisfying fitness thresholds, and when differences are not resolved by this comparison they are further compared by their average fitness over all scenarios. This is the approach that we take in the analysis of our experiments.
When using this measurement in preliminary tests, we found it was still necessary to tune the threshold (or tune scenario volatility) in order to resolve differences between fleets. While our findings might change when scenarios are assigned different priorities or when the number of scenarios is small (< 20), our tests indicate that reporting an average fitness generates qualitatively similar conclusions to the use of a tuned robustness measurement. Average fitness is not a commonly used measure of robustness; however, it does provide some advantages in the current study. In particular, an average fitness allows for a direct observation of abrupt performance changes in a system that could otherwise be dismissed as minor differences attributable to the crossing of a threshold.
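The two-tiered selection criterion might be sketched as follows. Treating "satisfying fitness thresholds" as a count of scenarios kept below the threshold θ is our reading of the text, not a stated implementation detail:

```python
def fitter(fleet_a_fits, fleet_b_fits, threshold):
    """Two-tiered comparison sketch: True if fleet A is preferred over B.

    Tier 1: rank by how many scenarios each fleet keeps below the
            satisfactory-fitness threshold (more is better).
    Tier 2: break ties by average fitness over all scenarios
            (lower is better, per Eq. 3).
    """
    sat_a = sum(f < threshold for f in fleet_a_fits)
    sat_b = sum(f < threshold for f in fleet_b_fits)
    if sat_a != sat_b:
        return sat_a > sat_b
    avg_a = sum(fleet_a_fits) / len(fleet_a_fits)
    avg_b = sum(fleet_b_fits) / len(fleet_b_fits)
    return avg_a < avg_b
```

With this comparison, two fleets that both clear (or both miss) the threshold on every scenario are still distinguishable by their average fitness, which avoids the loss of selectable differences discussed above.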

15. Results
Evolution in Environments with Decomposable Volatility
In the first set of experiments, we optimize fleets of vehicles to effectively handle a set of anticipated future scenarios. Each scenario defines a set of tasks that the fleet must perform and scenarios differ from one another in the frequency of task requirements. We proceed by investigating problems where environmental uncertainty is highly constrained and where reductionism in fleet design is expected to be favored. In particular, fleets are evolved in environments where, unbeknownst to the fleet optimization algorithm, the differences between scenarios exactly match vehicle capabilities within a fleet of a specific structural decomposition (i.e. a particular redundant fleet). We refer to these as scenarios with decomposable variation (see the decomposable scenario generation described above). With environmental variations matching a decomposable architecture, any fleet with that same decomposition will find that these variations can be decomposed into a set of smaller independent subproblems that each can be addressed by a single class of vehicle and thereby solved in isolation from other vehicle types. This also means that any changes in fleet design that improve performance for one scenario will not result in lowered performance in other scenarios, i.e. performance across scenarios is correlated and evolution proceeds within a unimodal fitness landscape. These features provide ideal conditions for reductionist engineering design and we see in Figure 2a that such design approaches can generate maximally robust fleets. Notice that according to our definition of fleet fitness in Eq. 3, a fleet is considered more robust than another fleet when its robustness measure is smaller. Maximum robustness is achieved when our robustness measure approaches 0. Allowing fleet architectures to deviate from a pure decomposition should not be beneficial for these types of predictable problems. While the environment varies in an entirely decomposable manner, available fleet responses do not.
During fleet evolution, we see in Figure 2a that fleets with degeneracy initially evolve quickly and can learn how to respond to most variations within the environment, although evolution does not discover a maximally robust architecture in every problem instance (see Training results in Figure 2b).

Robustness in Environments with Uncertainty


We next evaluate how the evolved fleets respond to various forms of environmental uncertainty. In particular, fleets evolved under the previous set of environments (now called training scenarios) are reevaluated in a new set of environments (validation scenarios). Uncertainty is characterized by the differences between the training scenarios used during evolution and the new scenarios that are defined within the validation set. We present results for two validation tests that allow us to highlight some of the strengths and weaknesses that we have observed in the two fleet architectures. Validation I: As described in the Validation scenarios section above, the first validation set is generated by selecting a scenario from the training set and using its position in scenario space as a seed for generating a new set of scenarios, via short random walks. The intuition behind this validation test is that actual future conditions are often partially captured by our expectations, such that some expectations/scenarios are invalidated while other regions of scenario space become more relevant and are expanded upon. We impose the following additional constraints on validation set I: differences between scenarios must be decomposable when comparing scenarios within the validation set as well as across the training and validation sets.

While redundant fleets are not guaranteed to respond optimally to every new scenario, the constraints imposed on the validation set should place these fleets at a considerable advantage. It is seen in Figure 2b that redundant fleets often maintain optimal robustness; however, they also occasionally exhibit poor performance outcomes. We found bimodal validation responses to be a common phenomenon in redundant fleets. In particular, redundant fleets tended to provide highly effective responses towards scenarios that fit within their design limits, but once these limits were exceeded, their performance degraded markedly. In contrast, the degenerate fleets are not designed in a manner that is biased towards the validation set's decomposition. In many validation instances the fleet's robustness is degraded as its repertoire of responses fails to match the new requirements (Figure 2b). However, performance losses were attenuated, i.e. degenerate fleets were less sensitive to environmental demands that deviated from optimal conditions than were redundant fleets. Validation II: The next set of validation tests utilizes similar conditions as before except that non-decomposable variation is now permitted in the set of validation scenarios. More precisely, differences between scenarios are decomposable when comparing scenarios within the training set, but not decomposable for comparisons within the validation set or for comparisons across sets. This is a harder validation test to satisfy, firstly because the validation scenarios do not meet the assumptions implicit in the training data. Secondly, decomposition requirements restrict scenarios to compact regions of scenario space and, in removing this restriction, the scenarios can now be more distinct from one another, even if their distance from the seed position remains unchanged. Under these validation conditions, we see that the robustness of redundant fleets considerably degrades.
Degenerate fleets also suffer degradation in robustness; however, they display substantially better responses towards this type of novelty compared to redundant fleets. Our findings vary somewhat depending on the parameter settings that are used to generate the validation tests. For both validation sets, as the distance between the training seed and validation seed grows, robustness decays in both types of fleets, although this decay is more rapid for the redundant fleets. As the volatility is reduced in the training set, the robustness advantage of degenerate systems also decreases; in the limit where there is only a single member in the set of training scenarios, both fleet classes always evolve to optimal robustness.

Evolution in Environments with Non-Decomposable Volatility


In strategic planning, it is unlikely that anticipated future states (e.g. training scenarios) or the actual future states (e.g. validation scenarios) will involve compartmentalized variations. In the context of our model, this means that task requirements are more likely to change such that the covariance in task frequencies cannot be decomposed into a set of isolated relationships or sub-problems. The second set of validation scenarios evaluated in Figure 2b were formulated to capture this type of non-decomposable uncertainty (see the non-decomposable scenario generation described above) as they allow correlations to arise between the frequencies of any pair of task types. For these reasons, we focus our attention on the robustness and design adaptation characteristics for fleets that evolve under Validation II type non-decomposable environments. By tracking fleet robustness during the course of evolution, we now see in Figure 3a that degenerate fleets become

substantially more robust and do so more quickly than fleets evolved using reductionist design principles. In our view, there are two factors that primarily contribute to these observed differences: i) the evolvability of the fleet architecture and ii) differences in the robustness potential of the system. Conceptually, evolvability is about discovering design improvements. In Figure 3d, we record the probability distribution for the time taken by the optimization algorithm to find new improvements to the fleet design. As can be seen, degenerate architectures allow for a more rapid discovery of design improvements than redundant architectures. A further analysis of fleet evolution confirms that design adaptation rates predominantly account for this divergence in robustness values. In Figure 3b we track degeneracy levels within the fleet and find that, when permitted, degeneracy readily emerges in the fleet architecture. We will show that the growth of degeneracy during evolution is a key factor in both the robustness and design adaptation of these fleets.

Adaptation to Novel Conditions


One of the claimed advantages of degeneracy is that it supports a system in dealing with novelty; not only in system design but also in environmental conditions. To investigate this claim, we expose the fleets to an entirely new set of scenarios and allow the fleet designs to re-evolve (Figure 3a, generations 3000-6000). When high levels of degeneracy have already evolved within the fleet architecture, we see that the speed of design adaptation increases. When only redundancy is permitted, design adaptation after shock exposure at generation 3000 appears marginally worse. In our analysis of these results, we determined that the degraded adaptation rates in redundant fleet (re)evolution are an artifact of the optimization procedure and can readily be explained based on the following considerations: i) we use a population-based optimization algorithm, i.e. many fleet designs are being modified and tested concurrently, ii) the population of fleets is converging over time, i.e. differences between fleets, or diversity, are lost (Figure 3c), and iii) it is generally true, in both biology and population-based optimization, that a loss of diversity negatively affects a population's adaptive response to abrupt environmental change. We can confirm the population diversity effect by initializing populations at low diversity or, more simply, by reducing the population size, which results in redundant fleet robustness profiles that are indistinguishable when comparing evolution in old (generations 0-3000) and new (generations 3000-6000) environments. In contrast, degenerate fleets evolve improved designs more rapidly in the new environment (Figure 3a) and this effect is not eliminated by altering the population size. However, when we restrict the total amount of degeneracy that is allowed to arise in the fleet (Figure 3b), the fitness profiles before and after shock exposure become increasingly similar (Figure 3a).
Together, these findings support the hypothesis that degeneracy improves fleet design evolvability under novel conditions.

Innate and Evolved Differences in the Robustness Potential of Degenerate Fleets


In our introductory remarks, we proposed that degeneracy may allow for pervasive flexibility in the organization of system resources and thereby afford general advantages for a system's robustness. To evaluate this proposal, in Figure 3d we introduce fleets that are not evolved but instead are artificially constructed (see the construction of un-evolved fleets described above) so that they have levels of degeneracy and redundancy that are similar to their evolved counterparts from the experiments in Figure 3a. Figure 4 reports the robustness of these

(un-evolved) fleets against randomly generated scenarios. We see that innate advantages in robustness are associated with degenerate fleet architectures. We contend that this finding is not intuitively expected based on a reductionist view of vehicle capabilities given that: both degenerate and redundant fleets have the same number of vehicles (and the fleet design spaces are constrained to identical sizes), fleets are presented with the same scenarios, and vehicles have access to the same task capabilities. It is interesting to ask whether this innate robustness advantage can entirely explain our previous results or whether the manner in which degeneracy evolves and becomes integrated within the fleet architecture is also important, as is proposed in the networked buffering hypothesis [17]. To explore this question, in Figure 4 we show robustness results of the final evolved fleets from Figure 3a, but now robustness is reevaluated against randomly generated scenarios. Here we see that, when degeneracy evolves, a fleet's robustness to novel conditions becomes further enhanced, indicating that the organizational properties of degeneracy play a significant role in a system's robustness and capacity to adapt. Importantly, it suggests that the flexibility is tied to the history of the system's evolution yet remains pervasive and can emerge without foresight, thus satisfying our third stated requirement for supporting innovation. In contrast, the capacity to deal with novel conditions is unaffected by evolution in redundant fleets and remains generally poor for both evolved and randomly constructed fleet designs.

Costs in the Design and Redesign of Fleets


So far, we have found that degenerate fleet designs can be broadly robust as well as highly adaptable, enabling these fleets to exploit new design options and to respond more effectively to novel environments. Only when environments were stable and characterized to favor reductionist principles did we find redundant architectures to slightly outperform degenerate architectures. While these preliminary findings are promising, there are additional factors that should be considered when weighing the pros and cons of degeneracy design principles. Depending on how the planning problem is defined, it may be of interest to know how the fleet composition changes in response to a new set of environmental demands. For instance, if we considered the initial conditions in our experiments to represent a pre-existing fleet, then it would be important to know the number of vehicles that are being replaced, as this would clearly influence the investment costs. We would expect that, without having explicit cost objectives in place, degenerate fleets would diverge rapidly from their original conditions. Interestingly, we find in Figure 5 that divergence is not substantially greater in comparison to reductionist design conditions and that more favorable cost-benefit trade-offs appear to exist for evolution under non-decomposable environments, ceteris paribus. For example, in decomposable environments an optimally redesigned redundant fleet was typically found by replacing 15% of the vehicles, while degenerate fleets achieved near optimal performance when approximately 20% of the fleet was redesigned (Figure 5a). In complex environments, no fleet is typically able to discover an optimal redesign; however, the increased propensity to discover adaptive design options is shown to confer degenerate fleets with a performance advantage that becomes more pronounced as larger investments are considered (Figure 5b).
However, such conclusions do not factor in additional costs from the development of new vehicle designs or reduced costs that may come from economies of scale during the manufacturing and purchase

of redundant vehicles. While such costs depend on the planning context, these factors will influence the advantages and disadvantages from degeneracy and warrant further exploration. Having noted the potential costs from degeneracy, one should not immediately conclude that fleet scaling places degeneracy design principles at a disadvantage. For instance, in Figure 5 we consider fleet evolution (under non-decomposable environments) at different sizes of our model. Here we see that the robustness advantage from degeneracy has a non-linear dependence on fleet size. Moreover, because degeneracy is a relational property, modifying the design of a small number of vehicles can have disproportionate influence on degeneracy levels within a fleet. For instance, in one of the data sets in Figure 3a only 20% of the fleet is permitted to deviate from its initially redundant architecture yet we observe considerable improvements in degeneracy and fleet robustness. In short, the careful redesign of a small subset of the fleet could provide considerable advantages, particularly for large fleet sizes.

Multi-Scaling Effects
In our fleet fitness evaluation, it was necessary to determine how vehicle resources can be matched to the task requirements for each scenario. While decision support tools might view this as a simulation [51], in operations research this simulation would be formulated as a resource assignment problem. Regardless of how the fitness evaluation is viewed conceptually, architectural properties of the fleet are likely to have non-trivial consequences in the assignment of vehicles to tasks. As formally described in Section 14, we approximate task assignment by a local decision-making heuristic. While this heuristic simplifies our experiments, it also maintains a rough analogy to how these decisions are made in reality, i.e. by vehicle owners (management) instead of optimization algorithms. The owner of a vehicle distributes the vehicle's resources (time) over suitable tasks. However, as other owners make their own decisions, the relative importance of remaining unfulfilled tasks can change and owners may decide to readjust their vehicles' tasks, thus (indirectly) adapting to the decisions of others. In the following, we investigate how a fleet's architecture influences the amount of time required to complete this task assignment procedure and, similarly, how placing restrictions on the time allotted to the procedure, i.e. changing the maximum simulation runtime, influences the performance results for different fleet architectures. In Figure 6a, we evolve fleets with different settings for the maximum simulation runtime and record how this alters the final robustness achieved after evolution. In the limiting case where vehicle task assignments are never updated, a fleet has no capacity to respond to changes in the environment and performance is entirely genetically determined, i.e. vehicles are not functionally plastic. In this case the problem collapses to one that is equivalent to optimizing for a single scenario (i.e.
the mean task requirements of the scenario set) and the two types of fleets evolve to the same poor performance. When fitness evaluation is extended to allow short simulation times (i.e. a few decision adjustments), degenerate architectures display modestly better matches between vehicles and tasks, and this continues to improve as simulation time is increased and robustness converges to near optimal values (Figure 6a). The experiments reported earlier (in Sections 3.1 to 3.6) did not impose restrictions on the simulation runtime, i.e. task assignment heuristics were run until no decisions could be readjusted. To evaluate the task assignment efforts that took place in these experiments, Figure 6c plots the frequency distribution for the number of decision adjustments that occur as a fleet responds to a scenario. As can be seen, the discovery of a stable assignment can take considerably longer for degenerate fleets than for redundant fleets.

However, we have also seen in Figure 6a that constraining the assignment runtime had only modest repercussions for performance when the fleets were forced to evolve under these constraints. We have found two factors that contribute to these seemingly contradictory findings. Firstly, the largest fitness improvements that result from decision adjustments predominantly occur during the early stages of a simulation. This is shown in Figure 6b, where fitness changes are recorded as decision adjustments are made. While some fleet-scenario combinations potentially require many decision adjustments, the fleet still provides decent performance if the simulation is terminated early. We also speculate that a second contributing factor could be that evolution under restricted runtimes influences how the architecture evolves. With simulation runtime restricted, evolution may prefer fleet architectures that find decent assignments more quickly. To investigate this conjecture, we evolve fleets with simulation runtime restricted to at most ten decision adjustments. We then remove this restriction and record the total number of decision adjustments as the fleets respond (without runtime restriction) to new scenarios (Figure 6d). Compared to the results in Figure 6c, we see a large reduction in runtime distributions, indicating that fleets have been designed to be more efficient in task assignment. In addition, higher-quality vehicle assignments are found more quickly for degenerate fleets optimized under these constrained conditions (Figure 6b). In summary, an exploration of the underlying resource assignment problem indicates that important interactions exist in fleet performance characteristics as they are observed at different timescales of our planning problem. However, in the context of the distributed decision-making heuristics that were implemented, we have generally found that degeneracy-based design can result in high robustness without incurring large implementation costs.
Importantly, however, satisfactory cross-scale relationships depend on whether short-timescale objectives (i.e. simulation runtime) are being accounted for during fleet planning.
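The distributed decision-adjustment process described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact heuristic: the greedy switching rule, the capability-set encoding of vehicles, and the shortfall measure are all simplifying choices made here for clarity.

```python
def respond(vehicles, demand, max_adjustments=100):
    """Distributed greedy task assignment (an illustrative sketch, not the
    paper's exact heuristic).  `vehicles` is a list of capability sets and
    `demand` maps task type -> number of tasks.  Each vehicle repeatedly
    switches to the capability with the largest unmet demand; we count
    decision adjustments until no vehicle switches or the cap is hit."""
    assignment = [sorted(caps)[0] for caps in vehicles]   # arbitrary start
    adjustments = 0
    while adjustments < max_adjustments:
        changed = False
        for i, caps in enumerate(vehicles):
            load = {t: 0 for t in demand}
            for a in assignment:
                load[a] += 1
            unmet = lambda t: demand[t] - load[t]
            best = max(sorted(caps), key=unmet)
            # switch only if it strictly reduces the shortfall at `best`
            # relative to the shortfall created by abandoning the current task
            if unmet(best) > unmet(assignment[i]) + 1:
                assignment[i] = best
                adjustments += 1
                changed = True
        if not changed:
            break
    shortfall = sum(max(0, demand[t] - sum(a == t for a in assignment))
                    for t in demand)
    return assignment, adjustments, shortfall
```

Even this caricature reproduces the qualitative pattern discussed above: on a skewed scenario, a degenerate fleet (partially overlapping capabilities) reaches a lower task shortfall than a redundant fleet of identical pairs, and the `max_adjustments` cap plays the role of the restricted simulation runtime.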

16. Discussion
Fleet degeneracy transforms a trivial resource allocation problem that, in principle, can be solved analytically into a problem with parameter interdependencies that is not analytically tractable for large systems. From a traditional operations research perspective, degeneracy should thus be avoided because it increases the difficulty of finding a globally optimal assignment of resources. However, despite this increased complexity in centralized resource control, our simulation results indicate that simple distributed decision-making heuristics can provide excellent resource allocation assignments in degenerate fleets. The superior fleet design readily lends itself to a more effective allocation of resources compared with the globally best solutions for fleets designed on reductionist principles. By exploring this conflict between design and operational objectives, our study has identified conditions that support but also limit the benefits that arise from degeneracy. These conditions appear to be in agreement with observations of distributed control in biological systems, where trait robustness generally emerges from many localized and interrelated decisions [20]. How these local actions translate to system traits is not easily understood by breaking the system down into its respective parts. The lack of centralized control has thus put into question whether biological principles are compatible with engineering and socio-technical systems, where centrally defined global objectives play an important role in assessing and integrating actions across an organization. However, and in contrast to common intuition, we have found some evidence to suggest that difficulties in centralized control do not preclude the possibility of coherent system behaviors that are locally determined yet also driven by globally defined objectives. Reductionist design principles should be well suited to planning problems that can be readily separated into sub-problems and where variations in problem conditions are bounded and predictable. On the other hand, we propose that degeneracy is better suited to environments that are not easily decomposed and that are characterized by unpredictable novelty.

Implications for Complex Systems Engineering


As a design principle, degeneracy could be applicable to several domains, in particular in circumstances where
1) distributed decision-making is achievable (e.g. in markets, open-source software, some web 2.0 technologies, human organizations and teams with a flat management structure);
2) agent motivations can be aligned with system objectives; and
3) agents are functionally versatile while also following protocols for reliable agent-agent collaboration.

When these conditions are met, designing elements with partially overlapping capabilities can dramatically enhance system resilience to new and unanticipated conditions, as we have shown. On the other hand, if there is a need to maintain centralized decision-making and system-level transparency, or in cases where a historical bias favoring reductionism is deeply ingrained, implementing degeneracy design principles is likely to prove difficult and possibly unwise.

There is a growing awareness of the role played by degeneracy in biological evolution. However, these insights have not yet been taken into account in the application of nature-inspired principles to solve real-world problems. We propose that our findings can shed new light on a diverse range of research efforts that have taken insights from biology and complexity science to address uncertainties arising in systems engineering and planning [13] [58] [15] [16]. Degeneracy is a system property that:
1) based on both empirical evidence and theoretical arguments, can facilitate the realization of other important properties in complex systems, including multi-scaled complexity [3] [59], resilience under far-from-equilibrium dynamics [23], robustness within outcome spaces [60], the establishment of solution-rich configuration spaces, and evolvability [3] [32] [61];
2) depends on the presence of other basic system properties that were implemented in our experiments, such as distributed decision-making [60], feedback [48] [62], modularity/encapsulation [61], protocols/interfaces/tagging [24] [48], weak/loose coupling [61] [63], exploration [32], agent agility [14] [62], and goal alignment within a multi-scaled system [3] [62].

Reengineering in Complex Systems


Our introductory remarks regarding systems engineering intentionally emphasized distinctions between designed and evolved systems; however, engineering in practice can take on features of both. Engineering is not simply a matter of drawing up a plan and implementing it. Human-constructed systems typically coevolve with their environment during planning, execution, and even decommissioning activities. While biological terminology is not common in this domain, several attributes related to degeneracy are found in socio-technical systems and are sometimes intentionally developed during planning.

For instance, functional redundancy is a common feature in commercial/social/internal services when a sustained function under variable conditions is critical to system performance or safety, e.g. communications and flight control for commercial aircraft. Functional redundancy is rationally justified in these cases based upon its relationship to service reliability (see Section 1), which is analogous to the principles underpinning economic portfolio theory, i.e. risks are mostly independent while returns are additive. If nothing else, however, this study and other recent studies have shown that a significant amount of the robustness derived from degeneracy has origins that extend beyond simple diversity or portfolio effects and is conferred instead as an emergent system property. For instance, previous work has found evidence that the enhanced robustness effects from degeneracy originate from distributed compensatory pathways, or so-called networked buffers [17]. Moreover, it is not simply reliability in function but also the relationship between robustness and innovation that makes degeneracy especially important to systems engineering. To better understand this relationship between degeneracy, robustness, and innovation, we have taken an interdisciplinary approach that combines systems thinking, biological understanding, and experimentation using agent-based simulations. In ongoing research, we are using this approach to explore how degeneracy might facilitate successful change within an organization, as well as successful reengineering for systems that display functional interdependencies in the mapping between agent actions and system capabilities.
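The portfolio analogy invoked above can be made concrete with two one-line formulas. The function names and numbers here are illustrative choices, not quantities from the study; the point is only that independent failure risks multiply away under pure redundancy, while diversification shrinks relative variability as 1/sqrt(m).

```python
import math

def outage_probability(p, n):
    """Probability that all n functionally redundant providers fail at
    once, assuming each fails independently with probability p."""
    return p ** n

def relative_portfolio_risk(m):
    """Standard deviation of the mean of m independent, identically
    distributed returns, relative to a single return: the classic
    1/sqrt(m) diversification (portfolio) effect."""
    return 1.0 / math.sqrt(m)
```

Both effects are purely statistical consequences of independence; the emergent, networked-buffering contribution of degeneracy discussed in the text is precisely the robustness that these simple diversity calculations fail to capture.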

17. Conclusions
In this paper, we have investigated the properties of degenerate buffering mechanisms that are prevalent in biological systems. In comparison to their engineering counterparts, these buffering mechanisms were found to afford robustness to a wider range (both in type and magnitude) of perturbations, and to do so more efficiently, due to the manner in which these buffers interact and cooperate. While seemingly paradoxical, we also hypothesized how the same mechanisms that confer trait stability can also facilitate system adaptation (changes to traits) under novel conditions. With the design of transportation fleets taken as a case study, we reported evidence supporting this hypothesis, demonstrating that the incremental growth of degeneracy results in fundamentally different robustness and design adaptation properties within fleets. The theories that underpin these ideas are conceptually straightforward [20] [18] yet also operationally useful, as they position degeneracy as a mechanistic facilitator of robustness and adaptability that, in principle, can be applied outside biological contexts [18]. In looking to tackle real-world problems using inspiration from biology, it is important to determine whether sufficient parallels exist between the problem and the biological system of interest. Here we have proposed that the investment decisions and subsequent operation of complex engineered systems consisting of versatile semi-autonomous agents provide a general domain where these requisite conditions are met and where degeneracy design principles could prove advantageous. A number of systems can be characterized in this manner, with strategic planning for field vehicle fleets provided as one illustrative example. Other suitable domains may include particular applications of agile manufacturing [64] and of swarm robotics. Additional challenges arise in these other domains, however, because humans play a smaller role in the communication and control of agent-agent interactions. For instance, the design and management of protocols for agent-agent communication and co-regulation were ignored in our case study, due to the central role of humans in managing military vehicle operations, but are important issues in agile manufacturing, swarm robotics, and similar systems. Our future research will continue to investigate the influence of degeneracy in systems that comprise both humans and hardware assets; however, we will increasingly focus on the social dimension of such problems. For instance, some of our future studies will investigate how degeneracy in the skill mix of military and emergency response teams can influence team flexibility when such teams have to deal with surprises and novel threats. Team flexibility may become particularly important in situations where individuals are unexpectedly required to take on new roles in a crisis.

18. Acknowledgements
This work was partially supported by a DSTO grant on Fleet Designs for Robustness and Adaptiveness and an EPSRC grant (No. EP/E058884/1) on Evolutionary Algorithms for Dynamic Optimisation Problems: Design, Analysis and Applications.

19. Figures

Figure 1: Top panel) Fleet encoding in the genetic algorithm. The genetic algorithm evolves a population of N individuals, each of which represents a complete fleet of vehicles: at each locus, an individual/fleet has a specific vehicle type, as specified by two distinct task types. Middle panel) A simple example highlighting the differences between sets of redundant and degenerate vehicles: given four vehicles (each vehicle represented as a pair of connected nodes) and four task types A, B, C, D, the fleet on the left-hand side consists of two sets of identical vehicles. The fleet on the right-hand side consists of four unique vehicles with partial overlap between their functionalities (i.e., task types); in both cases, each task is represented to the same degree. Bottom left panel) An illustration of the task-type frequency correlations that characterize the variation in decomposable and non-decomposable environments. Bottom right panel) Environmental scenarios differ from one another in the frequency of task types (differentiated by color) but not in the total number of tasks (vertical axis).
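The redundant/degenerate contrast in the middle panel of Figure 1 can be written directly as data; the task-type names and the tuple encoding are illustrative choices. Both fleets cover every task type with exactly two vehicles, yet the redundant fleet contains only two distinct vehicle types while the degenerate fleet contains four.

```python
from collections import Counter

# Two fleets of four vehicles over task types A, B, C, D, each vehicle
# handling exactly two task types (illustrative encoding of Figure 1).
redundant_fleet = [("A", "B"), ("A", "B"), ("C", "D"), ("C", "D")]
degenerate_fleet = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

def coverage(fleet):
    """Number of vehicles able to handle each task type."""
    return Counter(task for vehicle in fleet for task in vehicle)
```

Because total task coverage is identical, any performance difference between the two designs must come from how the overlaps are arranged, not from raw capacity.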

Figure 2 Panel a: Robustness profiles of degenerate and redundant fleet designs evolved in environments with decomposable variation (training scenarios). Panel b: Robustness is then reevaluated within new validation scenarios. Training shows robustness values of all 30 fleets at the end of evolution against the training scenarios of Panel a. Validation I and Validation II show robustness values for the 30 evolved fleets against two different types of validation scenarios (described in the main text). The inset shows the same results as the main panel but on a linear (not logarithmic) robustness scale.

Figure 3 Panel a: Robustness profiles of degenerate and redundant fleet designs evolved in environments with non-decomposable variation and exposed to a shock (i.e. an entirely new set of scenarios) at generation 3000. Characteristic results are also shown for experiments where only a predefined percentage of vehicles can be designed to no longer conform to the initial redundant fleet architecture, thus allowing for some restricted level of partially overlapping capabilities within the fleet. Panel b: Degeneracy is measured (for fleets where it can emerge) during the course of evolution. Panel c: Gene diversity during the evolution of Panel a, measured as the proportion of vehicles that have changed in pair-wise comparisons of fleets within the population. Panel d: Histogram of the number of offspring sampled before an improvement is found (time length). Sampling is restricted to the evolution of fleets throughout the first 3000 generations of Panel a.

Figure 4 Comparisons of the robustness of evolved and un-evolved fleets towards randomly generated scenarios. Un-evolved degenerate fleets were constructed with levels of degeneracy as large as evolved fleets.

Figure 5 Panel a: Robustness of evolved fleets plotted against the proportion of vehicles that have changed when comparing an evolved fleet with its original design (at gen=0). Evolution takes place within a decomposable environment. Panel b: Same as Panel a but with evolution taking place in non-decomposable environments. Panel c: Comparisons of the evolved fleet robustness for degenerate and redundant architectures at different fleet sizes. In these experiments the fleet size, the number of task types T, random walk size, and maximum generations are all increased by the same proportion.

Figure 6 Panel a: Robustness of fleets that have been evolved with different restrictions on maximum simulation runtime. Panel b: Fitness of a fleet is evaluated in a single scenario, where fitness is recorded during a simulation, i.e. as vehicle decision adjustments are made. The results are shown for degenerate fleets that have evolved in conditions where the maximum simulation runtime is 100 and 10 readjustments. Panel c: Actual runtime distribution for fleets evolved under unrestricted runtime conditions. Panel d: Actual runtime distribution for fleets evolved under a maximum simulation runtime of 10, but where the distribution is evaluated with these restrictions removed.

20. References
[1] J. Fromm, "On engineering and emergence," Arxiv preprint nlin/0601002, 2006.
[2] J. Ottino, "Engineering complex systems," Nature, vol. 427, p. 399, 2004.
[3] A. Minai, et al., "Complex engineered systems: A new paradigm," in Complex Engineered Systems, ed: Springer, 2006, pp. 1-21.
[4] L. Beckerman, "Application of complex systems science to systems engineering," Systems Engineering, vol. 3, pp. 96-102, 2000.
[5] C. Calvano and P. John, "Systems engineering in an age of complexity," Systems Engineering, vol. 7, pp. 25-34, 2003.
[6] P. Bak and M. Paczuski, "Complexity, Contingency, and Criticality," Proceedings of the National Academy of Sciences, USA, vol. 92, pp. 6689-6696, 1995.
[7] J. M. Carlson and J. Doyle, "Complexity and robustness," Proceedings of the National Academy of Sciences, USA, vol. 99, pp. 2538-2545, 2002.
[8] S. A. Kauffman, "Requirements for evolvability in complex systems: orderly components and frozen dynamics," Physica D, vol. 42, pp. 135-152, 1990.
[9] S. D. Gribble, "Robustness in Complex Systems," in The Eighth Workshop on Hot Topics in Operating Systems (HotOS VIII), 2001.
[10] J. Mogul, "Emergent (mis)behavior vs. complex software systems," in European Conference on Computer Systems, Leuven, Belgium, 2006, pp. 293-304.
[11] G. Polacek and D. Verma, "Requirements Engineering for Complex Adaptive Systems: Principles vs. Rules," in 7th Annual Conference on Systems Engineering Research (CSER 2009), Loughborough University, 2009.
[12] J. Lane and B. Boehm, "System of systems lead system integrators: Where do they spend their time and what makes them more or less efficient?," Systems Engineering, vol. 11, pp. 81-91, 2007.
[13] S. A. Sheard and A. Mostashari, "Principles of complex systems for systems engineering," Systems Engineering, vol. 12, pp. 295-311, 2008.
[14] A.-M. Grisogono and M. Spaans, "Adaptive Use of Networks to Generate an Adaptive Task Force," in Proceedings of 13th International Command and Control Research and Technology Symposium (ICCRTS), Washington, 2008.
[15] A. Ilachinski, Land Warfare and Complexity, Part II: An Assessment of the Applicability of Nonlinear Dynamics and Complex Systems Theory to the Study of Land Warfare: Center for Naval Analyses, Alexandria, VA, 1996.
[16] A. Ryan, "A multidisciplinary approach to complex systems design," Department of Mathematics (thesis), University of Adelaide, Adelaide, 2007.
[17] J. M. Whitacre and A. Bender, "Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems," Theoretical Biology and Medical Modelling, vol. 7(20), 2010.
[18] J. M. Whitacre, "Degeneracy: a link between evolvability, robustness and complexity in biological systems," Theoretical Biology and Medical Modelling, vol. 7(6), 2010.
[19] J. M. Whitacre and A. Bender, "Degeneracy: a design principle for achieving robustness and evolvability," Journal of Theoretical Biology, vol. 263, pp. 143-153, 2010.
[20] G. M. Edelman and J. A. Gally, "Degeneracy and complexity in biological systems," Proceedings of the National Academy of Sciences, USA, vol. 98, pp. 13763-13768, 2001.
[21] A. Kurakin, "Scale-free flow of life: on the biology, economics, and physics of the cell," Theoretical Biology and Medical Modelling, vol. 6, 2009.
[22] S. Carpenter, et al., "From metaphor to measurement: resilience of what to what?," Ecosystems, vol. 4, pp. 765-781, 2001.
[23] C. Holling, "Engineering resilience versus ecological resilience," in Engineering within Ecological Constraints, ed: National Academy Press, 1996, pp. 31-43.
[24] M. E. Csete and J. C. Doyle, "Reverse Engineering of Biological Complexity," Science, vol. 295, pp. 1664-1669, 2002.
[25] J. Stelling, et al., "Robustness of Cellular Functions," Cell, vol. 118, pp. 675-685, 2004.
[26] S. Atamas and J. Bell, "Degeneracy-Driven Self-Structuring Dynamics in Selective Repertoires," Bulletin of Mathematical Biology, vol. 71, pp. 1349-1365, 2009.
[27] N. N. Batada, et al., "Still stratus not altocumulus: further evidence against the date/party hub distinction," PLoS Biology, vol. 5, p. e154, 2007.
[28] A. Kurakin, "Self-organization versus Watchmaker: ambiguity of molecular recognition and design charts of cellular circuitry," Journal of Molecular Recognition, vol. 20, pp. 205-214, 2007.
[29] A. Kurakin, "Order without Design," Theoretical Biology and Medical Modelling, vol. 7, p. 12, 2010.
[30] J. M. Whitacre and A. Bender, "Degeneracy: a design principle for achieving robustness and evolvability," Journal of Theoretical Biology, vol. 263, pp. 143-153, 2010.
[31] A. Wagner, "Robustness and evolvability: a paradox resolved," Proceedings of the Royal Society of London, Series B: Biological Sciences, vol. 275, pp. 91-100, 2008.
[32] M. Kirschner and J. Gerhart, "Evolvability," Proceedings of the National Academy of Sciences, USA, vol. 95, pp. 8420-8427, 1998.
[33] J. M. Whitacre and A. Bender, "Degenerate neutrality creates evolvable fitness landscapes," in WorldComp 2009, Las Vegas, Nevada, USA, 2009.
[34] J. M. Whitacre, et al., "The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments," in PPSN XI, Krakow, Poland, 2010, pp. 284-293.
[35] J. M. Whitacre, "Evolution-inspired approaches for engineering emergent robustness in an uncertain dynamic world," in Artificial Life XII, Odense, Denmark, 2010, pp. 559-561.
[36] J. M. Whitacre, "Genetic and environment-induced innovation: complementary pathways to adaptive change that are facilitated by degeneracy in multi-agent systems," in Artificial Life XII, Odense, Denmark, 2010, pp. 431-433.
[37] M. Kimura, "Solution of a Process of Random Genetic Drift with a Continuous Model," Proceedings of the National Academy of Sciences, USA, vol. 41, pp. 144-150, 1955.
[38] M. Kimura, The Neutral Theory of Molecular Evolution: Cambridge University Press, 1983.
[39] T. Ohta, "Near-neutrality in evolution of genes and gene regulation," Proceedings of the National Academy of Sciences, USA, vol. 99, pp. 16134-16137, 2002.
[40] S. Ciliberti, et al., "Innovation and robustness in complex regulatory gene networks," Proceedings of the National Academy of Sciences, USA, vol. 104, pp. 13591-13596, 2007.
[41] W. Banzhaf, "Genotype-Phenotype-Mapping and Neutral Variation - A Case Study in Genetic Programming," in Parallel Problem Solving from Nature - PPSN III, vol. 866, 1994, pp. 322-332.
[42] R. E. Keller and W. Banzhaf, "Genetic programming using genotype-phenotype mapping from linear genomes into linear phenotypes," in Proceedings of the First Annual Conference on Genetic Programming, Stanford, California, 1996.
[43] J. D. Knowles and R. A. Watson, "On the Utility of Redundant Encodings in Mutation-Based Evolutionary Search," Lecture Notes in Computer Science, pp. 88-98, 2003.
[44] V. K. Vassilev and J. F. Miller, "The Advantages of Landscape Neutrality in Digital Circuit Evolution," in Evolvable Systems: From Biology to Hardware, ed: Springer, 2000.
[45] T. Yu and J. F. Miller, "Neutrality and the Evolvability of Boolean Function Landscape," in Proceedings of the 4th European Conference on Genetic Programming, 2001, pp. 204-217.
[46] T. Smith, et al., "Neutrality and ruggedness in robot landscapes," in Congress on Evolutionary Computation, 2002, pp. 1348-1353.
[47] F. Rothlauf and D. E. Goldberg, "Redundant Representations in Evolutionary Computation," Evolutionary Computation, vol. 11, pp. 381-415, 2003.
[48] A.-M. Grisogono and A. Ryan, "Designing complex adaptive systems for defence," in Systems Engineering Test and Evaluation Conference, Canberra, Australia, 2003.
[49] P. K. Davis, et al., "Enhancing Strategic Planning with Massive Scenario Generation: Theory and Experiments," RAND Technical Report, 2007.
[50] P. K. Davis, "New paradigms and new challenges," in Proceedings of the Conference on Winter Simulation, 2005, pp. 1067-1076.
[51] P. K. Davis, "Strategic planning amidst massive uncertainty in complex adaptive systems: the case of defense planning," InterJournal: Complex Systems section, 2002.
[52] A. Kerwin, "None too solid: medical ignorance," Science Communication, vol. 15, pp. 166-185, 1993.
[53] R. J. Lempert, et al., Shaping the Next One Hundred Years: New Methods for Quantitative, Long-Term Policy Analysis: RAND Technical Report, 2003.
[54] S. Gould, Wonderful Life: The Burgess Shale and the Nature of History: WW Norton & Company, 1990.
[55] M. Gell-Mann, "What is complexity," Complexity, vol. 1, pp. 1-9, 1995.
[56] J. P. Crutchfield and E. Van Nimwegen, "The evolutionary unfolding of complexity," in DIMACS Workshop, Princeton, 2002.
[57] D. L. Alderson and J. C. Doyle, "Contrasting views of complexity and their implications for network-centric infrastructures," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 40, pp. 839-852, 2010.
[58] G. Levchuk, et al., "Normative design of organizations - part II: organizational structure," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 32, pp. 360-375, 2002.
[59] G. Tononi, et al., "Measures of degeneracy and redundancy in biological networks," Proceedings of the National Academy of Sciences, USA, vol. 96, pp. 3257-3262, 1999.
[60] M. L. Kuras and B. E. White, "Engineering enterprises using complex-system engineering," in INCOSE Symposium, 2005, pp. 10-14.
[61] E. Fricke and A. Schulz, "Design for changeability (DfC): Principles to enable changes in systems throughout their entire lifecycle," Systems Engineering, vol. 8, 2005.
[62] A.-M. Grisogono, "The Implications of Complex Adaptive Systems Theory for C2," in Proceedings of 2006 Command and Control Research and Technology Symposium (CCRTS), Washington, 2006.
[63] P. Csermely, Weak Links: Stabilizers of Complex Systems from Proteins to Social Networks: Springer Verlag, 2006.
[64] R. Frei and J. M. Whitacre, "Degeneracy and Networked Buffering: principles for supporting emergent evolvability in agile manufacturing systems," Journal of Natural Computing - Special Issue on Emergent Engineering, (in press).

Degeneracy: a link between evolvability, robustness and complexity in biological systems


James M Whitacre School of Computer Science, University of Birmingham, Edgbaston, UK

Abstract
A full accounting of biological robustness remains elusive, both in terms of the mechanisms by which robustness is achieved and the forces that have caused robustness to grow over evolutionary time. Although its importance to topics such as ecosystem services and resilience is well recognized, the broader relationship between robustness and evolution is only starting to be fully appreciated. A renewed interest in this relationship has been prompted by evidence that mutational robustness can play a positive role in the discovery of future adaptive innovations (evolvability) and by evidence of an intimate relationship between robustness and complexity in biology. This paper offers a new perspective on the mechanics of evolution and the origins of complexity, robustness, and evolvability. Here we explore the hypothesis that degeneracy, a partial overlap in the functioning of multi-functional components, plays a central role in the evolution and robustness of complex forms. In support of this hypothesis, we present evidence that degeneracy is a fundamental source of robustness, that it is intimately tied to multi-scaled complexity, and that it establishes conditions that are necessary for system evolvability.

Keywords: complex adaptive systems, degeneracy, evolvability, evolution theory, fitness landscapes, highly optimized tolerance, neutrality, robustness, universal Darwinism

1. Introduction
Complex adaptive systems (CAS) are omnipresent and are at the core of some of society's most challenging and rewarding endeavours. They are also of interest in their own right because of the unique features they exhibit, such as high complexity, robustness, and the capacity to innovate. Especially within biological contexts such as the immune system, the brain, and gene regulation, CAS are extraordinarily robust to variation in both internal and external conditions. This robustness is in many ways unique because it is conferred through rich distributed responses that allow these systems to handle challenging and varied environmental stresses. Although exceptionally robust, biological systems can sometimes adapt in ways that exploit new resources or allow them to persist under unprecedented environmental regime shifts. These requirements to be both robust and adaptive appear to be conflicting. For instance, it is not entirely understood how organisms can be phenotypically robust to genetic mutations yet also generate the range of phenotypic variability that is needed for evolutionary adaptations to occur. Moreover, on rare occasions genetic changes can result in increased system complexity, yet it is not known how these increasingly complex forms are able to evolve without sacrificing robustness or the propensity for future beneficial adaptations. To put it more distinctly, it is not known how biological evolution is scalable [1].

A deeper understanding of CAS thus requires a deeper understanding of the conditions that facilitate the coexistence of high robustness, growing complexity, and a continued propensity for innovation, or what we refer to as evolvability. This reconciliation is not only of interest to biological evolution but also to science in general, because variability in conditions and unprecedented shocks are a challenge faced across many facets of human enterprise. In this opinion paper, we explore and expand upon the hypothesis, first proposed in [2] [3], that a system property known as degeneracy plays a central role in the relationships between these properties. Most importantly, we argue that only robustness through degeneracy will lead to evolvability or to hierarchical complexity in CAS. An overview of our main arguments is shown in Figure 7, with Table 1 summarizing the primary supporting evidence from the literature. Throughout this paper, we refer back to Figure 7 so as to connect individual discussions with the broader hypothesis being proposed. For instance, we refer to Link 6 in the heading of Section 21 in reference to the connection between robustness and evolvability that is discussed there and that is shown as the sixth link in the graph in Figure 7.

The remainder of the paper is organized as follows. We begin by reviewing the paradoxical relationship between robustness and evolvability in biological evolution. Starting with evidence that robustness and evolvability can coexist, in Section 21 we present arguments for why this is not always the case in other domains and how degeneracy might play an important role in reconciling these conflicting properties.
Section 22 outlines further evidence that degeneracy is causally intertwined within the unique relationships between robustness, complexity, and evolvability in CAS. We discuss its prevalence in biological systems, its role in establishing robust traits, and its relationship with information theoretic measures of hierarchical complexity. Motivated by these discussions, we speculate in Section 23 that degeneracy may provide a mechanistic explanation for the theory of natural selection and particularly some more recent hypotheses such as the theory of highly optimized tolerance.

21. Robustness and Evolvability (Link 6)

Phenotypic robustness and evolvability are defining properties of CAS. In biology, the term robustness is often used in reference to the persistence of high-level traits, e.g. fitness, under variable conditions. In contrast, evolvability refers to the capacity for heritable and selectable phenotypic change. More thorough descriptions of robustness and evolvability can be found in Box 1. Robustness and evolvability are vital to the persistence of life, and their relationship is vital to our understanding of it. This is emphasized in [4], where Wagner asserts that "understanding the relationship between robustness and evolvability is key to understand how living things can withstand mutations, while producing ample variation that leads to evolutionary innovations". At first, robustness and evolvability appear to be in conflict, as suggested in the study of RNA secondary structure evolution by Ancel and Fontana [5]. As an illustration of this conflict, the first two panels in Figure 8 show how high phenotypic robustness appears to imply a low production of heritable phenotypic variation [4]. These graphs reflect the common intuition that maintaining developed functionalities while at the same time exploring and finding new ones are contradictory requirements of evolution.

Resolving the robustness-evolvability conflict


However, as demonstrated in [4] and illustrated in panel c of Figure 8, this conflict is unresolvable only when robustness is conferred in both the genotype and the phenotype. On the other hand, if the phenotype is robustly maintained in the presence of genetic mutations, then a number of cryptic genetic changes may be possible, and their accumulation over time might expose a broad range of distinct phenotypes, e.g. by movement across a neutral network. In this way, robustness of the phenotype might actually enhance access to heritable phenotypic variation and thereby improve long-term evolvability. The work by Ciliberti et al. [6] represents a useful case study for understanding this resolution of the robustness/evolvability conflict, although we note that earlier studies arguably demonstrated similar phenomena [7] [8]. In [6], the authors use models of gene regulatory networks (GRN), where GRN instances represent points in genotype space and their expression pattern represents an output or phenotype. Together the genotype and phenotype define a fitness landscape. With this model, Ciliberti et al. find that a large number of genotypic changes to the GRN have no phenotypic effect, thereby indicating robustness to such changes. These phenotypically equivalent systems connect to form a neutral network (NN) in the fitness landscape. A search over this NN is able to reach nodes whose genotypes are almost as different from one another as randomly sampled GRNs. The authors also find that the number of distinct phenotypes in the local vicinity of NN nodes is extremely large, indicating a wide variety of accessible phenotypes that can be explored while remaining close to a viable phenotype. The types of phenotypes that are accessible from the NN depend on where in the network the search takes place. This is evidence that cryptic genetic changes (along the NN) eventually have distinctive phenotypic consequences.
In short, the study presented in [6] suggests that the conflict between robustness and evolvability is resolved through the existence of a NN that extends far throughout the fitness landscape. On the one hand, robustness is achieved through a connected network of equivalent (or nearly equivalent) phenotypes. Because of this connectivity, some mutations or perturbations will leave the phenotype unchanged, the extent of which depends on the local NN topology. On the other hand, evolvability is achieved over the long term by movement across a neutral network that reaches over truly unique regions of the fitness landscape.

Robustness and evolvability are not always compatible

A positive correlation between robustness and evolvability is widely believed to be conditional upon several other factors; however, it is not yet clear what those factors are. Some insights into this problem can be gained by comparing and contrasting systems in which robustness is and is not compatible with evolvability. In accordance with universal Darwinism [9], there are numerous contexts where heritable variation and selection take place and where evolutionary concepts can be successfully applied. These include networked technologies, culture, language, knowledge, music, markets, and organizations. Although a rigorous analysis of robustness and evolvability has not been attempted within any of these domains, there is anecdotal evidence that evolvability does not always go hand in hand with robustness. Many technological and social systems have been intentionally designed to enhance the robustness of a particular service or function, yet they are often not readily adaptable to change. In engineering design in particular, it is a well-known heuristic that increasing robustness and complexity can often be a deterrent to flexibility and future adaptations. Similar trade-offs surface in the context of governance (bureaucracy), software design (e.g. operating systems), and planning under high uncertainty (e.g. strategic planning). Other evidence of a conflict between robustness and evolvability has been observed in computer simulations of evolution. Studies within the fields of evolutionary computation and artificial life have considered ways of manually injecting mutational robustness into the mapping of genotype to phenotype, e.g. via the enlargement of neutral regions within fitness landscapes [10] [11] [12] [13] [14]. Adding mutational robustness in this way has had little influence on the evolvability of simulated populations. Some researchers have concluded that genetic neutrality (i.e. mutational robustness) alone is not sufficient. Instead, it has been argued that the positioning of neutrality within a fitness landscape, through the interactions between genes, will greatly influence the number and variety of accessible phenotypes [15] [16]. Assessing the different domains where variation and selection take place, it is noticeable that evolvability and robustness are often in conflict within systems derived through human planning. But how could the simple act of planning change the relationship between robustness and evolvability? As first proposed by Edelman and Gally, one important difference between systems that are created by design (i.e. through planning) and those that evolve without planning is that in the former, components with multiple overlapping functions are absent [2]. In standard planning practices, components remain as simple as possible with a single predetermined functionality. Irrelevant interactions and overlapping functions between components are eliminated from the outset, thereby allowing cause and effect to be more transparent.
Robustness is achieved by designing redundancies into a system that are predictable and globally controllable [2]. This can be contrasted with biological CAS, such as gene regulatory networks or neural networks, where the relevance of interactions cannot be determined by local inspection. There is no predetermined assignment of responsibilities for functions or system traits. Instead, different components can contribute to the same function, and a component can contribute to several different functions through its exposure to different contexts. While the functionalities of some components appear to be similar under specific conditions, they differ under others. This conditional similarity of functions within biological CAS is a reflection of degeneracy.

22.

Degeneracy

Degeneracy is a system property that requires the existence of multi-functional components (but also modules and pathways) that perform similar functions (i.e. are effectively interchangeable) under certain conditions, yet can perform distinct functions under other conditions. A case in point is the adhesin gene family in Saccharomyces cerevisiae, which expresses proteins that typically play unique roles during development, yet can perform each other's functions when expression levels are altered [17]. Another classic example of degeneracy is found in glucose metabolism, which can take place through two distinct pathways, glycolysis and the pentose phosphate pathway; these pathways can substitute for each other if necessary even though the sum of their metabolic effects is not identical [18]. More generally, Ma and Zeng argue that the robustness of the bow-tie architecture they discovered in metabolism is largely derived through the presence of multiple degenerate paths to achieving a given function or activity [19] [20]. Although we could list many more examples of degeneracy, a true appreciation for the ubiquity of degeneracy across all scales of biology is best gained by reading Edelman and Gally's review of the topic in [2]. Box 2 provides a more detailed description of degeneracy, its relationship to redundancy, and additional examples of degeneracy in biological systems.
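The distinction between redundancy and degeneracy (expanded in Box 2) can be made concrete with a toy "function coverage" model. In the sketch below, which is our own illustration rather than a model from the cited papers, a system is a hypothetical bag of proteins, each covering a set of abstract functions; the phenotype is the set of functions covered. The redundant system uses identical copies, the degenerate system uses partially overlapping coverage, and both devote the same total number of function-slots:

```python
from itertools import combinations

redundant  = [{0, 1}, {0, 1}, {2, 3}, {2, 3}, {4, 5}, {4, 5}]   # identical copies
degenerate = [{0, 1}, {1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 0}]   # partial overlaps

def phenotype(proteins):
    """Phenotype = the set of functions covered by at least one protein."""
    return frozenset(f for p in proteins for f in p)

def accessible(proteins, k):
    """Distinct altered phenotypes reachable by deleting k proteins."""
    wild, outs = phenotype(proteins), set()
    for idx in combinations(range(len(proteins)), k):
        ph = phenotype([p for i, p in enumerate(proteins) if i not in idx])
        if ph != wild:
            outs.add(ph)
    return outs

for name, system in (("redundant", redundant), ("degenerate", degenerate)):
    print(name, len(accessible(system, 1)), len(accessible(system, 2)))
# -> redundant 0 3 and degenerate 0 6: equally robust to single deletions,
#    but the degenerate system reaches twice as many distinct phenotypes
```

Every function is covered twice in both systems, so all single deletions are buffered; only the degenerate system converts that buffering into a wider variety of accessible phenotypes.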

The role of degeneracy in adaptive innovations (Links 1 & 3)


In [3], we explored whether degeneracy influences the relationship between robustness and evolvability in a generic genome:proteome model. Unlike the studies discussed in Section 21, we found that neither the size nor the topology of a neutral network guarantees evolvability. Local and global measures of robustness within a fitness landscape were also not consistently indicative of the accessibility of distinct heritable phenotypes. Instead, we found that only systems with high levels of degeneracy exhibited a positive relationship between neutral network size, robustness, and evolvability. More precisely, we showed that systems composed of redundant proteins were mutationally robust but greatly restricted in the number of unique phenotypes accessible from a neutral network, i.e. they were not evolvable. On the other hand, replacing redundant proteins with degenerate proteins resolved this conflict and led to both exceptionally robust and exceptionally evolvable systems. Importantly, this result was observed even though the total sum of protein functions was identical between each of the system classes. From observing how evolvability scaled with system size, we concluded that degeneracy not only contributes to the discovery of new innovations but may be a precondition of evolvability [21] [3].

Degeneracy and distributed robustness (Link 1)

As discussed in [2], degeneracy's relationship to robustness and evolvability appears to be conceptually simple. While degenerate components contribute to stability under conditions where they are functionally compensatory, their distinct responses outside of those conditions provide access to unique functional effects, some of which may be selectively relevant in certain environments. Although useful in guiding our intuition, it is not clear whether such ideas are applicable to larger systems involving many components and multiple traits.
More precisely, it is not clear why accessibility to functional variation would not act as a destabilizing force within a larger system. However, in [3] we found that the robustness of large degenerate genome:proteome systems was not degraded by this functional variation and instead was actually greater than that expected from local compensatory actions. In the following, we present an alternative conceptual model that aims to account for these findings and illustrates additional ways in which degeneracy may facilitate robustness and evolvability in complex adaptive systems. The model comprises agents that are situated within an environment. Each agent can perform one task at a time, where the types of tasks are restricted by an agent's predetermined capabilities. Tasks represent conditions imposed by the local environment, and agents act to take on any tasks that match their functional repertoire. An illustration of how degeneracy can influence robustness and evolvability is given using the diagrams in Figure 9, where each task type is represented by a node cluster and agents are represented by pairs of connected nodes. For instance, in Figure 9a an agent is circled and the positioning of its nodes reflects that agent's (two) task capabilities. Each agent only performs one task at a time, with the currently executed task indicated by the darker node.

In Figure 9b, task requirements are increased for the bottom task group and excess resources become available in the top task group. With a partial overlap in agent task capabilities, agent resources can be reassigned from where they are in excess to where they are needed, as indicated by the arrows. From this simple illustration, it is straightforward to see how excess buffering related to one type of task can support unrelated tasks through the presence of degeneracy. If this partial overlap in capabilities is pervasive throughout the system, then there are potentially many options for reconfiguring resources, as shown in Figure 9c. In short, degeneracy may allow for cooperation amongst buffers such that localized stresses can invoke a distributed response. Moreover, excess resources related to a single task can be used in a highly versatile manner; although interoperability of components may be localized, at the system level extra resources can offer huge reconfiguration opportunities. The necessary conditions for this buffering network to form do not appear to be demanding (e.g. [3]). One condition that is clearly needed, though, is degeneracy. Without a partial overlap in capabilities, agents in the same functional grouping can only support each other (see Figure 9d) and, conversely, excess resources cannot support unrelated tasks outside the group. Buffers are thus localized, and every type of variability in task requirements requires a matching realization of redundancies. This simplicity in structure (and inefficiency) is encouraged in most human planning activities.

Degeneracy and Evolvability (Link 3)

For systems to be both robust and evolvable, the individual agents that stabilize traits must be able to occasionally behave in unique ways when stability is lost. Within the context of distributed genetic systems, this requirement is reflected in the need for unique phenotypes to be mutationally accessible from different regions of a neutral network.
The large number of distinct and cryptic internal configurations that are possible within degenerate systems (see Figure 9c) is likely to expand the number of unique ways in which a system will reorganize itself when thresholds for trait stability are eventually crossed, as seen in [3]. This is because degenerate pathways to robust traits are reached by truly distinct paths (i.e. distinct internal configurations) that do not always respond to environmental changes in the same manner, i.e. they are only conditionally similar. Due to symmetry, such cryptic distinctions are not possible from purely redundant sources of robustness. However, in [3] degenerate systems were found to have an elevated configurational versatility that we speculate is the result of degenerate components being organized into a larger buffering network. This versatility allows degenerate components to contribute to the mutational robustness of a large heterogeneous system and, for the same (symmetry) reasons as stated above, may further contribute to the accessibility of distinct heritable variation. In summary, we have presented arguments as well as some evidence that degeneracy allows for types of robustness that directly contribute to the evolvability of complex systems, e.g. through mutational access to distinct phenotypes from a neutral network within a fitness landscape. We have speculated that the basis for both robustness and evolvability in degenerate systems is a set of heterogeneous overlapping buffers. We suggest that these buffers and their connectivity offer exceptional canalization potential under many conditions while facilitating high levels of phenotypic plasticity under others.
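The networked-buffering argument sketched around Figure 9 can be caricatured in a few lines of code. Assume, purely for illustration, that task types sit on a ring, that a "degenerate" agent can perform two adjacent task types while a "redundant" agent performs only one, and ask whether one unit of idle capacity at task src can be shuttled to a demand spike at task dst by chained reassignments:

```python
# Toy version of networked buffering: T task types on a ring; a degenerate
# agent can do types (k, k+1 mod T), a redundant agent only type k.
T = 6

def can_cover(src, dst, degenerate):
    """Can excess capacity at `src` be shuttled to a demand spike at `dst`?"""
    if not degenerate:
        return src == dst            # redundancy only buffers its own task type
    # With ring-shaped partial overlap, free capacity is passed hand-over-hand:
    # the idle agent at `src` takes over type src+1, freeing an agent there, etc.
    hops, cur = 0, src
    while cur != dst and hops <= T:
        cur = (cur + 1) % T          # one reassignment along the overlap chain
        hops += 1
    return cur == dst

degenerate_hits = sum(can_cover(0, d, True) for d in range(T))
redundant_hits = sum(can_cover(0, d, False) for d in range(T))
print(degenerate_hits, redundant_hits)   # 6 1: partial overlap makes buffering global
```

Local interoperability (each agent overlaps with only one neighbouring task type) is enough for a single pool of excess capacity to buffer any task type in the system, whereas pure redundancy demands a matching buffer for every type of variability.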

23.

Origins of complexity

Complexity
There are many definitions and studies of complexity in the literature [22] [23] [24] [25] [26] [27] [28]. Different definitions have mostly originated within separate disciplines and have been shaped by the classes of systems that are considered pertinent to particular fields of study. Early usage of the term complexity within biology was fairly ambiguous and varied depending on the context in which it was used. Darwin appeared to equate complexity with the number of distinct components (e.g. cells) that were organized to generate a particular trait (e.g. an eye). Since then, the meaning of complexity has changed; however, nowadays only marginal consensus exists on what it means and how it should be measured. In studies related to the theory of highly optimized tolerance (HOT), complex systems have been defined as being hierarchical, highly structured and composed of many heterogeneous components [29] [30]. The organizational structure of life is now known to be scale-rich (as opposed to scale-free) but also multi-scaled [31] [29] [30]. This means that patterns of biological component interdependence are truly unique to a particular scale of observation, but there are also important interactions that integrate behaviors across scales. The existence of expanding hierarchical structures, or systems within systems, implies a scalability in natural evolution that some would label as a uniquely biological phenomenon. From prions and viruses to rich ecosystems and the biosphere, we observe organized systems that rely heavily on the robustness of finer-scale patterns while they also adapt to change taking place at a larger scale [32]. A defining characteristic of multi-scaled complex systems is captured in the definition of hierarchical complexity given in [33] [34]. There, complexity is defined as the degree to which a system is both functionally integrated and functionally segregated.
Although this may not express what complexity means to all people, we focus on this definition because it represents an important quantifiable property of multi-scaled complex systems that is arguably unique to biological evolution.

Degeneracy and Complexity (Link 2)


According to Tononi et al [33], degeneracy is intimately related to complexity, both conceptually as well as empirically. The conceptual similarity is immediately apparent: while complex systems are both functionally integrated and functionally segregated, degenerate components are both functionally redundant and functionally independent. Tononi et al also found that a strong positive correlation exists between information theoretic measurements of degeneracy and complexity. When degeneracy was increased within neural network models, they always observed a concomitant large increase in system complexity. In contrast, complexity was found to be low in cases where neurons fired independently (although Shannon entropy is high in this case) or when firing throughout the neuronal population was strongly correlated (although information redundancy is high in this case). From these observations, Tononi et al derived a more generic claim, namely that this relationship between degeneracy and complexity is broadly relevant and could be pertinent to our general understanding of CAS.
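Tononi et al's neural complexity measure is more involved than we can reproduce here, but one of its ingredients, integration (multi-information: the sum of marginal entropies minus the joint entropy), can be estimated from samples in a short sketch. The three four-unit binary "systems" below are invented for illustration: independently firing units have near-zero integration, fully copied units have maximal integration (pure redundancy), and partial coupling lands in between, the regime in which degeneracy and complexity are claimed to co-occur:

```python
import math, random

def entropy(samples):
    """Empirical Shannon entropy (bits) of a list of hashable states."""
    n, counts = len(samples), {}
    for s in samples:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def integration(samples):
    """Multi-information: sum of marginal entropies minus joint entropy."""
    k = len(samples[0])
    return sum(entropy([s[i] for s in samples]) for i in range(k)) - entropy(samples)

rng, N = random.Random(1), 4000
independent = [tuple(rng.randint(0, 1) for _ in range(4)) for _ in range(N)]
copied = [(b, b, b, b) for b in (rng.randint(0, 1) for _ in range(N))]
partial = []                      # each unit copies its neighbour with prob 0.7
for _ in range(N):
    x = [rng.randint(0, 1)]
    for _ in range(3):
        x.append(x[-1] if rng.random() < 0.7 else rng.randint(0, 1))
    partial.append(tuple(x))

for name, s in (("independent", independent), ("copied", copied), ("partial", partial)):
    print(name, round(integration(s), 2))
```

The independent case has high entropy but almost no integration, while the fully correlated case has maximal integration but no segregation; neither extreme yields the simultaneous integration-plus-segregation that defines hierarchical complexity in [33].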

Robustness and Complexity (Link 5)


System robustness requires that components can be utilized at the appropriate times to accommodate aberrant variations in the conditions to which a system is exposed. Because such irregular variability can be large in both scale and type, robustness is limited by the capabilities of extant components. Such limitations are easily recognizable and commonly relate to limits on the utilization rate and the level of multifunctionality afforded to any single component. As a result of these physical constraints, improvements in robustness can sometimes only occur through the integration of new components and new component types within a system, which in turn can add to a system's complexity. While the integration of new components may address certain aberrant variations in conditions, it can also introduce new degrees of freedom to the system, which sometimes leads to new points of accessible fragility, i.e. new vulnerabilities. As long as the frequency and impact of the conditions stabilized are larger than those of the conditions sensitised, such components are theoretically selectable by robustness measures.3 By this reasoning, a sustained drive towards increased robustness might be expected to correspond occasionally with growth in system complexity. At sufficiently long time scales, we thus might expect a strong positive correlation to emerge between the two properties. Such a relationship between robustness and complexity is proposed in the theory of Highly Optimized Tolerance (HOT) [29] [35] [30]. HOT suggests that the myopic nature of evolution promotes increased robustness to common conditions while unknowingly exchanging these vulnerabilities for considerably less frequent but still potentially devastating sensitivities. Proponents of HOT argue that evidence of this process can be found in the properties of evolving systems, such as power-law relations observed for certain spatial and temporal properties of evolution, e.g. extinction sizes.
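The selectability condition stated above, that the frequency and impact of the conditions stabilized must exceed those of the conditions newly sensitised, amounts to simple expected-value arithmetic. The probabilities and impact figures below are invented purely for illustration:

```python
# Invented numbers: a new component buffers a common condition but adds a
# rare fragility; robustness selects for it only if the expected impact
# avoided exceeds the expected impact introduced.
p_common, impact_common = 0.10, 5.0     # frequency/impact of condition buffered
p_rare, impact_rare = 0.001, 100.0      # frequency/impact of new vulnerability
gain = p_common * impact_common - p_rare * impact_rare
print(round(gain, 2))   # 0.4 > 0, so the component is (theoretically) selectable
```

The same arithmetic shows the HOT trade-off: a long sequence of such individually favourable additions can accumulate rare, high-impact sensitivities even as common-condition robustness grows.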
In support of HOT, some researchers have used examples from biology, ecology, and engineering to demonstrate how increased robustness often simultaneously leads to increased system complexity [29] [35] [30]. From another perspective, the persistence of a heterogeneous, multi-scaled system seems to necessitate robustness, at least to intrinsic variability that may arise, for instance, from process errors initiated by the stochasticity of internal dynamics. Without such robustness, small aberrant perturbations in one subsystem could spread to others, leading to broad destabilization of these subsystems and a potential collapse of otherwise persistent higher-scale patterns. Even if individual perturbations are unlikely, the frequency of perturbation events (e.g. at fine resolutions of the system) would greatly limit the overall number of distinct scales where coherent spatio-temporal patterns could be observed, if the system were not robust. Similar arguments have been used in explaining the relationship between multi-scaling phenomena and resilience within complex ecosystems [36] [32] [37]. The role of degeneracy: Summarizing, it is apparent that robustness and complexity are intimately intertwined and, moreover, that robustness is a precondition for complexity, at least for multi-scaled systems. However, not all mechanisms for achieving robustness necessarily lead to multi-scaled complexity, nor are they all viable options for CAS. For instance, in [33] Tononi et al found that highly redundant (non-degenerate) systems were naturally robust but never hierarchically complex. On the other hand, highly degenerate systems were simultaneously robust and complex. Assuming, as Tononi et al do, that their findings extend to other CAS, this suggests that the relationship between robustness and complexity hypothesized in HOT is enabled by the presence of degenerate forms of robustness.

3 This makes assumptions about the timescales of variation and selection. For instance, it assumes that the environment is not changing too rapidly. If current conditions do not reflect future conditions (to at least some degree), then any increases in robustness could only occur by chance and generally would not be observed.

Evolution of complex phenotypes (Link 4)


The evolution of complex forms requires a long series of adaptive changes to take place. At each step, these adaptations must result in a viable and robust system, but they also must not inhibit the ability to find subsequent adaptations. Complexity, in the context of multi-scaled evolving systems, clearly demands evolvability to form such systems and robustness to maintain such systems at every step along the way. This connection between evolvability and complexity is famously captured within Darwin's theory of natural selection. According to the theory, complex traits have evolved through a series of incremental changes and are not irreducibly complex. For highly complex traits to exist, growth in complexity cannot inhibit the future evolvability of a system. More precisely, the formation of complex traits is predicated on evolvability either being sustained or re-emerging after each inherited change. How evolving systems actually satisfy these requirements remains a true mystery. As reviewed by Kirschner and Gerhart, different principles in biological systems have been uncovered over the years (e.g. loose-coupling, exploratory behavior, redundancy, agent versatility) that strongly influence the constraint/deconstraint mechanisms imposed on phenotypic variation and thus contribute to the robustness and evolvability of these systems [1]. Although these principles are undoubtedly of considerable importance, the examples of degeneracy provided in [2] strongly suggest that degeneracy underpins most constraint/deconstraint mechanisms. It is well-accepted that the exceptional properties of CAS are not a consequence of exceptional properties of their components [23].
Instead it is how components interact and inter-relate that determines: 1) the ability to confer stability within the broader system (robustness), 2) the ability to create systems that are both functionally integrated and functionally segregated (complex), and 3) the ability to acquire new traits and take on more complex forms (evolvable). It would seem that any mechanism that directly contributes to all of these organizational properties is a promising candidate design principle of evolution. In this paper we have reviewed new evidence, summarized in Table 1 and Figure 7, that degeneracy may represent just such a mechanism and thus could prove fundamental in understanding the evolution of complex forms. But if degeneracy is important to the mechanics of evolution as claimed in this paper, it is worth asking why it has been overlooked in theoretical discussions of biological evolution. As suggested in Box 3, we argue that this may be due to a longstanding reductionist bias in the study of biological systems.

24.

Concluding Remarks

Understanding how biological systems can be complex, robust and evolvable is germane to our understanding of biology and evolution. In this paper, we have proposed that degeneracy could play a fundamental role in the unique relationships between complexity, robustness, and evolvability in complex adaptive systems. Summarizing our arguments, we have presented evidence that degeneracy is a highly effective mechanism for creating (distributed) robust systems, that it uniquely enhances the long-term evolvability of a system, that it acts to increase the hierarchical complexity of a system, and that it is prevalent in biology. Using these arguments, we speculate on how degeneracy may help to directly establish the conditions necessary for the evolution of complex forms. Although more research is needed to validate some of the claims made here, we are cautiously optimistic that degeneracy is intimately tied to some of the most interesting phenomena observed in natural evolving systems. Moreover, as a conceptual design principle, degeneracy is readily applicable to other disciplines and could prove beneficial for enhancing the robustness and adaptiveness of human-engineered systems.

Box 1: Robustness and Evolvability


In nature, organisms are presented with a multitude of environments and are occasionally exposed to new and slightly different environments. Under these variable conditions, organisms must, on the one hand, maintain a range of functionalities in order to survive and reproduce. Often, this means a number of important traits need to be robust over a range of environments. On the other hand, organisms must also be flexible enough to adapt to new conditions that they have not previously experienced. At higher levels in biology, populations display genetic robustness and robustness to moderate ecological changes, yet at the same time are often able to adapt when conditions change significantly. This dual presence of robustness and adaptiveness to change is observed at different scales in biology and has been responsible for the remarkable persistence of life over billions of years and countless generations.

Robustness
Despite the numerous definitions of robustness provided in the literature [38], there is fair conceptual agreement on what robustness means. In its most general form, robustness reflects an insensitivity of some functionality or measured state of a system when the system is exposed to a set of distinct environments or distinct internal conditions. To give robustness meaning, it is necessary to elaborate on what function or state of the system is being measured and to what set of conditions the system is exposed. Classes of Environmental and Biological Change: The conditions to which a system is exposed depend on its scale and scope but are generally broken down into internal and external sources. For instance, changes originating from within an organism include inherited changes to the genotype and stochasticity of internal dynamics, while sources of external (environmental) change include changes in culture, changes in species interactions and changes at various scales within the physical environment. Pathways toward robustness: Biological robustness is typically discussed as a process of effective control over the phenotype. In some cases, this means maintaining a stable trait despite variability in the environment (canalization), while in other cases it requires modification of a trait so as to maintain higher-level traits, such as fitness, within a new environment (adaptive phenotypic plasticity) [19]. Both adaptive phenotypic plasticity and canalization involve conditional responses to change, and their causal origins are generally believed to be similar [39].

Evolvability
Different definitions of evolvability exist in the literature (e.g. [4] [40] [41]), so it is important to articulate exactly what is meant by this term. In general, evolvability is concerned with the selection of new phenotypes. It requires an ability to generate distinct phenotypes, and it requires that some of these phenotypes have a non-negligible probability of being selected by the environment. Because of the contextual nature of selection (i.e. its dependence on the environment), quantifying evolvability in a context-free manner is only possible by employing a surrogate measurement. The most common measurement used in the literature is the accessibility of distinct heritable phenotypes [1]. In this paper, as with others [6] [4] [42], we use this surrogate measure when evaluating evolvability.

Box 2: Degeneracy and Redundancy


Redundancy and degeneracy are two design principles that both contribute to the robustness of biological systems [2] [43]. Redundancy is an easily recognizable design principle in biological and man-made systems and means redundancy of parts. It refers to the coexistence of identical components with identical functionality and thus is isomorphic and isofunctional. In information theory, redundancy refers to the repetition of messages and is important for reducing transmission errors. It is a common feature of engineered or planned systems, where it provides robustness against variations of a very specific type (more of the same variations). For example, redundant parts can substitute for others that malfunction or fail, or augment output when demand for a particular output increases. Redundancy is also prevalent in biology. Polyploidy, homogenous tissues and allozymes are examples of functional biological redundancy. Another and particularly impressive example is neural redundancy, i.e. the multiplicity of neural units (e.g. pacemaker cells) that perform identical functions (e.g. generating the swimming rhythms in jellyfish or the heartbeat in humans). In biology, degeneracy refers to conditions where the functions or capabilities of components overlap partially. In a review by Edelman and Gally [2], numerous examples are used to demonstrate the ubiquity of degeneracy throughout biology. It is pervasive in proteins of every functional class (e.g. enzymatic, structural, or regulatory) [44] and is readily observed in ontogenesis (see page 14 in [45]), the nervous system [33] and cell signalling (crosstalk). Degeneracy differs from pure redundancy because similarities in the functional response of components are not observed under all conditions. Under some conditions the functions are similar, while under others they differ.
Origins of Degeneracy: Degeneracy originates from convergent (constraint) and divergent (deconstraint) forces that play out within distributed systems subject to variation and selection. With divergence, identical components evolve in slightly distinct directions, causing structural and functional differences to grow over time. The most well studied context where this occurs is gene duplication and divergence [46] [47] [48]. Degeneracy may also arise through convergent evolution, where structurally distinct components are driven to acquire similar functionalities. In biology, this may occur as a direct result of selection for a particular trait, or it may alternatively arise due to developmental constraints (e.g. see [49]) that act to constrain the evolution of dissimilar components in similar ways. There are many documented examples of convergence (e.g. homoplasy) occurring at different scales in biology [50] [51]. While the origins of degeneracy are conceptually simple, the reasons it is observed at high levels throughout biology are not known, and several plausible explanations exist. One possibility is that degeneracy is expressed at high levels simply because it is the quickest or most probable path to heritable change in distributed (genetic) systems. Another possibility is that it is retained due to a direct selective advantage, e.g. the enhanced robustness it may provide towards variability in the environment (see [3]). Other interesting explanations have been proposed that consider a combination of neutral and selective processes. For instance, the Duplication-Degeneration-Complementation (DDC) model [52] proposes that neutral drift can readily introduce degeneracy amongst initially redundant genes that is later fixated through complementary loss-of-function mutations. Yet another possibility, proposed in [3], is that the distributed nature of degenerate robustness (e.g. see Figure 9c) creates a relatively large mutational target for trait buffering that is separate from the degenerate gene. This large target may help to increase and preserve degeneracy over iterated periods of addition and removal of excess robustness within populations under mutation-selection balance (cf. [3]). Similar to the DDC model, under this scenario degeneracy would be acquired passively (neutrally) and selectively retained only after additional loss-of-function mutations.
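The DDC intuition can be sketched as neutral subfunction loss under a coverage constraint. The sketch below is a toy caricature of our own, not the actual DDC model of [52]: a duplicated gene pair starts fully redundant, and random loss-of-subfunction mutations are accepted only when every function remains covered somewhere, so the pair drifts through partially overlapping (degenerate) states toward complementary specialization:

```python
import random

def drift(rng, funcs=4, steps=100):
    """Neutral subfunction loss in a duplicated gene pair (DDC-style toy)."""
    genes = [set(range(funcs)), set(range(funcs))]   # two fully redundant copies
    for _ in range(steps):
        g, f = rng.randrange(2), rng.randrange(funcs)
        # a loss is neutral (accepted) only if the sister copy still covers f
        if f in genes[g] and f in genes[1 - g]:
            genes[g].discard(f)
    return genes

rng = random.Random(3)
a, b = drift(rng)
print(sorted(a), sorted(b))   # every function is still covered by the pair
```

Once one copy has lost a subfunction, the sister copy can no longer lose it neutrally, so the complementary pattern becomes fixed, mirroring how degeneracy acquired passively can be selectively retained after further loss-of-function mutations.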

Box 3: The hidden role of degeneracy


If degeneracy is important to the mechanics of evolution as claimed in this paper, it is worth asking why it has been overlooked in theoretical discussions of biological evolution. In [2], Edelman and Gally suggest that its importance has been hidden in plain sight, but that the ubiquity of degeneracy and its importance to evolutionary mechanics become obvious upon close inspection. We believe there may also be practical reasons degeneracy has been overlooked, which originate from a long-standing reductionist bias in the study of biological systems. An illustrative example is given by the proposed relationship between degeneracy and robustness. As speculated in Figure 9, degeneracy contributes to robustness through distributed compensatory actions whereby: i) distinct components support the stability of a single trait and ii) individual components contribute to the stability of multiple distinct traits. However, the experimental conditions of biological studies are rarely designed to evaluate emergent and systemic causes of trait stability. Instead, biological studies often evaluate single-trait stability or only evaluate mechanisms that stabilize traits through local interactions, e.g. via functional redundancy in a single specified context. This is evident within the many studies and examples of trait stability reviewed in [2]. Degeneracy's influence on evolvability is also largely hidden when viewed through a reductionist lens. As already discussed, the (internal) organizational versatility afforded by degeneracy can allow many perturbations to have a neutral or muted phenotypic effect. When phenotypic innovations do eventually occur, however, they are likely to be influenced by the many cryptic changes occurring prior to the final threshold-crossing event, e.g. mutation [53] [54] [55].
While the single gene:trait paradigm has long been put to rest, studies investigating phenotypic variation still often rely on single-gene knockout experiments and simple models of gene epistasis. Historically, studies have rarely been designed in a manner that could expose the utility of neutral/passive mechanistic processes in facilitating adaptive change [53]. Degenerate components often have many context-activated functional effects, and frequent changes to context can cause a component's influence to be highly variable over time. The prevalence of spatiotemporal variability in function has been well documented in the proteome, where the most versatile of such proteins are labelled as "date hubs" [56]. However, most biological data sets are obtained using time-averaged measurements of effect size, which can make versatile components appear to have weak interactions even though these interactions are relevant to trait stability. This limitation from time-averaged measurement bias was first demonstrated by Berlow for species interactions within intertidal ecological communities [57]. However, even if highly versatile components do exhibit a relatively low affinity in each of their interactions, they may still have a large influence on system coherence, integration, and stability [58] [59]. For instance, the low-affinity weak links of some degenerate components are known to play a vital role in the stability of social networks [60] and within the cell's interactome, e.g. protein chaperones [59]. However, for reasons associated with time and cost restrictions, weak links are typically discounted in both data collection and analysis of biological systems. In summary, we suspect that commonly accepted forms of experimental bias and conceptual (reductionist) bias have hindered scientific exploration of degeneracy and its role in facilitating phenotypic robustness and evolvability.


Tables

Table 1: Overview of key studies on the relationship between degeneracy, robustness, complexity and evolvability. The information is mostly taken (with permission) from [3]. Each entry lists the proposed relationship, a summary of the argument, the context in which it was studied, and references.

1) Relationship: Unknown whether degeneracy is a primary source of robustness in biology.
Summary: Distributed robustness (and not pure redundancy) accounts for a large proportion of robustness in biological systems (Kitami, 2002), (Wagner, 2005). Although many traits are stabilized through degeneracy (Edelman and Gally, 2001), its total contribution is unknown.
Context: Large-scale gene deletion studies and other biological evidence (e.g. cryptic genetic variation).
Ref: [43], [61], [2]

2) Relationship: Degeneracy has a strong positive correlation with system complexity.
Summary: Degeneracy is positively correlated and conceptually similar to complexity. For instance, degenerate components are both functionally redundant and functionally independent, while complexity describes systems that are functionally integrated and functionally segregated.
Context: Simulation models of artificial neural networks, evaluated using information-theoretic measures of redundancy, degeneracy, and complexity.
Ref: [33]

3) Relationship: Degeneracy is a precondition for evolvability and a more effective source of robustness.
Summary: Accessibility of distinct phenotypes requires robustness through degeneracy.
Context: Abstract simulation models of evolution.
Ref: [3]

4) Relationship: Evolvability is a prerequisite for complexity.
Summary: All complex life forms have evolved through a succession of incremental changes and are not irreducibly complex (according to Darwin's theory of natural selection). The capacity to generate heritable phenotypic variation (evolvability) is a precondition for the evolution of increasingly complex forms.
Context: Theory of natural selection.
Ref: [62]

5) Relationship: Complexity increases to improve robustness.
Summary: According to the theory of highly optimized tolerance, complex adaptive systems are optimized for robustness to commonly observed variations in conditions. Moreover, robustness is improved through the addition of new components/processes that are integrated with the rest of the system and add to the complexity of the organizational form.
Context: Theoretical arguments that have been applied to biological evolution and engineering design (e.g. aircraft, internet).
Ref: [29], [35], [30]

6) Relationship: Evolvability emerges from robustness.
Summary: Genetic robustness reflects the presence of a neutral network. Over the long term this neutral network provides access to a broad range of distinct phenotypes and helps ensure the long-term evolvability of a system.
Context: Simulation models of gene regulatory networks and RNA secondary structure.
Ref: [6], [4]


Figures

Figure 7: High-level illustration of the relationships between degeneracy, complexity, robustness, and evolvability. The numbers in column one of Table 1 correspond with the abbreviated descriptions shown here. This diagram is reproduced with permission from [3].

Figure 8: The conflicting properties of robustness and evolvability and their proposed resolution. A system (central node) is exposed to changing conditions (peripheral nodes). Robustness of a function requires minimal variation in the function (panel a), while the discovery of new functions requires the testing of a large number of functional variants (panel b). The existence of a neutral network may allow both requirements to be met (panel c). In the context of a fitness landscape, movement along edges of each graph would reflect changes in genotype while changes in color would reflect changes in phenotype.

[Figure 9 panel titles: a) Degeneracy in buffering actions; b) Resources can indirectly support unrelated tasks; c) Reconfiguration options are large; d) Pure redundancy in buffering actions. In-figure labels: Excess Resources, Need Resources, Buffering Agent, Functional group/Node cluster.]

Figure 9: Illustration of how distributed robustness can be achieved in degenerate systems (panels a-c) and why it is not possible in purely redundant systems (panel d). Nodes describe tasks; dark nodes are active tasks. In principle, agents can perform two distinct tasks but are able to perform only one task at a time. Panels a and d are reproduced with permission from [3].


References

[1] M. Kirschner and J. Gerhart, "Evolvability," Proceedings of the National Academy of Sciences, USA, vol. 95, pp. 8420-8427, 1998.

[2] G. M. Edelman and J. A. Gally, "Degeneracy and complexity in biological systems," Proceedings of the National Academy of Sciences, USA, vol. 98, pp. 13763-13768, 2001.

[3] J. M. Whitacre and A. Bender, "Degeneracy: a design principle for achieving robustness and evolvability," Journal of Theoretical Biology, vol. 263, pp. 143-153, 2010.

[4] A. Wagner, "Robustness and evolvability: a paradox resolved," Proceedings of the Royal Society of London, Series B: Biological Sciences, vol. 275, pp. 91-100, 2008.

[5] L. W. Ancel and W. Fontana, "Plasticity, evolvability, and modularity in RNA," Journal of Experimental Zoology, vol. 288, pp. 242-283, 2000.

[6] S. Ciliberti, O. C. Martin, and A. Wagner, "Innovation and robustness in complex regulatory gene networks," Proceedings of the National Academy of Sciences, USA, vol. 104, pp. 13591-13596, 2007.

[7] M. A. Huynen, P. F. Stadler, and W. Fontana, "Smoothness within ruggedness: The role of neutrality in adaptation," Proceedings of the National Academy of Sciences, USA, vol. 93, pp. 397-401, 1996.

[8] J. P. Crutchfield and E. Van Nimwegen, "The evolutionary unfolding of complexity," in DIMACS Workshop, Princeton, 2002.

[9] R. Dawkins, "Universal darwinism," in Evolution from molecules to man, D. S. Bendall, Ed.: Cambridge University Press, 1983, p. 202.

[10] W. Banzhaf, "Genotype-Phenotype-Mapping and Neutral Variation-A Case Study in Genetic Programming," in Parallel Problem Solving from Nature PPSN III. vol. 866, 1994, pp. 322-332.

[11] R. E. Keller and W. Banzhaf, "Genetic programming using genotype-phenotype mapping from linear genomes into linear phenotypes," in Proceedings of the First Annual Conference on Genetic Programming, Stanford, California, 1996, pp. 116-122.

[12] J. D. Knowles and R. A. Watson, "On the Utility of Redundant Encodings in Mutation-Based Evolutionary Search," Lecture Notes in Computer Science, pp. 88-98, 2003.

[13] V. K. Vassilev and J. F. Miller, "The Advantages of Landscape Neutrality in Digital Circuit Evolution," in Evolvable systems: from biology to hardware Berlin: Springer, 2000.

[14] T. Yu and J. F. Miller, "Neutrality and the Evolvability of Boolean Function Landscape," in Proceedings of the 4th European Conference on Genetic Programming, 2001, pp. 204-217.

[15] T. Smith, A. Philippides, P. Husbands, and M. O'Shea, "Neutrality and ruggedness in robot landscapes," in Congress on Evolutionary Computation, 2002, pp. 1348-1353.

[16] F. Rothlauf and D. E. Goldberg, "Redundant Representations in Evolutionary Computation," Evolutionary Computation, vol. 11, pp. 381-415, 2003.

[17] B. Guo, C. A. Styles, Q. Feng, and G. R. Fink, "A Saccharomyces gene family involved in invasive growth, cell-cell adhesion, and mating," Proceedings of the National Academy of Sciences, USA, vol. 97, pp. 12158-12163, 2000.

[18] U. Sauer, F. Canonaco, S. Heri, A. Perrenoud, and E. Fischer, "The Soluble and Membrane-bound Transhydrogenases UdhA and PntAB Have Divergent Functions in NADPH Metabolism of Escherichia coli," Journal of Biological Chemistry, vol. 279, p. 6613, 2004.

[19] H. Kitano, "Biological robustness," Nature Reviews Genetics, vol. 5, pp. 826-837, 2004.

[20] H. W. Ma and A. P. Zeng, "The connectivity structure, giant strong component and centrality of metabolic networks," Bioinformatics, vol. 19, pp. 1423-1430, 2003.

[21] J. M. Whitacre and A. Bender, "Degenerate neutrality creates evolvable fitness landscapes," in WorldComp-2009 Las Vegas, Nevada, USA, 2009.

[22] D. W. McShea, "Perspective: Metazoan Complexity and Evolution: Is There a Trend?," Evolution, vol. 50, pp. 477-492, 1996.

[23] M. Gell-Mann, "What is complexity," Complexity, vol. 1, pp. 1-9, 1995.

[24] C. Adami, "Sequence complexity in Darwinian evolution," Complexity, vol. 8, pp. 49-57, 2002.

[25] C. R. Shalizi, R. Haslinger, J. B. Rouquier, K. L. Klinkner, and C. Moore, "Automatic filters for the detection of coherent structure in spatiotemporal systems," Physical Review E, vol. 73, p. 36104, 2006.

[26] J. P. Crutchfield and O. Görnerup, "Objects that make objects: the population dynamics of structural complexity," Journal of The Royal Society Interface, vol. 3, pp. 345-349, 2006.

[27] R. M. Hazen, P. L. Griffin, J. M. Carothers, and J. W. Szostak, "Functional information and the emergence of biocomplexity," Proceedings of the National Academy of Sciences, vol. 104, pp. 8574-8581, 2007.

[28] B. Edmonds, "Complexity and scientific modelling," Foundations of Science, vol. 5, pp. 379-390, 2000.

[29] J. M. Carlson and J. Doyle, "Highly optimized tolerance: A mechanism for power laws in designed systems," Physical Review E, vol. 60, pp. 1412-1427, 1999.

[30] J. M. Carlson and J. Doyle, "Complexity and robustness," Proceedings of the National Academy of Sciences, USA, vol. 99, pp. 2538-2545, 2002.

[31] R. Tanaka, "Scale-Rich Metabolic Networks," Physical Review Letters, vol. 94, pp. 168101-168105, 2005.

[32] C. S. Holling, "Understanding the complexity of economic, ecological, and social systems," Ecosystems, vol. 4, pp. 390-405, 2001.

[33] G. Tononi, O. Sporns, and G. M. Edelman, "Measures of degeneracy and redundancy in biological networks," Proceedings of the National Academy of Sciences, USA, vol. 96, pp. 3257-3262, 1999.

[34] G. Tononi and G. M. Edelman, "Consciousness and Complexity," Science, vol. 282, pp. 1846-1851, 1998.

[35] M. E. Csete and J. C. Doyle, "Reverse Engineering of Biological Complexity," Science, vol. 295, pp. 1664-1669, 2002.

[36] S. Carpenter, B. Walker, J. M. Anderies, and N. Abel, "From metaphor to measurement: resilience of what to what?," Ecosystems, vol. 4, pp. 765-781, 2001.

[37] B. Walker, C. S. Holling, S. R. Carpenter, and A. Kinzig, "Resilience, Adaptability and Transformability in Social-ecological Systems," Ecology and Society, vol. 9, pp. 1-5, 2004.

[38] J. Stelling, U. Sauer, Z. Szallasi, F. J. Doyle, and J. Doyle, "Robustness of Cellular Functions," Cell, vol. 118, pp. 675-685, 2004.

[39] J. Visser, J. Hermisson, G. P. Wagner, L. A. Meyers, H. Bagheri-Chaichian, J. L. Blanchard, L. Chao, J. M. Cheverud, S. F. Elena, and W. Fontana, "Perspective: Evolution and Detection of Genetic Robustness," Evolution, vol. 57, pp. 1959-1972, 2003.

[40] G. P. Wagner and L. Altenberg, "Complex adaptations and the evolution of evolvability," Evolution, vol. 50, pp. 967-976, 1996.

[41] J. Reisinger, K. O. Stanley, and R. Miikkulainen, "Towards an empirical measure of evolvability," in Proceedings of the Genetic and Evolutionary Computation Conference Washington, D.C., 2005, pp. 257-264.

[42] M. Aldana, E. Balleza, S. Kauffman, and O. Resendiz, "Robustness and evolvability in genetic regulatory networks," Journal of Theoretical Biology, vol. 245, pp. 433-448, 2007.

[43] A. Wagner, "Distributed robustness versus redundancy as causes of mutational robustness," BioEssays, vol. 27, pp. 176-188, 2005.

[44] A. Wagner, "The role of population size, pleiotropy and fitness effects of mutations in the evolution of overlapping gene functions," Genetics, vol. 154, pp. 1389-1401, 2000.

[45] S. A. Newman, "Generic physical mechanisms of tissue morphogenesis: A common basis for development and evolution," Journal of Evolutionary Biology, vol. 7, pp. 467-488, 1994.

[46] A. Wagner, "Evolution of Gene Networks by Gene Duplications: A Mathematical Model and its Implications on Genome Organization," Proceedings of the National Academy of Sciences, USA, vol. 91, pp. 4387-4391, 1994.

[47] S. Ohno, Evolution by Gene Duplication: Springer-Verlag, 1970.

[48] K. H. Wolfe and D. C. Shields, "Molecular evidence for an ancient duplication of the entire yeast genome," Nature, vol. 387, pp. 708-713, 1997.

[49] D. B. Wake, "Homoplasy: the result of natural selection, or evidence of design limitations?," American Naturalist, pp. 543-567, 1991.

[50] J. Moore and P. Willmer, "Convergent evolution in invertebrates," Biological Reviews, vol. 72, pp. 1-60, 1997.

[51] G. C. Conant and A. Wagner, "Convergent evolution of gene circuits," Nature Genetics, vol. 34, pp. 264-266, 2003.

[52] A. Force, M. Lynch, F. B. Pickett, A. Amores, Y. Yan, and J. Postlethwait, "Preservation of duplicate genes by complementary, degenerative mutations," Genetics, vol. 151, pp. 1531-1545, 1999.

[53] A. Wagner, "Neutralism and selectionism: a network-based reconciliation," Nature Reviews Genetics, 2008.

[54] M. C. Cowperthwaite, J. J. Bull, and L. A. Meyers, "From bad to good: Fitness reversals and the ascent of deleterious mutations," PLoS Computational Biology, vol. 2, 2006.

[55] C. O. Wilke, R. E. Lenski, and C. Adami, "Compensatory mutations cause excess of antagonistic epistasis in RNA secondary structure folding," BMC Evolutionary Biology, vol. 3, p. 3, 2003.

[56] J. D. J. Han, N. Bertin, T. Hao, D. S. Goldberg, G. F. Berriz, L. V. Zhang, D. Dupuy, A. J. M. Walhout, M. E. Cusick, and F. P. Roth, "Evidence for dynamically organized modularity in the yeast protein-protein interaction network," Nature, vol. 430, pp. 88-93, 2004.

[57] E. L. Berlow, "Strong effects of weak interactions in ecological communities," Nature, vol. 398, pp. 330-334, 1999.

[58] H. Kitano, "A robustness-based approach to systems-oriented drug design," Nature Reviews Drug Discovery, vol. 6, pp. 202-210, 2007.

[59] P. Csermely, Weak links: Stabilizers of complex systems from proteins to social networks: Springer Verlag, 2006.

[60] M. S. Granovetter, "The strength of weak ties," American Journal of Sociology, vol. 78, pp. 1360-1380, 1973.

[61] T. Kitami and J. H. Nadeau, "Biochemical networking contributes more to genetic buffering in human and mouse metabolic pathways than does gene duplication," Nature genetics, vol. 32, pp. 191-194, 2002.

[62] C. Darwin, The origin of species: Signet Classic, 2003.

Degeneracy: a design principle for achieving robustness and evolvability

James Whitacre (a,*) and Axel Bender (b)

(a) Defence and Security Applications Research Centre; University of New South Wales at the Australian Defence Force Academy, Canberra, Australia
(b) Land Operations Division, Defence Science and Technology Organisation; Edinburgh, Australia

*To whom correspondence should be addressed. E-mail: jwhitacre79@yahoo.com. Phone: 61403595280.
Conflicts of Interest: The authors have no conflicts of interest to report.
Contributions: JW designed and performed research, analyzed data and wrote the paper. AB analyzed data and wrote the paper.

Abstract
Robustness, the insensitivity of some of a biological system's functionalities to a set of distinct conditions, is intimately linked to fitness. Recent studies suggest that it may also play a vital role in enabling the evolution of species. Increasing robustness, so it is proposed, can lead to the emergence of evolvability if evolution proceeds over a neutral network that extends far throughout the fitness landscape. Here, we show that the design principles used to achieve robustness dramatically influence whether robustness leads to evolvability. In simulation experiments, we find that purely redundant systems have remarkably low evolvability, while degenerate, i.e. partially redundant, systems tend to be orders of magnitude more evolvable. Surprisingly, the magnitude of observed variation in evolvability can be explained neither by differences in the size nor in the topology of the neutral networks. This suggests that degeneracy, a ubiquitous characteristic in biological systems, may be an important enabler of natural evolution. More generally, our study provides valuable new clues about the origin of innovations in complex adaptive systems.
Keywords: evolution, fitness landscapes, neutral networks, redundancy, distributed robustness

2. Introduction
Life exhibits two unique qualities, both of which are highly desirable and hard to create artificially. The first of these is robustness. At almost every scale of biological organization, we observe systems that are highly versatile and robust to changing conditions. The second quality is the ability to innovate. Here we are referring to life's remarkable capacity for creative, innovative, selectable change. Understanding the origins of robustness and innovation, and their relationship, is one of the most interesting open problems in biology and evolution. We begin by briefly describing these concepts and recent progress in understanding their relationship with each other.

Robustness: Insensitivity to Varying Conditions

Despite the numerous definitions of robustness in the literature [1], there is surprising agreement on what robustness means. In its most general form, robustness describes the insensitivity of some functionality or measured system state to a set of distinct conditions. The state is assumed critical to the continued existence of the system, e.g. by being intimately tied to system survival or fitness. Robustness is a commonly observed property of biological systems [2], and there are many possible explanations for its existence [3] [2] [4]. It is generally agreed that robustness is vital because cells, immune systems, organisms, species, and ecosystems live in changing and often uncertain conditions under which they must maintain satisfactory fitness in order to survive. A biological system can be subjected to both internal and external change. Genotype mutations, variations caused by the stochasticity of internal dynamics, altered species interactions and regime shifts in the physical environment are examples of drivers of such change. Thus, for a population of organisms to be robust, the phenotype needs to be controlled. In some cases, this means maintaining a stable phenotype despite variability of the environment (canalization), while in other cases it requires modification of the phenotype to improve or maintain fitness within a new environment (phenotypic plasticity) [2].

Evolvability: Accessibility of Distinct Phenotypes

Evolvability is concerned with the selection of new phenotypes. It requires an ability to generate distinct phenotypes and a non-negligible selection probability for some of them. Kirschner and Gerhart define evolvability as an organism's capacity to generate heritable phenotypic variation [5]. In this sense, evolvability is the dispositional concept of phenotypic variability, i.e. it is the potential or the propensity for the existence of diverse phenotypes [6]. More precisely, it is the total accessibility of distinct phenotypes. As with other studies [7] [8] [9], we use this definition of phenotypic variability as a proxy for a system's evolvability. Many researchers have recognized the importance of evolvability [5] [8] [9] [10]. By defining natural evolution as descent with modification, Darwin implicitly assumed that iterations of variation and selection would result in the successive accumulation of useful variations [10]. However, decades of research involving computer models and simulation have shown that Darwin's principles of natural evolution can only generate adaptive changes that are at best finite and at worst short-lived. It is no longer refuted that the founding principles of evolution are insufficient to evolve systems of unbounded complexity. A modern theory of evolution therefore must unravel the mystery that surrounds the origin of innovations in nature [11] [12].

Robustness-Evolvability Paradox
At first, the robustness of biological systems appears to be in conflict with other demands of natural evolution. On the one hand, species are highly robust to internal and external perturbations while, on the other hand, innovations have evolved continually over the past 3.5 billion years of evolution. Robustly maintaining developed functionalities while at the same time exploring and finding new ones seem to be incompatible. Progress in our understanding of the simultaneous occurrence of robustness and evolvability in a single system is well illustrated by the recent work of Ciliberti et al [8]. In their study, the authors model gene regulatory networks (GRN). GRN instances are points in the genotypic GRN space and their expression pattern represents an output or phenotype. Together, genotype and phenotype define a fitness landscape. Ciliberti et al discovered that a large number of genotypic changes have no phenotypic effect, thereby indicating robustness to such changes. The phenotypically equivalent systems connect to form a neutral network in the fitness landscape. A search over this neutral network is able to reach genotypes that are almost as different from each other as randomly sampled GRNs. Ciliberti et al found that the number of distinct phenotypes in the local vicinity of the neutral network is extremely large. This indicates that close to a viable phenotype a wide range of different phenotypes can be accessed, leading to a high degree of evolvability. From these results, they propose that the existence of an extensive neutral network can resolve the robustness-evolvability paradox. The study by Ciliberti et al emphasizes the importance of the neutral network, i.e. the connected graph of equivalent (or nearly equivalent) phenotypes that extends through the fitness landscape. Network connectivity relates directly to robustness because it allows for mutations and/or perturbations that leave the phenotype unchanged [9]. 
The degree of robustness depends on the local topology and the size of the network. Evolvability, on the other hand, is concerned with long-term movements that can reach over widely different regions of the fitness landscape. An extensive neutral network with a rich phenotypic neighborhood allows evolution to explore many diverse phenotypes without surrendering a system's core functionalities. Ciliberti et al [8] were not the first researchers to point to the importance of neutral networks in the evolution of species. Kimura formulated a neutral theory of evolution as early as 1955 [13], which was expanded by Ohta to include nearly neutral conditions [14]. More recently, several other studies have demonstrated the presence of neutral networks in computer models of biological systems. Particularly noteworthy is the pioneering work by Schuster et al [15], who found that neutral networks exist in RNA secondary structures. Ciliberti et al's work, however, is novel because it quantifies phenotypic variability and demonstrates the huge range of accessible phenotypes that can emerge as a consequence of robust phenotypic expression. From this, Ciliberti et al conclude (tentatively) that reduced phenotypic variation, i.e. increased mutational robustness, and enhanced phenotypic variability, i.e. increased evolvability, are positively correlated in natural evolution. The topology of the neutral network, so they suggest, may matter greatly.
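The neutral-network picture can be illustrated with a deliberately tiny genotype-phenotype map. This is purely a didactic sketch, not the GRN or RNA models cited above: the map, which simply ignores every second site of a binary genotype, is an arbitrary assumption chosen so that neutrality is easy to see. Robustness is then the fraction of one-mutant neighbors that preserve the phenotype, and phenotypic variability is the number of distinct phenotypes adjacent to the neutral network.

```python
from itertools import product

L = 8  # genotypes are binary strings of length L

def phenotype(g):
    # arbitrary many-to-one map: the phenotype reads only the even sites,
    # so mutations at odd sites are neutral
    return tuple(g[i] for i in range(0, L, 2))

def neighbors(g):
    # all one-point mutants of genotype g
    return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

target = phenotype((0,) * L)
neutral_net = [g for g in product((0, 1), repeat=L) if phenotype(g) == target]

# robustness: average fraction of one-mutant neighbors that keep the phenotype
robust = sum(sum(phenotype(n) == target for n in neighbors(g)) / L
             for g in neutral_net) / len(neutral_net)

# evolvability proxy: distinct non-target phenotypes adjacent to the neutral network
adjacent = {phenotype(n) for g in neutral_net for n in neighbors(g)} - {target}
print(f"neutral network size={len(neutral_net)}, robustness={robust:.2f}, "
      f"accessible phenotypes={len(adjacent)}")
```

Even in this toy, the two quantities are distinct: robustness is a local property of neutral neighbors, while the accessible-phenotype count depends on the whole network's boundary, which is the distinction the studies above exploit.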

Redundancy and Degeneracy: Design Principles for Robustness

Redundancy and distributed robustness are two basic design principles that are believed to play an important role in achieving robustness in biological systems [16] [17]. Redundancy is an easily recognizable design principle that is prevalent in both biological and man-made systems. Here, redundancy means redundancy of parts and refers to the coexistence of identical components with identical functionality. It is a common feature of engineered systems, where redundancy provides robustness against variations of a very specific type (more of the same variations). For example, redundant parts can substitute for others that malfunction or fail, or augment output when demand for a particular output increases. Redundancy is also prevalent in biology. Polyploidy, as commonly found in fern, flowering plant or some lower-form animal eukaryotic cells, homogeneous tissues and allozymes are examples of functional biological redundancy. Another and particularly impressive example is neural redundancy, i.e. the multiplicity of neural units (e.g. pacemaker cells) that perform identical functions (e.g. generate the swimming rhythms in jellyfish or the heartbeat in humans). For instance, almost half of the parvocellular axons in the human optic nerve appear to be redundant. Distributed robustness emerges through the actions of multiple dissimilar parts [17] [18]. It is in many ways unexpected because it is only derived in complex systems where heterogeneous components (e.g. gene products) have multiple interactions with each other. In our experiments we show that distributed robustness can be achieved through degeneracy (see Section 3). Degeneracy is also known as partial redundancy. In biology it refers to conditions under which the functions or capabilities of components overlap partially [16].
It particularly describes the coexistence of structurally distinct components (genes, proteins, modules or pathways) that can perform similar roles or are interchangeable under certain conditions, yet have distinct roles under other conditions. Degeneracy is ubiquitous in biology, as evidenced by the numerous examples provided by Edelman and Gally [16]. One case in point is the adhesin gene family in Saccharomyces, which expresses proteins that typically play unique roles during development, yet can perform each other's functions when their expression levels are elevated [19]. Another example is found in glucose metabolism, which can take place through two distinct pathways, glycolysis and the pentose phosphate pathway, that can substitute for each other if necessary [20]. Ma and Zeng argue that the robustness of the bow-tie architecture they discovered in metabolism is largely derived from the presence of multiple distinct routes to achieving a given function or activity [21].
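The difference between the two design principles can be sketched with a small counting experiment (a toy under stated assumptions: three agents, three tasks, each agent capable of two tasks but performing one at a time, mirroring the setup of Figure 9; the capability sets are illustrative). Pure redundancy gives every agent the same capability pair, while degeneracy gives partially overlapping pairs; we count how many task-demand profiles each design can satisfy.

```python
from itertools import product

def satisfiable(agents, demand):
    """Can each agent pick one task from its capability set so that the
    multiset of picks meets the demand (task -> required count)?"""
    for picks in product(*agents):
        counts = {t: picks.count(t) for t in set(picks)}
        if all(counts.get(t, 0) >= k for t, k in demand.items()):
            return True
    return False

# Pure redundancy: identical agents with identical capability pairs.
redundant = [("A", "B")] * 3
# Degeneracy: same number of agents and tasks per agent, but the
# capability pairs only partially overlap.
degenerate = [("A", "B"), ("B", "C"), ("C", "A")]

# all demand profiles requiring 0-2 units of each of tasks A, B, C
demands = [dict(zip("ABC", d)) for d in product(range(3), repeat=3)]
red_ok = sum(satisfiable(redundant, d) for d in demands)
deg_ok = sum(satisfiable(degenerate, d) for d in demands)
print(f"demand profiles met: redundant={red_ok}, degenerate={deg_ok}")
```

With the same total capacity, the degenerate ensemble covers roughly twice as many demand profiles, because any one task can be buffered by two different agents whose remaining capabilities are in turn covered by others, which is the networked-buffering effect depicted in Figure 9.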

Fitness landscape model

In this study, we investigate how redundant and degenerate design principles influence a system's robustness and evolvability, and whether any observable differences can be accounted for by the properties of the neutral network. We use an exploratory abstract model of a fitness landscape that is designed with the following considerations in mind. First, the model should enable an unambiguous distinction between redundant and degenerate systems. Second, interactions between components should be simple in order to explore the mechanistic differences between redundant and degenerate robustness. Third, we want a model that is minimalist in the conditions needed to probe our research questions. By pursuing this minimalist approach, we sacrifice some biological fidelity. For instance, we constrain our study to a linear genome-proteome model; however, we believe that the general principles we explore apply broadly to other biological systems and abiotic complex adaptive systems. Indeed, our aim is to arrive at conclusions that are widely applicable to systems that are subjected to variation and selective driving forces and thus need to be both robust and evolvable in order to maintain long-term viability.

Model Overview

We model the genotype-phenotype map of a genetic system neglecting population properties, i.e. we explore evolution over a fitness landscape with a population size of one. In our model, each gene expresses a single protein product that has multiple functional targets (e.g. non-trivial interactions with multiple molecular species). Through these interactions a gene product contributes to the formation of multiple phenotypic traits (pleiotropy). In the mapping from genotype to phenotype in biology, a gene's influence on different traits can vary in the size of its effect, and these effects can either be related or functionally separated, e.g. through spatially and temporally isolated events.
In our model, we simplify this aspect of the mapping and assume that each functional target of a gene product influences a distinct (separable) trait and furthermore that each of these events is additive and has the same effect size (see Error! Reference source not found.). This abstraction in modelling gene pleiotropy is partly justified because of 1) the knowledge that many proteins act as versatile building blocks and can perform different cellular functions with the function depending on the complex a protein forms with other gene products [22] [23]; and 2) evidence that the scope of protein multi-functionality is broad [24] [25] and the execution of these functions typically occurs at different times [26]. The gene products we model are energetically driven to form complexes with a genetically predetermined set of functional target types. However, through the mediated availability of targets, each functionally versatile gene product can vary in how it contributes to system traits. This variation will depend on the genetic and environmental background, i.e. what targets are available to bind with and what other gene products it must compete with due to the functional overlap of gene products. This competition among gene products enables compensatory actions to occur within the model. Phenotype Attractor We assume that the architecture of any genome-proteome mapping is such that it spontaneously organizes towards a set of stable trait values (homeostasis), and that the attractor for these system dynamics is robust either as a consequence of its evolutionary history (genetically canalized) or due to system properties that generically lead to capacitance and buffering, e.g. see [27] [3]. In a deliberate departure from other studies, we do not explicitly simulate regulatory interactions that direct system dynamics towards this phenotypic attractor, e.g. 
where protein-target binding events directly regulate the composition of proteins and targets expressed in the system. Instead, we assume that an ancestral phenotype represents a strong stationary attractor for the system. This does not preclude the phenotypic traits themselves from being non-stationary. The separation between the phenotypic attractor and the components that comprise the system, although rarely considered in simulations, allows for interesting insights into the material conditions that limit phenotypic control. Thus, when this system finds itself in a perturbed phenotypic state (i.e. new targets or new proteins), compensatory actions by extant gene products are taken, if available, that move the phenotype towards its attractor. In our model, these actions simply consist of changes in protein-target binding that are made based on the availability of functional targets and competition between functionally redundant gene products. One consequence of this model is that adaptive genetic mutations are only possible when mutations prevent the system from accessing its ancestral attractor. As we will demonstrate, the exposure of new phenotypes is ultimately influenced by what previously appear as cryptic genetic changes. Below we give a concise description of the parameters and functions defining a mathematical realization of this model.

Technical Description

The model consists of a set of genetically specified proteins (i.e. material components). Protein state values indicate the functional targets they have interacted with and also define the trait values of the system. The genotype determines which traits a protein is able to influence, while a protein's state dictates how much a protein has actually contributed to each of the traits it is capable of influencing. The extent to which a protein i contributes to a trait j is indicated by the matrix elements Cij ∈ Z.
Each protein has its own unique set of genes, which are given by a set of binary values θij, i ≤ n, j ≤ m. The matrix element θij takes a value of one if protein i can functionally contribute to trait j (i.e. bind to protein target j) and zero otherwise. In our experiments, each gene expresses a single protein (no alternative splicing). To simulate the limits of functional plasticity, each protein is restricted to contribute to at most two traits, i.e. Σj≤m θij ≤ 2 ∀i ≤ n. To model limits on protein utilization (i.e. caused by the material basis of gene products), maximum trait contributions are defined for each protein, which for simplicity are set equal, i.e. Σj≤m Cij θij = α ∀i, with the integer α being a model parameter. The set of system traits defines the system phenotype, with each trait calculated as a sum of the individual protein contributions: TjP = Σi≤n Cij θij. The environment is defined by the vector TE, whose components stipulate the number of targets that are available. The phenotypic attractor F is defined in Eq. 1 and acts to (energetically) penalize a system configuration when any targets are left in an unbound state, i.e. when TjP values fall below the satisfactory level TjE. Through control over its phenotype, a system is driven to satisfy the environmental conditions. This involves control over protein utilization, i.e. the settings of C. We implement ordered asynchronous updating of C, where each protein stochastically samples local changes in its utilization (changes in state values Cij that alter the protein's contribution to system traits). Changes are kept if compatible with the global attractor for the phenotype defined by Eq. 1. Genetic mutations involve modifying the gene matrix θ. For mutations that cause loss of gene function, we set θij = 0 ∀j when gene i is mutated.

F(TP) = Σj≤m δj    (1)

where δj = 0 if TjP ≥ TjE, and δj = TjE − TjP otherwise.
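For concreteness, the trait sums and the attractor penalty of Eq. 1 can be sketched in a few lines of Python. This is a minimal illustration using plain lists; the function and variable names are our own, not part of the model specification.

```python
def phenotype(C, theta):
    """Trait vector T^P with T^P[j] = sum_i C[i][j] * theta[i][j]."""
    m = len(C[0])
    return [sum(C[i][j] * theta[i][j] for i in range(len(C)))
            for j in range(m)]

def attractor_penalty(T_P, T_E):
    """Eq. 1: sum over traits of the unmet target demand.

    T_P -- phenotype vector (trait contributions summed over proteins)
    T_E -- environment vector (number of available targets per trait)
    A zero penalty means every target is bound; positive values are
    energetically penalized and drive compensatory re-binding.
    """
    return sum(max(0, te - tp) for tp, te in zip(T_P, T_E))
```

A configuration change (a local update to some Cij) is kept only if it does not increase this penalty, which is how the asynchronous updating of C is driven toward the attractor.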

We model degeneracy and redundancy by constraining the settings of the matrix θ. This controls how the trait contributions of proteins are able to overlap. In the redundant model, proteins are placed into subsets in which all proteins are genetically identical and thus influence the same set of traits. However, redundant proteins are free to take on distinct state values, which reflects the fact that proteins can take on different functional roles depending on their local context. In the degenerate model, proteins can only have a partial overlap in the traits they are able to affect: the intersection of the trait sets influenced by two degenerate proteins is non-empty and strictly smaller than their union. An illustration of the redundant and degenerate models is given in Figure 10.
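As an illustration of the two constraint schemes, gene matrices θ satisfying the model's requirements (two-trait proteins, four contributors per trait for m = 8, n = 16) can be constructed as follows. The particular trait-pair layouts are our own illustrative choices, not the paper's exact randomized initialization.

```python
def redundant_theta(n, m):
    """Redundant model: proteins are grouped into genetically identical
    subsets, and distinct subsets cover disjoint trait pairs, so two
    proteins affect either exactly the same traits or none in common."""
    pairs = m // 2                       # disjoint trait pairs (0,1),(2,3),...
    theta = []
    for i in range(n):
        g = i % pairs                    # subset index picks the trait pair
        row = [0] * m
        row[2 * g], row[2 * g + 1] = 1, 1
        theta.append(row)
    return theta

def degenerate_theta(n, m):
    """Degenerate model: consecutive proteins overlap in exactly one
    trait (a ring of partially overlapping two-trait proteins), so the
    intersection of two neighbors' trait sets is non-empty but strictly
    smaller than their union."""
    theta = []
    for i in range(n):
        row = [0] * m
        row[i % m], row[(i + 1) % m] = 1, 1
        theta.append(row)
    return theta
```

Both constructions give every trait the same number of contributing proteins, so the two system types start with identical fitness and functionalities, as required by the experimental setup.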

Measuring Robustness and Evolvability


Using the model just described, we investigate how the functional overlap in genes places theoretical limits on a system's capacity to regulate its phenotype. This involves evaluating the latent canalization potential of a system (i.e. measuring robustness) as well as the uniqueness of phenotypes associated with evolutionarily accessible genotypes that are not fully canalized (i.e. measuring evolvability). Many of the analytical concepts and steps are similar to those in the study of gene regulatory networks by Ciliberti et al. [8] and of RNA secondary structure in [11].

Fitness landscape. First, we consider a network in which each node represents a particular system-environment tuple. The tuple is characterized by θ (genotype), C (phenotype) and TE (environment). Connections (arcs) represent feasible variations in conditions, i.e. external or internal changes to TE or θ, respectively. Each node can be assigned a fitness value according to Eq. 1; thus the network is a generalized representation of a fitness landscape. For simplicity, we assume that all changes that occur along arcs are equally probable and reversible, e.g. we neglect variability in mutation rates across the genome. The result of this assumption is an unweighted and undirected network, for which the robustness and evolvability calculations are simplified. In response to a genetic mutation, i.e. a one-step move within the network, the phenotype is subjected to ordered asynchronous updating that is driven by the phenotypic attractor of the system (Eq. 1).

Neutral network. A neutral network is defined as a connected graph of nodes with equal fitness. One can consider it a connected set of external and internal conditions within which a system has the same fitness. Notice that connectedness implies that each node within the network can be reached from every other node without changing the system's fitness along the path of arcs. Ignoring population properties, this implies selective neutrality.
We restrict the class of condition changes to single gene mutations, allowing us to recover Ciliberti et al.'s neutral networks [8], which exist within a fitness landscape such as that originally described by Wright [28]. Compared with [8], we relax the neutrality criterion slightly: we consider all systems neutral that are within ε% of the original system fitness. This relaxation is necessary to describe satisficing behavior. Justifications for approximate neutrality are varied in the literature; typically, they are based upon constraints that are observed in physical environments and lead to reductions in selection pressure or limitations to perfect selection.

1-neighborhood and evolvability. Similar to [9], we define a 1-neighborhood of all non-neutral nodes that are directly connected to a neutral network. These nodes represent mutations that, for the first time, result in non-neutral changes of system fitness. As defined in Section 2, evolvability is equivalent to the total accessibility of distinct phenotypes. It thus equals the count of unique phenotypes in the 1-neighborhood. Robustness can be evaluated in many ways, and we therefore introduce several robustness metrics. For a system (i.e. a node) in the neutral network, its local robustness is defined as the proportion of arcs that connect it to the neutral network. In other words, local robustness is the proportion of immediately possible (single gene) mutations under which a system can maintain its fitness. The local robustness measurements reported in the next section are the local robustness for each neutral node averaged over all neutral nodes. An alternative robustness measure is a system's versatility: the total count of distinct and mutationally accessible genotypes under which a system can remain sufficiently fit. Since we assume that the network of changes is unweighted and undirected, versatility is directly proportional to, and thus well approximated by, the size of the neutral network.
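The local robustness and evolvability tallies described above can be sketched as follows. This is an illustrative fragment; `neighbors`, `fitness`, and `phenotype_of` are hypothetical hooks supplied by a surrounding landscape model, not part of the paper's code.

```python
def local_robustness(node, neighbors, fitness, f0, eps=0.05):
    """Fraction of one-step mutations of `node` that keep fitness
    within eps (the neutrality threshold) of the original fitness f0.
    `neighbors(v)` lists one-mutation variants; `fitness(v)` applies
    Eq. 1 after the phenotype has relaxed toward the attractor."""
    nbrs = neighbors(node)
    neutral = sum(1 for v in nbrs if abs(fitness(v) - f0) <= eps * f0)
    return neutral / len(nbrs)

def evolvability(one_neighborhood, phenotype_of):
    """Evolvability = number of unique phenotypes in the 1-neighborhood."""
    return len({tuple(phenotype_of(v)) for v in one_neighborhood})
```

Averaging `local_robustness` over all neutral nodes gives the reported local robustness; versatility is simply the size of the neutral network itself.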
Finally, we measure differential robustness by analyzing a system's response to increasingly large mutation rates. For this we record the fitness of the initial genetic system as it is subjected to increasingly large numbers of genetic mutations.

Fitness landscape exploration. In order to measure evolvability, both the neutral network and the 1-neighborhood need to be explored. The details of the algorithm for searching the fitness landscape are given in Section 5. Unless stated otherwise, the following experimental conditions are observed in all experiments. Genotypes θij are randomly initialized as binary values that meet the previous section's constraints, including the requirements of degeneracy or redundancy according to the model being tested. For the initial system (the first node in the neutral network), component state values Cij are randomly initialized as integer values between 0 and α. The initial environment TE is defined as the initial system phenotype. The neutrality threshold is set to ε = 5% and the model parameter to α = 10. The number of traits is m = 8, and the number of system components is n = 2m = 16. In our random initializations of the models we enforce that each trait has exactly the same number of proteins contributing to it, i.e. Σi≤n θij = 4 ∀j. This ensures that the redundant and degenerate models start with systems that have exactly the same fitness and functionalities. Furthermore, the size of the fitness landscape and the gene mutations have been defined to be identical for both types of models. Ad hoc experiments varying the settings of α, n, and m did not alter our basic findings. Each experiment is conducted with 50 experimental replicates. Based on several considerations, we decided not to explicitly model genetic mutations that create novel functions, or to model the recruitment of gene products to previously unrelated system traits, i.e. changes to protein specificity. First, this would require us to make additional assumptions about the topology of protein functional space and the selective relevance of new functions within an environment. Secondly, for almost any protein function landscape one could envision, increasing the number of points sampled in the landscape increases the mutational accessibility of distinct functions, up to saturation. Because the degenerate model displays greater gene diversity (a requirement derived from the definition of redundancy), we wanted to remove any confounding effects that could be caused by differences in the number of distinct genes in the two systems.
If we had allowed for genetic mutations other than loss of function, the observed differences in system evolvability that are presented in our results would have been off-handedly attributed to differences in mutational access between the two fitness landscapes.
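The differential robustness measurement, i.e. fitness retention under k simultaneous loss-of-function mutations, can be sketched as follows. This is illustrative only; `fitness_ok` is a hypothetical hook that relaxes the phenotype toward the attractor and reports whether fitness stays within the neutrality threshold.

```python
import random

def differential_robustness(theta, k, fitness_ok, trials=200, seed=0):
    """Fraction of random k-fold gene deletions after which the system
    still reaches satisfactory fitness. Each deleted gene i has its row
    zeroed (theta_ij = 0 for all j), matching the loss-of-function
    mutations of the model."""
    rng = random.Random(seed)
    n = len(theta)
    ok = 0
    for _ in range(trials):
        mutated = [row[:] for row in theta]
        for i in rng.sample(range(n), k):   # k distinct gene deletions
            mutated[i] = [0] * len(mutated[i])
        ok += fitness_ok(mutated)
    return ok / trials
```

Sweeping k from 1 upward reproduces the shape of the differential robustness curves reported in Figure 12, for whatever `fitness_ok` the surrounding model supplies.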

3. Results
Design principles considerably affect system evolvability
First, we investigate how the system design principles influence robustness and evolvability. In Figure 11 we show results for local robustness, versatility and evolvability as the algorithm explores the neutral networks and 1-neighborhoods. Presenting the results in this way exposes the rate at which new neutral genotypes and new (non-neutral) phenotypes are discovered during the search process. Over the evolution of the networks, the degenerate system is found to be over twice as versatile as the redundant system, with the neutral network sizes converging to NNdeg = 660 +15/−50 and NNred = 280 ± 5, respectively, after 3×10^5 search steps. This means that a degenerate system maintains sufficient fitness in approximately twice as many circumstances of gene deletions as a redundant system. After 3×10^5 search steps, the 1-neighbourhood of the degenerate system contains 1,900 +600/−400 unique phenotypes compared with merely 90 ± 30 for the redundant system. Thus, the degenerate system is about 20 times more evolvable than the redundant system. The local robustness of the two systems is initially quite different (Rdeg = 0.38 ± 0.005, Rred = 0.30 ± 0.005), with the difference becoming smaller (but remaining significant, p < 1E-6) as the exploration of the neutral networks progresses. This indicates that the robustness of the initial degenerate system is much larger, but that the average robustness of all genotypes on the neutral network is less distinct. In other words, the robustness advantage of the degenerate system is reduced as the neutral network is explored. A similar effect is expected in natural evolution as populations are driven towards mutation-selection balance and subsequently towards less robust regions of the network. To obtain a clearer sense of the robustness of each design principle, we analyze the differential robustness of the initial (un-mutated) systems when subjected to increasingly large numbers of mutations.
Differential robustness is given as the proportion of conditions for which the perturbed systems can maintain satisfactory fitness. Unlike the experiments shown in Figure 11, here we explore the effect not only of single gene mutations but also of multiple gene mutations. Figure 12 demonstrates that, on average, degenerate systems are more robust to increasingly large changes in conditions than are redundant systems. Our experiments strongly support the finding that the design principles markedly influence system properties such as neutral network size, robustness and evolvability. It is not clear, however, why the evolvability of these systems is so dramatically different. In particular, it is not clear whether the differences in evolvability can be accounted for by differences in robustness or in neutral network size. In the remaining experiments, we explore these questions and show that differences in neutral network size, topology, and system robustness cannot account for the huge differences in evolvability. From this, we are left to conclude that the design principles themselves are mainly responsible for the observed effect.

Evolvability does not derive from neutral network exploration


In the next set of experiments, we evaluate the properties of the neutral networks (as encountered during the search process) to determine whether these can account for the observed differences in system evolvability. First, we check whether the size of the explored neutral network is the main determinant of the number of unique phenotypes that are discovered. In Figure 13 we see that large neutral networks do not necessarily lead to greater access to unique phenotypes. The redundant systems are strongly limited with respect to their accessibility of distinct phenotypes: we observe only a small dependence of evolvability on the size of the neutral network. In the degenerate system, on the other hand, the exact opposite is observed: the accessibility of new unique phenotypes increases considerably as new regions of the neutral network are explored.

Neutral network topology cannot account for evolvability


If the size of the explored neutral network is not highly correlated with the observed differences in evolvability, it seems reasonable to suspect that the manner in which the neutral network extends across genotype space could influence system evolvability, as suggested in [8]. To test this hypothesis, we analyze network distances, i.e. proxies for a network's ability to reach distinct regions of genotype space. One way of determining the distance between two nodes is to calculate the shortest path, or geodesic, between them. If we take, for a specific node, the average of the geodesics to all other nodes in the network, and then take the average of these over all nodes in the network, we obtain the so-called characteristic path length. In panel a) of Figure 14 we show this characteristic path length as a function of network size. Characteristic path length increases with network size and approaches largely similar values at NN = 800 for the two design principles (12.4 for degenerate and 10.8 for redundant systems). These small differences in characteristic path length, however, do not explain the huge differences in evolvability, as shown in panel b) of Figure 14. Similar conclusions can be drawn when other network distance measures are analyzed, such as the top 10% longest path lengths or the Hamming distance in genotype space (see Figure 15).
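The characteristic path length computation is a standard all-pairs breadth-first search over the unweighted neutral network. A minimal sketch, with the adjacency stored as a dict of lists (our own representation):

```python
from collections import deque

def characteristic_path_length(adj):
    """Mean geodesic distance over all node pairs of a connected,
    unweighted, undirected network: BFS from every node, then average
    the resulting shortest-path lengths."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                       # breadth-first search from s
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for v, d in dist.items() if v != s)
        pairs += len(dist) - 1
    return total / pairs
```

Each ordered pair is counted once in each direction, which leaves the mean unchanged for an undirected network; the top-10% longest path lengths mentioned above can be obtained from the same per-source distance maps.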

Versatility and local robustness do not guarantee evolvability


In the next set of experiments, we investigate whether versatility and local robustness can account for the differences in evolvability. For this, we study the effect of making additional resources available while maintaining environmental (trait) requirements, i.e. maintaining the same phenotypic attractor. We employ the same experimental conditions as previously, with the exception that we increase the number n of system genes that can be expressed and that can contribute to system traits. Due to their additive effect on system traits, the inclusion of new functional genes should make both types of systems (redundant and degenerate) more robust to loss-of-function gene mutations and act to establish larger neutral networks. As shown in Figure 16, adding excess functional genes indeed increases the size of the neutral network as well as the local robustness for both types of systems. Surprisingly, however, the redundant system does not display a substantial growth in evolvability. The degenerate system, on the other hand, shows large increases in evolvability and becomes orders of magnitude more evolvable than the redundant model, even when n increases only modestly. The most important conclusion we draw from this is that neither local robustness nor versatility can guarantee that a system will be highly evolvable. This fact can be directly observed in Figure 17 where, for the two system designs, the evolvability data of Figure 16 are plotted as functions of versatility and local robustness.

4. Discussion
Taken as a whole, our results indicate that the mechanisms used to achieve robustness largely determine how evolvable a system is. In particular, we showed that differences in the evolvability of a fitness landscape are not necessarily due to differences in local robustness, versatility (neutral network size) or neutral network topology. Mutational robustness and neutrality achieved through redundancy alone do not lead to evolvable systems, regardless of the size of the neutral network. On the other hand, robustness that is achieved through degeneracy can dramatically increase the accessibility of distinct phenotypes and hence the evolvability of a system. Using evidence from biological studies, Edelman and Gally were the first to propose that degeneracy may act as a source of both robustness and innovation in biological systems [16]. Here we have provided the first experimental evidence that supports this relationship between degeneracy and evolvability. However, from observing how evolvability scales with system size in the two classes of models considered in this study, we conclude that degeneracy not only contributes to new innovations but could be a precondition for evolvability.

Degenerate distributed robustness


How degeneracy allows for distributed robustness and evolvability is not obvious; however, our model was designed to help explore this issue. Below we illustrate how degeneracy creates a connectivity of buffering actions in our model whereby stress that originates from localized perturbations is diffused to other parts of the system, even though the actions and the functional properties of those actions are not all identical. Such diffusion through networked buffering could be a new source of distributed robustness in biological systems. We emphasize that this buffering network would not have been easy to observe had the phenotypic attractor been an endogenous (self-organized) property of the system. An illustration of our hypothesis is given in Figure 18. In this illustration, clusters of nodes represent functional groups that contribute to the phenotypic traits of the system. If a particular functional group is stressed (e.g. through loss of contributing components or changes in the desired trait value), the degeneracy of components allows resources currently assigned to other functional groups to be reassigned to alleviate this stress (buffering). Depending on the topology of buffer connectivity and the current placement of resources, excess resources that were initially localized can quickly spread and diffuse to other regions of the system. Small amounts of excess functional resources thus become much more versatile: although the interoperability of components is localized, at the system level resources have enormous reconfiguration options. This buffer connectivity and the associated reconfiguration options are clearly not afforded to the redundant system (see Figure 18b).

In the degeneracy models considered in our study, the number of links between nodes is constant but otherwise randomly assigned. This random assignment results in a small-world effect in the buffering topology, such that careful design is not necessary to ensure the connectivity of buffers. Hence, the distributed robustness effect is expected to be germane to this class of systems. Future studies will investigate whether constraints imposed by the functional landscape of components can limit the level of distributed robustness observed from degeneracy.

The role of degeneracy in evolution


The investigation of our abstract model provides new clues about the relationship between degeneracy, robustness, and evolution. In the gene deletion studies presented here, we found that the degenerate system can reach a desired phenotype from a broad range of distinct internal (genetic) conditions. We have conducted parallel experiments involving changes to the environment (incremental changes to TE) which found that the degenerate system can also express a broad range of distinct phenotypes from the same genotypic makeup. Taken together, these results outline two complementary reasons why distributed robustness can be achieved in degenerate systems. In particular, we speculate that it is both the diversity of unique outputs (i.e. the potential for phenotypic plasticity) and the multitude of ways in which a particular output can be achieved (i.e. the potential for canalization) that allow for distributed robustness in degenerate systems. Although this richness in phenotypic expression increases the number of unique ways in which the system can fail, it also opens up new opportunities for innovation. Hence, degeneracy may afford the requisite variety of actions that is necessary for both robustness and system evolvability. These plasticity and canalization properties in the genotype-phenotype mapping are unique to the degenerate system and, moreover, are broadly consistent with studies of cryptic genetic variation in natural populations. The evolution of complex phenotypes requires a long series of adaptive changes to take place. At each step, these adaptations must result in a viable and robust system but must also not inhibit the ability to find subsequent adaptations. Complexity clearly demands evolvability to form such systems and robustness to maintain them at every step along the way. How biological systems are able to achieve these relationships between robustness, evolvability and complexity is not known.
However it is clear that the mechanisms that provide robustness in biological systems must at the very least be compatible with occasional increases in system complexity and must also allow for future innovations. We believe that degeneracy is a good candidate for enabling these relationships in natural evolution. As already noted in this study, degeneracy is unique in its ability to provide high levels of robustness while also allowing for future evolvability. Moreover, in [29] it was found that only systems with high degeneracy are also able to achieve high levels of hierarchical complexity, i.e. the degree to which a system is both functionally integrated and locally segregated [29]. Based on these findings and other supporting evidence illustrated in Figure 19 and summarized in Table 2, we speculate that degenerate forms of robustness could be unique in their capacity to allow for the evolution of complex forms. It has been proposed that the existence and preservation of degeneracy within distributed genetic systems can be explained by the Duplication-Degeneracy-Complementation model first proposed in [30]. In the DDC model, degenerate genes are retained through a process of sub-functionalization, i.e. where a multi-functional ancestral gene is duplicated and these duplicate genes acquire complementary loss-of-function mutations. Although the present study does not consider the origins of genetic degeneracy or multifunctionality, our results do suggest alternate ways by which degeneracy could be retained during evolution. First, we have shown that degeneracy amongst multifunctional genes has a positive and systemic effect on robustness that is considerably stronger than what is achieved through pure redundancy. Under conditions where the acquisition of such robustness is selectively relevant, e.g. due to variable conditions inside and outside an organism, degenerate genes could be retained due to a direct selective advantage. 
In this scenario, the ubiquity of degeneracy would be due to its efficacy as a mechanism for achieving selectively relevant robustness, while its impact on evolvability and its compatibility with hierarchical complexity could lead to the emergence of increasingly complex phenotypes. Alternatively, the enhanced robustness from degeneracy may facilitate its preservation even without a direct selective advantage. Newly added degenerate genes can increase the total number of loss-of-function mutations with no phenotypic effect; however, without a selective advantage, this enhanced robustness can subsequently be lost under mutation-selection balance. Due to the distributed nature of the robustness provided, neutral mutations will emerge in several genes that are functionally distinct from the degenerate gene. Following one of these mutations, the compensatory effects of the degenerate gene would be revealed, making its continued functioning selectively relevant and thereby ensuring its future retention within the genetic system. Considering the large mutational target represented by these other genes, retention of the degenerate gene would be a likely outcome in this scenario. Thus, degeneracy may become ingrained within genotypes precisely because of the distributed nature of its compensatory effects, even if these effects do not initially have selective relevance.

5. Methods
Neutral Network Generation
Starting with an initial system and a given external environment, defined as the first node in the neutral network, the neutral network and 1-neighborhood are explored by iterating the following steps: 1) select a node from the neutral network at random; 2) change the conditions (genetic mutation or change in environment) based on the set of feasible transitions; 3) allow the system to modify its phenotype in order to robustly respond to the new conditions; and 4) if the fitness is within ε% of the initial system fitness, add the system to the neutral network; otherwise, add it to the 1-neighborhood of the neutral network. Additions to the neutral network and 1-neighborhood must represent unique conditions, i.e. (TE, θ) pairs, meaning that duplicate conditions are discarded when encountered by the search process. The sizes of the neutral network and 1-neighborhood can be prohibitively large to allow for an exhaustive search, and so the neutral network search algorithm includes a stopping criterion after 3×10^5 steps (changes in condition).
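The four-step loop can be sketched as follows. The `mutations`, `fitness`, and `settle` hooks are our own placeholders for, respectively, the model's feasible condition changes, the Eq. 1 evaluation, and the asynchronous phenotype updating toward the attractor.

```python
import random

def explore(initial, f0, mutations, fitness, settle,
            eps=0.05, max_steps=300_000):
    """Neutral network / 1-neighborhood search (Methods, steps 1-4).
    Returns the set of neutral conditions and the 1-neighborhood."""
    neutral, shell = {initial}, set()
    for _ in range(max_steps):
        node = random.choice(tuple(neutral))    # 1) pick a neutral node
        cand = random.choice(mutations(node))   # 2) change the conditions
        cand = settle(cand)                     # 3) relax toward attractor
        if cand in neutral or cand in shell:    #    keep conditions unique
            continue
        if abs(fitness(cand) - f0) <= eps * f0: # 4) classify by fitness
            neutral.add(cand)
        else:
            shell.add(cand)
    return neutral, shell
```

The neutral shadow variant described below differs only in step 4: every sampled condition is added to the neutral set irrespective of its fitness.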

Results- neutral shadow


The neutral shadow results are obtained by running the neutral network and 1-neighborhood exploration algorithms as before except that each newly sampled genotype is added to the neutral network irrespective of system fitness. The neutral shadow is analyzed to show the neutral network properties for a maximally diffusive (i.e. unconstrained) neutral network. It provides an upper bound on both genotypic and topological distance measurements. Because the size and dimensionality of the degenerate and redundant fitness landscapes are identical, the neutral shadow generates the same topological and Hamming distance results for both system types.

6. Figures

Figure 10: Overview of genome-proteome model. a) Genotype-phenotype mapping conditions and pleiotropy: Each gene contributes to system traits through the expression of a protein product that can bind with functionally relevant targets (based on genetically determined protein specificity). b) Phenotypic expression: Target availability is influenced by the environment and by competition with functionally redundant proteins. The attractor of the phenotype can be loosely described as the binding of each target with a protein. c) Functional overlap of genes: Redundant genes can affect the same traits in the same manner. Degenerate genes have only a partial overlap in the traits they affect.

Figure 11: Local robustness, versatility and evolvability measured as the fitness landscape is explored. Each of the metrics is defined in the text and the procedure for exploring the fitness landscape is described in the Methods. Experiments are conducted with m=8 and n=16. Results show the median value from 50 runs with bars indicating 95% confidence intervals. For each position along the horizontal axis, degenerate and redundant data samples are found to be significantly different based on the Mann-Whitney U Test (U>2450, Umax = 2500, n1=50, n2=50) with larger median values in each case from the degenerate system (p < 1E-6).

Figure 12: Differential robustness of initial (un-mutated) systems as they are exposed to increasingly large numbers of gene deletions. Experiments are conducted with m=8 and n=16. Results are shown as the median value from 50 runs with bars indicating 95% confidence intervals.

Figure 13: The number of unique phenotypes (evolvability) discovered versus the number of fitness neutral genotypes (neutral network) discovered. Similar behaviour is observed when evolvability is plotted against the size of the 1-neighborhood. Experiments are conducted with m=8 and n=16 and results are shown as the median value from 50 runs with bars indicating 95% confidence intervals.

Figure 14: a) Characteristic path length of the neutral network for different network sizes (m=8, n=20, excess resources = 25%). The concept of excess resources is described in the context of Figure 16; it generates neutral networks larger than those of the systems studied in Figure 13. Displayed results are medians of 50 experimental runs. Error bars indicate 95% confidence intervals; they are typically smaller than the resolution of the data points. According to a Kruskal-Wallis test, the characteristic path length distributions are significantly different (p<1E-6) for network sizes above 200. Results for the degenerate and redundant systems are also compared with a shadow of the neutral network search algorithm, which is described in the Methods and provides an approximate upper bound for path length calculations. b) Evolvability as a function of characteristic path length.

Figure 15: a) In genotype space, distance can be measured by the Hamming distance between genotypes. For the purposes of this measurement, genes are simply defined as binary values indicating whether each gene is deleted or not. Results are shown as the average Hamming distance of genotypes for all node pairs in the neutral network. Results are normalized with the maximum Hamming distance set equal to one. b) Evolvability is plotted as a function of the normalized Hamming distance. Similar results are obtained when analyzing the top 10% largest Hamming distances.
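The normalized average Hamming distance of Figure 15a can be sketched as follows; the genotypes here are invented toy data, with genes encoded as binary deletion indicators as described in the caption:

```python
def normalized_mean_hamming(genotypes):
    """Average Hamming distance over all genotype pairs, normalized so that
    the maximum possible distance (the genotype length) equals one.
    Genotypes are binary tuples indicating whether each gene is deleted."""
    n = len(genotypes)
    length = len(genotypes[0])
    total, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += sum(a != b for a, b in zip(genotypes[i], genotypes[j]))
            pairs += 1
    return total / (pairs * length)

# Toy neutral network of four genotypes over four genes
g = [(0, 0, 0, 0), (1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0)]
print(normalized_mean_hamming(g))  # 10/24 ~ 0.417
```

A value approaching one would indicate that the neutral network spans nearly the full diameter of genotype space.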

Figure 16: Versatility, robustness and evolvability as functions of excess components added to the two system types. For the baseline systems (i.e. systems without excess resources) the experiments are conducted with m=8 and n=16. For experiments where excess resources are larger than zero, the initial environment trait requirements TE are set based on the previous conditions (m=8, n=16) and the system is then redefined with n increased to n = 16*(1 + % excess). Results represent the median value from 50 runs with bars indicating 95% confidence intervals. For each position along the horizontal axis, degenerate and redundant sample distributions are found to be significantly different based on the Mann-Whitney U Test (U>2450, Umax = 2500, n1=50, n2=50) with larger median values in each case from the degenerate system (p < 1E-6).

Figure 17: Results of Figure 16, but with evolvability plotted as a function of local robustness (left) and versatility (right).

Figure 18: Illustration of the connectivity of buffering actions provided by degeneracy. Functional groups are indicated by clusters of nodes, while connected node pairs represent individual components with a context-dependent functional response (in this case, components have only two types of functional response). Dark/light shading is used to indicate which functional response a component is/is not currently carrying out. Darkened arrows indicate components that might be available if needed by the functional group from which the arrows originate. Here the darkened arrows illustrate how a stress to the circled functional group has the potential to cause a distributed response to that stress.

Figure 19: Proposed relationship between degeneracy, evolution, robustness, and complexity

7. Tables
Table 2: Summary of evidence relating degeneracy, evolution, robustness, and complexity

1) Relationship: Degeneracy is a key source of biological robustness.
   Summary: Distributed robustness (and not pure redundancy) accounts for a large proportion of robustness in biological systems.
   Context: Large-scale gene deletion studies and other biological evidence (e.g. cryptic genetic variation).
   Ref: [17]

2) Relationship: Degeneracy has a strong positive correlation with system complexity.
   Summary: Degeneracy is correlated with and conceptually similar to complexity. For instance, degenerate components are both functionally redundant and functionally independent, while complexity describes systems that are functionally integrated and functionally segregated.
   Context: Simulation models of artificial neural networks evaluated based on information-theoretic measures of redundancy, degeneracy, and complexity.
   Ref: [29]

3) Relationship: Evolvability emerges from robustness.
   Summary: Genetic robustness reflects the presence of a neutral network. Over the long term this neutral network provides access to a broad range of distinct phenotypes and helps ensure the long-term evolvability of a system.
   Context: Simulation models of gene regulatory networks and RNA secondary structure.
   Ref: [8] [9]

4) Relationship: Evolvability is a prerequisite for complexity.
   Summary: All complex life forms have evolved through a succession of incremental changes and are not irreducibly complex (according to Darwin's theory of natural selection). The capacity to generate heritable phenotypic variation (evolvability) is a precondition for the evolution of increasingly complex forms.
   Context: Theoretical arguments that have been applied to biological evolution and engineering design.

5) Relationship: Complexity increases to improve robustness.
   Summary: According to the theory of highly optimized tolerance, complex adaptive systems are optimized for robustness to commonly observed variations in conditions. Moreover, robustness is improved through the addition of new components/processes that add to the complexity of the organizational form.
   Context: Theoretical arguments that have been applied to biological evolution and engineering design (e.g. aircraft, internet).
   Ref: [31] [32] [33]

6) Relationship: Degeneracy is a precondition for evolvability and a more effective source of robustness.
   Summary: Accessibility of distinct phenotypes requires robustness through degeneracy.
   Context: Abstract simulation models of evolution.
   Ref: This study

8. References

[1] J. Stelling, U. Sauer, Z. Szallasi, F. J. Doyle, and J. Doyle, "Robustness of Cellular Functions," Cell, vol. 118, pp. 675-685, 2004.
[2] H. Kitano, "Biological robustness," Nature Reviews Genetics, vol. 5, pp. 826-837, 2004.
[3] M. L. Siegal and A. Bergman, "Waddington's canalization revisited: Developmental stability and evolution," Proceedings of the National Academy of Sciences, USA, vol. 99, pp. 10528-10532, 2002.
[4] J. Visser, J. Hermisson, G. P. Wagner, L. A. Meyers, H. Bagheri-Chaichian, J. L. Blanchard, L. Chao, J. M. Cheverud, S. F. Elena, and W. Fontana, "Perspective: Evolution and Detection of Genetic Robustness," Evolution, vol. 57, pp. 1959-1972, 2003.
[5] M. Kirschner and J. Gerhart, "Evolvability," Proceedings of the National Academy of Sciences, USA, vol. 95, pp. 8420-8427, 1998.
[6] G. P. Wagner and L. Altenberg, "Complex adaptations and the evolution of evolvability," Evolution, vol. 50, pp. 967-976, 1996.
[7] M. Aldana, E. Balleza, S. Kauffman, and O. Resendiz, "Robustness and evolvability in genetic regulatory networks," Journal of Theoretical Biology, vol. 245, pp. 433-448, 2007.
[8] S. Ciliberti, O. C. Martin, and A. Wagner, "Innovation and robustness in complex regulatory gene networks," Proceedings of the National Academy of Sciences, USA, vol. 104, p. 13591, 2007.
[9] A. Wagner, "Robustness and evolvability: a paradox resolved," Proceedings of the Royal Society of London, Series B: Biological Sciences, vol. 275, pp. 91-100, 2008.
[10] S. A. Kauffman, "Requirements for evolvability in complex systems: orderly components and frozen dynamics," Physica D, vol. 42, pp. 135-152, 1990.
[11] A. Wagner, "Neutralism and selectionism: a network-based reconciliation," Nature Reviews Genetics, 2008.
[12] M. W. Kirschner and J. C. Gerhart, The Plausibility of Life: Resolving Darwin's Dilemma: Yale University Press, 2006.
[13] M. Kimura, "Solution of a Process of Random Genetic Drift with a Continuous Model," Proceedings of the National Academy of Sciences, USA, vol. 41, pp. 144-150, 1955.
[14] T. Ohta, "Near-neutrality in evolution of genes and gene regulation," Proceedings of the National Academy of Sciences, USA, vol. 99, pp. 16134-16137, 2002.
[15] P. Schuster, W. Fontana, P. F. Stadler, and I. L. Hofacker, "From Sequences to Shapes and Back: A Case Study in RNA Secondary Structures," Proceedings of the Royal Society of London, Series B: Biological Sciences, vol. 255, pp. 279-284, 1994.
[16] G. M. Edelman and J. A. Gally, "Degeneracy and complexity in biological systems," Proceedings of the National Academy of Sciences, USA, vol. 98, pp. 13763-13768, 2001.
[17] A. Wagner, "Distributed robustness versus redundancy as causes of mutational robustness," BioEssays, vol. 27, pp. 176-188, 2005.
[18] A. Wagner, "Robustness against mutations in genetic networks of yeast," Nature Genetics, vol. 24, pp. 355-362, 2000.
[19] B. Guo, C. A. Styles, Q. Feng, and G. R. Fink, "A Saccharomyces gene family involved in invasive growth, cell-cell adhesion, and mating," Proceedings of the National Academy of Sciences, USA, 2000.
[20] U. Sauer, F. Canonaco, S. Heri, A. Perrenoud, and E. Fischer, "The Soluble and Membrane-bound Transhydrogenases UdhA and PntAB Have Divergent Functions in NADPH Metabolism of Escherichia coli," Journal of Biological Chemistry, vol. 279, p. 6613, 2004.
[21] H. W. Ma and A. P. Zeng, "The connectivity structure, giant strong component and centrality of metabolic networks," Bioinformatics, vol. 19, pp. 1423-1430, 2003.
[22] A. C. Gavin, P. Aloy, P. Grandi, R. Krause, M. Boesche, M. Marzioch, C. Rau, L. J. Jensen, S. Bastuck, and B. Dümpelfeld, "Proteome survey reveals modularity of the yeast cell machinery," Nature, vol. 440, pp. 631-636, 2006.
[23] R. Krause, C. von Mering, P. Bork, and T. Dandekar, "Shared components of protein complexes: versatile building blocks or biochemical artefacts?," BioEssays, vol. 26, pp. 1333-1343, 2004.
[24] N. N. Batada, T. Reguly, A. Breitkreutz, L. Boucher, B. J. Breitkreutz, L. D. Hurst, and M. Tyers, "Stratus not altocumulus: a new view of the yeast protein interaction network," PLoS Biology, vol. 4, p. e317, 2006.
[25] N. N. Batada, T. Reguly, A. Breitkreutz, L. Boucher, B. J. Breitkreutz, L. D. Hurst, and M. Tyers, "Still stratus not altocumulus: further evidence against the date/party hub distinction," PLoS Biology, vol. 5, p. e154, 2007.
[26] J. D. J. Han, N. Bertin, T. Hao, D. S. Goldberg, G. F. Berriz, L. V. Zhang, D. Dupuy, A. J. M. Walhout, M. E. Cusick, and F. P. Roth, "Evidence for dynamically organized modularity in the yeast protein-protein interaction network," Nature, vol. 430, pp. 88-93, 2004.
[27] A. Bergman and M. L. Siegal, "Evolutionary capacitance as a general feature of complex gene networks," Nature, vol. 424, pp. 549-552, 2003.
[28] S. Wright, "The roles of mutation, inbreeding, crossbreeding and selection in evolution," Proceedings of the Sixth International Congress on Genetics, vol. 1, pp. 356-366, 1932.
[29] G. Tononi, O. Sporns, and G. M. Edelman, "Measures of degeneracy and redundancy in biological networks," Proceedings of the National Academy of Sciences, USA, vol. 96, pp. 3257-3262, 1999.
[30] A. Force, M. Lynch, F. B. Pickett, A. Amores, Y. Yan, and J. Postlethwait, "Preservation of duplicate genes by complementary, degenerative mutations," Genetics, vol. 151, pp. 1531-1545, 1999.
[31] J. M. Carlson and J. Doyle, "Highly optimized tolerance: A mechanism for power laws in designed systems," Physical Review E, vol. 60, pp. 1412-1427, 1999.
[32] M. E. Csete and J. C. Doyle, "Reverse Engineering of Biological Complexity," Science, vol. 295, pp. 1664-1669, 2002.
[33] J. M. Carlson and J. Doyle, "Complexity and robustness," Proceedings of the National Academy of Sciences, USA, 2002.

Evolution-Inspired Approaches for Engineering Emergent Robustness in an Uncertain Dynamic World


James M. Whitacre
CERCIA, School of Computer Science, University of Birmingham, UK j.m.whitacre@cs.bham.ac.uk

Extended Abstract
Engineering involves the design and assemblage of elements that work in specific ways to achieve a predictable purpose and function. In systems design, engineering takes a conceptual top-down approach to problem solving that aims to decompose a complicated problem into separable and more manageable sub-problems. While this strategy has been successful in designing systems that deftly operate under predetermined conditions, these same systems are often notoriously fragile when conditions change unexpectedly. In contrast, biological systems operate in a highly flexible manner with no pre-assignment between components and system traits. Instead of relying on the prediction of future environments, biological systems (e.g. immune systems, cell regulation) quickly learn/explore appropriate responses to novel conditions and inherit new routines to remain competitive under persistent environmental change. Taking examples throughout biology, it has been proposed that degeneracy - the existence of multi-functioning components with context-dependent functional similarity - is a primary determinant of biological flexibility and a key differentiating factor in the robustness and evolvability of designed and evolved systems (Edelman and Gally 2001) (Whitacre 2010) (Whitacre and Bender 2010) (Whitacre and Bender 2010). Degeneracy is routinely eliminated in engineering design and its role in the robustness of biological traits is well-documented, however the influence that degeneracy might have on the flexibility of engineered and artificial systems has only begun to be investigated (Whitacre et al. in press). Here we present evidence (Figure 1) that degeneracy enhances the robustness and evolvability (i.e. the rate and magnitude of heritable adaptive change) of multi-agent systems (MAS) that are taken from (Whitacre et al. in press) and modified to more closely reflect systems engineering problems subject to heterogeneous and unpredictable environments. 
First, we find degeneracy can increase MAS robustness to a set of environments experienced during the MAS lifecycle. When robustness is important to fitness, we also find degeneracy can be selectively (not only passively/neutrally) acquired. However, and unbeknownst to myopic selection, this acquisition of degenerate robustness ultimately promotes faster rates of MAS design adaptation when the environment changes dramatically (at generation 3000, Figure 1); i.e. evolvability has been indirectly enhanced through the selection of degenerate forms of robustness. In contrast, robustness and evolvability are lower in MAS comprised of multi-functioning agents that are never degenerate, i.e. agents do not exhibit partially overlapping functionality but instead are either identical or completely dissimilar to other agents. In a forthcoming article, we further show that many of these findings can be reversed if environments are simplified and decomposable, i.e. environments show little variability during the MAS lifecycle and those environmental variations that are experienced are separable/modular. In presenting these findings, we discuss how degeneracy might lead to new prescriptive guidelines for complex systems engineering: a nascent field that applies Darwinian and systems theory principles with the aim of improving flexibility and adaptation for systems that operate within volatile environments. We propose that versatile and functionally similar agents/sub-systems/software/vehicles/machinery/plans may sometimes dramatically improve a system's robustness to unexpected environments in ways that cannot be accounted for by economic portfolio theory.
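The intuition that partially overlapping agent functions cover more unexpected environments than agents that are either identical or completely dissimilar can be illustrated with a deliberately simplified toy, not the authors' MAS model: each agent commits one unit of effort to one of its two task types, and an environment is a demand vector over task types.

```python
from itertools import product

def can_meet(agents, demand):
    """True if each agent can commit to one of its two task types such that
    every task type's demand is met.  Brute force over all assignments,
    which is fine for toy sizes."""
    for choice in product(*agents):
        supply = {}
        for task in choice:
            supply[task] = supply.get(task, 0) + 1
        if all(supply.get(t, 0) >= d for t, d in demand.items()):
            return True
    return False

# Four agents over task types A-D.  Degenerate: partially overlapping pairs.
degenerate = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
# Non-degenerate: agents are either identical or completely dissimilar.
redundant = [("A", "B"), ("A", "B"), ("C", "D"), ("C", "D")]

demands = [{"A": 1, "B": 1, "C": 1, "D": 1},
           {"A": 2, "C": 1, "D": 1},
           {"B": 2, "D": 2},
           {"A": 1, "B": 2, "C": 1}]
deg_ok = sum(can_meet(degenerate, d) for d in demands)
red_ok = sum(can_meet(redundant, d) for d in demands)
print(deg_ok, red_ok)  # 4 3: the degenerate ring of overlaps covers every demand
```

The agent pool sizes, task types and demand vectors above are invented for illustration; the point is only that the ring of partial overlaps can reroute effort between task types in ways the identical/dissimilar architecture cannot.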

Figure 1: Top-Left Panel) Multi-Agent System (MAS) encoded within a genetic algorithm; for details, see (Whitacre et al. in press). Agents perform tasks to improve MAS fitness in its environment. Top-Right Panel) Illustration of genetic architectures for degenerate and non-degenerate MAS. Each agent is depicted by a pair of connected nodes, with the two nodes representing two types of (genetically determined) tasks an agent can perform. Models are adapted from (Whitacre et al. in press) to reflect a systems engineering context that is to be fully described in a forthcoming article. Differences in modeling conditions, compared with (Whitacre et al. in press), include: larger MAS (120 agents), each agent takes on more tasks during its interaction with the environment (20 tasks), agent behaviors are simulated using an unordered asynchronous updating scheme, environments are defined by more types of tasks (20 types, 48000 tasks in total), and new constraints on function combinations within each agent (to be described in a forthcoming paper). Bottom-Left Panel) Evolution of MAS fitness under one set of environments; then (at gen. 3000) evolution continues under a new set of environments. Optimal fitness = 0 for both original and new environments. Within the new environments, degenerate MAS appear to evolve more quickly while non-degenerate MAS evolve somewhat more gradually. Bottom-Right Panel) Degeneracy and fitness calculations for MAS in which degeneracy is permitted. Results show MAS evolved under random selection and MAS evolved to be robust within the environment. Here we see selection has increased degeneracy levels in the MAS (reported results are taken immediately after the first 3000 generations of evolution).

Acknowledgements: This research was partially supported by DSTO and CERCIA.

References
Edelman, G. M. and J. A. Gally (2001). "Degeneracy and complexity in biological systems." Proceedings of the National Academy of Sciences, USA 98(24): 13763-13768.
Whitacre, J. M. (2010). "Degeneracy: a link between evolvability, robustness and complexity in biological systems." Theoretical Biology and Medical Modelling 7(6).
Whitacre, J. M. and A. Bender (2010). "Degeneracy: a design principle for achieving robustness and evolvability." Journal of Theoretical Biology 263(1): 143-153.
Whitacre, J. M. and A. Bender (2010). "Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems." Theoretical Biology and Medical Modelling 7(20).
Whitacre, J. M., P. Rohlfshagen, X. Yao and A. Bender (in press). "The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments." 11th International Conference on Parallel Problem Solving from Nature (PPSN 2010), Krakow, Poland.

Genetic and Environment-Induced Innovation: Complementary Pathways to Adaptive Change that are Facilitated by Degeneracy in Multi-Agent Systems
James M. Whitacre
CERCIA, School of Computer Science, University of Birmingham, UK j.m.whitacre@cs.bham.ac.uk

Extended Abstract
Understanding how heritable and selectively relevant phenotypes are generated is fundamental to understanding evolution in biotic and artificial systems. With few exceptions (e.g. viral evolution), the generation of phenotypic novelty is predominantly discussed from two perspectives. The first perspective is organized around the concept of fitness landscape neutrality and emphasizes how the robustness of fitness towards mutations can facilitate the discovery of heritable adaptive traits within a static fitness landscape (Wagner 2008). A somewhat distinct perspective is organized around the concept of cryptic genetic variation (CGV) and mostly emphasizes the importance of particular population properties within a dynamic environment (Gibson and Dworkin 2004). CGV is defined as standing genetic variation that does not contribute to the normal range of phenotypes observed in a population, but that is available to modify a phenotype after environmental change (or the introduction of novel alleles). In short, CGV permits genetic diversity in populations when selection is stable yet exposes heritable phenotypic variation that can be selected upon when populations are presented with novel conditions. Both pathways to adaptation (genetic and environment-induced phenotypic variation) are likely to have contributed to the evolution of complex traits (Palmer 2004), and theories of evolution that cannot account for both pathways are either fragile to or reliant upon environmental dynamics. Here we use requirements from these pathways to evaluate the merits of a new hypothesis on the mechanics of evolution. In particular, Gerald Edelman has proposed that degeneracy, the existence of structurally distinct components with context-dependent functional similarities, is a fundamental source of heritable phenotypic change at most/all biological scales and thus is an enabling factor of evolution (Edelman and Gally 2001) (Whitacre 2010).
While it is well-documented (and intuitive) that degeneracy contributes to trait stability for conditions where degenerate components are functionally compensatory (Whitacre and Bender 2010), Edelman argues that the differential responses outside those conditions provide access to unique functional effects, some of which can be selectively relevant given the right environment. We recently reported evidence that degeneracy supports the first pathway by creating particular types of neutrality in static fitness landscapes that can increase mutational access to heritable phenotypes (Whitacre and Bender 2010), and fundamentally alter a system's propensity to adapt (Whitacre et al. in press). Using models from (Whitacre et al. in press), here we present findings that degeneracy within evolving multi-agent systems may create characteristic features of CGV at the population level, thereby allowing the model to also exploit an environment-induced pathway to adaptation. In particular, we show that for static environments, degeneracy facilitates high genetic diversity in populations that is phenotypically cryptic, i.e. individuals remain similar in fitness (Figure 1). When the environment changes, trait differences across the population are revealed and some individuals display a phenotypically plastic response that is highly adaptive for the new environment. These CGV features are not observed in populations when degeneracy is absent from our model. We discuss the theoretical significance of a single mechanistic basis (degeneracy) for complementary pathways to adaptation.
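The signature of CGV described above (uniform fitness while the environment is stable, divergent fitness after environmental change) can be illustrated with a deliberately simple toy, not the authors' MAS model; genotypes, trait positions and the fitness rule below are all invented for illustration:

```python
def fitness(genotype, environment):
    """Toy fitness: only the trait positions probed by the environment
    affect fitness (0 is optimal; deviations are penalized)."""
    return -sum(abs(genotype[i]) for i in environment)

# A genetically diverse population: members differ only at positions 2 and 3.
population = [(0, 0, 1, 0), (0, 0, 0, 2), (0, 0, 3, 1), (0, 0, 0, 0)]

stable_env = {0, 1}        # selection ignores positions 2 and 3
novel_env = {0, 1, 2, 3}   # environmental change probes all traits

cryptic = [fitness(g, stable_env) for g in population]
revealed = [fitness(g, novel_env) for g in population]
print(cryptic)   # [0, 0, 0, 0]: variation is phenotypically cryptic
print(revealed)  # [-1, -2, -4, 0]: variation is exposed after the change
```

Under the stable environment all four genotypes are selectively equivalent despite their genetic differences; the environmental shift releases the standing variation for selection, which is the defining property of CGV used in the abstract.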

Figure 1: Top-Left Panel) Multi-Agent System (MAS) encoded within a genetic algorithm. Agents perform tasks to improve MAS fitness in its environment; see (Whitacre et al., in press). Top-Right Panel) Illustration of genetic architectures for degenerate and non-degenerate MAS. Each agent is depicted by a pair of connected nodes, with the two nodes representing two types of (genetically determined) tasks that the agent can perform. Bottom-Right Panel) The number of task type combinations (alleles) possible in a degenerate MAS is larger than in a non-degenerate MAS, so it is necessary to artificially restrict experiments to similar genotype space sizes as illustrated here; for more details see the mutation operator description in (Whitacre et al., in press). Bottom-Left Panel) Genetic diversity (Hamming distance in genotype space between population members) plotted over 3000 generations of evolution within a static environment. Bottom-Middle Panel) Fitness of population members at generation 3000 is recorded and then reevaluated within a moderately perturbed environment. In these results, we observe high genetic diversity in the degenerate population that is cryptic (negligible fitness differences) within the stable environment, but that is released/exposed when the same population is presented with a new environment. Some of the observed plastic phenotypic responses are found to be highly adaptive in the new environment. CGV was largely absent in the evolution of non-degenerate MAS, even when environments are modified to increase mutational robustness (not shown). Optimal fitness = 0 for original and perturbed environments.

Acknowledgements: This research was partially supported by DSTO and CERCIA.

References
Edelman, G. M. and J. A. Gally (2001). "Degeneracy and complexity in biological systems." Proceedings of the National Academy of Sciences, USA 98(24): 13763-13768.
Gibson, G. and I. Dworkin (2004). "Uncovering cryptic genetic variation." Nature Reviews Genetics 5(9): 681-690.
Palmer, A. (2004). "Symmetry breaking and the evolution of development." Science 306(5697): 828.
Wagner, A. (2008). "Robustness and evolvability: a paradox resolved." Proceedings of the Royal Society of London, Series B: Biological Sciences 275: 91-100.
Whitacre, J. M. (2010). "Degeneracy: a link between evolvability, robustness and complexity in biological systems." Theoretical Biology and Medical Modelling 7(6).
Whitacre, J. M. and A. Bender (2010). "Degeneracy: a design principle for achieving robustness and evolvability." Journal of Theoretical Biology 263(1): 143-153.
Whitacre, J. M. and A. Bender (2010). "Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems." Theoretical Biology and Medical Modelling 7(20).
Whitacre, J. M., P. Rohlfshagen, X. Yao and A. Bender (in press). "The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments." 11th International Conference on Parallel Problem Solving from Nature (PPSN 2010), Krakow, Poland.

Degenerate neutrality creates evolvable fitness landscapes


James Whitacre1, Axel Bender2 1 School of Information Technology and Electrical Engineering; University of New South Wales at the Australian Defence Force Academy, Canberra, Australia 2 Land Operations Division, Defence Science and Technology Organisation; Edinburgh, Australia

Abstract - Understanding how systems can be designed to be evolvable is fundamental to research in optimization, evolution, and complex systems science. Many researchers have thus recognized the importance of evolvability, i.e. the ability to find new variants of higher fitness, in the fields of biological evolution and evolutionary computation. Recent studies by Ciliberti et al (Proc. Nat. Acad. Sci., 2007) and Wagner (Proc. R. Soc. B., 2008) propose a potentially important link between the robustness and the evolvability of a system. In particular, it has been suggested that robustness may actually lead to the emergence of evolvability. Here we study two design principles, redundancy and degeneracy, for achieving robustness and we show that they have a dramatically different impact on the evolvability of the system. In particular, purely redundant systems are found to have very little evolvability while systems with degeneracy, i.e. distributed robustness, can be orders of magnitude more evolvable. These results offer insights into the general principles for achieving evolvability and may prove to be an important step forward in the pursuit of evolvable representations in evolutionary computation. Keywords: degeneracy, evolutionary computation, evolvability, neutral networks, optimization, redundancy, robustness.

In EC, the study of evolvability has mostly focused on a closely related topic; the searchability of a fitness landscape. Because almost every aspect of the design of an Evolutionary Algorithm (EA) can influence its ability to search a particular fitness landscape, there are a broad number of ways in which this problem has been studied. One approach that will not be discussed here, is the design and implementation of variation operators, which we consider to include also the learning of the epistatic linkage between genes, the development of metamodels of fitness functions, and the discovery of building blocks. Although we neglect these issues here, clearly how variation is imposed on a population does influence evolvability [11] as well as the effectiveness of a search process, e.g. see [12]. Another useful way to study evolvability is within the context of the so-called representation problem. When designing an EA, it is necessary to represent a problem in parametric form (i.e. the genotype) that is then expressed (as a phenotype) through some mapping process and is finally evaluated for fitness. The challenge is to develop a good mapping from genotype to phenotype (G:P mapping) so that the fitness landscape is searchable from the perspective of the search bias ingrained within an EA. The encoding of the genotype (i.e. representation) has been actively studied in the EC community [13] [14] [15] [16] [17] [18] [19] [20] [7] [8] [9]. For a recent book on the subject, see [21]. Studying evolvability as a representation problem allows us to use knowledge of the G:P mapping process in biology to inform studies in EC. A number of EC studies have investigated features of the biological G:P mapping process, such as mechanisms for expressing complex phenotypes from compact genetic representations. This is seen for instance in the study of G:P mappings that incorporate protein expression [13] [14] or that simulate biological growth and development [18] [19]. 
These approaches seem promising, given the importance of development in the evolvability of a species [22] (but also see [23]). Recently, evolutionary biology has had a resurgent interest in the role of fitness neutrality in evolution [2] [24] [25] [26] [27] [28] [29] [10] [5]. These developments have been followed by the EC community, and some have started to investigate whether increasing neutrality (e.g. artificially introducing a many-to-one mapping between genotypes and phenotypes) can improve the evolvability of a search process

28.

Introduction

Evolvability describes a systems ability to discover new variants of higher fitness. The importance of evolvability is well recognized by many researchers studying biological evolution [1] [2] [3] [4] [5] and Evolutionary Computation (EC) [6] [7] [8] [9]. By describing natural selection as a process of retaining fitter variants, Darwin implicitly assumed that repeated iterations of variation and selection would result in the successive accumulation of useful variations [3]. However, decades of research applying Darwinian principles to computer models have irrefutably demonstrated that the founding principles of natural selection are an incomplete recipe for evolving systems of unbounded complexity. In computer simulations, adaptive changes (i.e. innovations) are at best finite and at worst short-lived. Understanding the origin of innovations is one of the most important open questions that a theory of evolution must still address [10].

[30] [15] [16] [20] [7] [8] [9]. The most common appr . approach in these studies has been to introduce a basic genetic redundancy [30] [15] [8] [9]. Although some studies have . suggested that simple redundant forms of neutrality can improve an EAs evolvability, others have questioned the actual utility of fitness landscape neutrality that is introduced through redundant encodings [16]. A few studies have investigated neutrality more closely and have considered different ways that neutrality can be introduced. In [20], it is suggested that redundancy in the , G:P mapping is only useful when the genotypes that map to the same phenotype are genetically similar (i.e. close in genotype space) and when higher fitness phenotypes are overrepresented. In [7], evidence is provided that weak , coupling between genes (described as reduced ruggedness in oupling a fitness landscape) is needed for neutrality to enhance evolvability. This study investigates different forms of neutrality that are inspired by observations of biological systems. In particular, the neutrality is generated through mechanisms for achieving robust phenotypes. Our chief concern is to understand the necessary conditions for evolvability, how these conditions are attained in biological systems, as well as the origins of useful neutrality in evolution. lity In the next section, we define phenotypic variability and explain why it is an important precondition and useful surrogate measure for evolvability. We then touch upon recent developments that have indicated evolvability might be an emergent property of robust complex systems. We also mergent introduce redundancy and degeneracy as two distinct design principles for achieving robustness and neutrality in biological systems. Section 30 presents a simulation model that is used to investigate how these design concepts influence evolvability. 
The results in Section 4 point to an important role for degeneracy (and not redundancy) in the emergence of evolvability. A brief discussion and conclusions finish the paper in Sections 5 and 6. ions

capacity to generate heritable phenotypic variation [4]. To variation further clarify the meaning of this definition, it is worth defin differentiating between phenotypic variation and phenotypic variability (evolvability) [11]. Phenotypic variation is the . simultaneous existence of distinct phenotypes (e.g. in a population); i.e. it is a directly measurable property of a set of distinct phenotypes. On the other hand, phenotypic variability is a dispositional concept, namely the potential or the propensity for phenotypic variation. More precisely, it is the total accessibility of distinct phenotypes. As with other studies [1] [2] [5], we thus use phenotypic variability as a phenot proxy for a systems evolvability.

Robustness and Evolvability


Recent studies [2] [5] have indicated that robustness may allow for, or even encourage, evolution in biology. At first this may seem surprising, since increasing robustness appears to be in direct conflict with the requirements of evolvability. As illustrated in Figure 20, the conflict comes from the apparently simultaneous requirement to robustly maintain developed phenotypes while continually exploring and finding new ones. For example, species are highly robust to internal and external perturbations, while on the other hand evolution has demonstrated a capacity for continual innovation over billions of years. In spite of this apparent conflict, these studies suggest that increasing robustness can sometimes enhance a system's evolvability [2] [5]. In [2] [5] it was speculated that robustness increases evolvability largely through the existence of a neutral network that extends far throughout the fitness landscape. On the one hand, robustness is achieved through a connected network of equivalent (or nearly equivalent) phenotypes. Because of this connectivity, we know that some mutations or perturbations will leave the phenotype unchanged [5], the extent of which depends on the local network topology.

29.

Robustness and Evolvability

Evolvability
Many different definitions of evolvability exist in the literature (e.g. [6] [5] [11]), so it is important to articulate what we mean when we use this term. In general, evolvability is concerned with the selection of new phenotypes. It requires an ability to generate distinct phenotypes and it requires that some of these phenotypes have a non-negligible probability of being selected by the environment. Given the important role the environment plays in the selection process, studies of biological evolution often consider the ability to generate distinct phenotypes as an important precondition and a useful proxy for evolvability. Similarly, in this study we use Kirschner and Gerhart's definition, which defines evolvability as an organism's

Figure 20: Conflicting forces of robustness and evolvability. A system (central node) is exposed to changing conditions (outer nodes). Robustness of a phenotype requires minimal variation (left), while the discovery of new phenotypes requires exploration of a large number of phenotypic variants (right).

On the other hand, evolvability is achieved over the long term by movement across a neutral network that reaches over widely different regions of the fitness landscape. This assumes that different regions of the landscape can access very distinct phenotypes, as was found

to occur in the study of artificial gene regulatory networks in [2]. In short, the size and topology of the neutral network could allow evolution to explore a broad range of phenotypes while maintaining core functionalities. The work by Ciliberti et al. in [2] was not the first to highlight the importance of neutral networks in evolution. A neutral theory of molecular evolution was formulated by Kimura [31], and others have studied neutral networks in computer models of biological systems [26]. The novelty of Ciliberti et al.'s work is the demonstrated expansive range of accessible phenotypes, which they believe emerges as a consequence of robust phenotypic expression. This result leads Ciliberti et al. to the exciting (but still tentative) conclusion that a causality exists between reduced phenotypic variation (increased robustness) and enhanced phenotypic variability (increased evolvability). In fact, the authors go even further and suggest that robustness alone is not sufficient and that the topology of the neutral network could also matter greatly.

perform similar roles (i.e. are interchangeable) under certain conditions, yet can play distinct roles in others.

30.

Experimental Setup

This study investigates whether the design principles for achieving neutrality and robustness will impact a system's evolvability in different ways. We use an exploratory abstract model that has been developed to unambiguously distinguish between redundancy and degeneracy concepts and allows us to explore in detail the relationship between these design principles and evolvability. To help ground the work, we present the model within the context of a transportation fleet mix problem. However, we are confident that these robustness design principles and their influence on evolvability are more general and can be related to other contexts, including operations, planning, and evolution.

Transportation model
In its simplest form, the transportation fleet model consists of a set of vehicles and is specified by the types of tasks that each vehicle can accomplish. In particular, we define a set of n vehicles and m task types. Vehicles are characterized by a matrix Φ with components Φij, which take a value of one if vehicle type i is capable of doing task type j and zero otherwise. The tasks that are allocated to each vehicle define a vehicle's state vector, which is given by Ci. The vector components Cij denote the number of tasks of type j that are allocated to a vehicle of type i within the fleet. Without loss of generality, we assume that each state Cij takes a value of zero whenever Φij is zero. Over some unspecified period of time, each vehicle is assumed to be able to accomplish at most τ tasks, i.e. for each vehicle type i, Σj≤m Cij Φij ≤ τ. Each vehicle is also restricted to be capable of dealing with only two distinct types of tasks (e.g. see Figure 21), i.e. for each vehicle type i we have Σj≤m Φij = 2. In the model, the matrix Φ defines the internal changeable components (i.e. genotype) of a given fleet design. In particular, single mutations to the settings of Φ act to replace one vehicle with another vehicle that may have different task capabilities. We elaborate on these mutations in more detail shortly. A fleet's utilization or phenotype TP is defined as the fleet's readiness to accomplish particular tasks. We can define each phenotypic trait of a fleet as a vector whose components contain, for each task type, the number of tasks that all of the vehicles in the fleet are ready to accomplish. For a given task type j, the trait vector component is TjP = Σi≤n Cij Φij. The current operating environment TE for a fleet consists of a set of tasks that need to be accomplished. The fitness F is then defined in (1). As can be seen from this definition, a fleet is penalized for tasks that it is not prepared to accomplish.
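The fleet model just described can be sketched in a few lines of Python. This is our own illustrative construction, not the authors' code: the names `phi`, `C`, `tau` stand for the capability matrix, state matrix, and per-vehicle capacity, and the sizes are arbitrary. Here each vehicle's full capacity is split across its two feasible task types.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 8, 4, 5  # vehicles, task types, per-vehicle capacity (illustrative values)

# Capability matrix Phi: each vehicle type can perform exactly two task types.
phi = np.zeros((n, m), dtype=int)
for i in range(n):
    j = rng.integers(m)
    phi[i, [j, (j + 1) % m]] = 1

# State matrix C: task allocations, zero wherever Phi is zero; each vehicle's
# total allocation respects its capacity tau.
C = np.zeros((n, m), dtype=int)
for i in range(n):
    capable = np.flatnonzero(phi[i])
    C[i, capable] = rng.multinomial(tau, [0.5, 0.5])  # split tau over the two task types

def trait_vector(phi, C):
    """Phenotype: T^P_j = sum_i C_ij * Phi_ij, the fleet's readiness for task type j."""
    return (C * phi).sum(axis=0)

print(trait_vector(phi, C))
```

With this representation, a "mutation" is simply a change to one row of `phi`, while phenotypic control is a change to `C` within the zero pattern fixed by `phi`.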

Design principles for achieving robustness


There are two design principles that are believed to play a role in achieving robustness in biological systems; redundancy and distributed robustness [32] [33]. Redundancy is an easily recognizable design principle that is prevalent in both biological and man-made systems. Here, redundancy is used to refer to a redundancy of parts, that is, identical parts that have identical functionality. It is a common feature in engineered systems where redundancy provides robustness against environmental variations of a very specific type. In particular, redundant parts can be used to replace parts that fail or can be used to augment output when demand for a particular output increases.4 Distributed robustness emerges through the actions of multiple dissimilar parts [33] [34]. It is in many ways unexpected because it only arises in complex systems where heterogeneous components have multiple interactions with each other. In our experiments we demonstrate that distributed robustness can be achieved through degeneracy. Degeneracy is ubiquitous in biology as evidenced by the numerous examples provided by Edelman and Gally [32]. Degeneracy, sometimes also referred to as partial redundancy, is a term used in biology to refer to conditions where there is a partial overlap in the functions or capabilities of components [32]. In particular, degeneracy refers to conditions where we have structurally distinct components (but also modules and pathways) that can

4 This definition of redundancy is not identical to other uses in the EC literature. In many papers, the term is used more generally to refer to the existence of a many-to-one G:P mapping.

F(T^P) = \sum_{j=1}^{m} \begin{cases} 0, & T_j^P > T_j^E \\ \left(T_j^P - T_j^E\right)^2, & \text{otherwise} \end{cases} \qquad (1)
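Read as code, Eq. (1) sums a squared shortfall over task types: no penalty where readiness meets demand, a quadratic penalty where it falls short. The sketch below is a direct transcription (our function name; `traits_p` and `traits_e` are the vectors T^P and T^E), with lower values meaning better fitness.

```python
import numpy as np

def fitness(traits_p, traits_e):
    """Eq. (1): F = sum_j [0 if T^P_j > T^E_j else (T^P_j - T^E_j)^2].
    A fleet is penalized only for tasks it is not prepared to accomplish."""
    d = np.asarray(traits_p, dtype=float) - np.asarray(traits_e, dtype=float)
    return float(np.sum(np.where(d > 0, 0.0, d) ** 2))

print(fitness([5, 3, 4], [4, 5, 4]))  # only the second task type is under-served
```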

Connections between nodes represent genetic mutations where an existing vehicle in the fleet is replaced with a different type of vehicle. A neutral network is then defined as a connected graph of nodes (within the fitness landscape), for which each and every node contains a fleet of the same fitness. Alternatively, one can think of the neutral network as a connected set of genotypes, in which each genotype can reach every other by local movements in genotype space without degrading the system's fitness below a stated threshold. In these experiments, we relax the neutrality criterion such that all systems within δ% of the optimal fleet fitness are considered to have equivalent fitness. This neutrality relaxation is necessary in order to consider satisficing behavior [35]. Justifications for considering an approximate neutrality are varied in the literature, but often are based on constraints observed in physical environments that lead to reductions in selection pressure or limitations to perfect selection. Similar to [5], we also define a 1-neighborhood representing all non-neutral nodes connected to the neutral network. These nodes correspond to changes in the fleet that cause a non-neutral change of fitness and that are also reachable from the neutral network. Evolvability (phenotypic variability) of a fleet design is then defined as the total count of unique phenotypes that can be accessed directly from the neutral network (i.e. unique phenotypes within the 1-neighborhood). To understand why this measure of evolvability is meaningful, we elaborate on a possible interpretation of genetic mutations to the fleet. First, assume that a genetic mutation to the fleet represents the replacement of a vehicle type with a new type of vehicle that is not suitable for any of the currently existing task types. For the purposes of our analysis, this is analogous to a gene deletion or vehicle failure within a fleet.
However, it is worth considering what might happen if these new vehicle types could in some rare cases provide an opportunity to achieve new types of tasks that were not conceivable during the fleet design/planning process. Furthermore, assume that the emergence of new task capabilities is dependent on the environment and the system's phenotype. By allowing for the possibility that these mutations might present new opportunities, the 1-neighborhood obtains new meaning. In particular, although the fleet fails the fitness test from (1), we simply describe these as non-neutral phenotypes and do not make a priori judgments on the fleet's utility. Hence, while we cannot directly model innovation, we consider the diversity of these phenotypes a precondition for innovation.

A fleet attempts to satisfy environmental conditions through control over its phenotype, which involves changing the settings of the vehicle states C. We implement an ordered asynchronous updating of C where each vehicle conducts a local search and evaluates the changes to fleet fitness resulting from an incremental increase or decrease in the state value of the vehicle. In other words, we reallocate the vehicle to improve its utilization for the vehicle's set of feasible task types. A change in state value is kept if it improves system fitness. Unless stated otherwise, updating of component state values is stopped once the fleet fitness converges to a stable fitness value.5 Degeneracy and redundancy are modeled by constraining the setting of the matrix Φ, which acts to control how the capabilities of vehicles are able to overlap. In the purely redundant model, vehicles are placed into subsets in which all vehicles are genetically identical. In other words, vehicles within a subset can only influence the same set of traits (but are free to take on distinct state values). In the degenerate model, a vehicle can only have a partial overlap in its capabilities when compared with any other vehicle. A simple illustration of the difference between these two design principles is given in Figure 21.
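The ordered asynchronous update can be sketched as follows. This is a minimal interpretation under stated assumptions (our helper names; the fitness and trait functions mirror Eq. (1) and the trait definition, and the capacity bound `tau` is treated as "at most τ tasks per vehicle"):

```python
import numpy as np

def trait_vector(phi, C):
    return (C * phi).sum(axis=0)

def fitness(tp, te):
    d = np.asarray(tp, dtype=float) - np.asarray(te, dtype=float)
    return float(np.sum(np.where(d > 0, 0.0, d) ** 2))

def local_search(phi, C, te, tau):
    """Each vehicle in turn tries +/-1 changes to its feasible task allocations,
    keeping any change that improves fleet fitness; repeat until convergence."""
    C = C.copy()
    improved = True
    while improved:
        improved = False
        for i in range(C.shape[0]):
            for j in np.flatnonzero(phi[i]):
                for step in (1, -1):
                    trial = C.copy()
                    trial[i, j] += step
                    if trial[i, j] < 0 or (trial[i] * phi[i]).sum() > tau:
                        continue  # respect non-negativity and vehicle capacity
                    if fitness(trait_vector(phi, trial), te) < fitness(trait_vector(phi, C), te):
                        C, improved = trial, True
    return C
```

For example, a single vehicle capable of both of two task types, starting from an empty allocation, converges to the demanded allocation: `local_search(np.array([[1, 1]]), np.zeros((1, 2), dtype=int), np.array([2, 3]), tau=5)` yields `[[2, 3]]`, at which point the fleet's penalty is zero.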
[Figure: genes/vehicle types mapped to traits/task types under the two schemes, Redundancy (left) and Degeneracy (right).]
Figure 21: Illustration of G:P mapping constraints in the models of redundancy and degeneracy. A vehicle may be assigned many tasks; however, the types of tasks assigned are restricted by the G:P mapping used in a particular fleet.
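One concrete way to realize the two constraint schemes of Figure 21 is sketched below. The grouping rules are our own minimal construction: in the redundant scheme, vehicles form subsets with identical, non-overlapping capability pairs; in the degenerate scheme, consecutive vehicle types overlap in exactly one task type, forming a ring.

```python
import numpy as np

def build_phi(n, m, scheme):
    """Capability matrix under the redundancy or degeneracy constraint.
    'redundant': vehicles share identical task pairs; distinct pairs are disjoint.
    'degenerate': vehicle i covers tasks (i mod m, i+1 mod m) -> one-task overlaps."""
    assert m % 2 == 0
    phi = np.zeros((n, m), dtype=int)
    for i in range(n):
        if scheme == "redundant":
            k = i % (m // 2)
            phi[i, [2 * k, 2 * k + 1]] = 1
        else:
            k = i % m
            phi[i, [k, (k + 1) % m]] = 1
    return phi

red = build_phi(4, 4, "redundant")
deg = build_phi(4, 4, "degenerate")
# Pairwise capability overlaps: redundant vehicles are identical (2) or disjoint (0);
# degenerate vehicles never share more than one task type.
overlap = lambda p: {int(p[a] @ p[b]) for a in range(len(p)) for b in range(a + 1, len(p))}
print(overlap(red), overlap(deg))
```

The pairwise-overlap check makes the distinction explicit: the redundant fleet only ever produces overlaps of 0 or 2 task types, while the degenerate ring produces overlaps of 0 or 1.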

Measuring Evolvability
Here we describe the steps used to analyze the evolvability of a fleet, which are similar to those outlined in [2]. The general aim is to discover and analyze, within the fitness landscape, a neutral network and the immediate neighborhood of that network. First, we consider a network representation of a fitness landscape where each node in the network represents a particular fleet and environment. In our transportation model, this means that a node in the fitness landscape is characterized by a genotype Φ, a phenotype TP, and a fitness F(TP).
5 This phenotypic control might also be interpreted as local search or Baldwinian evolution, depending on the context.

Fitness landscape exploration

Measurement of evolvability requires an exploration of both the neutral network and the 1-neighborhood. Starting
with an initial fleet and a given external environment, defined as the first node in the neutral network, the neutral network and 1-neighborhood are explored by iterating the following steps: 1) select a node from the neutral network at random; 2) mutate the fleet; 3) allow the fleet to modify its phenotype in order to adapt to the new conditions; and 4) if fitness is within δ% of the initial fleet fitness, the fleet is added to the neutral network; otherwise it is added to the 1-neighborhood. Additions to the neutral network and 1-neighborhood must represent unique genotypes, meaning that duplicate genotypes are discarded when encountered by the search process. The size of the neutral network and 1-neighborhood are too large to allow for an exhaustive search, so the neutral network search algorithm includes a stopping criterion of 20,000 steps (genetic changes). Remaining conditions: Unless stated otherwise, the following experimental conditions are observed in all experiments. Vehicle state values are randomly initialized as integer values between 0 and 10, and genotypes are randomly initialized but constrained to meet the requirements of degeneracy or redundancy, depending on the model tested. The environment is defined with an optimal phenotype that is identical to the initial fleet phenotype and does not change once initialized. The neutrality threshold is set to δ = 5%, the number of task types is set to m = 16, and the number of vehicles in a fleet is set to n = 2m. Ad hoc experiments varying the settings of n and m did not significantly alter our results. Results are averaged over 50 runs.
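The four-step walk above can be sketched as a small harness. This is schematic, not the authors' code: `evaluate` is assumed to return a fitness and a hashable phenotype after the fleet has re-adapted, `mutate` stands in for a vehicle replacement, and an absolute tolerance `tol` stands in for the δ% neutrality criterion.

```python
import numpy as np

rng = np.random.default_rng(7)

def explore(phi0, evaluate, mutate, tol, steps=1000):
    """Walk the neutral network: pick a known-neutral genotype, mutate it, and
    classify the adapted result as neutral or as part of the 1-neighborhood."""
    f_ref, _ = evaluate(phi0)
    neutral = {phi0.tobytes(): phi0}   # unique neutral genotypes
    neighborhood = set()               # unique non-neutral phenotypes
    for _ in range(steps):
        genotypes = list(neutral.values())
        base = genotypes[rng.integers(len(genotypes))]  # step 1: random neutral node
        cand = mutate(base.copy())                      # step 2: mutate the fleet
        f, pheno = evaluate(cand)                       # step 3: fleet re-adapts inside evaluate
        if abs(f - f_ref) <= tol:                       # step 4: classify the result
            neutral.setdefault(cand.tobytes(), cand)
        else:
            neighborhood.add(pheno)
    return neutral, neighborhood

# Toy check: fitness = number of capabilities, so every single flip is non-neutral.
phi0 = np.eye(2, dtype=int)
evaluate = lambda p: (int(p.sum()), tuple(p.sum(axis=0)))
def mutate(p):
    i, j = rng.integers(2), rng.integers(2)
    p[i, j] ^= 1
    return p
neutral, neighborhood = explore(phi0, evaluate, mutate, tol=0, steps=200)
```

In the toy check the neutral network never grows beyond the starting genotype, while the 1-neighborhood accumulates the distinct phenotypes reachable by one mutation, which is exactly the quantity used here as the evolvability measure.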

(Figure 23 reports the final calculations for neutral network size and evolvability that are obtained after running the search algorithm for 20,000 genotype changes.)
Figure 22: Count of fitness-neutral genotypes (left) and unique non-neutral phenotypes (right) discovered during search, for the degenerate and redundant fleets (x-axes: search steps ×100).
Figure 23: Estimated neutral network size (left) and evolvability, i.e. unique phenotypes (right), as new redundant and degenerate vehicles are added to the respective fleets (x-axes: % excess resources).

31.

Results

First, we investigate whether the different design principles influence the size of the neutral network and the evolvability of the fleet. Results are shown in Figure 22 for the search algorithm's exploration of the neutral network and 1-neighborhood. Presenting the results in this way allows one to observe the rate at which new neutral genotypes and non-neutral phenotypes (i.e. the innovation rate [36]) are being discovered during the search process. On average, after 20,000 search steps the degenerate system is found to have a much larger neutral network (NNdeg = 1309, NNred = 576) and is at least 10-times more evolvable. This seems to suggest that even moderate increases in the size of the neutral network can lead to dramatic improvements in evolvability. In order to explore this idea further, we modify the models and introduce additional excess vehicle resources to both fleet designs. In particular, we employ the same experimental conditions used previously, except now we increase the number of vehicles in each fleet. By increasing the amount of resources while maintaining the same number of task requirements, we expect that both types of fleets will become more robust to gene deletions and will hence be able to establish larger neutral networks. Results are reported in Figure 23.

As indicated in Figure 23, adding excess resources increases the size of the neutral network for both types of fleets. Surprisingly, however, the redundant system does not display any substantial increase in evolvability as its neutral network grows. In contrast, the degenerate system shows large increases in evolvability, becoming orders of magnitude more evolvable than the redundant model with only modest increases in fleet size. The most important conclusion drawn from these results is that the size of the neutral network within a fitness landscape does not necessarily lead to differences in evolvability, which refutes our earlier speculation. This can be directly observed from the results in Figure 23 by comparing the evolvability of the different fleet types under conditions where they have similar neutral network sizes. In separate experiments (results not shown), we found that changes to the neutrality threshold (δ) have a similar impact on fleet behavior. Threshold relaxation results in larger neutral networks; however, only for the degenerate system does evolvability improve markedly.

32.

Discussion

These experiments provide new insights into the relationship between neutrality and evolvability. While purely redundant encodings are not likely to provide access to distinct phenotypes, degenerate robustness appears to

increase phenotypic variability and hence provides the foundation for higher system evolvability. Although these results do not directly investigate evolvable representations in EC, this study provides a theoretical basis for future developments in this area. Although we have discovered that the design principles for achieving robustness can determine whether a system is evolvable, it is still not clear exactly why this is the case. Some have suggested that the topological properties of the neutral network may partly determine whether robust phenotypic expression leads to evolvability [2]. However, preliminary analysis of the topological properties (e.g. path length, average degree) of the neutral networks studied here has not indicated substantial differences in network topology between the redundant and degenerate models once network size effects are accounted for. In this work, we did not directly evaluate the robustness of the different fleet models, so we cannot comment on the actual relationship between robustness and evolvability. Having said that, the fact that the degenerate system can effectively operate under a broader range of genotypic conditions (as evidenced by the neutral network size) suggests that such systems are more robust, at least to this type of change in conditions. Our future work will investigate the robustness afforded by these design principles to determine if the level of system robustness can account for the observed differences in evolvability. Preliminary results indicate that high robustness in redundant systems does not always result in high evolvability. Taken in light of the results presented here, this suggests that it is not simply the size of the neutral network within a fitness landscape or the existence of high robustness that determines the evolvability of a system. Instead, it could be the design principles used to achieve robustness and neutrality that matter.
Finally, we should note that the phenotypic expressiveness in our model is bounded within a predefined state space, without any possibility of elaboration. In order to achieve a greater distinctiveness in phenotypic expression, developmental processes directed by a compact genetic representation are almost certainly essential. In biology for instance, the developmental growth of a phenotype and its plasticity in the external environment is critical to the elaboration of more complex expressive forms. Hence, we are not claiming to have completely solved the evolvability question as it pertains to natural or artificial evolutionary processes. However, before attempting to tackle these grander challenges in EC and artificial life, it is important to understand how design principles can lead to accessible diversity in phenotypic expression. Here we have shown that degeneracy may play an important role in achieving this precondition of evolvability.

33.

Conclusions

This study demonstrates that the design principles used to achieve robustness/neutrality in a fitness landscape can dramatically affect the accessibility of distinct phenotypes and hence the evolvability of a system. In agreement with [16], we find that a many-to-one G:P mapping does not guarantee a highly evolvable fitness landscape. However, we also discovered that distributed robustness, or degeneracy, can result in remarkably high levels of evolvability. Degeneracy is known to be a ubiquitous property of biological systems and is believed to play an important role in achieving robustness [32]. Here we have suggested that the importance of degeneracy could be much greater than previously thought: it may act as a key enabling factor in the evolvability of complex systems.

34.

REFERENCES

[1] M. Aldana, E. Balleza, S. Kauffman, and O. Resendiz, "Robustness and evolvability in genetic regulatory networks," J. Theor. Biol., vol. 245, pp. 433-448, 2007.
[2] S. Ciliberti, O. C. Martin, and A. Wagner, "Innovation and robustness in complex regulatory gene networks," Proc. Natl. Acad. Sci. USA, vol. 104, pp. 13591-13596, 2007.
[3] S. A. Kauffman, "Requirements for evolvability in complex systems: orderly components and frozen dynamics," Physica D, vol. 42, pp. 135-152, 1990.
[4] M. Kirschner and J. Gerhart, "Evolvability," Proc. Natl. Acad. Sci. USA, vol. 95, pp. 8420-8427, 1998.
[5] A. Wagner, "Robustness and evolvability: a paradox resolved," Proc. R. Soc. Lond., Ser. B: Biol. Sci., vol. 275, pp. 91-100, 2008.
[6] J. Reisinger, K. O. Stanley, and R. Miikkulainen, "Towards an empirical measure of evolvability," GECCO, pp. 257-264, 2005.
[7] T. Smith, A. Philippides, P. Husbands, and M. O'Shea, "Neutrality and ruggedness in robot landscapes," in Congress on Evolutionary Computation, 2002, pp. 1348-1353.
[8] V. K. Vassilev and J. F. Miller, "The Advantages of Landscape Neutrality in Digital Circuit Evolution," in Evolvable Systems: From Biology to Hardware. Berlin: Springer, 2000.
[9] T. Yu and J. F. Miller, "Neutrality and the Evolvability of Boolean Function Landscape," in Proceedings of the 4th European Conference on Genetic Programming, 2001, pp. 204-217.
[10] A. Wagner, "Neutralism and selectionism: a network-based reconciliation," Nature Reviews Genetics, 2008.
[11] G. P. Wagner and L. Altenberg, "Complex adaptations and the evolution of evolvability," Evolution, vol. 50, pp. 967-976, 1996.
[12] J. M. Whitacre, "Adaptation and Self-Organization in Evolutionary Algorithms," Thesis: University of New South Wales, 2007, p. 283.
[13] P. J. Bentley, "Fractal Proteins," Genetic Programming and Evolvable Machines, vol. 5, pp. 71-101, 2004.
[14] C. Ferreira, "Gene Expression Programming: A New Adaptive Algorithm for Solving Problems," Complex Systems, vol. 13, pp. 87-129, 2001.

[15] R. E. Keller and W. Banzhaf, "Genetic programming using genotype-phenotype mapping from linear genomes into linear phenotypes," Genetic Programming, pp. 116-122, 1996.
[16] J. D. Knowles and R. A. Watson, "On the Utility of Redundant Encodings in Mutation-Based Evolutionary Search," Lecture Notes in Computer Science, pp. 88-98, 2003.
[17] S. Kumar and P. Bentley, On Growth, Form and Computers. Academic Press, 2003.
[18] J. Miller and P. Thomson, "Beyond the complexity ceiling: Evolution, emergence and regeneration," in GECCO, 2004.
[19] J. F. Miller, "Evolving a Self-Repairing, Self-Regulating, French Flag Organism," Lecture Notes in Computer Science, pp. 129-139, 2004.
[20] F. Rothlauf and D. E. Goldberg, "Redundant Representations in Evolutionary Computation," Evolutionary Computation, vol. 11, pp. 381-415, 2003.
[21] F. Rothlauf, Representations for Genetic and Evolutionary Algorithms. Heidelberg: Springer Verlag, 2006.
[22] M. J. West-Eberhard, "Evolution in the light of developmental and cell biology, and vice versa," Proc. Natl. Acad. Sci. USA, vol. 95, pp. 8417-8419, 1998.
[23] S. A. Newman and G. B. Mueller, "Epigenetic mechanisms of character origination," Journal of Experimental Zoology, vol. 288, pp. 304-317, 2000.
[24] A. L. Hughes, "Leading Edge of the Neutral Theory of Molecular Evolution," Ann. N. Y. Acad. Sci., vol. 1133, pp. 162-179, 2008.
[25] T. Ohta, "Near-neutrality in evolution of genes and gene regulation," Proc. Natl. Acad. Sci. USA, vol. 99, pp. 16134-16137, 2002.
[26] P. Schuster, W. Fontana, P. F. Stadler, and I. L. Hofacker, "From Sequences to Shapes and Back: A Case Study in RNA Secondary Structures," Proc. R. Soc. Lond., Ser. B: Biol. Sci., vol. 255, pp. 279-284, 1994.
[27] E. van Nimwegen and J. P. Crutchfield, "Metastable evolutionary dynamics: Crossing fitness barriers or escaping via neutral paths?," Bulletin of Mathematical Biology, vol. 62, pp. 799-848, 2000.
[28] E. van Nimwegen, J. P. Crutchfield, and M. Huynen, "Neutral evolution of mutational robustness," Proc. Natl. Acad. Sci. USA, vol. 96, pp. 9716-9720, 1999.
[29] J. Visser, J. Hermisson, G. P. Wagner, L. A. Meyers, H. Bagheri-Chaichian, J. L. Blanchard, L. Chao, J. M. Cheverud, S. F. Elena, and W. Fontana, "Perspective: Evolution and Detection of Genetic Robustness," Evolution, vol. 57, pp. 1959-1972, 2003.
[30] W. Banzhaf, "Genotype-Phenotype-Mapping and Neutral Variation - A Case Study in Genetic Programming," Lecture Notes in Computer Science, pp. 322-332, 1994.
[31] M. Kimura, "Solution of a Process of Random Genetic Drift with a Continuous Model," Proc. Natl. Acad. Sci. USA, vol. 41, pp. 144-150, 1955.
[32] G. M. Edelman and J. A. Gally, "Degeneracy and complexity in biological systems," Proc. Natl. Acad. Sci. USA, vol. 98, pp. 13763-13768, 2001.

[33] A. Wagner, "Distributed robustness versus redundancy as causes of mutational robustness," BioEssays, vol. 27, pp. 176-188, 2005.
[34] A. Wagner, "Robustness against mutations in genetic networks of yeast," Nat. Genet., vol. 24, pp. 355-362, 2000.
[35] H. A. Simon, A Behavioral Model of Rational Choice. Santa Monica: Rand Corp, 1953.
[36] M. Ebner, M. Shackleton, and R. Shipman, "How neutral networks influence evolvability," Complexity, vol. 7, pp. 19-33, 2001.

Whitacre and Bender Theoretical Biology and Medical Modelling 2010, 7:20 http://www.tbiomed.com/content/7/1/20

RESEARCH

Open Access

Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems
James M Whitacre*1 and Axel Bender2

* Correspondence: jwhitacre79@yahoo.com
1 School of Computer Science, University of Birmingham, Edgbaston, UK

Full list of author information is available at the end of the article

Abstract

A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.

Introduction

Robustness reflects the ability of a system to maintain functionality or some measured output as it is exposed to a variety of external environments or internal conditions. Robustness is observed whenever there exists a sufficient repertoire of actions to counter perturbations [1] and when a system's memory, goals, or organizational/structural bias can elicit those responses that match or counteract particular perturbations, e.g. see [2]. In many of the complex adaptive systems (CAS) discussed in this paper, the actions of agents that make up the system are based on interactions with a local environment, making these two requirements for robust behavior interrelated. When robustness is observed in such CAS, we generally refer to the system as being self-organized, i.e. stable properties spontaneously emerge without invoking centralized routines for matching actions and circumstances. Many mechanisms that lead to robust properties have been distilled from the myriad contexts in which CAS, and particularly biological systems, are found [3-21]. For instance, robustness can form from loosely coupled feedback motifs in gene regulatory networks, from saturation effects that occur at high levels of flux in metabolic reactions, from spatial and temporal modularity in protein folding, from the functional redundancy in genes and
2010 Whitacre and Bender; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

metabolic pathways [22,23], and from the stochasticity of dynamics occurring during multi-cellular development [24] or within a single cell's interactome [25]. Although the mechanisms that lead to robustness are numerous and diverse, subtle commonalities can be found. Many mechanisms that contribute to stability act by responding to perturbations through local competitive interactions that appear cooperative at a higher level. A system's actions are rarely deterministically bijective (i.e. characterized by a one-to-one mapping between perturbation and response) and instead proceed through a concurrent stochastic process that in some circumstances is described as exploratory behavior [26]. This paper proposes a new basic mechanism that can lead to both local and distributed robustness in CAS. It results from a partial competition amongst system components and shares similarities with several of the mechanisms we have just mentioned. In the following, we speculate that this previously unexplored form of robustness may readily emerge within many different systems comprising multi-functional agents and may afford new insights into the exceptional flexibility that is observed within some complex adaptive systems. In the next section we summarize accepted views of how diversity and degeneracy can contribute to robustness of system traits. We then present a mechanism that describes how a system of degenerate agents can create a widespread and comprehensive response to perturbations - the networked buffering hypothesis (Section 3). In Section 4 we provide evidence for the realisation of this hypothesis. We particularly describe the results of simulations that demonstrate that distributed robustness emerges from networked buffering in models of genome:proteome mappings. In Section 5 we discuss the importance of this type of buffering in natural and human-made CAS, before we conclude in Section 6. Three appendices supplement the content of the main body of this paper.
In Appendix 1 we provide some detailed definitions for (and discriminations of ) the concepts of degeneracy, redundancy and partial redundancy; in Appendix 2 we give background materials on degeneracy in biotic and abiotic systems; and in Appendix 3 we provide a technical description of the genome:proteome model that is used in our experiments.

Robustness through Diversity and Degeneracy

As described by Holland [27], a CAS is a network of spatially distributed agents which respond concurrently to the actions of others. Agents may represent cells, species, individuals, firms, nations, etc. They can perform particular functions and make some of their resources (physical assets, knowledge, services, etc.) work for the system. The control of a CAS tends to be largely decentralized. Coherent behavior in the system generally arises from competition and cooperation between agents; thus, system traits or properties are typically the result of the interplay between many individual agents.

Degeneracy refers to conditions where multi-functional CAS agents share similarities in only some of their functions. This means there are conditions where two agents can compensate for each other, e.g. by making the same resources available to the system, or can replace each other with regard to a specific function they both can perform. However, there are also conditions where the same agents can do neither. Although degeneracy has at times been described as partial redundancy, we distinctly differentiate between these two concepts. Partial redundancy only emphasizes the many-to-one mapping between components and functions while degeneracy concerns many-to-many mappings. Degeneracy is thus a combination of both partial redundancy and functional plasticity (explained below). We discuss the differences between the various concepts surrounding redundancy and degeneracy in Appendix 1 and Figure 1.

Figure 1 Illustration of degeneracy and related concepts. Components (C) within a system have a functionality that depends on their context (E) and can be functionally active (filled nodes) or inactive (clear nodes). When a component exhibits qualitatively different functions (indicated by node color) that depend on the context, we refer to that component as being functionally plastic (panel a). Pure redundancy occurs when two components have identical functions in every context (panels b and c). Functional redundancy is a term often used to describe two components with a single (but same) function whose activation (or capacity for utilization) depends on the context in different ways (panel d). Degeneracy describes components that are functionally plastic and functionally redundant, i.e. where the functions are similar in some situations but different in others (panel e).

On the surface, having similarities in the functions of agents provides robustness through a process that is intuitive and simple to understand. In particular, if there are many agents in a system that perform a particular service then the loss of one agent can be offset by others. The advantage of having diversity amongst functionally similar agents is also straightforward to see. If agents are somewhat different, they also have somewhat different weaknesses: a perturbation or attack on the system is less likely to present a risk to all agents at once. This reasoning reflects common perceptions about the value of diversity in many contexts where CAS are found. For instance, it is analogous to what is described as functional redundancy [28,29] (or response diversity [30]) in ecosystems, it reflects the rationale behind portfolio theory in economics and biodiversity management [31-33], and it is conceptually similar to the advantages of ensemble approaches in machine learning or the use of diverse problem solvers in decision making [34]. In short, diversity is commonly viewed as advantageous because it can help a system to consistently reach and sustain desirable settings for a single system property by providing multiple distinct paths to a particular state. In accordance with this thinking, examples from many biological contexts have been given that illustrate degeneracy's positive influence on the stability of a single trait, e.g. see Appendix 2.

Although this view of diversity is conceptually and practically useful, it is also simplistic and, so we believe, insufficient for understanding how common types of diversity such as degeneracy will influence the robustness of multiple interdependent system traits. CAS are frequently made up of agents that influence the stability of more than just a single trait because they have a repertoire of functional capabilities. For instance, gene products act as versatile building blocks that form complexes with many distinct targets [35-37]. These complexes often have unique and non-trivial consequences inside or outside the cell. In the immune system, each antigen receptor can bind with (i.e. recognize) many different ligands and each antigen is recognized by many receptors [38,39]; a feature that has only recently been integrated into artificial immune system models, e.g. [40-42]. In gene regulation, each transcription factor can influence the expression of several different genes with distinct phenotypic effects. Within an entirely different domain, people in organizations are versatile in the sense that they can take on distinct roles depending on who they are collaborating with and the current challenges confronting their team. More generally, the function an agent performs often depends on the context in which it finds itself. By context, we are referring to the internal states of an agent and the demands or constraints placed on the agent by its environment. As illustrated further in Appendix 2, this contextual nature of an agent's function is a common feature of many biotic and abiotic systems and is referred to hereafter as functional plasticity. Because agents are generally limited in the number of functions they are able to perform over a period of time, tradeoffs naturally arise in the functions an agent performs in practice.
These tradeoffs represent one of several causes of trait interdependence and they obscure the process by which diverse agents influence the stability of single traits. A second complicating factor is the ubiquitous presence of degeneracy. While one of an agent's functions may overlap with a particular set of agents in the system, another of its functions may overlap with an entirely distinct set of agents. Thus functionally related agents can have additional compensatory effects that are differentially related to other agents in the system, as we describe in more detail in the next section. The resulting web of conditionally related compensatory effects further complicates the ways in which diverse agents contribute to the stability of individual traits with subsequent effects on overall system robustness.
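The distinction between the mapping types discussed above can be made concrete with a small sketch. The following Python snippet is our own illustrative formalization (not code from this paper): it encodes each component as a mapping from contexts to the function it performs there, and classifies pairs of components according to the taxonomy of Figure 1. The two-context simplification and all names are our assumptions.

```python
# Illustrative sketch of the Figure 1 taxonomy (our own formalization).
# A component is a dict mapping each context to the function it performs
# there, or to None when it is inactive in that context.

CONTEXTS = ["E1", "E2"]

def functionally_plastic(comp):
    """True if the component performs qualitatively different functions
    in different contexts (Fig. 1a)."""
    active = {f for f in comp.values() if f is not None}
    return len(active) > 1

def purely_redundant(a, b):
    """True if two components behave identically in every context
    (Fig. 1b, c)."""
    return all(a[c] == b[c] for c in CONTEXTS)

def degenerate(a, b):
    """True if two components share a function in some context but
    differ in another (Fig. 1e): a many-to-many mapping."""
    shared = any(a[c] == b[c] and a[c] is not None for c in CONTEXTS)
    differs = any(a[c] != b[c] for c in CONTEXTS)
    return shared and differs

c1 = {"E1": "F1", "E2": "F1"}   # single-function component
c2 = {"E1": "F1", "E2": "F1"}   # identical twin of c1
c3 = {"E1": "F1", "E2": "F2"}   # plastic: switches function with context
```

Here c1 and c2 are purely redundant, while c1 and c3 overlap only in context E1: the degeneracy relation discussed above.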

Networked Buffering Hypothesis

Previous authors discussing the relationship between degeneracy and robustness have described how an agent can compensate for the absence or malfunctioning of another agent with a similar function and thereby help to stabilize a single system trait. One aim of this paper is to show that when degeneracy is observed within a system, a focus on single-trait robustness can draw attention away from a form of system robustness that spontaneously emerges as a result of a concurrent, distributed response involving chains of mutually degenerate agents. We organize these arguments around what we call the networked buffering hypothesis (NBH). The central concepts of our hypothesis are described by referring to the abstract depictions of Figure 2; however, the phenomenon itself is not limited to these modeling conditions, as will be elucidated in Section 5.


Figure 2 Conceptual model of a buffering network. Each agent is depicted by a pair of connected nodes that represent two types of tasks/functions that the agent can perform, e.g. see the dashed circle in panel a). Node pairs that originate or end in the same node cluster ("functional group") correspond to agents that can carry out the same function and thus are interchangeable for that function. Darkened nodes indicate the task an agent is currently performing. If that task is not needed then the agent is an excess resource or "buffer". Panel a) Degeneracy in multi-functional agents. Agents are degenerate when they are similar in only one type of task. Panel b) End state of a sequence of task reassignments or resource reconfigurations. A reassignment is indicated by a blue arrow with a switch symbol. The diagram illustrates a scenario in which requests for tasks in the Z functional group have increased and requests for tasks of type X have decreased. Thus resources for X are now in excess. While no agent exists in the system that performs both Z and X, a pathway does exist for the reassignment of resources (X→Y, Y→Z). This illustrates how excess resources for one type of function can indirectly support unrelated functions. Panel c) Depending on where excess resources are located, reconfiguration options are potentially large, as indicated by the different reassignment pathways shown. Panel d) A reductionist system design with only redundant system buffers cannot support broad resource reconfiguration options. Instead, an agent can only participate in system responses related to its two task type capabilities.

Consider a system comprising a set of multi-functional agents. Each agent performs a finite number of tasks where the types of tasks performed are constrained by an agent's functional capabilities and by the environmental requirement for tasks ("requests"). A system's robustness is characterized by the ability to satisfy tasks under a variety of conditions. A new "condition" might bring about the failure or malfunctioning of some agents or a change in the spectrum of environmental requests. When a system has many agents that perform the same task then the loss of one agent can be compensated for by others, as can variations in the demands for that task. Stated differently, having an excess of functionally similar agents (excess system resources) can provide a buffer against variations in task requests. In the diagrams of Figure 2, for the sake of illustration, the multi-functionality of CAS agents is depicted in an abstract "functions space". In this space, bi-functional agents
(represented by pairs of connected nodes) form a network (of tasks or functions) with each node representing a task capability. The task that an agent currently performs is indicated by a dark node, while a task that is not actively performed is represented by a light node. Nodes are grouped into clusters to indicate functional similarity amongst agents. For instance, agents with nodes occupying the same cluster are said to be similar with respect to that task type. To be clear, task similarity implies that either agent can adequately perform a task of that type, making them interchangeable with respect to that task. In Figure 2d we illustrate what we call 'pure redundancy' or simply 'redundancy': purely redundant agents are functionally identical in either none or both of the task types they can perform. In all other panels of Figure 2, we show what we call 'pure degeneracy': purely degenerate agents either cannot compensate for each other or can do so in only one of the two task types they each can carry out. Important differences in both scale and the mechanisms for achieving robustness can be expected between the degenerate and redundant system classes. As shown in Figure 2b, if more (agent) resources are needed in the bottom task group and excess resources are available in the top task group, then degeneracy allows agents to be reallocated from tasks where they are in excess to tasks where they are needed. This occurs through a sequence of reassignments triggered by a change in environmental conditions (as shown in Figure 2b by the large arrows with switch symbols) - a process that is autonomous so long as agents are driven to complete unfulfilled tasks matching their functional repertoire. Figure 2b illustrates a basic process by which resources related to one type of function can support unrelated functions. This is an easily recognizable process that can occur in each of the different systems listed in Table 1.
In fact, conditional interoperability is so common within some domains that many domain experts would consider this an entirely unremarkable feature. What is not commonly appreciated, though, is that the number of distinct paths by which reconfiguration of resources is possible can potentially be enormous in highly degenerate systems, depending on where resources are needed and where they are in excess (see Figure 2c). Conversely, this implies that it is theoretically possible for excess agent resources (buffers) in one task to indirectly support an enormous number of other tasks, thereby increasing the effective versatility of any single buffer (as can be seen by reversing the flow of reassignments in Figure 2c). Moreover, because buffers in a degenerate system are partially related, the stability of any system trait is potentially the result of a distributed, networked response within the system. For instance, resource availability can arise through an aggregated response from several of the paths shown in Figure 2c. Although the interoperability of agents may be localized, extra resources can offer huge reconfiguration opportunities at the system level. These basic attributes are not feasible in reductionist systems composed of purely redundant agents (Figure 2d). Without any partial overlap in capabilities, agents in the same functional group can only support each other and, conversely, excess resources cannot support unrelated tasks outside the group. Buffers are thus localized. In the particular example illustrated in Figure 2d, agent resources are always tied to one of two types of tasks. Although this ensures certain levels of resources will always remain available within a given group, it also means they are far less likely to be utilized when resource requirements vary, thereby reducing resource efficiency. In other words, resource buffers in purely redundant systems are isolated from each other, limiting how
versatile the system can be in reconfiguring these resources. In fact, every type of variability in task requirements needs a matching realization of redundancies. If broad reconfigurations are required (e.g. due to a volatile environment) then these limitations will adversely affect system robustness. Although such statements are not surprising, they are not trivial either because the sum of agent capabilities within the redundant and degenerate systems are identical.
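The chain of reassignments in Figure 2b can be sketched as a search over task types. In the toy Python below (our own construction; agent names and task labels are invented for illustration), an edge exists from task t1 to task t2 whenever some agent currently assigned to t1 is also capable of t2; excess capacity at one task can then reach a deficit at another through a path of single-task switches.

```python
from collections import deque

def reassignment_path(agents, surplus, deficit):
    """Breadth-first search over task types. `agents` maps an agent id to
    (capabilities, current_task), where capabilities is a pair of task
    types. Returns the chain of task types along which slack can flow
    from `surplus` to `deficit`, or None if no chain exists."""
    parent = {surplus: None}
    queue = deque([surplus])
    while queue:
        task = queue.popleft()
        if task == deficit:
            chain = []
            while task is not None:       # reconstruct the switch chain
                chain.append(task)
                task = parent[task]
            return chain[::-1]
        for caps, current in agents.values():
            if current == task:           # this agent could vacate `task`...
                other = caps[1] if caps[0] == task else caps[0] if caps[1] == task else None
                if other is not None and other not in parent:
                    parent[other] = task  # ...and cover its other capability
                    queue.append(other)
    return None

# Degenerate layout (Fig. 2a/b): capability pairs overlap only partially.
degenerate_layout = {"a1": (("X", "Y"), "X"), "a2": (("Y", "Z"), "Y")}

# Purely redundant layout (Fig. 2d): agents duplicate whole capability pairs.
redundant_layout = {"a1": (("X", "Y"), "X"), "a2": (("X", "Y"), "Y"),
                    "a3": (("Z", "W"), "Z"), "a4": (("Z", "W"), "W")}
```

With the degenerate layout, `reassignment_path(degenerate_layout, "X", "Z")` finds the chain X→Y→Z even though no single agent spans X and Z; with the redundant layout no such chain exists, mirroring the isolated buffers of Figure 2d.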

Networked Buffering in Genome:Proteome Mappings

More than half of all mutational robustness in genes is believed to be the result of distributed actions and not genetic redundancy [4]. Although a similar analysis of the origins of robustness has not taken place for other biotic contexts, there is plenty of anecdotal evidence for the prevalence of both local functional redundancy and distributed forms of robustness in biology. Degeneracy may be an important causal factor for both of these forms of robustness. Edelman and Gally have presented considerable evidence of degeneracy's positive influence on functional redundancy, i.e. single-trait stability through localized compensatory actions; see [23], Section 2 and Appendices 1 and 2. What is missing, though, is substantiation for degeneracy's capacity to cause systemic forms of robustness through distributed compensatory actions. In the previous section we hypothesized how degeneracy might elicit distributed robustness through networked sequences of functional reassignments and resource reconfigurations. To substantiate this hypothesis, we evaluate robustness in a model of genome:proteome (G:P) mappings that was first studied in [43]. In the model, systems of proteins ("agents") are driven to satisfy environmental conditions through the utilization of their proteins. Protein-encoding genes each express a single protein. Each protein has two regions that allow it to form complexes with ligands that have a strong affinity to those regions (see Figure 3). A protein's "behavior" is determined by how much time it spends interacting with each of the target ligands. The sum of protein behaviors defines the system phenotype, assuming that each protein's trait contributions are additive.
It is further assumed that genetic functions are modular [44] such that there are few or no restrictions on what types of functions can be co-expressed in a single gene or represented in a single protein. The environment is defined by the ligands available for complex formation. Each protein is presented with the same well-mixed concentrations of ligands. A phenotype that has unused proteins is energetically penalized and is considered unfit when the penalty exceeds a predefined threshold. Two types of systems are evaluated: those where the G:P mapping is purely redundant (as in the abstract representation in Figure 2d) and those where it is purely degenerate (as in Figure 2a). For more details on the model see [43] and Appendix 3.

In [43], we found that purely degenerate systems are more robust to perturbations in environmental conditions than are purely redundant ones, with the difference becoming larger as the systems are subjected to increasingly larger perturbations (Figure 4a). In addition, we measured the number of distinct null mutation combinations under which a system could maintain fitness and found that degenerate systems are also much more robust with respect to this measurement ("versatility") [43]. Importantly, this robustness improvement becomes more pronounced as the size of the systems increases (Figure 4b). We now expand on the studies of [43] by showing that the enhanced robustness in purely degenerate systems originates from distributed compensatory effects.

Figure 3 Overview of genome-proteome model. a) Genotype-phenotype mapping conditions and pleiotropy: Each gene contributes to system traits through the expression of a protein product that can bind with functionally relevant targets (based on genetically determined protein specificity). b) Phenotypic expression: Target availability is influenced by the environment and by competition with functionally redundant proteins. The attractor of the phenotype can be loosely described as the binding of each target with a protein. c) Functional overlap of genes: Redundant genes can affect the same traits in the same manner. Degenerate genes have only a partial similarity in which traits they affect.

First, in Figure 4d we repeat the experiments used to evaluate system versatility; however, we restrict the systems' response options to local actions only. More precisely, only proteins of genes that share some functional similarity to the products of the mutated genes are permitted to change their behaviors and thus participate in the system's response to gene mutations. By adding this constraint to the simulation, the possibility that distributed compensatory pathways (as described in Figures 2b and 2c) can be active is eliminated. In other words, this constraint allows us to measure the robustness that results from direct functional compensation, i.e. the type of robustness in those examples of the literature where degeneracy has been related to trait stability, e.g. see [23]. In Figure 4d the robustness of the purely redundant systems remains unchanged compared with the results in Figure 4b while the robustness of degenerate systems degrades to values that are indistinguishable from the redundant system results. Comparing the
two sets of experiments, we find that roughly half of the total robustness that is observable in the degenerate G:P models originates from non-local effects that cannot be accounted for by the relationships between degeneracy and robustness that were previously described in the literature, e.g. in [23].

Figure 4 Local and distributed sources of robustness in protein systems designed according to purely redundant and purely degenerate G:P mappings. a) Differential robustness as a function of the percentage of genes that are mutated in each protein system. Differential robustness is defined as the probability for a system phenotype to maintain fitness after it was allowed to adjust to a change in conditions (here: gene mutations). Source: [43]. b) Versatility-robustness as a function of initial excess protein resources. Versatility is measured as the number of null mutation combinations ("neutral network size") for which the system phenotype maintains fitness. Source: [43]. c) Frequency distribution for the proportion C of distinct gene products that change their function when versatility is evaluated (as in the panel b experiments) in systems with 0% initial excess resources. d) Versatility of redundant and degenerate systems when the system response to null mutations is restricted to local compensation only, i.e. gene products can only change their functional contribution if they are directly related to those functions lost as a result of a null mutation.

As further evidence of distributed robustness in degenerate G:P mappings, we use the same conditions as in Figure 4b except that now we systematically introduce single loss-of-function mutations and record the proportion C of distinct gene products that change state. In the probability distributions of Figure 4c, the redundant systems display only localized responses, as would be expected, while the degenerate systems respond to a disturbance with both small and large numbers of changes to distinct gene products. As small amounts of excess resources are added to degenerate systems (Figure 5a), single null mutations tend to invoke responses in a larger number of distinct gene products
while robustly maintaining system traits, i.e. system responses become more distributed while remaining phenotypically cryptic. In measuring the magnitude S of the state changes of individual gene products, we find that the vast majority of state changes are consistently small across experiments (making them hard to detect in practice), although larger state changes become more likely when excess resources are introduced (Figure 5b). The effect of adding excess resources saturates quickly and shows little additional influence on the system properties C and S for excess resources > 2%. Individually varying other parameters of the model, such as the maximum rate of gene expression, the size of the genetic system, or the level of gene multi-functionality, did not alter the basic findings reported here. Thus for the degenerate models of G:P mappings, we find that distributed responses play an important role in conferring mutational robustness towards single null mutations. Although our experimental conditions differ in some respects from the analysis of single gene knockouts in Saccharomyces cerevisiae [4], both our study and [4] find evidence that roughly half the mutational robustness of genetic systems is a consequence of distributed effects: a finding that is similar to observations of robustness in the more specific case of metabolic networks [13]. The robustness we evaluate in our experiments only considers loss-of-function mutations. However, we experimentally observed similar relationships between degeneracy and distributed robustness when we exposed our model systems to small environmental perturbations, e.g. changes to ligand concentrations. This is suggestive not only of congruency in the robustness towards distinct classes of perturbations [45], but also that distributed robustness is conferred in this model through the same mechanistic process, i.e. a common source of biological canalization as proposed in [46]. As supported by the findings of other studies [14,43,47,48], the observation of an additional "emergent form" of robustness also suggests that robustness is neither a conserved property of complex systems nor does it have a conceptually intuitive trade-off with resource efficiency, as has been proposed in discussions related to the theory of Highly Optimized Tolerance [3,7,49].

Table 1: Systems where agents are multi-functional and have functions that can partially overlap with those of other agents.

System | Agent | Agent Tasks | Environment | Control
Transportation Fleet | Vehicle type | Transporting goods, pax | Transportation Network | Centralized Command and Control
Defence Force Structure | Force element | Missions | Future Scenarios | Strategic Planning
Organization | Person | Job Roles | Marketplace | Management
Ecosystem | Deme | Resource usage and creation | Physical Environment | Self-organized
Interactome | Gene Product | Energetic and steric interactions | Cell | Self-organized and evolved
Immune System | Antigen | Recognizing foreign proteins | Antibodies and host proteins | Immune learning

Figure 5 Probability distributions for a) the proportion C of distinct gene products that change state and b) the magnitude S of change in gene products. Experiments are shown for degenerate G:P mappings using the same conditions as in Figure 4b, but with the following modifications: 1) perturbations to the system are single null mutations only, and 2) systems are initialized with different amounts of excess resources (% excess indicated by the data set label).
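The gap between local and distributed compensation measured in these experiments can be caricatured in a few lines of Python. The toy below is our own construction (far simpler than the G:P model): task types arranged in a ring, one degenerate worker per task spanning two adjacent types, and a single idle buffer agent at one end. Local compensation can only rescue tasks the buffer shares directly; a chain of reassignments can rescue any task.

```python
# Toy caricature of local vs. distributed compensation (our own
# construction, not the G:P model described in the text).

N = 8  # task types 0..7 arranged in a ring

# worker i currently performs task i and is also capable of task (i+1) % N
workers = {i: ((i, (i + 1) % N), i) for i in range(N)}
buffer_caps = (0, 1)  # one idle agent, able to perform task 0 or task 1

def local_recovery(lost_task):
    """Direct compensation only: the idle agent must itself share the
    lost task (the 'functional redundancy' picture)."""
    return lost_task in buffer_caps

def distributed_recovery(lost_task):
    """Networked buffering: the set of tasks reachable from the buffer
    through chains of workers switching to their other capability.
    (The lost worker itself is not needed: chains reach its task from
    the neighboring side of the ring.)"""
    covered = set(buffer_caps)
    changed = True
    while changed:
        changed = False
        for caps, current in workers.values():
            if current in covered:        # this worker can be freed up...
                for t in caps:            # ...extending coverage to its
                    if t not in covered:  # other capability
                        covered.add(t)
                        changed = True
    return lost_task in covered

local = sum(local_recovery(t) for t in range(N))              # 2 of 8 tasks
distributed = sum(distributed_recovery(t) for t in range(N))  # 8 of 8 tasks
```

Restricting responses to direct overlap recovers only the two tasks adjacent to the buffer, while chained reassignments cover all eight; this mirrors, in caricature, the difference between the Figure 4d and Figure 4b experiments.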

Discussion

There is a long-standing interest in the origins of robustness and resilience within CAS in general and biological systems in particular [28,45,46,48-59]. Although considerable progress has been made in understanding constraint/deconstraint processes in biology [26], a full account of biological robustness remains elusive. The extent to which degeneracy can fill this knowledge gap is unknown; however, we outline several reasons why degeneracy might play a vital role in facilitating local and distributed forms of biological robustness.
Omnipresence of degeneracy in biological CAS

Stability under moderately variable conditions (i.e. modest internal or external changes to the system) is a defining attribute of biology at all scales [46,60]. Any mechanism that broadly contributes to such stability must be as ubiquitous as the robust traits it accounts for. Although many mechanisms studied in the literature (such as those mentioned in the Introduction) are broadly observed, few are as pervasive as degeneracy. In fact, degeneracy is readily seen throughout the molecular, genetic, cellular, and population levels in biology [23] and it is a defining attribute of many communication and signalling systems in the body, including those involved in development, immunity, and the nervous system [23,38,61,62]. As described in Appendix 2, degeneracy is also readily observed in other complex adaptive systems including human organizations, complex systems engineering, and ecosystems. When the degenerate components of these systems form a network of partially overlapping functions, and when component responses are fast relative to the timescale of perturbations, we argue that networked buffering should, in principle, enhance the robustness and flexibility observed in each of these distinct system classes.
Cellular robustness

If degeneracy broadly accounts for biological robustness then it should be intimately related to many mechanisms discussed in the literature. One prominent example is the relationship between degeneracy and cell regulation. For example, the organization or structure of metabolic reactions, signalling networks, and gene expression elicits some control over the sequences of interactions that occur in the 'omic' network. This control is often enacted by either a process of competitive exclusion or recruitment within one of the interaction steps in a pathway (e.g. a metabolite in a reaction pathway or the initial binding of RNA polymerase prior to gene transcription). Given the reductionist bias in science, it is not surprising that biologists initially expected to find a single molecular species for every regulatory action. Today, however, most of the accumulated evidence indicates that local regulatory effects are often enacted by a number of compounds that are degenerate in their affinity to particular molecular species and are roughly interchangeable in their ability to up/down-regulate a particular pathway. The NBH suggests that when the relationships between degenerate regulators form a network of partial competition for regulatory sites, this may confer high levels of regulatory stability, e.g. against stochastic fluctuations in gene expression [63] or, more generally, towards more persistent changes in the concentrations of molecular species. On the other hand, when degeneracy is absent, the regulatory processes in biology are more sensitive to genetic and environmental perturbations, although in some cases this sensitivity is useful, e.g. in conferring stability to traits at a higher level. However, in the complete absence of degeneracy, only one type of molecular species could be responsible for each type of control action, and the removal of that species could not be directly compensated for by others. Under these conditions, change-of-function mutations to non-redundant genes would most likely result in changes to one or more traits. In other words, mutational robustness would be greatly reduced and cryptic genetic variation would not be observed in natural populations.
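The fragility argument can be illustrated with a back-of-the-envelope calculation (our own numbers, not from the paper): if each regulator is lost independently with probability p, a control action backed by n roughly interchangeable regulators fails only when all n are lost.

```python
def p_controllable(n, p):
    """Probability that at least one of n interchangeable (degenerate)
    regulators of a pathway survives, if each is independently lost
    with probability p."""
    return 1.0 - p ** n

# one dedicated regulator vs. four degenerate ones, each lost 10% of the time
single = p_controllable(1, 0.1)  # about 0.9
shared = p_controllable(4, 0.1)  # about 0.9999: failure needs all four lost
```

The four interchangeable regulators reduce the chance of losing control of the pathway by three orders of magnitude, which is the intuition behind the claim that non-degenerate regulatory architectures are far more sensitive to perturbation.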
Systems engineering

The redundancy model in Figure 2d reflects a logical decomposition of a system that is encouraged (though not fully realized) in most human planning/design activities, e.g. [64,65]. While there are many circumstances where redundancy is beneficial, there are others where we now anticipate it will be detrimental. Redundancy can afford economies of scale and provide transparency, which can allow a system to be more amenable to manipulation by bounded-rational managers (cf. [66-68]). When systems or subsystems operate within a predictable environment with few degrees of freedom, redundancy/decomposition design principles have proven to be efficient and effective. However, when variability in conditions is hard to predict and occurs unexpectedly, purely redundant and decomposable system architectures may not provide sub-systems with the flexibility necessary to adapt and prevent larger systemic failures. Under these circumstances, we propose that networked buffering from degeneracy can improve system stability. We are currently involved in a project that explores these ideas in the context of algorithm design and strategic planning, and we have now accumulated some evidence that networked buffering is a relevant attribute for some systems engineering contexts [47,69,70].
Weak Links in complex networks

Within fluid markets, social systems, and each of the examples listed in Table 1, one can find systems composed of functionally plastic degenerate components that operate within a dynamic uncertain world. We have argued that the ability of these components to partially overlap across different function classes will lead to the emergence of networked buffering. This functional compensation, however, is not always easy to detect. When agents are functionally plastic, they tend to interact with many distinct component types. This behavior causes individual interaction strengths to appear weak when

Whitacre and Bender Theoretical Biology and Medical Modelling 2010, 7:20 http://www.tbiomed.com/content/7/1/20

Page 13 of 20

they are evaluated in aggregation using time-averaged measurements, e.g. see [71] and Figure 5b. As we elaborate in [72], commonly accepted forms of experimental bias tend to overlook weak interactions in the characterization and analysis of CAS networks. Yet there is a growing number of examples (e.g. in social networks and proteomes) where weak links contribute substantially to system robustness as well as similar properties such as system coherence [73,74]. Particularly in the case of social networks, degenerate weak links help to establish communication channels amongst cliques and support cohesion within the social fabric through processes that mirror the basic principles outlined in NBH, e.g. see [73,74].
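The dilution effect described above can be seen in a toy calculation (all numbers and the sampling scheme here are our own hypothetical illustration, not taken from the cited studies): an agent that divides its activity among several partners shows weak time-averaged link strengths even though each of its momentary interactions is at full strength.

```python
import random

def time_averaged_strengths(num_partners, steps=10000, seed=1):
    """Toy model: at each time step an agent interacts at full strength (1.0)
    with one randomly chosen partner; all its other links are idle (0.0).
    Returns the time-averaged strength observed on each link."""
    rng = random.Random(seed)
    totals = [0.0] * num_partners
    for _ in range(steps):
        totals[rng.randrange(num_partners)] += 1.0
    return [t / steps for t in totals]

# A dedicated (single-function) agent: one strong link.
print(time_averaged_strengths(1))   # [1.0]

# A functionally plastic agent with 5 partners: five apparently "weak"
# links of average strength ~0.2, although every individual interaction
# occurred at full strength 1.0.
print(time_averaged_strengths(5))
```

This is why time-averaged measurements can misclassify the links of functionally plastic agents as unimportant: the per-link average scales as roughly 1/k for an agent dividing its activity among k partners.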
Ecosystem Resilience

In a world undergoing regional environmental regime shifts brought about by changes in the global climate, it is becoming increasingly important to understand what enables ecosystems to be resilient, i.e. to tolerate disturbances without shifting into qualitatively different states controlled by different sets of processes [29]. Ecological theory and decades of simulation experiments have concluded that increasing complexity (increasing numbers of species and species interactions) should destabilize an ecosystem. However, empirical evidence suggests that complexity and robustness are positively correlated. In a breakthrough study, Kondoh [75,76] demonstrated that this paradox can be resolved within biologically plausible model settings when two general conditions are observed: i) species are functionally plastic in resource consumption (adaptive foraging) and ii) potential connectivity in the food web is high. Because higher connectivity between functionally plastic species allows degeneracy to arise, Kondoh's requirements satisfy the two conditions we have set out for the emergence of networked buffering and the resulting enhancement of system stability. We therefore propose that the findings of [75] may provide the first direct evidence that degeneracy and networked buffering are necessary for positive robustness-complexity relationships to arise in ecosystems. Other recent studies confirm that including degeneracy within ecosystem models results in unexpected non-localized communication in ecosystem dynamics [77]. We propose that these non-local effects could be another example of the basic resource rearrangement properties that arise from networked buffering. Despite rich domain differences, we contend there are similarities in how the organizational properties of several CAS facilitate flexibility and resilience within a volatile environment.
While the potential advantages of networked buffering are clear, our intention here is not to claim that it is the only mechanism that explains the emergence of robustness in system traits. Nor is it our intent to make general claims about the adaptive significance of this robustness or to imply selectionist explanations for the ubiquity of degeneracy within the systems discussed in this article. Much degeneracy is likely to be passively acquired in nature (e.g. see [72]). Moreover, there are instances where trait stability is not beneficial, as illustrated in [78-80], where examples of mal-adaptive robustness in biological and abiotic contexts are provided.

Conclusions
This paper introduces what is argued to be a new mechanism for generating robustness in complex adaptive systems, one that arises from a partial overlap in the functional roles of multi-functional agents; a system property also known in biology as degeneracy. There are many biological examples where degeneracy is already known to provide robustness through the local actions of functionally redundant components. Here, however, we have
presented a conceptual model showing how degenerate agents can readily form a buffering network whereby agents can indirectly support many functionally dissimilar tasks. These distributed compensatory effects result in greater versatility and robustness - two characteristics with obvious relevance to systems operating in highly variable environments. Recent studies of genome-proteome models have found degenerate systems to be exceptionally robust in comparison to those without degeneracy. Expanding on these results, we have tested some of the claims of the buffering network hypothesis and determined that the enhanced robustness within these degenerate genome:proteome mappings is in fact a consequence of distributed (non-local) compensatory effects that are not observable when robustness is achieved using only pure redundancy. Moreover, the proportion of local versus non-local sources of robustness within the degenerate models shows little sensitivity to scaling and is compatible with biological data on mutational robustness.

Appendix 1: Degeneracy, Redundancy, and Partial Redundancy
Redundancy and degeneracy are two system properties that contribute to the robustness of biological systems [4,23]. Redundancy is an easily recognizable property that is prevalent in both biological and man-made systems. Here, redundancy means 'redundancy of parts' and refers to the coexistence of identical components with identical functionality (i.e. the components are isomorphic and isofunctional). In information theory, redundancy refers to the repetition of messages, which is important for reducing transmission errors. Redundancy is also a common feature of engineered or planned systems, where it provides robustness against variations of a very specific type ('more of the same' variations). For example, redundant parts can substitute for others that malfunction or fail, or augment output when demand for a particular output increases. Degeneracy differs from pure redundancy because similarities in the functional response of components are not observed for all conditions (see Figure 1d). In the literature, degeneracy has at times been referred to as functional redundancy or partial redundancy; however, most definitions for these terms only emphasize the many-to-one mapping between components and functions (e.g. [9,81-86]). In contrast, the definition of degeneracy used here and in [23,77,87-90] also emphasizes a one-to-many mapping. To put it more distinctly, our definition of degeneracy requires degenerate components to also be functionally versatile (one-to-many mapping), with the function performed at any given time being dependent on the context; a behavior we label functional plasticity [77,90]. For degeneracy to be present, some (but not all) functions related to a component or module must also be observable in others, i.e. there must be a partial and conditional similarity in the repertoire of functional responses (see Figure 1).
In contrast, partial redundancy is often used to describe the conditional similarity in functional responses of components capable of only a single function (see Figure 1c). This is analogous to the definition of response diversity within ecosystems [30] (see note v) and is conceptually similar to ensemble approaches in machine learning. Functional plasticity is necessary to create the buffering networks discussed in Section 3 and the enhanced evolvability observed in [43,47]. However, this requirement is not as demanding as it may at first seem. Functional plasticity is common in biological systems
and occurs for most of the cited examples of degeneracy in [23]. For instance, gene products such as proteins typically act like versatile building blocks, performing different functions depending on the complex a protein forms with other gene products or other targets in its environment [91,92]. In contrast to earlier ideas that there was one gene for each trait, gene products are now known to have multiple non-trivial interactions with other "targets", i.e. in the interactome [36,37], and these are rarely correlated in time [93]. The alternative, where a gene's functions are all performed within the same context (referred to as "party hubs" in [93]), is known to be considerably less common in biology.
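The distinctions drawn in this appendix can be summarized in a small sketch (the dictionary representation, context labels, and function names below are our own hypothetical constructions, not part of any model in the paper). Each component is represented as a mapping from an environmental context to the function it performs in that context, so a multi-functional repertoire captures the one-to-many mapping discussed above:

```python
def repertoire(component):
    """The set of functions a component can perform (one-to-many mapping)."""
    return set(component.values())

def classify_pair(comp_a, comp_b):
    """Classify two components by comparing their context-dependent
    functional repertoires (illustrative sketch of the Appendix 1 terms)."""
    ra, rb = repertoire(comp_a), repertoire(comp_b)
    if comp_a == comp_b:
        return "redundant"              # isofunctional in every context
    if not (ra & rb):
        return "functionally distinct"  # no shared functions at all
    if len(ra) == 1 and len(rb) == 1:
        # Same single function, expressed under different conditions:
        # conditional similarity without functional plasticity.
        return "partially redundant"
    # Multi-functional components whose repertoires partially overlap.
    return "degenerate"

print(classify_pair({"ctx1": "f1", "ctx2": "f2"},
                    {"ctx1": "f1", "ctx2": "f2"}))   # redundant
print(classify_pair({"ctx1": "f1"}, {"ctx2": "f1"})) # partially redundant
print(classify_pair({"ctx1": "f1", "ctx2": "f2"},
                    {"ctx1": "f2", "ctx3": "f3"}))   # degenerate
```

The degenerate case is the one requiring functional plasticity: both components have more than one function in their repertoire, and those repertoires overlap only partially.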

Appendix 2: Degeneracy in biotic and abiotic systems
In biology, degeneracy refers to conditions where the functions or capabilities of components overlap partially. In a review by Edelman and Gally [23], numerous examples are used to demonstrate the prevalence of degeneracy throughout biology. It is pervasive in proteins of every functional class (e.g. enzymatic, structural, or regulatory) [90,94] and is readily observed in ontogenesis (see page 14 in [95]), the nervous system [87] and cell signalling (crosstalk). In the particular case of proteins, it is also now known that partial functional similarities can arise even without any obvious similarities in sequence or structure [96]. Degeneracy and associated properties like functional plasticity are also prevalent in other biotic and abiotic systems, such as those listed in Table 1. For instance, in transportation fleets the vehicles are often interchangeable, but only for certain tasks. Multi-functional force elements within a defence force structure can also exhibit an overlap in capabilities, but only within certain missions or scenarios. In an organization, people often have overlapping job descriptions and are able to take on some functions that are not readily achieved by others who technically have the same job. In the food webs of complex ecosystems, species within similar trophic levels sometimes have a partial overlap in resource competition. Resource conditions ultimately determine whether competition will occur or whether the two species will forage for distinct resources [75]. Degeneracy has become increasingly appreciated for its role in trait stability, as noted in [72] and more thoroughly discussed in [23]. For instance, gene families can encode diverse proteins with many distinctive roles, yet sometimes these proteins can compensate for each other when gene expression is lost or suppressed, as seen in the developmental roles of the adhesin gene family in Saccharomyces [97].
At higher scales, resources are often metabolized by a number of distinct compensatory pathways that are effectively interchangeable for certain metabolites, even though the total effects of each pathway are not identical. More generally, when agents are degenerate, some functions will overlap, meaning that the influence an agent has on the system could alternatively be enacted by other agents, groups of agents, or pathways. This functional redundancy within a specified context provides the basis for both competition and collaboration amongst agents and in many circumstances can contribute to the stability of individual traits (cf. [23]).
Appendix 3: Technical description of the genome:proteome model
The genome:proteome model was originally developed in [43] and consists of a set of genetically specified proteins (i.e. material components). Protein state values indicate the functional targets they have interacted with and also define the trait values of the system. The genotype determines which traits a protein is able to influence, while a protein's
state dictates how much a protein has actually contributed to each of the traits it is capable of influencing. The extent to which a protein i contributes to a trait j is indicated by the matrix element $M_{ij} \in \mathbb{Z}$. Each protein has its own unique set of genes, which are given by a set of binary values $\rho_{ij}$, $i \le n$, $j \le m$. The matrix element $\rho_{ij}$ takes a value of one if protein i can functionally contribute to trait j (i.e. bind to protein target j) and zero otherwise. In our experiments, each gene expresses a single protein (i.e. there is no alternative splicing). To simulate the limits of functional plasticity, each protein is restricted to contribute to at most two traits, i.e. $\sum_{j \le m} \rho_{ij} \le 2$, $\forall i$. To model limits on protein utilization (e.g. as caused by the material basis of gene products), maximum trait contributions are defined for each protein, which for simplicity are set equal, i.e. $\sum_{j \le m} M_{ij}\,\rho_{ij} = \alpha$, $\forall i$, with the integer $\alpha$ being a model parameter. The set of system traits defines the system phenotype, with each trait calculated as a sum of the individual protein contributions $T_j^P = \sum_{i \le n} M_{ij}\,\rho_{ij}$. The environment is defined by the vector $T^E$, whose components stipulate the number of targets that are available. The phenotypic attractor F is defined in Eq. 1 and acts to (energetically) penalize a system configuration when any targets are left in an unbound state, i.e. when $T_j^P$ values fall below the satisfactory level $T_j^E$.
Simulation

Through control over its phenotype, a system is driven to satisfy the environmental conditions. This involves control over protein utilization, i.e. the settings of M. We implement ordered asynchronous updating of M, where each protein stochastically samples local changes in its utilization (changes in state values $M_{ij}$ that alter the protein's contribution to system traits). Changes are kept if compatible with the global attractor for the phenotype defined by Eq. 1. Genetic mutations involve modifying the gene matrix $\rho$. For mutations that cause loss of gene function, we set $\rho_{ij} = 0$, $\forall j$, when gene i is mutated.
$$F(T^P) = \sum_{j \le m} q_j, \qquad q_j = \begin{cases} 0, & T_j^P > T_j^E \\ T_j^E - T_j^P, & \text{otherwise} \end{cases} \qquad (1)$$

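As a concrete sketch of the model just described, the phenotype, the attractor of Eq. 1, and an asynchronous update step can be computed as follows. The array sizes, parameter values, target vector, and the specific re-splitting rule are illustrative assumptions of ours, not the settings used in the original experiments in [43]:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, alpha = 6, 4, 3   # proteins, traits, per-protein budget (illustrative)

# Gene matrix rho: rho[i, j] = 1 if protein i can contribute to trait j.
# Functional plasticity limit: each protein affects at most two traits.
rho = np.zeros((n, m), dtype=int)
for i in range(n):
    rho[i, rng.choice(m, size=2, replace=False)] = 1

# Utilization matrix M: start each protein's whole budget on its first trait.
M = np.zeros((n, m), dtype=int)
for i in range(n):
    M[i, np.argmax(rho[i])] = alpha

def phenotype(M, rho):
    """T_j^P = sum_i M_ij * rho_ij."""
    return (M * rho).sum(axis=0)

def attractor_penalty(TP, TE):
    """Eq. 1: q_j = 0 if T_j^P > T_j^E, else T_j^E - T_j^P; F = sum_j q_j."""
    q = np.where(TP > TE, 0, TE - TP)
    return int(q.sum())

def local_update(M, rho, TE, i, rng):
    """Protein i stochastically re-splits its budget between its two
    permitted traits; the move is kept only if the penalty does not worsen."""
    trial = M.copy()
    cols = np.flatnonzero(rho[i])
    split = rng.integers(0, alpha + 1)
    trial[i, cols[0]], trial[i, cols[1]] = split, alpha - split
    if attractor_penalty(phenotype(trial, rho), TE) <= \
       attractor_penalty(phenotype(M, rho), TE):
        return trial
    return M

TE = np.array([2, 2, 2, 2])          # environmental targets (illustrative)
for step in range(200):              # ordered asynchronous updating
    M = local_update(M, rho, TE, step % n, rng)
print(phenotype(M, rho), attractor_penalty(phenotype(M, rho), TE))
```

Because moves are accepted only when the penalty does not increase, F is non-increasing over the run, mimicking the attractor-compatible update scheme described in the Simulation section above.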
Degenerate Systems

We model degeneracy and redundancy by constraining the settings of the matrix $\rho$, which controls how the trait contributions of proteins are able to overlap. In the 'redundant model', proteins are placed into subsets in which all proteins are genetically identical and thus influence the same set of traits. However, redundant proteins are free to take on distinct state values, reflecting the fact that proteins can take on different functional roles depending on their local context. In the 'degenerate model', proteins can only have a partial overlap in the traits they are able to affect: the intersection of the trait sets influenced by two degenerate proteins is non-empty and strictly different from their union. An illustration of the redundant and degenerate models is given in Figure 3.
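One simple way to realize these two constraint schemes on the gene matrix is sketched below. The ring arrangement and group sizes are our own illustrative choices; the models actually used are those described in [43] and Figure 3:

```python
import numpy as np

def redundant_rho(n, m, group_size=2):
    """Redundant model: proteins in the same subset are genetically
    identical, influencing exactly the same pair of traits."""
    rho = np.zeros((n, m), dtype=int)
    for i in range(n):
        t = (2 * (i // group_size)) % m   # disjoint trait pairs per group
        rho[i, t] = rho[i, (t + 1) % m] = 1
    return rho

def degenerate_rho(n, m):
    """Degenerate model: protein i affects traits (i mod m, i+1 mod m) on a
    ring, so neighbouring proteins share exactly one trait; the intersection
    of their trait sets is non-empty but strictly different from the union."""
    rho = np.zeros((n, m), dtype=int)
    for i in range(n):
        rho[i, i % m] = rho[i, (i + 1) % m] = 1
    return rho

R = redundant_rho(6, 4)   # rows 0-1, 2-3, 4-5 are identical within each pair
D = degenerate_rho(6, 4)  # consecutive rows overlap in exactly one trait
```

In the degenerate construction, each pair of neighbouring proteins can partially substitute for one another on their shared trait while still covering traits the other cannot reach; this is the partial-overlap pattern that produces the buffering network discussed in the main text.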

Appendix: Notes
i. Stochasticity enhances robustness but is not technically a mechanism for achieving it. Over time, stochasticity forces the states and structures of a system towards paths that
are less sensitive to natural fluctuations, and this provides "robustness for free" against any other congruent perturbations that were not previously observed.
ii. In this sense, agents are resources. In the models presented in Figure 2, Section 3 and Appendix 3 we assume, without loss of generality, that agent resources are reusable.
iii. In a forthcoming paper we provide evidence that the findings in [43] are typically not affected by constraints on the functional combinations allowed within a single gene.
iv. Our emphasis on robustness towards small/moderate changes is an acknowledgement of the contingency of robustness that is observed in CAS, e.g. the niches of individual species. Mentioning robustness to different classes of perturbation is not meant to imply that robustness measurements are unaffected by the type of perturbation. Instead it reflects our belief that the mechanistic basis by which robustness is achieved is similar in both cases, i.e. there is a common cause of canalization [46].
v. Response diversity is defined as the range of reactions to environmental change among species contributing to the same ecosystem function.
vi. Note that the diagrams of redundant and degenerate systems represent illustrative examples only. In many biotic and abiotic CAS, agents are able to perform more than two functions. Also, in practice, systems with multi-functional agents will have some degree of both redundancy and degeneracy. For instance, if the circled agent in panel (a) were introduced to the system in panel (d), then that system would have partially overlapping buffers and thus some small degree of degeneracy.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JW designed and carried out experiments. JW and AB wrote the paper and interpreted the results. Both authors have read and approved the final manuscript.
Acknowledgements
This research was partially supported by DSTO and an EPSRC grant (No. EP/E058884/1).
Author Details
1 School of Computer Science, University of Birmingham, Edgbaston, UK. 2 Land Operations Division, Defence Science and Technology Organisation, Edinburgh, Australia.
Received: 14 April 2010 Accepted: 15 June 2010 Published: 15 June 2010
© 2010 Whitacre and Bender; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The article is available from: http://www.tbiomed.com/content/7/1/20

References
1. Ashby WR: An introduction to cybernetics. Methuen, London; 1964.
2. Heylighen F, Joslyn C: Cybernetics and second order cybernetics. Encyclopedia of Physical Science & Technology 2001, 4:155-170.
3. Carlson JM, Doyle J: Complexity and robustness. Proceedings of the National Academy of Sciences, USA 2002, 99(Suppl 1):2538-2545.
4. Wagner A: Distributed robustness versus redundancy as causes of mutational robustness. BioEssays 2005, 27(2):176-188.
5. Kitano H: Biological robustness. Nature Reviews Genetics 2004, 5(11):826-837.
6. Aldana M, Balleza E, Kauffman S, Resendiz O: Robustness and evolvability in genetic regulatory networks. Journal of Theoretical Biology 2007, 245(3):433-448.
7. Stelling J, Sauer U, Szallasi Z, Doyle FJ, Doyle J: Robustness of Cellular Functions. Cell 2004, 118(6):675-685.
8. Félix MA, Wagner A: Robustness and evolution: concepts, insights and challenges from a developmental model system. Heredity 2008, 100:132-140.
9. Levin S, Lubchenco J: Resilience, robustness, and marine ecosystem-based management. Bioscience 2008, 58(1):27-32.
10. Ciliberti S, Martin OC, Wagner A: Robustness can evolve gradually in complex regulatory gene networks with varying topology. PLoS Comput Biol 2007, 3(2):e15.
11. van Nimwegen E, Crutchfield JP, Huynen M: Neutral evolution of mutational robustness. Proceedings of the National Academy of Sciences, USA 1999, 96(17):9716-9720.
12. Gómez-Gardeñes J, Moreno Y, Floría LM: On the robustness of complex heterogeneous gene expression networks. Biophysical Chemistry 2005, 115:225-228.
13. Kitami T, Nadeau JH: Biochemical networking contributes more to genetic buffering in human and mouse metabolic pathways than does gene duplication. Nature Genetics 2002, 32(1):191-194.
14. Siegal ML, Bergman A: Waddington's canalization revisited: Developmental stability and evolution. Proceedings of the National Academy of Sciences, USA 2002, 99(16):10528-10532.
15. Wilhelm T, Behre J, Schuster S: Analysis of structural robustness of metabolic networks. Systems Biology 2004, 1(1):114-120.
16. Szöllősi GJ, Derényi I: Congruent evolution of genetic and environmental robustness in micro-RNA. Molecular Biology and Evolution 2009, 26(4):867.
17. Conant GC, Wagner A: Duplicate genes and robustness to transient gene knock-downs in Caenorhabditis elegans. Proceedings of the Royal Society B: Biological Sciences 2004, 271(1534):89-96.
18. Rutherford SL, Lindquist S: Hsp90 as a capacitor for morphological evolution. Nature 1998:336-342.
19. Kauffman SA: Requirements for evolvability in complex systems: orderly components and frozen dynamics. Physica D 1990, 42:135-152.
20. Braendle C, Félix MA: Plasticity and Errors of a Robust Developmental System in Different Environments. Developmental Cell 2008, 15(5):714-724.
21. Fontana W, Schuster P: Continuity in Evolution: On the Nature of Transitions. Science 1998, 280(5368):1451-1455.
22. Ma HW, Zeng AP: The connectivity structure, giant strong component and centrality of metabolic networks. Bioinformatics 2003, 19(11):1423-1430.
23. Edelman GM, Gally JA: Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences, USA 2001, 98(24):13763-13768.
24. Kupiec JJ: A Darwinian theory for the origin of cellular differentiation. Molecular and General Genetics MGG 1997, 255(2):201-208.
25. Feinerman O, Veiga J, Dorfman JR, Germain RN, Altan-Bonnet G: Variability and Robustness in T Cell Activation from Regulated Heterogeneity in Protein Levels. Science 2008, 321(5892):1081.
26. Kirschner M, Gerhart J: Evolvability. Proceedings of the National Academy of Sciences, USA 1998, 95(15):8420-8427.
27. Holland J: Adaptation in natural and artificial systems. MIT Press, Cambridge, MA; 1992.
28. Carpenter S, Walker B, Anderies JM, Abel N: From metaphor to measurement: resilience of what to what? Ecosystems 2001, 4(8):765-781.
29. Holling C: Engineering resilience versus ecological resilience. Engineering Within Ecological Constraints 1996:31-43.
30. Elmqvist T, Folke C, Nystrom M, Peterson G, Bengtsson J, Walker B, Norberg J: Response diversity, ecosystem change, and resilience. Frontiers in Ecology and the Environment 2003, 1(9):488-494.
31. Figge F: Bio-folio: applying portfolio theory to biodiversity. Biodiversity and Conservation 2004, 13(4):827-849.
32. Tilman D: Biodiversity: population versus ecosystem stability. Ecology 1996, 77(2):350-363.
33. Schindler DE, Hilborn R, Chasco B, Boatright CP, Quinn TP, Rogers LA, Webster MS: Population diversity and the portfolio effect in an exploited species. Nature 2010, 465(7298):609-612.
34. Hong L, Page SE: Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences 2004, 101(46):16385-16389.
35. Kurakin A: Scale-free flow of life: on the biology, economics, and physics of the cell. Theoretical Biology and Medical Modelling 2009, 6(1):6.
36. Batada NN, Reguly T, Breitkreutz A, Boucher L, Breitkreutz BJ, Hurst LD, Tyers M: Stratus not altocumulus: a new view of the yeast protein interaction network. PLoS Biol 2006, 4(10):e317.
37. Batada NN, Reguly T, Breitkreutz A, Boucher L, Breitkreutz BJ, Hurst LD, Tyers M: Still stratus not altocumulus: further evidence against the date/party hub distinction. PLoS Biol 2007, 5(6):e154.
38. Cohen IR, Hershberg U, Solomon S: Antigen-receptor degeneracy and immunological paradigms. Molecular Immunology 2004, 40(14-15):993-996.
39. Cohn M: Degeneracy, mimicry and crossreactivity in immune recognition. Molecular Immunology 2005, 42(5):651-655.
40. Tieri P, Castellani GC, Remondini D, Valensin S, Loroni J, Salvioli S, Franceschi C: Capturing degeneracy of the immune system. In In Silico Immunology. Springer; 2007.
41. Mendao M, Timmis J, Andrews PS, Davies M: The Immune System in Pieces: Computational Lessons from Degeneracy in the Immune System. Foundations of Computational Intelligence (FOCI) 2007.
42. Andrews PS, Timmis J: A Computational Model of Degeneracy in a Lymph Node. Lecture Notes in Computer Science 2006, 4163:164.
43. Whitacre JM, Bender A: Degeneracy: a design principle for achieving robustness and evolvability. Journal of Theoretical Biology 2010, 263(1):143-153.
44. Force A, Cresko WA, Pickett FB, Proulx SR, Amemiya C, Lynch M: The Origin of Subfunctions and Modular Gene Regulation. Genetics 2005, 170(1):433-446.
45. de Visser J, Hermisson J, Wagner GP, Meyers LA, Bagheri-Chaichian H, Blanchard JL, Chao L, Cheverud JM, Elena SF, Fontana W: Perspective: Evolution and Detection of Genetic Robustness. Evolution 2003, 57(9):1959-1972.
46. Meiklejohn CD, Hartl DL: A single mode of canalization. Trends in Ecology & Evolution 2002, 17(10):468-473.
47. Whitacre JM, Bender A: Degenerate neutrality creates evolvable fitness landscapes. In WorldComp-2009, Las Vegas, Nevada, USA; 2009.
48. Bergman A, Siegal ML: Evolutionary capacitance as a general feature of complex gene networks. Nature 2003, 424(6948):549-552.
49. Csete ME, Doyle JC: Reverse Engineering of Biological Complexity. Science 2002, 295(5560):1664-1669.
50. Kitano H: A robustness-based approach to systems-oriented drug design. Nature Reviews Drug Discovery 2007, 6(3):202-210.
51. Holling CS: The resilience of terrestrial ecosystems: local surprise and global change. Sustainable Development of the Biosphere 1986:292-320.
52. Holling CS: Understanding the complexity of economic, ecological, and social systems. Ecosystems 2001, 4(5):390-405.
53. Walker B, Holling CS, Carpenter SR, Kinzig A: Resilience, Adaptability and Transformability in Social-ecological Systems. Ecology and Society 2004, 9(2):1-5.
54. McCann KS: The diversity-stability debate. Nature 2000, 405(6783):228-233.
55. Lenski RE, Barrick JE, Ofria C: Balancing robustness and evolvability. PLoS Biol 2006, 4(12):e428.
56. Smith JM, Szathmáry E: The major transitions in evolution. Oxford University Press, USA; 1997.
57. Kauffman SA: The origins of order: Self organization and selection in evolution. Oxford University Press, USA; 1993.
58. Waddington CH: Canalization of Development and the Inheritance of Acquired Characters. Nature 1942, 150(3811):563.
59. Wagner A: Robustness and evolvability: a paradox resolved. Proceedings of the Royal Society of London, Series B: Biological Sciences 2008, 275:91-100.
60. Ciliberti S, Martin OC, Wagner A: Innovation and robustness in complex regulatory gene networks. Proceedings of the National Academy of Sciences, USA 2007, 104(34):13591-13596.
61. Edelman G: Neural Darwinism: The theory of neuronal group selection. Basic Books, New York; 1987.
62. Edelman G: Bright air, brilliant fire: On the matter of the mind. BasicBooks; 2001.
63. Fraser H, Hirsh A, Giaever G, Kumm J, Eisen M: Noise minimization in eukaryotic gene expression. PLoS Biology 2004, 2:834-838.
64. Bar-Yam Y: About Engineering Complex Systems: Multiscale Analysis and Evolutionary Engineering. Engineering Self Organising Systems: Methodologies and Applications 2005, 3464:16-31.
65. Ottino J: Engineering complex systems. Nature 2004, 427(6973):399-399.
66. Simon HA: A Behavioral Model of Rational Choice. Santa Monica: Rand Corp; 1953.
67. March J: Bounded rationality, ambiguity, and the engineering of choice. The Bell Journal of Economics 1978, 9(2):587-608.
68. Cyert RM, March JG: Behavioral Theory of the Firm. Prentice-Hall; 1963.
69. Whitacre J, Rohlfshagen P, Yao X, Bender A: The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. In 11th International Conference on Parallel Problem Solving from Nature (PPSN 2010), Krakow, Poland; in press.
70. Whitacre JM: Evolution-inspired approaches for engineering emergent robustness in an uncertain dynamic world. In Proceedings of the Conference on Artificial Life XII, Odense, Denmark; in press.
71. Berlow EL: Strong effects of weak interactions in ecological communities. Nature 1999, 398(6725):330-334.
72. Whitacre JM: Degeneracy: a link between evolvability, robustness and complexity in biological systems. Theoretical Biology and Medical Modelling 2010, 7(1):6.
73. Granovetter MS: The strength of weak ties. American Journal of Sociology 1973, 78(6):1360-1380.
74. Csermely P: Weak links: Stabilizers of complex systems from proteins to social networks. Springer Verlag; 2006.
75. Kondoh M: Foraging adaptation and the relationship between food-web complexity and stability. Science 2003, 299(5611):1388.
76. Kondoh M: Does foraging adaptation create the positive complexity-stability relationship in realistic food-web structure? Journal of Theoretical Biology 2006, 238(3):646-651.
77. Atamas S, Bell J: Degeneracy-Driven Self-Structuring Dynamics in Selective Repertoires. Bulletin of Mathematical Biology 2009, 71(6):1349-1365.
78. Mogul J: Emergent (mis)behavior vs. complex software systems. In European Conference on Computer Systems, Leuven, Belgium: ACM; 2006.
79. Parunak H, VanderBok R: Managing emergent behavior in distributed control systems. In Proc. ISA-Tech '97; 1997.
80. Kitano H: Cancer as a robust system: implications for anticancer therapy. Nature Reviews Cancer 2004, 4(3):227-235.
81. Minai A, Braha D, Bar-Yam Y: Complex engineered systems: A new paradigm. In Complex Engineered Systems. Springer; 2006:1-21.
82. Sussman G: Building Robust Systems: an essay. Citeseer; 2007.
83. Zhan S, Miller J, Tyrrell A: Obtaining System Robustness by Mimicking Natural Mechanisms. CEC-2009; 2009.
84. Macia J, Solé R: Distributed robustness in cellular networks: insights from synthetic evolved circuits. Royal Society Interface 2009, 6:393-400.
85. Price C, Friston K: Degeneracy and cognitive anatomy. Trends in Cognitive Sciences 2002, 6(10):416-421.
86. Seth A, Baars B: Neural Darwinism and consciousness. Consciousness and Cognition 2005, 14(1):140-168.
87. Tononi G, Sporns O, Edelman GM: Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences, USA 1999, 96(6):3257-3262.
88. Sporns O, Tononi G, Edelman G: Connectivity and complexity: the relationship between neuroanatomy and brain dynamics. Neural Networks 2000, 13(8-9):909-922.
89. Solé RV, Ferrer-Cancho R, Montoya JM, Valverde S: Selection, tinkering, and emergence in complex networks. Complexity 2002, 8(1):20-33.
90. Atamas S: Les affinités électives. Pour la Science 2005, 46:39-43.
91. Gavin AC, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, Rau C, Jensen LJ, Bastuck S, Dümpelfeld B: Proteome survey reveals modularity of the yeast cell machinery. Nature 2006, 440:631-636.
92. Krause R, von Mering C, Bork P, Dandekar T: Shared components of protein complexes - versatile building blocks or biochemical artefacts? BioEssays 2004, 26(12):1333-1343.
93. Han JDJ, Bertin N, Hao T, Goldberg DS, Berriz GF, Zhang LV, Dupuy D, Walhout AJM, Cusick ME, Roth FP: Evidence for dynamically organized modularity in the yeast protein-protein interaction network. Nature 2004, 430(6995):88-93.
94. Wagner A: The role of population size, pleiotropy and fitness effects of mutations in the evolution of overlapping gene functions. Genetics 2000, 154(3):1389-1401.
95. Newman SA: Generic physical mechanisms of tissue morphogenesis: A common basis for development and evolution. Journal of Evolutionary Biology 1994, 7(4):467-488.
96. Petrey D, Honig B: Is protein classification necessary? Toward alternative approaches to function annotation. Current Opinion in Structural Biology 2009, 19(3):363-368.
97. Guo B, Styles CA, Feng Q, Fink GR: A Saccharomyces gene family involved in invasive growth, cell-cell adhesion, and mating. Proceedings of the National Academy of Sciences, USA 2000, 97(22):12158-12163.
doi: 10.1186/1742-4682-7-20 Cite this article as: Whitacre and Bender, Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems Theoretical Biology and Medical Modelling 2010, 7:20

Journal of Natural Computing - Special Issue on Emergent Engineering manuscript No. (will be inserted by the editor)

Degeneracy and Networked Buffering: principles for supporting emergent evolvability in agile manufacturing systems
Regina Frei · James Whitacre

Received: date / Accepted: date

Abstract This article introduces new principles for improving upon the design and implementation of agile manufacturing and assembly systems. It focuses particularly on challenges that arise when dealing with novel conditions and the associated requirements of system evolvability, e.g. seamless reconfigurability to cope with changing production orders, robustness to failures and disturbances, and modifiable user-centric interfaces. Because novelty in manufacturing or the marketplace is only predictable to a limited degree, the flexible mechanisms that will permit a system to adequately respond to novelty cannot be entirely pre-specified. As a solution to this challenge, we propose how evolvability can become a pervasive property of the assembly system that, while constrained by the system's historical development and domain-specific requirements, can emerge and re-emerge without foresight or planning. We first describe an important mechanism by which biological systems can cope with uncertainty through properties described as degeneracy and networked buffering. We discuss what degeneracy means, how it supports a system facing unexpected challenges, and we review evidence from simulations using evolutionary algorithms that support some of our conjectures in models with similarities to several assembly system contexts. Finally, we discuss potential design strategies for encouraging emergent changeability in assembly systems. We also discuss practical challenges to the realization of these concepts within a systems engineering context, especially issues related to system transparency, design costs, and efficiency. We discuss how some of these difficulties can be overcome while also elaborating on those factors that are likely to limit the applicability of these principles.
Regina Frei currently receives a fellowship for prospective researchers from the Swiss National Science Foundation. R. Frei: Imperial College London, South Kensington, London SW7 2AZ, UK. E-mail: work@reginafrei.ch. J. Whitacre: University of Birmingham, Edgbaston B15 2TT, UK. E-mail: j.m.whitacre@cs.bham.ac.uk

1 Introduction

Developments in engineering and technology have repeatedly taken cues from properties found in biology, mainly for designing individual systems. However, there may be an even bigger potential for collective and networked systems-of-systems to adopt principles observed in nature. In addition to self-* properties and emergence [Frei and Barata, 2010], degeneracy and networked buffering are promising characteristics that, if adopted by engineered systems, may improve their adaptability towards novel stresses. But why should designers, engineers, planners, analysts, and decision makers care about the concept of emergent engineering? The hypothesis explored in this article is that there is an important (and growing) set of problems for which traditional engineering paradigms are now known to be insufficient and where new biological paradigms can be shown to be more effective. These problems are characterised first by the presence of unpredictability that arises across multiple timescales and necessitates that systems display an inherent propensity to be modified and adapted to novel conditions. Mass customisation, operational volatility, and strategic uncertainty are common features of these problems and consequently require systems to display reconfigurability, robustness, and evolvability. We reflect on the conditions under which distributed self-organised systems can display new emergent properties at the system level which are congruent with system objectives yet driven largely by boundedly rational individuals making short-sighted and possibly selfish decisions. Emergent properties often involve some element of surprise and are not necessarily beneficial.
In this article, we describe the necessary conditions for the realisation of a particular emergent property that directly contributes to a system's reconfigurability, robustness, and evolvability and thus represents a potentially important example whereby emergent properties directly contribute to the performance of engineered systems-of-systems. Organisation of this article: Section 2 details some of the challenges that are faced by agile assembly systems. Section 3 discusses complexity in biology and engineering. Section 4 explains how complex systems in nature use degeneracy, networked buffering and evolvability to cope with challenges associated with uncertainty, and applies these concepts to agile assembly systems. Our arguments for the realisation of a particular form of engineering emergence are supported by simulations of evolutionary processes. Because these have a close correspondence to standing challenges in nature-inspired computing, we use Section 4.2 to discuss how previous simulation results are relevant to that field of study and how the challenges being faced in that field have important similarities to those faced in the design of evolvable technological artifacts. Finally, Section 5 discusses some of the benefits and open questions surrounding the approach proposed in this article, and Section 6 concludes.

2 Challenges in agile assembly

The production and assembly of small mechanical, electronic and mechatronic products such as mobile phones, iPods, computer mice, remote controls, watches, washing machine handles and coffee machines is today mostly automated; robots on the industrial shop floor assemble the product parts according to the customers' orders. Assembly is an important component of the manufacturing process that is common to almost all modern industrially produced goods. In some cases, the concepts developed for assembly are applicable to manufacturing in general; in particular for operations such as milling, drilling, turning, painting, marking, quality checks, packing, and others. The robots used for the automation of assembly and other manufacturing processes are either complete industrial robots of diverse types or, increasingly, composed of versatile modules. Prior to the presence of modularity in manufacturing and mass-customisation in products, most manufacturing and assembly lines [Koren et al., 1999] were dedicated and custom-built facilities that mass-produced a specific product. It typically takes several months to build and program such a system, which will then operate under central top-down control. These systems cope poorly with unexpected failures and disturbances; if they can be adapted or changed at all, the re-engineering and re-programming is a work-intensive and error-prone procedure. Such manufacturing facilities are increasingly unsuitable for today's dynamic, customer-driven markets; innovative solutions for coping with new products/services are becoming a standard system requirement. Over the last two or three decades, numerous collaborative projects involving academia and industry have proposed alternative conceptual approaches for the design of adaptive manufacturing facilities. These projects have broadly followed four main paradigms (and variations thereof), some of which are at least partially complementary: flexible, reconfigurable, holonic and agile manufacturing. The following overview of these paradigms is brief; more extensive discussion is available in [Frei, 2010, Frei and Di Marzo Serugendo, 2011c].
Flexible Manufacturing Systems (FMS) [Buzacott and Shanthikumar, 1980, Kaula, 1998, Onori and Groendahl, 1998] are composed of machines that display a predefined set of manufacturing capabilities, which makes them highly sophisticated and potentially difficult to manage [Barata et al., 2005]. The likelihood of paying for unutilised/wrong capabilities is high if such systems were to be implemented within a dynamic manufacturing environment. On the other hand, for companies that are confident that their capability requirements will not change over several years, FMS may provide a suitable solution. Reconfigurable Manufacturing Systems (RMS) [Koren et al., 1999, Mehrabi et al., 2000, ElMaraghy, 2006] aim to develop modular systems in which an engineer can add / remove functionalities according to current demands. Modularity is viewed as an important precondition for promoting shop-floor level agility, and recent efforts in the area of RMS focus on reconfigurable machines [Katz, 2007] and the evolution of product characteristics [ElMaraghy et al., 2008]. While conceptually promising, the elaboration of guidelines for these design principles and the associated control strategies is thus far lacking. Holonic Manufacturing Systems (HMS) [Valckenaers and Van Brussel, 2005, Marik et al., 2007] follow a paradigm based on so-called holarchies, as suggested by Koestler in 1967 [Koestler, 1967]: every item is a whole as well as a component of a larger whole. At their inception, holonic systems were strongly inspired by evidence that many natural systems are organised into dynamic hierarchies; however, with time, these approaches have primarily become top-down solution strategies and consequently have become less suitable for facilitating rapid

adaptation. ADACOR [Leitão, 2004, Leitão and Restivo, 2008] combines holonics with the concept of self-organisation by using principles based on pheromone attraction for task attribution. Agile manufacturing systems are distributed autonomous systems. This paradigm was developed to cope with frequently changing requirements, low production volumes, multiple product variants, as well as perturbations and failures. Mechanical system reconfigurations are facilitated by modular hardware, but (re)programming manufacturing systems often remains a tedious, manual, work-intensive and error-prone procedure. The Architecture for Agile Assembly (AAA) [Rizzi et al., 1997, Kume and Rizzi, 2001, Hollis et al., 2003] considers a distributed system of self-representing cooperative modules equipped with information about their own capabilities and negotiation skills to communicate with their peers. The programming [Gowdy and Rizzi, 1999] is agent-based, but does not consider self-* properties. Recent advances [Hollis et al., 2003, Niemeyer, 2006] in AAA concern mechanical modules in a concept where not only does the robot move (with two degrees of freedom, DoF), but the carriers are also planar motors which move on a platen (two additional DoF). Research into AAA has mainly presented technological solutions for specific manufacturing tasks such as visual gripping, cascaded lenses and special algorithms for object recognition. A similar concept to AAA is seen in a German project known as MiniProd [Gaugel et al., 2004, Hanisch and Munz, 2008]1, which involves a collaboration between several industrial and academic partners. Some system designers have taken inspiration from natural complex systems to build agile manufacturing systems [Ueda, 2006, Leitão, 2008, Frei et al., 2007], with additional influence from Autonomic [Kephart and Chess, 2003], Pervasive Adaptation2 / Ubiquitous3 and Organic [Wuertz, 2008] computing concepts.
An agile manufacturing system can be considered as a multi-agent system, which needs to fulfil specific tasks. Manufacturing resources are agentified; similarly, product orders and parts are represented by agents. Numerous multi-agent control systems for manufacturing have been proposed, ranging from enterprise resource management to order scheduling and shop-floor control [Marik et al., 2007, Shen et al., 2006, Vrba, 2003]. Some projects were deployed in industry [Bussmann, 2000]. Changes on the shop floor can be automated through an ontology-based reconfiguration agent [Al-Safi and Vyatkin, 2010]; this is, however, a centralised top-down approach for managing an otherwise distributed system. For software agents which represent robotic modules to gain more autonomy in achieving their goals, they need a representation of their own body as well as of their relations with their peers and the environment [Frei, 2010, Vallée et al., 2009]. The following introductory subsections explain one of the agile approaches currently being developed - self-organising evolvable assembly systems - which focuses on facilitating evolvability and self-organisation.

1 http://miniprod.com/frame_01.html; website in German
2 http://www.perada.eu
3 http://sandbox.xerox.com/ubicomp

2.1 Evolvable Assembly Systems (EAS)

Evolvable Assembly Systems (EAS) [Onori, 2002, Barata, 2005] consist of robotic modules of varying granularity. A module is either an entire industrial robot with several skills (i.e. screwing, rotating and linearly moving) or a simpler module such as a robotic axis, a gripper, a feeder, or a conveyor having a single skill only. Every module is an embodied agent with self-knowledge (about its skills and physical characteristics) as well as communication/coordination capabilities (to coordinate its work with other modules). Modules engage in coalitions (see Figure 1) to provide the composite skills necessary to assemble the product at hand. For instance, a gripper able to seize and release parts forms a coalition with a linear axis to compose a pick&place skill. Evolvability refers to a system's ability to continuously and dynamically undergo modifications of varying importance in order to maintain or improve competitiveness: from small adaptations, e.g. in the timing and placement of component interactions, to larger changes in system behaviour [Frei, 2010]. To understand evolvability in an assembly systems context, it is important to take into account the mutual causal relations between product design, assembly processes and the assembly system itself, as illustrated in Figure 2 and discussed in [Semere et al., 2008, Frei, 2010]. Each product belongs to a particular product class and each production process refers to a coherent suite of assembly operations which generate a finished product by assembling a set of parts. Production processes, the product design and the assembly system are intimately linked: any change in the product design has an impact on the processes to apply and on the actual assembly system to use. Similarly, any change in a process (for instance replacing a rivet by a screw) may imply a change in the product design, and will almost invariably impact assembly system requirements.
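The coalition idea can be sketched in a few lines of code. This is a purely illustrative toy, not taken from any EAS implementation (the skill names and composition rules are our own): modules advertise simple skills, and a coalition offers the pooled skills of its members plus any composite skill enabled by a known composition rule.

```python
# Toy sketch of EAS-style skill composition (illustrative names only).
# A composition rule maps a set of simple skills to the composite skill
# that their combination enables, e.g. grip + linear motion -> pick&place.
COMPOSITION_RULES = {
    frozenset({"grip", "move_linear"}): "pick_and_place",
    frozenset({"screw", "move_linear"}): "fasten",
}

class Module:
    """An embodied agent that knows its own simple skills."""
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

def coalition_skills(modules):
    """Skills offered by a coalition: pooled simple skills plus any
    composite skill whose required simple skills are all present."""
    pooled = set().union(*(m.skills for m in modules))
    composites = {c for required, c in COMPOSITION_RULES.items()
                  if required <= pooled}
    return pooled | composites

gripper = Module("gripper", {"grip"})
axis = Module("linear_axis", {"move_linear"})
print(coalition_skills([gripper, axis]))
# the coalition offers 'pick_and_place' in addition to the simple skills
```

The gripper alone offers only `grip`; only the coalition with the linear axis unlocks the composite pick&place skill, mirroring the example in the text.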
Evolvability requires seamless integration of new modules independently of their brand or model. Modules that comprise an assembly system either include local controllers, or are associated with separate virtual agents; either way, the modules have some degree of autonomy to make decisions based on local information. The heterogeneity of the modules (nature, type, vendor) does not prevent them from forming a homogeneous agent society at the software level. Software wrappers (also called Agent Machine Interfaces, AMI, in [Barata, 2005]) allow the generic agents to represent any type of robotic module.

Fig. 1 EAS module coalition

Fig. 2 Product, processes and system

Feasible coalitions in EAS are statically created (off-line) by an engineer. Modifying a coalition implies redesigning and re-programming the whole assembly system.

2.2 Self-Organising Assembly Systems (SOAS)

Self-Organising Assembly Systems [Frei et al., 2008, Frei, 2010] extend EAS in the following way: given a product order (generic assembly plan - GAP) provided as input, the modules read task specifications and autonomously compose suitable coalitions with the goal of providing the required skills. Modules typically provide simple skills, and when forming coalitions, they provide composite skills, based on their compatibilities and specific composition rules (details in [Frei, 2010]). Once each task is associated with a coalition - and this has been confirmed by the user - the coalitions arrange themselves in accordance with the shop-floor layout. The modules also derive their layout-specific assembly instructions - LSAI - themselves, based on the GAP. The result of this self-organising process is a new or reconfigured assembly system that will assemble the ordered product. An appropriate assembly system will emerge from a self-organisation process, modelled on the basis of the Chemical Abstract Machine [Berry and Boudol, 1998], as follows. Any new product order triggers a self-organising process, which eventually leads to a new appropriate system. There is no central control authority, although the user may intervene if necessary. Similar to the formation of complex molecular assemblies within a cell [Kurakin, 2009], robotic modules progressively aggregate with each other to fulfil the product order [Frei et al., 2010] (illustrated in Figure 3). Because order specifications define the required task sequence, the self-organisation process becomes regulated so that, under ideal operating conditions, each formed coalition presents a required skill that is executed in the correct operation sequence. This automated process extends beyond layout creation.
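The reaction step of such a CHAM-style process can be sketched as a greedy matching loop. All names here are hypothetical, and this is a simplified sketch rather than the SOAS implementation: each task in the GAP "reacts" with the smallest coalition of still-free modules whose pooled skills cover the task's requirement.

```python
from itertools import combinations

def self_organise(gap_tasks, modules):
    """modules: module name -> set of simple skills it provides.
    gap_tasks: ordered list of required skill sets from the GAP.
    Each task claims the smallest coalition of free modules whose
    pooled skills cover its requirement (a greedy 'reaction' loop)."""
    coalitions, free = [], set(modules)
    for required in gap_tasks:
        for size in range(1, len(free) + 1):
            hit = next((set(combo) for combo in combinations(sorted(free), size)
                        if required <= set().union(*(modules[m] for m in combo))),
                       None)
            if hit:
                coalitions.append((required, hit))
                free -= hit          # members are now bound to this coalition
                break
        else:
            coalitions.append((required, None))   # no feasible coalition
    return coalitions

modules = {"gripper": {"grip"}, "axis": {"move_linear"}, "feeder": {"feed"}}
gap = [{"grip", "move_linear"}, {"feed"}]
print(self_organise(gap, modules))
# the first task is served by {gripper, axis}, the second by {feeder}
```

A real CHAM process would be distributed and asynchronous rather than a central loop; the sketch only shows how coalitions can condense out of a pool of free modules task by task.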
During production, whenever a failure occurs in one or more of the currently used modules of the system, the process may lead to three different outcomes: 1) the current modules adapt their behaviour (change speed, force, task distribution, etc.) to cope with the failure, possibly degrading performance in order to maintain functionality; 2) the module fails to achieve the task and it is replaced by another module, thereby leading to a repaired system; 3) the system is unable to compensate for the failure.
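The three outcomes above can be written down as a small decision sketch (function and module names are illustrative, not from SOAS):

```python
from collections import namedtuple

# skills is a frozenset so that Module instances remain hashable in sets.
Module = namedtuple("Module", ["name", "skills"])

def handle_failure(failed, coalition, spares, required_skill):
    """Return one of the three outcomes described in the text:
    1) 'adapt'   - a remaining coalition member covers the skill,
    2) 'replace' - a spare module with the skill joins the coalition,
    3) 'fail'    - the system cannot compensate."""
    remaining = coalition - {failed}
    if any(required_skill in m.skills for m in remaining):
        return "adapt", remaining
    spare = next((m for m in spares if required_skill in m.skills), None)
    if spare is not None:
        return "replace", remaining | {spare}
    return "fail", remaining

gripper = Module("gripper", frozenset({"grip"}))
axis = Module("axis", frozenset({"move_linear"}))
backup = Module("gripper_2", frozenset({"grip"}))
outcome, repaired = handle_failure(gripper, {gripper, axis}, [backup], "grip")
print(outcome)   # 'replace': the backup gripper repairs the coalition
```

With an empty spare pool the same call returns `'fail'`, corresponding to outcome 3; the sketch omits the production constraints (cost / speed / precision) that would guide the choice in practice.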

The actions taken by the system will depend on the availability of resources and on specific production constraints (cost / speed / precision). An SOAS is thus an EAS with two additional features: 1) modules self-organise to produce a suitable layout for the assembly of the ordered product and 2) the assembly system as a whole self-adapts to production conditions and self-manages its behaviour. The realisation of the SOAS paradigm requires pervasive adaptation in the face of several inter-related types of uncertainty. This uncertainty originates from a lack of perfect knowledge about future system and environmental states and can result in the emergence of sub-optimal system configurations or the creation of sub-optimality through unexpected changes in the environment. Resolutions to such challenges are broadly relevant across complex systems science in general and emergent engineering studies in particular. Importantly, SOAS must incorporate strategies that can allow a system to remain evolvable under complex and ever novel conditions. The remainder of this article outlines concepts that are intended to resolve several important evolvability preconditions. We argue that these concepts can help to facilitate adaptation at the configuration, operation, and design levels of assembly systems and thus could prove invaluable to agile manufacturing paradigms in general and SOAS in particular.

3 Complexity in biology and engineering

In biological evolution, continued species survival requires that incremental adaptive design changes can be discovered that do not lead to a propagation of redesign requirements in other components in the system, i.e. macro-mutation is a negligible contributor to the evolution of complex species. Instead, single heritable (design) changes are found that lead to (possibly context-specific) novel interaction opportunities for a component, flexible reorganisation of component interactions (that can robustly preserve other core system functionalities), and in some cases a subsequent compounding of novel opportunities within the system [Kurakin, 2009]. In other words, the requirement is one of incremental changes in design and compartmentalised, but not necessarily incremental, changes to system behaviour. While occasional slowdowns in the tempo of biological evolution are known to take place

Fig. 3 The chemical abstract machine (CHAM) applied to self-organising assembly systems (SOAS), where GAP stands for the generic assembly plan which is composed of a set of tasks. The blocks in the chemical soup represent the modules, which react with the GAP and with each other to provide the requested skills, forming suitable coalitions where necessary.

(e.g. under stabilising selection), there is no evidence from paleontology or population genetics studies to suggest that biological systems display the same built-up tension or sensitivity to incremental genetic changes as technological systems display towards incremental engineering design changes. The dynamic attributes of biological evolution are perplexing to engineers, especially considering that sophisticated services in biological systems involve the execution of many distinct sub-functions and process pathways. Importantly, the building blocks that generate these sophisticated biological services/traits are not single-purpose devices with predefined functionality but instead display a high degree of functional plasticity and degeneracy. Functional plasticity refers to the presence of multi-functional components (e.g. proteins, molecular assemblies, cells, organisms) that change what function they execute depending on their local context. Primarily observed in biological systems, degeneracy refers to the existence of functionally plastic components (but also modules and pathways) that can perform similar functions (i.e. are effectively interchangeable) in certain conditions, but can perform distinct functions in other conditions, i.e. components are partially overlapping in their functionalities; see Figure 4, [Whitacre et al., 2011]. Degeneracy contributes to local compensatory effects because it provides a biological system with different options for performing a given function, which can be used to compensate for the failure of a component class and helps in dealing with small changes in the requirements associated with the realisation of a particular functional outcome [Edelman and Gally, 2001].
As we discuss in section 4, degeneracy affords a weaker coupling between the functions performed and the components involved in achieving them [Kirschner and Gerhart, 1998], and can lead to emergent forms of system flexibility that increase a system's options for dealing with novel stresses. Within an abstract design space or fitness landscape, one could say that traditionally designed systems find themselves on isolated adaptive peaks while biological systems reside on richly connected neutral plateaux. While complex systems research has repeatedly used the rugged fitness landscape metaphor to advocate for greater emphasis on disruptive/explorative design changes, this is neither required nor observed in biological evolution. To clarify these points, we next discuss conflicts that arise between a system's complexity and its adaptability in designed systems, and then we discuss how these conflicts might be resolved through degeneracy.
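The distinction between pure redundancy and degeneracy can be made concrete with a toy formalisation (entirely our own illustration, with hypothetical component names): represent each component by the function it performs in each local context, and measure the contexts in which two components are interchangeable.

```python
# Each component maps a local context to the function it performs there.
pump_a = {"low_load": "pump", "high_load": "pump"}    # fixed-function device
pump_b = {"low_load": "pump", "high_load": "pump"}    # redundant copy of pump_a
hybrid = {"low_load": "pump", "high_load": "signal"}  # functionally plastic

def interchangeable_in(c1, c2):
    """Contexts in which two components perform the same function."""
    return {ctx for ctx in c1.keys() & c2.keys() if c1[ctx] == c2[ctx]}

# Full overlap -> pure redundancy; partial overlap -> degeneracy.
print(interchangeable_in(pump_a, pump_b))  # both contexts
print(interchangeable_in(pump_a, hybrid))  # only 'low_load'
```

`pump_a` and `hybrid` can substitute for each other under low load but diverge under high load: interchangeable in some conditions, functionally distinct in others, which is exactly the partial overlap that defines degeneracy in the text.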

Fig. 4 a) Functional redundancy, b) functional plasticity

3.1 Evolvability-Complexity conflicts in engineering

While the term complexity generally relates to the interdependence of component behaviour / actions / functions, it is an otherwise ambiguous term and there is no consensus as to its meaning or measurement. In engineering, complexity is often used to describe sophisticated services involving several entities and their multilateral interactions [Brueckner, 2000, Leitão, 2004, Frei and Di Marzo Serugendo, 2011b, Frei and Di Marzo Serugendo, 2011a]. Below we recount a typical narrative surrounding the tension between complexity and design adaptation in systems engineering. Starting with a single device, the number and exactness of functional requirements / specifications / constraints generally influences the proportion of operating conditions that can meet these requirements. While the trade-off between operating constraints and operational feasibility is not always simple (e.g. linear, monotonic) even for a single device, it is widely acknowledged that multi-component services tend to become more sensitive to novel internal and external conditions as more components are added to the system that co-specify the feasible operating conditions of other system components. In particular, the operating requirements placed on each component become more exacting as its function becomes more reliant on the actions / states / behaviours of others, e.g. through direct interaction, through sharing or modifying the same local resources, or indirectly through failure propagation. These challenges are broadly observed and represent important heuristic knowledge for engineers in industries such as biotechnology (e.g. bioreactors, biologic purification), nanotechnology (e.g. production of electrostatically sensitive devices), precision assembly (e.g. propagation of tolerance exceedances), or rigidly automated production lines (e.g. one blocked machine can take the entire system down).
Services achieved through complex engineering artifacts tend to be more fragile to atypical component behaviours or atypical events because a greater proportion of events will exceed the operational tolerance thresholds in at least one device, with the propagation characteristics of these threshold-crossing events determining the likelihood of sub-system and system-wide failure. These common operational challenges contribute to difficulties associated with changing product specifications, changing production processes and changing the design of assembly systems. With systems designed from single-purpose devices that are uniquely suitable for a process-critical function, this establishes a tight coupling between system performance metrics, the reliability of a function, the continued normal operation of the device providing that function, and the continued compatibility of that device with other interacting devices. To reduce the frequency of failures, a design approach is often taken that assumes predictability (e.g. in requirements, operations, perturbations) or relies on empirically driven placement of backup devices, storage / maintenance / preservation facilities, and related fail-safe principles. These design principles often introduce inefficiencies but have proven important in complex engineered systems for reducing failure propagation, for ensuring products are not lost, and for ensuring that adequate time is available to undertake any required system reconfiguration / retooling / repair / redesign. How to achieve adaptation while maintaining higher efficiency in a complex operational setting is not straightforward or obvious. A number of discussions in


the literature have implied that technological artifacts reside near a Pareto-optimal adaptability-efficiency trade-off surface and that the comparatively higher propensity for adaptation in biology is only achievable due to lower levels of efficiency. In the following sections, we discuss conceptually simple principles that are believed to facilitate efficient adaptation in complex biological systems. Along with reviewing these principles, we also discuss recent simulation studies that thus far support the relevance of these principles for adaptation within several classes of complex systems. We then discuss how these concepts can be directly transferred to assembly systems and also comment on their broader relevance within systems engineering.

4 Strategies of natural complex systems: adaptation through degeneracy and networked buffering

4.1 Experimental evidence of degeneracy and networked buffering

To adapt, a system must be provided with access to many distinct options for changing its output or behaviour, and the system must be able to take these options and transform them into innovations that are useful within the context of a particular environment. Theoretical arguments have been put forth over the last decade suggesting that degeneracy supports both of these prerequisites for adaptation [Edelman and Gally, 2001, Whitacre and Bender, 2010a, Whitacre, 2010a] and recently there has been some evidence from simulation studies that has provided some empirical support for these conjectures. For instance, in simulations of genome:proteome mappings and in agent-based models, degeneracy has been found to considerably enhance the number of accessible design change options for a system (see heritable phenotypic variation in [Whitacre and Bender, 2009, Whitacre and Bender, 2010a]). Further studies have found that an unexpectedly high proportion of these options can be utilised as positive adaptations [Whitacre, 2010b, Whitacre et al., 2010] and can sometimes afford further opportunities when these systems are presented with new environments [Whitacre, 2010c]. In attempting to understand how random novelties are transformed into adaptations, it was shown in [Whitacre and Bender, 2010b] that high levels of degeneracy lead to the emergence of pervasive flexibility in how a system organises itself and in this way can allow a decoupling between the robustness of some functions and the modification of others. The means by which this can be achieved has been described as the networked buffering hypothesis in [Whitacre and Bender, 2010b] and is conceptually illustrated using the diagrams in Figure 5.
Shown in each of the panels in Figure 5 are systems of agents, which could represent proteins within a cell, species in a food web, or devices comprising an assembly system. For illustrative purposes, we simplify the scenario so that each agent is only capable of performing one of two distinct functions. The agents are depicted as pairs of connected nodes, and the nodes are positioned such that spatial distance within the diagram indicates similarity in function. In Panels a-c, we show high levels of system degeneracy, i.e. many multi-functional agents that are partially related to one another in function, while Panel d displays a system with no degeneracy. In Panel b, we consider a


situation in which an agent has failed to perform function Z and the system now needs to attempt to perform this function by other means, i.e. by having another agent attempt to take its place. Because there are more agents assigned to function X than are needed, and because of the degeneracy in the system, the agents can undergo a series of role reassignments (as indicated by the arrows with the switch symbols), which provides the system with the means by which to attempt a response to this challenge. In other words, degeneracy allows extra resources related to one function to indirectly support entirely unrelated functions in a system. As shown in Panel c, depending on where we have extra agent resources, there are potentially many different ways in which the system could respond to this unexpected change (as indicated by the additional arrows with switch symbols) and thus there is a greater chance that the system can be reconfigured to deal with novel conditions. Conversely, consider a situation where we now have excess resources related to function Z. There are many different ways in which these resources can be used to support unrelated functions in the system, which is seen by reversing the flow of arrows in Panel c. This implies that small amounts of excess resources can be used in a highly versatile manner with a multiplier effect on overall system robustness. In other words, dramatically less inefficiency is required to achieve a high level of system robustness. We have

Fig. 5 The networked buffering hypothesis illustrated for a multi-agent system


conrmed such attributes within simulation studies of genome:proteome mappings where it was found that degeneracy approximately doubles the robustness that is gained from excess resources in comparison with systems where degeneracy was entirely absent [Whitacre and Bender, 2010b]. In accordance with Ashbys Law of Requisite Variety [Ashby, 1956], robustness is intimately tied to the number of response options that are available to a system. One can immediately see from Panel d that the number of reconguration options becomes greatly reduced when degeneracy is replaced by pure redundancy. While pure redundancy is costly in technological systems because redundant components remain idle, degeneracy allows a system to use its components in dierent ways, so that they are more consistently utilised under dierent system-level requirements. On the other hand, systems which are composed of a wider range of components exhibiting degeneracy are more complex and thus may require greater eorts for system design and control. There are reasons however why degeneracy may not necessarily incur such high design and control costs, which we discuss in detail in [Whitacre et al., 2011] and mention here briey. First, manufacturing systems are often integrated within socio-technical systems and in some circumstances humans may interfere to manage decisions that the control system is not yet capable to handling. Second, at least for the simulation conditions have been explored thus far, it appears that the desirable properties associated with degeneracy can arise through local and boundedly rational decision making and does not require centralised control. This is signicant because technical components are becoming increasingly autonomous, (i.e., self-regulated, self-directed) and the eort for managing such systems might become reduced as a result of these technological advancements. Although design costs may increase from degeneracy (e.g. 
due to lost economies of scale), we have found that changes in only a small percentage of component designs can quickly lead to a networked buffering effect [Whitacre et al., 2011]. Reductions in design costs may also arise from modularity-facilitated mass-customisation, as briefly discussed in Section 5. By exploring the concepts of degeneracy in several studies, we have found that the networked buffering property shown in Figure 5 readily emerges whenever degeneracy is allowed to occur in a system that is forced to repeatedly adapt to novel changes in conditions. Importantly, this network-based flexibility does not need to be explicitly encouraged or planned for in order to arise in these simulations. Such findings are particularly relevant for systems that are forced to respond to novel conditions on a fairly regular basis. While no system can adequately respond to all possible perturbations, and all systems display degraded performance under conditions that are strongly disconnected from previous experiences [Whitacre, 2010b], these simulations have found that high rates of adaptation combined with quantitatively higher robustness [Whitacre and Bender, 2009] place degenerate systems at a considerable advantage in competitive environments that are not fully predictable [Whitacre et al., 2011]. One of the most important conclusions drawn from these studies of degeneracy and networked buffering is that only certain types of robustness will support system evolvability. For instance, in recent experiments we have found that degeneracy provides opportunities for design and operational novelty that are not simply random variations [Whitacre et al., 2011]. Instead, the flexibility afforded by degeneracy can facilitate the emergence of new, highly adaptive system configurations in response to new environmental and internal requirements.
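The advantage of degeneracy over pure redundancy can be illustrated with a minimal sketch (a toy model of our own, not the genome:proteome simulations cited above): agents are modelled as capability sets, task demand is perturbed at random, and a bipartite matching decides whether the system can still satisfy every request. In the degenerate variant the agents' capabilities overlap pairwise in a ring, so local excess capacity can propagate to distant stresses.

```python
import random

def max_served(agents, tasks):
    """Maximum number of task requests satisfiable when each agent
    handles one request it is capable of (Kuhn's matching algorithm)."""
    match = [-1] * len(tasks)          # task index -> agent index

    def try_assign(a, seen):
        for t, need in enumerate(tasks):
            if need in agents[a] and t not in seen:
                seen.add(t)
                if match[t] == -1 or try_assign(match[t], seen):
                    match[t] = a
                    return True
        return False

    return sum(try_assign(a, set()) for a in range(len(agents)))

def survival_rate(agents, trials=300, types=6, load=12):
    """Fraction of random demand perturbations served in full."""
    random.seed(1)                     # same demand sequence for both variants
    ok = 0
    for _ in range(trials):
        demand = [random.randrange(types) for _ in range(load)]
        ok += max_served(agents, demand) == load
    return ok / trials

types = 6
# pure redundancy: two identical single-task agents per task type
redundant = [{i} for i in range(types) for _ in range(2)]
# degeneracy: each agent covers two related task types around a ring
degenerate = [{i, (i + 1) % types} for i in range(types) for _ in range(2)]

print(survival_rate(redundant), survival_rate(degenerate))
```

Both variants have twelve agents and identical total capacity, yet under the fixed seed the degenerate configuration satisfies the full demand in a far larger fraction of trials: any demand the redundant system can serve, the ring can serve as well, but not vice versa.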


In other words, degeneracy may provide a means for what Kirschner and Gerhart [Kirschner and Gerhart, 1998] refer to as facilitated variation, and might be described as a constrained but more evolvable version of Kauffman's Adjacent Possible [Kauffman, 1993]. Together these findings are potentially significant for balancing the needs for stability and change in systems engineering contexts, and they have been used to propose a mechanistic basis by which random variations can be transformed into useful innovations, as discussed in [Whitacre et al., 2011]. We have argued that such findings should be relevant to evolution theory and the application of evolutionary principles to other domains. One domain where we have explored these ideas is evolution-inspired optimisation. Below we describe the role that degeneracy can play in the evolvability of problem representations, and then we devote the remainder of the article to explaining how these basic findings can be transferred to the design of self-organising assembly systems.

4.2 Degeneracy and networked buffering in nature-inspired computing

Studies from several disciplines have attempted to determine those conditions that lead to the positive relationships observed in natural evolution between mutational robustness and evolvability. In computational intelligence, these issues relate directly to concepts of fitness landscape neutrality and the search for high-quality solutions. Fitness landscapes are used extensively in the field of combinatorial optimisation to describe the structural properties of the problem to be optimised. The fitness landscape results directly from the choice of representation as well as the choice of search operators. Consequently, different representations lead to different fitness landscapes and hence to problems of different difficulty (see [Rothlauf, 2006] for an overview). Much research has focused on developing and analysing different problem representations. Inspired by earlier developments in theoretical biology, neutrality (the concept of mutations that do not affect system fitness) has been integrated into problem representations using various approaches such as polyploidy, i.e. introducing multiple copies of the same gene (see [Banzhaf, 1994, Yu and Miller, 2001, Rothlauf and Goldberg, 2003, Knowles and Watson, 2003, Jin et al., 2009]). However, there are theoretical reasons as well as some experimental evidence to suggest that only particular representations of neutrality will support the discovery of novel adaptations. When considering discrete local changes (mutations) in the decision variables of a single solution, the number of distinct accessible solutions is trivially constrained by the dimensionality of the solution space. Under these conditions, any increase in fitness neutrality (i.e. mutational robustness) will reduce the number of accessible alternative solution options [Jin and Trommler, 2010].
While more explorative/disruptive variation operators can increase solution options, nature almost always takes a different approach. In gene regulatory networks and other biological systems, mutational robustness often creates a neutral network that improves access to solution options over long periods of time, e.g. by drifting over neutral regions in a fitness landscape [Ciliberti et al., 2007]. With solution options being a prerequisite of evolutionary adaptability, a strong case has been made that this positive correlation of mutational robustness and solution options is important to the evolvability of biological systems [Ciliberti et al., 2007, Whitacre and Bender, 2010a, Whitacre, 2010a]. Motivated by these developments in biology, some computational intelligence studies have investigated whether increasing neutrality (e.g. designing a many-to-one mapping between genotypes and phenotypes) influences the evolvability of a search process [Banzhaf, 1994, Yu and Miller, 2001, Rothlauf and Goldberg, 2003, Knowles and Watson, 2003, Jin et al., 2009]. A common approach is to introduce genetic redundancy so that more than one copy of a gene performs the same function, i.e. genes that impact the fitness function in the same way [Banzhaf, 1994, Yu and Miller, 2001]. Although early studies suggested that redundant forms of neutrality improve evolvability, others have questioned the utility of fitness landscape neutrality generated through redundant encodings [Knowles and Watson, 2003, Whitacre and Bender, 2009, Whitacre and Bender, 2010a]. One problem with previous representation studies is that neutrality was introduced as a means for exploring a largely predetermined fitness landscape and not as a property that arises as a consequence of development, i.e. genotype:phenotype mappings that are guided by feedback from an external environment. In biology, a considerable amount of neutrality (i.e. mutational robustness) is actively constructed through components that are partly interchangeable, i.e. conditionally compensatory or degenerate. This means that components might appear interchangeable in one environment or a particular genetic background, but lose this functional redundancy in other backgrounds, i.e. interoperability is context dependent. One important consequence is that different points in a neutral region within the fitness landscape will have mutational access to different phenotypes [Whitacre and Bender, 2009, Whitacre and Bender, 2010a].
Recent Evolutionary Computation studies have found that phenotypes accessed in this way can have an adaptive significance in both static and dynamic environments [Whitacre, 2010c, Whitacre, 2010b, Whitacre et al., 2010]. In the latter case, it was further shown that degeneracy enables the emergence of useful forms of genetic diversity in a population, whereby few phenotypic differences are observed in a stable environment but many phenotypic variants can be revealed in the same population after a change in the environment [Whitacre, 2010c]. This conditional robustness in traits is analogous to a phenomenon observed in natural populations known as cryptic genetic variation [Whitacre, 2010c, Whitacre, 2011].
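The contrast between redundant and degenerate neutrality can be made concrete with a small, fully enumerable genotype:phenotype model (an illustrative construction of ours, not taken from the cited experiments). Both mappings below yield a neutral network for the "all functions covered" phenotype, but in the degenerate mapping, where each gene covers two related functions, different points of the neutral network have mutational access to a greater variety of distinct phenotypes.

```python
from itertools import product

GENES, FUNCS = 8, 4
TARGET = frozenset(range(FUNCS))       # phenotype: all functions covered

def redundant(g):
    # genes 2j and 2j+1 are exact copies that only perform function j
    return frozenset(j for j in range(FUNCS) if g[2 * j] or g[2 * j + 1])

def degenerate(g):
    # gene j performs function j mod 4 and can also cover (j+1) mod 4,
    # so compensation is partial and context dependent
    covered = set()
    for j, active in enumerate(g):
        if active:
            covered |= {j % FUNCS, (j + 1) % FUNCS}
    return frozenset(covered)

def neutral_network_stats(phenotype):
    """Size of the neutral network and number of distinct non-neutral
    phenotypes reachable from it by a single mutation."""
    neutral = [g for g in product((0, 1), repeat=GENES)
               if phenotype(g) == TARGET]
    reachable = set()
    for g in neutral:
        for i in range(GENES):
            mutant = list(g)
            mutant[i] ^= 1
            p = phenotype(tuple(mutant))
            if p != TARGET:
                reachable.add(p)
    return len(neutral), len(reachable)

print(neutral_network_stats(redundant))    # (network size, phenotype variety)
print(neutral_network_stats(degenerate))
```

Enumerating all 256 genotypes shows the degenerate mapping produces both a larger neutral network and twice as many mutationally accessible phenotypes as the purely redundant one, in line with the conditional-neutrality argument above.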

4.3 Degeneracy in assembly systems

One of our primary claims in this article is that the same phenomena observed in the evolutionary simulations and Evolutionary Computation studies just discussed can be realised in a systems engineering context. For this to occur, it is an important requirement that agents are capable of functional plasticity and degeneracy. In EAS and SOAS, modules have a fine granularity, which means that the functionalities of an industrial robot are broken down into many small modules (as opposed to only a few in a system with coarse granularity), as illustrated in Figure 6. Finely granular modules may be defined at the level of tools or robotic axes; medium granularity is at robot or machine level, and coarse granularity at manufacturing cell level. Conveyors, too, may be divided into smaller or bigger


units. Logically, the finer the granularity, the more varied the possibilities for recombining the modules to build different systems. This means that, on the one hand, some modules can provide different functionalities in different contexts (functional plasticity), and on the other hand, several types of modules may provide the same functionality (functional redundancy). It all depends on the context and on how coalitions are composed. As an example of functional plasticity, a rotational axis and a vertical linear axis may provide a helicoidal movement (screwing movement), but they may also be part of a Scara-type robot, as illustrated in Figure 7, composed of two rotations around a vertical axis with a vertical translation (thus requiring more partners that can provide an additional rotational axis). As for functional redundancy, a Scara-type functionality may be provided by a coalition of simpler modules, as explained, or it may be provided by an industrial Scara robot as a whole. As another example, functional plasticity is demonstrated when a gripper, usually thought to grab a part between its fingers, lifts a part from inside (see Figure 8), or closes its fingers to push a part. Similarly, functional redundancy means here that not only a finger gripper can handle a part, but also a vacuum gripper (using suction) or an electromagnetic gripper (in cases where the part contains a magnetic material). Furthermore, a robot made for rapid pick&place operations could incorporate a riveting tool to temporarily take over for a failing riveting robot; this would make the robot slower, but not otherwise disturb the system. This is at the same time functional plasticity and redundancy, because the robot can execute functions it was not intended to, and the required function can be achieved in more than one way.

Fig. 6 Modules of varying granularity: a - tool or axis, b - robot or machine, c - cell, d - conveyor tables

Fig. 7 Kinematic diagram of a Scara robot, where the cuboid stands for a vertical translation and the cylinders for rotations around vertical axes.


Fig. 8 The same gripper grabbing parts, once delivered on a stick (left - the gripper fingers grab it from outside, which is the usual procedure) and once in a tube (right - the gripper fingers grab the part from inside).

4.4 Networked buffering in manufacturing and assembly

The concept of networked buffering appears to be broadly applicable to manufacturing and systems engineering. Consider any system comprising a set of multifunctional entities or agents which interact with each other [Whitacre and Bender, 2010b]. Each agent performs a finite number of tasks, where the types of tasks performed are constrained by an agent's functional capabilities and by the environmental requirement for tasks (requests). A system's robustness is characterised by the ability to satisfy tasks under a variety of conditions. A new condition might bring about the failure or malfunctioning of some agents or a change in the spectrum of environmental requests. When a system has many agents that perform the same task, the loss of one agent can be compensated for by others, as can variations in the demands for that task. Stated differently, having an excess of functionally similar agents (excess system resources) provides a buffer against variations in task requests. While the utilisation of such local compensation appears to require ubiquitous excess resources, a buffering network of partly related agents can allow for a distributed response to local perturbations that utilises a small number of excess resources to respond to a variety of seemingly unrelated stresses (see the description of networked buffering). Besides manufacturing systems, which are discussed subsequently, examples where networked buffering could be applicable include self-deploying emergency task forces [Ulieru and Unland, 2004], self-organising displays [Puviani et al., 2011], supply networks [Choi et al., 2001], fleets of transportation vehicles [Whitacre et al., 2011], as well as telecare for elderly persons and families [Camarinha-Matos and Afsarmanesh, 2004]. Each of these systems relies on a myriad of devices and/or persons which deliver services and interact on the basis of local rules and incomplete information.
The system these agents form would be more stable under unexpected failures if other coalition members could substitute for failing ones and if the topology of these compensatory effects formed a connected network. Moreover, the flexibility in who does what means that the addition of only a few excess agent resources can confer exceptional versatility on a system, i.e. robustness can be achieved at relatively higher efficiencies. The translation of this concept to assembly systems is immediate, given the previously described functional redundancy and functional plasticity together with the agents' interactions and self-organisation described in SOAS. Once a module fails (or its neighbour notices that it is no longer responsive), either the module itself or one of its coalition partners looks for a replacement - either only of the failing module's functionality, or of the entire coalition's functionality, depending


on the role of the failing module and the ease of replacing it. In many cases a user would need to confirm the action to be taken. The same basic procedures would also apply when requirements change, that is, when modified or entirely different skills are requested. Either the coalition is able to provide them by adding or exchanging some modules, or a new coalition will be formed using the modules that are available, as illustrated in Figure 9.

Fig. 9 Scenarios of networked buffering in a manufacturing coalition composed of modules

Networked buffering leads to a responsive, changeable system that is error-tolerant and robust against disturbances of many types. As an example, consider a scenario where an assembly system needs to cope with changing requirements. The assembly of a product may usually require a rivet, whereas a variant of the product requires a screw, as illustrated in Figure 10. The product agent may therefore ask for a different process, which the robot setting the rivet may not be able to provide immediately. The robot (or the coalition of modules which compose the robot) may then check if another robot in the shop-floor layout is able to insert a screw, and if it is available to take over this task. Alternatively, the original robot may ask the user to replace the riveting tool with a helicoidal top-axis, and thus transform itself according to the requirements of the task at hand. More generally, having structural diversity amongst functionally similar agents provides greater flexibility in how a function is achieved and consequently a better chance of finding a way of satisfying a task requirement. Another example of networked buffering arises when a module fails; say a gripper becomes blocked, and resetting it does not resolve the problem. The agents must quickly find an alternative way of executing the task in question to avoid system down-time. An immediate solution may be provided by peers which, although also busy with other tasks, have the required skills to temporarily take over the task in question. In parallel, the blocked gripper - or, in case it is not responding any more, one of its coalition partners - will alert the user, who will take further actions. The failing gripper may, for instance, be replaced by a similar one, which will quickly be integrated into the existing agent coalition and take up its functionality.
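The failure-recovery procedure just described can be expressed as a few lines of coordination logic (a schematic sketch under simplified assumptions; the module and skill names are illustrative, not from an actual SOAS implementation): prefer an idle, healthy peer with the required skill, and otherwise let a busy peer take the task over temporarily.

```python
class Module:
    """A shop-floor module characterised by the skills it can provide."""
    def __init__(self, name, skills, busy=False):
        self.name, self.skills = name, set(skills)
        self.busy, self.failed = busy, False

def find_substitute(modules, skill):
    """Pick a replacement provider for `skill` after a module failure."""
    healthy = [m for m in modules if not m.failed and skill in m.skills]
    idle = [m for m in healthy if not m.busy]
    # an idle peer is preferred; a busy peer may temporarily take over
    return (idle or healthy or [None])[0]

coalition = [
    Module("finger_gripper", {"grasp"}),
    Module("vacuum_gripper", {"grasp"}, busy=True),   # busy on another task
    Module("rivet_robot",    {"rivet", "pick_place"}),
]

coalition[0].failed = True            # the finger gripper blocks
backup = find_substitute(coalition, "grasp")
print(backup.name)                    # the busy vacuum gripper steps in
```

In a full system this selection would be followed by the user notification and module-exchange steps described above; the sketch only captures the local substitution decision.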


Fig. 10 Joining two parts by a rivet or a screw; parts 1 and 2 are not modified, but the process of joining them is changed, and thus the tools required by the process are different.

5 Discussion: Practical challenges in encouraging evolvability

Degeneracy has many positive effects; however, there may also be challenges to overcome before the benefits of these concepts can be realised within actual engineered systems. For instance, degeneracy increases a system's complexity, making global/centralised decision making more difficult. Because there are many components available which can achieve the same operation, but in a potentially different way, it becomes more difficult for a central decision-making authority to decide which component should execute which function at which moment. However, in multi-agent simulations it was found that distributed decision making with incomplete information can generate near-optimal system performance, which indicates that there are many unique sequences of actions that can generate a beneficial adaptive response at the system level. The realisation of such properties in practice would, however, need to be evaluated in the context of each specific application domain [Whitacre et al., 2011]. If degeneracy is beneficial to engineered systems facing uncertain future conditions, then it would be important to consider how we might encourage such properties to arise. If we are starting with systems that were designed with an emphasis on reductionist principles and have an architecture that follows a decomposable hierarchy, it would not necessarily be obvious how one might proceed to transform such a system into an architecture with multi-functional agents and efficient buffering networks. In keeping with the evolutionary paradigm, it would also be important that each step taken in modifying the system can be incremental if needed, and that each intermediate form constitutes a viable and competitive system.
One plausible heuristic approach would be to start by focusing on individual components / devices / robots that are infrequently used and to consider how the roles of these components could be expanded, either by applying existing skills to new tasks or through small redesigns that enable the fulfilment of related tasks. The general emphasis would shift from one where each component has a single task to one where component utility is defined by the ability to successfully take on any tasks possible, when and where they are needed. By looking at how agents can be better integrated with the system to satisfy its needs, degeneracy and networked buffering should naturally arise without explicit planning, as was observed in the simulation studies mentioned earlier.
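One way to make this heuristic concrete is a greedy loop (our own sketch; the utilisation proxy, step count, and component names are simplifying assumptions): repeatedly find the task type whose demand is spread over the fewest providers, and extend the skill set of the least-loaded component that does not yet provide it.

```python
def add_degeneracy(skills, demand, steps=2):
    """skills: component -> set of task types it can perform.
    demand: task type -> workload (each demanded task has >= 1 provider).
    Returns a skill map incrementally extended towards degeneracy."""
    skills = {c: set(s) for c, s in skills.items()}
    for _ in range(steps):
        # per-task stress: demand divided by the number of providers
        cover = {t: sum(t in s for s in skills.values()) for t in demand}
        stressed = max(demand, key=lambda t: demand[t] / cover[t])
        candidates = [c for c in skills if stressed not in skills[c]]
        if not candidates:
            break
        # the least-loaded candidate takes on the over-subscribed task
        idle = min(candidates,
                   key=lambda c: sum(demand[t] / cover[t] for t in skills[c]))
        skills[idle].add(stressed)
    return skills

shop = {"robot_a": {"rivet"}, "robot_b": {"screw"}, "robot_c": {"pick"}}
load = {"rivet": 4, "screw": 1, "pick": 1}
print(add_degeneracy(shop, load))
```

Each step is a small, local modification, so every intermediate skill map remains a viable system, in keeping with the incremental requirement stated above.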


While degeneracy is easily achieved in biology (e.g. through gene duplication and divergence), the diversity of degenerate systems could present a cost barrier to the implementation of these ideas. On the other hand, trends towards modularity and mass customisation suggest that requirements of multi-functionality and degeneracy would not necessarily be costly and might already be straightforward to implement within several industries producing individual goods such as customised watches, mobile phones with many variants, personalised medicine, or custom-made furniture. The cost of manufacturing systems can be broken down into two main parts: the cost of purchasing equipment, and the cost of its maintenance, including reconfigurations. It is rather difficult to draw a precise comparison between a more traditional system and one with degeneracy because no one will be willing to build both systems. All that R&D scientists can usually do is compare system (reconfiguration) performances over hours or days and derive conclusions accordingly. However, such scenarios are unable to fully reveal the longer-term advantages of evolvable manufacturing systems. What can be concluded from limited experimentation and accumulated industry experience is that purchasing a set of smaller standard modules and combining them to build various systems according to upcoming needs is often considerably cheaper than acquiring custom-made specialised systems which are optimised to produce a specific set of products but are useless otherwise. It is also generally cheaper to perform maintenance on small interchangeable modules than on a coarse-grained specialised system that needs to be taken off-line for the procedure. Finally, the cost of reconfiguring a system which is specifically made for seamless reconfiguration will generally be lower than that of re-engineering a custom-made system.
Although such issues must be explored in greater detail and validated within specific manufacturing applications, we suspect that there is a growing number of manufacturing domains where the implementation of degeneracy principles should be well worth the effort. Future research will explore design strategies for systematically introducing degeneracy into a system based on the types of localised and incomplete knowledge that one might reasonably expect to measure within an actual manufacturing system.

6 Concluding Remarks

Accessing novelty: In both biology and engineering, the discovery of an improved component design necessitates the exploration of new design variants. We have provided arguments based on genotype:phenotype mappings and conditional neutrality in fitness landscapes to explain how degeneracy enhances access to design options. Degeneracy also enhances a system's access to design novelty because functionally redundant elements retain unique structural characteristics and thus have distinct options for how they can be modified.

Transforming novelty into innovation: The availability of distinct design options is an important prerequisite for innovation; however, new opportunities often come with new challenges. To transform a local novelty into a useful innovation, a system must be flexible (e.g. structurally, behaviorally) enough to accommodate and use


a newly designed device effectively. For instance, design changes sometimes require new specifications for interaction, communication, operation, etc. However, a system must accommodate these new requirements without the loss of other important capabilities and without sacrificing the performance of other core system processes. In other words, the propensity to innovate is enhanced in systems that are robust in their core functions yet flexible in how those functions are carried out [Kirschner and Gerhart, 1998].

Facilitating unexpected opportunities: Because design novelty is not predictable, the flexibility needed to exploit design novelty cannot be pre-specified based on the anticipation of future design changes, either. To support innovation in systems engineering, it appears that this robust yet flexible behaviour would need to be a property that is pervasive throughout the system. Yet within the wider context of a system's development - where each incremental design change involves a boundedly rational and ultimately myopic decision - it also seems that this flexibility must be a property that can readily emerge without foresight. These challenges are generic to system evolvability and are equally relevant in understanding the evolution of system behavior towards changing external requirements or the development of robust responses to unexpected failures. This article has described how a common biological property known as degeneracy can lead to the emergence of a highly flexible and highly adaptive system. The principles are conceptually simple and could prove broadly relevant to the behavior of biological, social, and technological systems. Here we have outlined how these principles can be directly translated into an assembly system, with the result being a highly adaptive and agile manufacturing system.
The realisation of degeneracy may come with the investment cost of numerous robotic modules; however, the gained agility and the ability to let the system co-evolve with requirements are expected to compensate for this, making the principles discussed in this article economically sensible in the near future.

References
[Al-Safi and Vyatkin, 2010] Al-Safi, Y. and Vyatkin, V. (2010). Ontology-based reconfiguration agent for intelligent mechatronic systems in flexible manufacturing. Robotics and Computer-Integrated Manufacturing, 26(4):381-391.
[Ashby, 1956] Ashby, W. (1956). An introduction to cybernetics. Chapman & Hall, London.
[Banzhaf, 1994] Banzhaf, W. (1994). Genotype-phenotype-mapping and neutral variation - a case study in genetic programming. In Parallel Problem Solving from Nature III, volume 866, pages 322-332. Springer.
[Barata, 2005] Barata, J. (2005). Coalition based approach for shop floor agility. Edições Orion, Amadora - Lisboa.
[Barata et al., 2005] Barata, J., Camarinha-Matos, L., and Onori, M. (2005). A multiagent based control approach for evolvable assembly systems. In 3rd IEEE Int. Conf. on Industrial Informatics (INDIN), pages 478-483, Perth, Australia.
[Berry and Boudol, 1998] Berry, G. and Boudol, G. (1998). The chemical abstract machine. Theoretical Computer Science, 96(1):217-248.
[Brueckner, 2000] Brueckner, S. (2000). Return from the ant - synthetic ecosystems for manufacturing control. PhD thesis, Institute of Computer Science, Humboldt-University, Berlin, Germany.
[Bussmann, 2000] Bussmann, S. and Schild, K. (2000). Self-organizing manufacturing control: an industrial application of agent technology. In 4th IEEE Int. Conf. on Multi-Agent Systems, pages 87-94, Boston, MA, USA.

[Buzacott and Shanthikumar, 1980] Buzacott, J. A. and Shanthikumar, J. G. (1980). Models for understanding flexible manufacturing systems. AIIE Transactions, 12(4):339-350.
[Camarinha-Matos and Afsarmanesh, 2004] Camarinha-Matos, L. M. and Afsarmanesh, H. (2004). A multi-agent based infrastructure to support virtual communities in elderly care. Int. J. of Networking and Virtual Organisations, 2(3):246-266.
[Choi et al., 2001] Choi, T., Dooley, K., and Rungtusanatham, M. (2001). Supply networks and complex adaptive systems: Control versus emergence. Journal of Operations Management, 19:351-366.
[Ciliberti et al., 2007] Ciliberti, S., Martin, O. C., and Wagner, A. (2007). Innovation and robustness in complex regulatory gene networks. Proceedings of the National Academy of Sciences, USA, 104(34):13591-13596.
[Edelman and Gally, 2001] Edelman, G. and Gally, J. (2001). Degeneracy and complexity in biological systems. Proc. Natl. Acad. Sc. USA, 98(24):13763-13768.
[ElMaraghy, 2006] ElMaraghy, H. (2006). Flexible and reconfigurable manufacturing systems paradigms. Int. Journal of Flexible Manufacturing Systems, 17(4):261-276.
[ElMaraghy et al., 2008] ElMaraghy, H., AlGeddawy, T., and Azab, A. (2008). Modelling evolution in manufacturing: a biological analogy. CIRP Annals - Manufacturing Technology, 57:467-472.
[Frei, 2010] Frei, R. (2010). Self-organisation in Evolvable Assembly Systems. PhD thesis, Department of Electrical Engineering, Faculty of Science and Technology, Universidade Nova de Lisboa, Portugal.
[Frei and Barata, 2010] Frei, R. and Barata, J. (2010). Distributed systems - from natural to engineered: three phases of inspiration by nature. Int. J. of Bio-inspired Computation, 2(3/4):258-270.
[Frei et al., 2007] Frei, R., Barata, J., and Di Marzo Serugendo, G. (2007). A complexity theory approach to evolvable production systems. In Sapaty, P. and Filipe, J., editors, Int. Workshop on Multi-Agent Robotic Systems (MARS), pages 44-53, Angers, France. INSTICC Press.
[Frei and Di Marzo Serugendo, 2011a] Frei, R. and Di Marzo Serugendo, G. (2011a). Advances in complexity engineering. Accepted for publication in Int. J. of Bio-Inspired Computation.
[Frei and Di Marzo Serugendo, 2011b] Frei, R. and Di Marzo Serugendo, G. (2011b). Concepts in complexity engineering. Int. J. of Bio-Inspired Computation, 3(2):123-139.
[Frei and Di Marzo Serugendo, 2011c] Frei, R. and Di Marzo Serugendo, G. (2011c). Self-organising assembly systems. Accepted for publication in IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews.
[Frei et al., 2008] Frei, R., Di Marzo Serugendo, G., and Barata, J. (2008). Designing self-organization for evolvable assembly systems. In IEEE Int. Conf. on Self-Adaptive and Self-Organizing Systems (SASO), pages 97-106, Venice, Italy.
[Frei et al., 2010] Frei, R., Di Marzo Serugendo, G., and Serbanuta, T. (2010). Ambient intelligence in self-organising assembly systems using the chemical reaction model. J. of Ambient Intelligence and Humanized Computing, 1(3):163-184.
[Gaugel et al., 2004] Gaugel, T., Bengel, M., and Malthan, D. (2004). Building a mini-assembly system from a technology construction kit. Assembly Automation, 24(1):43-48.
[Gowdy and Rizzi, 1999] Gowdy, J. and Rizzi, A. (1999). Programming in the architecture for agile assembly. In IEEE Int. Conf. on Robotics and Automation (ICRA), volume 4, pages 3103-3108.
[Hanisch and Munz, 2008] Hanisch, C. and Munz, G. (2008). Evolvability and the intangibles. Assembly Automation, 28(3):194-199.
[Hollis et al., 2003] Hollis, R., Rizzi, A., Brown, H., Quaid, A., and Butler, Z. (2003). Toward a second-generation minifactory for precision assembly. In Int. Advanced Robotics Program Workshop on Microrobots, Micromachines and Microsystems, Moscow, Russia.
[Jin et al., 2009] Jin, Y., Gruna, R., Paenke, I., and Sendhoff, B. (2009). Evolutionary multiobjective optimization of robustness and innovation in redundant genetic representations.
In IEEE Symposium on Computational Intelligence in Multi-Criteria Decision Making, pages 38-42, Nashville, USA.
[Jin and Trommler, 2010] Jin, Y. and Trommler, J. (2010). A fitness-independent evolvability measure for evolutionary developmental systems. In IEEE Symp. on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), pages 1-8, Montreal, Canada.
[Katz, 2007] Katz, R. (2007). Design principles for reconfigurable machines. Int. J. of Advanced Manufacturing Technology, 34:430-439.

[Kauffman, 1993] Kauffman, S. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press, Oxford, United Kingdom.
[Kaula, 1998] Kaula, R. (1998). A modular approach toward flexible manufacturing. Integrated Manufacturing Systems, MCB University Press, 9(2):77-86.
[Kephart and Chess, 2003] Kephart, J. and Chess, D. (2003). The vision of autonomic computing. IEEE Computer, 36(1):41-50.
[Kirschner and Gerhart, 1998] Kirschner, M. and Gerhart, J. (1998). Evolvability. Proc. Natl. Acad. Sc. USA, 95(15):8420-8427.
[Knowles and Watson, 2003] Knowles, J. D. and Watson, R. A. (2003). On the utility of redundant encodings in mutation-based evolutionary search. LNCS, pages 88-98.
[Koestler, 1967] Koestler, A. (1967). The ghost in the machine. Hutchinson, London, UK.
[Koren et al., 1999] Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritchow, G., Ulsoy, A., and Van Brussel, H. (1999). Reconfigurable manufacturing systems. CIRP Annals - Manufacturing Technology, 48(2):6-12.
[Kume and Rizzi, 2001] Kume, S. and Rizzi, A. (2001). A high-performance network infrastructure and protocols for distributed automation. In IEEE Int. Conf. on Robotics and Automation (ICRA), volume 3, pages 3121-3126, Seoul, Korea.
[Kurakin, 2009] Kurakin, A. (2009). Scale-free flow of life: on the biology, economics, and physics of the cell. Theoretical Biology and Medical Modelling, 6(1).
[Leitão, 2004] Leitão, P. (2004). An agile and adaptive holonic architecture for manufacturing control. PhD thesis, Department of Electrical Engineering, Polytechnic Institute of Bragança, Portugal.
[Leitão, 2008] Leitão, P. (2008). A bio-inspired solution for manufacturing control systems. In IFIP Int. Federation for Information Processing, volume 266, pages 303-314. Springer, Boston.
[Leitão and Restivo, 2008] Leitão, P. and Restivo, F. (2008). Implementation of a holonic control system in a flexible manufacturing system.
Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 38(5):699709. [Marik et al., 2007] Marik, V., Vyatkin, V., and Colombo, A., editors (2007). Holonic and Multi-Agent Systems for Manufacturing. Springer, Heidelberg. [Mehrabi et al., 2000] Mehrabi, M., Ulsoy, A., and Koren, Y. (2000). Recongurable manufacturing systems: Key to future manufacturing. Journal of Intelligent Manufacturing, 11:403419. [Niemeyer, 2006] Niemeyer, C. (2006). Pick and Place in a Minifactory environment. PhD thesis, Swiss Federal Institute of Technology (ETH), Zrich, Switzerland. u [Onori, 2002] Onori, M. (2002). Evolvable assembly systems - a new paradigm? In 33rd Int. Symposium on Robotics (ISR), pages 617621, Stockholm, Sweden. [Onori and Groendahl, 1998] Onori, M. and Groendahl, P. (1998). MARK III, A new approach to high-variant, medium-volume Flexible Automatic Assembly Cells. Robotica, Special Issue, 16(3):357368. [Puviani et al., 2011] Puviani, M., Di Marzo Serugendo, G., Frei, R., and Cabri, G. (2011). A method fragments approach to methodologies for engineering self-organising systems. Accepted for publication in ACM Transactions on Autonomous and Adaptive Systems (TAAS). [Rizzi et al., 1997] Rizzi, A., Gowdy, J., and Hollis, R. (1997). Agile assembly architecture: an agent based approach to modular precision assembly systems. In Int. Conf. on Robotics and Automation (ICRA), pages 15111516, Albuquerque, New Mexico. [Rothlauf, 2006] Rothlauf, F. (2006). Representations for Genetic and Evolutionary Algorithms. Springer, 2nd edition. [Rothlauf and Goldberg, 2003] Rothlauf, F. and Goldberg, D. E. (2003). Redundant representations in evolutionary computation. Evolutionary Computation, 11(4):381415. [Semere et al., 2008] Semere, D., Onori, M., Maei, A., and Adamietz, R. (2008). Evolvable assembly systems: coping with variations through evolution. Assembly Automation, 28(2):126133. [Shen et al., 2006] Shen, W., Hao, Q., Yoon, H. J., and Norrie, D. H. (2006). 
Applications of agent-based systems in intelligent manufacturing: an updated review. Advanced Engineering Informatics, 20:415431. [Ueda, 2006] Ueda, K. (2006). Emergent synthesis approaches to biological manufacturing systems. In 3rd Int. CIRP Conf. on Digital Enterprise Technology (DET), Keynote paper, Setubal, Portugal.

23 [Ulieru and Unland, 2004] Ulieru, M. and Unland, R. (2004). Emergent e-logistics infrastructure for timely emergency response management. In Di Marzo Serugendo, G., Karageorgos, A., Rana, O., and Zambonelli, F., editors, Engineering Self-Organizing Systems: Nature Inspired Approaches to Software Engineering, volume 2977 of LNAI, pages 139156. Springer Berlin Heidelberg. [Valckenaers and Van Brussel, 2005] Valckenaers, P. and Van Brussel, H. (2005). Holonic manufacturing execution systems. CIRP Annals - Manufacturing Technology, 54(1):427432. [Valle et al., 2009] Valle, M., Kaindl, H., Merdan, M., Lepuschitz, W., Arnautovic, E., and e e Vrba, P. (2009). An automation agent architecture with a reective world model in manufacturing systems. In IEEE Int. Conf. on Systems, Man and Cybernetics (SMC), pages 305310. IEEE Press. [Vrba, 2003] Vrba, P. (2003). MAST: manufacturing agent simulation tool. In IEEE Int. Conf. on Emerging Technologies and Factory Automation (ETFA), pages 282287, Lisbon, Portugal. [Whitacre, 2010a] Whitacre, J. (2010a). Degeneracy: a link between evolvability, robustness and complexity in biological systems. Theoretical Biology and Medical Modelling, 7(6). [Whitacre, 2011] Whitacre, J. (2011). Genetic and environment-induced pathways to innovation: on the possibility of a universal relationship between robustness and adaptation in complex biological systems. Evolutionary Ecology, pages 111. [Whitacre and Bender, 2010a] Whitacre, J. and Bender, A. (2010a). Degeneracy: a design principle for achieving robustness and evolvability. J. Theoretical Biology, 263(1):143153. [Whitacre and Bender, 2010b] Whitacre, J. and Bender, A. (2010b). Networked buering: a basic mechanism for distributed robustness in complex adaptive systems. Theoretical Biology and Medical Modelling, 7(20). [Whitacre et al., 2011] Whitacre, J., Rohlfshagen, P., Bender, A., and Yao, X. (2011). 
Evolutionary mechanics: new engineering principles for the emergence of exibility in a dynamic and uncertain world. Accepted for publication in Journal of Natural Computing. [Whitacre, 2010b] Whitacre, J. M. (2010b). Evolution-inspired approaches for engineering emergent robustness in an uncertain dynamic world. In Conf. on Articial Life XII, pages 559560, Odense, Denmark. [Whitacre, 2010c] Whitacre, J. M. (2010c). Genetic and environment-induced innovation: complementary pathways to adaptive change that are facilitated by degeneracy in multi-agent systems. In Conf. on Articial Life XII, pages 431432, Odense, Denmark. [Whitacre and Bender, 2009] Whitacre, J. M. and Bender, A. (2009). Degenerate neutrality creates evolvable tness landscapes. In WorldComp-2009, Las Vegas, Nevada, USA. [Whitacre et al., 2010] Whitacre, J. M., Rohlfshagen, P., Yao, X., and Bender, A. (2010). The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. In 11th Int. Conf. on Parallel Problem Solving from Nature (PPSN), pages 284293, Krakow, Poland. [Wuertz, 2008] Wuertz, R., editor (2008). Organic computing. Understanding Complex Systems. Springer, Berlin Heidelberg. [Yu and Miller, 2001] Yu, T. and Miller, J. F. (2001). Neutrality and the evolvability of boolean function landscape. In Proceedings of the 4th European Conference on Genetic Programming, pages 204217. Springer-Verlag London, UK.

The Role of Degenerate Robustness in the Evolvability of Multi-agent Systems in Dynamic Environments
James M. Whitacre (1), Philipp Rohlfshagen (1), Axel Bender (2), and Xin Yao (1)

(1) CERCIA, School of Computer Science, University of Birmingham, Birmingham B15 2TT, United Kingdom
{j.m.whitacre,p.rohlfshagen,x.yao}@cs.bham.ac.uk
(2) Land Operations Division, Defence Science and Technology Organisation, Edinburgh, Australia
axel.bender@dsto.defence.gov.au

Abstract. It has been proposed that degeneracy plays a fundamental role in biological evolution by facilitating robustness and adaptation within heterogeneous and time-variant environments. Degeneracy occurs whenever structurally distinct agents display similar functions within some contexts but unique functions in others. In order to test the broader applicability of this hypothesis, especially to the field of evolutionary dynamic optimisation, we evolve multi-agent systems (MAS) in time-variant environments and investigate how degeneracy amongst agents influences the system's robustness and evolvability. We find that degeneracy freely emerges within our framework, leading to MAS architectures that are robust towards a set of similar environments and quickly adaptable to large environmental changes. Detailed supplementary experiments, aimed particularly at the scaling behaviour of these results, demonstrate a broad range of validity for our findings and suggest that important general distinctions may exist between evolution in degenerate and non-degenerate agent-based systems.

1 Introduction
The field of evolutionary dynamic optimisation (e.g., [3,8]) is concerned with the application of evolutionary algorithms (EA) to dynamic optimisation problems (DOP). In DOP, conditions vary frequently, and optimisation methods need to adapt their proposed solutions to time-dependent contexts (tracking of the optimum). EA are believed to be excellent candidates to tackle this particular class of problems, partially because of their correspondence with natural systems, the archetypal systems exposed to inherently dynamic environments. Here we examine the properties that are believed to facilitate the positive relationship between mutational robustness and evolvability that takes place in natural evolution. In computational intelligence, these issues relate directly to concepts of fitness landscape neutrality and the search for high-quality solutions. Fitness landscapes are used extensively in the field of combinatorial optimisation to describe the structural properties of the problem to be optimised. The fitness landscape results directly from the choice of representation as well as the choice of search operators. Subsequently, different representations lead to different fitness landscapes and hence to problems of different difficulty (see [9] for an overview). Much research has focused on developing and analysing
R. Schaefer et al. (Eds.): PPSN XI, Part I, LNCS 6238, pp. 284–293, 2010. © Springer-Verlag Berlin Heidelberg 2010


different problem representations. Inspired by earlier developments in theoretical biology, neutrality (the concept of mutations that do not affect system fitness) has been integrated into problem representations using various approaches such as polyploidy (see [1,18,10,7,6]). However, there are theoretical reasons as well as some experimental evidence to suggest that only particular representations of neutrality will support the discovery of novel adaptations. Edelman and Gally have proposed that degeneracy, a common source of stability against genetic mutations and environmental changes, creates particular types of neutrality that increase access to distinct heritable phenotypes and support a system's propensity to adapt [5]. Before describing Edelman and Gally's hypothesis on the mechanics of evolution, we first define some biological concepts (evolvability, robustness, redundancy and degeneracy) with special emphasis on their meaning to optimisation. Evolvability in biology is concerned with the inheritance of new and selectively beneficial phenotypes. It requires 1) phenotypic variety (PV), i.e. an ability to generate distinct heritable phenotypes, and 2) that some of this phenotypic novelty can be transformed into positive adaptations [16,17,14]. Similarly, evolvability in optimisation describes an algorithm's ability to sample solutions of increasing quality. Robustness has several meanings in optimisation that mostly relate to the maintenance of adequate fitness values. In robust optimisation, robustness refers to the insensitivity of a solution's fitness to minor alterations of its decision variables. In dynamic optimisation, robustness is often defined as the insensitivity of a solution's fitness to perturbations in the objective function's parameters over time. In biology, redundancy and degeneracy often contribute to the robustness of traits [5]. Redundancy means redundancy of parts and refers to identical components (e.g.
proteins, people, vehicles, mechanical tools) with identical functionality (see Fig. 1b). Redundant components can often substitute for one another and thus contribute towards a fail-safe system. In contrast, degeneracy arises when similarities in the functions of components are only observed for certain conditions. In particular, while diverse components sometimes can be observed performing similar functions (many-to-one mapping), components are also functionally versatile (one-to-many mapping) with the actual function performed at a given time being dependent on the context. For degeneracy to arise, a component must have multiple context-induced functions of which some (but not all) are also observed in another component type. In a landmark paper [5], Edelman and Gally present numerous examples where degeneracy contributes to the stability of biological traits. They hypothesize that degeneracy may also fundamentally underpin evolvability by supporting the generation of PV. In particular, degenerate components stabilize conditions where they are functionally compensatory; however, they also retain unique structural characteristics that lead to a multiplicity of distinct functional responses outside of those conditions. These differential responses can occasionally have distinct phenotypic consequences [16] that may emerge as selectively relevant adaptations when presented with the right environment, cf. [5,16,14,15]. Edelman and Gally's hypothesis describes degeneracy as a mechanistic facilitator of both robustness and adaptation that, in principle, could be applied outside biological contexts [16]. As described in [5,17], degeneracy is ubiquitous throughout natural systems that undergo parallel problem-solving. Yet until recently, it has not

informed the design and development of nature-inspired algorithms. Here we present evidence that degeneracy may provide a new (representational) approach to improve evolvability throughout EA execution in both static and dynamic environments. This approach could be applicable for many problems that are naturally modeled by systems with autonomous and functionally versatile agents that must survive within a heterogeneous environment.

2 The Role of Degeneracy in Evolution


When considering discrete local changes (mutations) in the decision variables of a single solution, the number of distinct accessible solutions is trivially constrained by the dimensionality of the solution space. Under these conditions, any increase in fitness neutrality, i.e. mutational robustness, will reduce PV. While more explorative/disruptive variation operators can increase PV, nature almost always takes a different approach. In gene regulatory networks and other biological systems, mutational robustness often creates a neutral network that improves access to PV over long periods of time, e.g. by drifting over neutral regions in a fitness landscape [4]. With PV being a prerequisite of evolutionary adaptability, a strong case has been made that this positive correlation of mutational robustness and PV is important to the evolvability of biological systems [4,16,14]. Inspired by these developments, some computational intelligence studies have investigated whether increasing neutrality (e.g. designing a many-to-one mapping between genotypes and phenotypes) influences the evolvability of a search process [1,18,10,7,6]. A common approach is to introduce genetic redundancy so that more than one copy of a gene performs the same function [1,18]. Although some researchers have indicated that redundant forms of neutrality improve evolvability, others have questioned the utility of fitness landscape neutrality generated through redundant encodings [7,15,16]. In the next section we describe, in detail, the computational study used to evaluate Edelman and Gally's hypothesis, including the details of the experimental setup. The proposed model provides the basis for simulating the evolution of a population of multi-agent systems (MAS) and depends on a minimal set of parameters that provide sufficient degrees of freedom to study the system properties (redundancy, degeneracy, robustness and evolvability) that we are interested in.
The model (including the fitness function) is formally the same as the one developed in [15]. The study in [15] investigated degeneracy's relationship to genetic neutrality and evolvability and found that degenerate forms of genetic neutrality increase PV while neutrality from redundancy does not. In [16] we expanded on these results and found evidence that neither mutational robustness nor the size of the neutral network in a fitness landscape guarantees high PV, unless degenerate neutrality is present. The studies in [15,16] investigated PV only within the local vicinity of a static neutral network. While this allowed for comparisons with recent biologically-inspired models (e.g. [4]), it was not within their scope to assign a selective relevance to heritable phenotypic variations. Thus, while previous studies were promising for Edelman and Gally's hypothesis, there has yet to be direct evidence that PV facilitated by degeneracy leads to higher rates of adaptive improvement. In the following we outline a set of experimental


conditions that allow us, for the first time, to evaluate Edelman and Gally's claim that degeneracy facilitates evolvability (and not just PV).

3 Computational Study and Experimental Setup


Each MAS M = (a_1, . . . , a_n) consists of n = 30 agents and each agent is able to perform two types of tasks, a_i = (a_{i1}, a_{i2}), where 0 < a_{i1} < a_{i2} ≤ m. We have chosen a value of m = 20. This simple model is sufficient to allow for measurable degrees of redundancy and degeneracy: any two agents a_i and a_j, i ≠ j, are considered unique with respect to one another if ∀ a_{ik} ∈ a_i : a_{ik} ∉ a_j. Redundancy with respect to two agents, on the other hand, is defined as ∀ a_{ik} ∈ a_i : a_{ik} ∈ a_j. If a pair of agents is neither unique nor redundant, it is considered degenerate. A system-wide measure of degeneracy (redundancy) of M then corresponds to the fraction of all unique pair-wise comparisons of all agent pairings that are degenerate (redundant). Each agent may devote its resources (e.g., time or energy) to the two tasks it is able to carry out. For instance, if agent a_i is able to carry out tasks 1 and 2, it could devote 30% of its resources to task 1 and 70% to task 2. We subsequently define a global resource allocation vector R = (r_1, . . . , r_n), where each resource allocation r_i is a pair (r_{i1}, r_{i2}) with 0 ≤ r_{ij} ≤ 1 and r_{i1} + r_{i2} = 1; the number r_{ij} denotes the fraction of resources that agent a_i devotes to its task a_{ij}. The available resources may be allocated dynamically using a local decision-making process without global control. In order to do so efficiently, we discretise the continuous range of each element r_{ij} into 11 segments {0, 1/10, . . . , 1}. For each iteration of this procedure, we consider every element r_i (without replacement) and perform a local search that systematically increases or decreases the value r_{i1} by 1/10, doing the opposite for r_{i2} (such that r_{i1} + r_{i2} = 1). We do this as long as the fitness of the MAS (see below) improves (or the bounds of r_{ij} have been reached). This step is repeated until no further improvements may be made across all elements of R.
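The pairwise classification and system-wide measure just described can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; agents are represented as task-type pairs following the model above, and all function names are ours):

```python
def pair_relation(a, b):
    """Classify an agent pair: 'unique' if the agents share no task types,
    'redundant' if their task sets coincide, else 'degenerate'."""
    shared = set(a) & set(b)
    if not shared:
        return "unique"
    if set(a) == set(b):
        return "redundant"
    return "degenerate"

def system_measure(mas, kind):
    """Fraction of all pairwise agent comparisons classified as `kind`."""
    n = len(mas)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(pair_relation(mas[i], mas[j]) == kind for i, j in pairs) / len(pairs)

# Toy MAS of 4 agents, each a pair of task types:
mas = [(1, 2), (1, 2), (2, 3), (4, 5)]
print(system_measure(mas, "degenerate"))  # 2 of the 6 pairings partially overlap
```

Because each agent holds exactly two task types, redundancy reduces to identical task sets and degeneracy to a strict partial overlap, which is what the set intersection test above checks.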
Each MAS is exposed to s = 10 distinct scenarios at any one time: each scenario s_i specifies a set of demands for each of the m task types, s_i = (s_{i1}, . . . , s_{im}), where 0 ≤ s_{ij} ≤ n. We also impose that the sum of all demands equals the size of the MAS: Σ_{j=1}^{m} s_{ij} = n. In order to generate the s scenarios, a seed scenario s_0 is generated randomly and the remaining s − 1 scenarios are then generated by means of a random walk of length 10 (volatility) that always starts from s_0. For each step of the random walk, a pair of task-types is chosen uniformly at random and the demand for one of the chosen task-types is increased by a value of 1, the other is decreased by a value of 1 (subject to staying within bounds; if this operation should be unsuccessful, a new pairing of task-types would be chosen). It follows that the total demand of the scenario remains constant but its distribution changes. The set of environments changes every 200 generations (of the genetic algorithm; see below), either moderately or drastically. For moderate changes, the seed for the new set of scenarios is randomly selected from the previous set (excluding the original s_0). For drastic changes, on the other hand, a new seed scenario is generated uniformly at random. The remaining scenarios are generated as before.
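The scenario-generation procedure can be sketched as follows (a minimal sketch under the stated parameters s = 10, m = 20, n = 30; the function names and the uniform seeding scheme are illustrative):

```python
import random

def seed_scenario(m, n, rng):
    """Random demand vector over m task types whose demands sum to n."""
    s = [0] * m
    for _ in range(n):
        s[rng.randrange(m)] += 1
    return s

def random_walk(s0, steps, n, rng):
    """Derive a scenario from s0: each step moves one unit of demand between
    a randomly chosen pair of task types, so the total demand stays fixed."""
    s = list(s0)
    for _ in range(steps):
        while True:  # retry until a within-bounds pairing is found
            i, j = rng.sample(range(len(s)), 2)
            if s[i] < n and s[j] > 0:
                s[i] += 1
                s[j] -= 1
                break
    return s

rng = random.Random(42)
s0 = seed_scenario(m=20, n=30, rng=rng)
scenarios = [s0] + [random_walk(s0, steps=10, n=30, rng=rng) for _ in range(9)]
assert all(sum(s) == 30 for s in scenarios)  # total demand is conserved
```

Each derived scenario is a length-10 walk away from the same seed, so the set is a cluster of similar but non-identical demand distributions, matching the "volatility" construction above.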


The distribution of resources within a MAS (as described above) occurs as a direct response to the environmental conditions (i.e., demands) experienced by the system. We denote the output of a MAS by the vector O = (o_1, . . . , o_m), where o_i is the sum of resources dedicated to task-type i: o_i = Σ_{j=1}^{n} Σ_{k=1}^{2} r_{jk} [a_{jk} = i], where [·] returns 1 if the containing statement is true. The fitness of a MAS under environment s_i is then the difference between its output O and the demand imposed by the environment s_i: F(M, s_i) = Σ_{j=1}^{m} max{0, s_{ij} − o_j}, where o_j ∈ O approximates an optimal allocation of resources under s_i given the capabilities of M. The robustness of the MAS is subsequently defined as the average fitness across all scenarios, R(M, {s_1, . . . , s_s}) = (1/s) Σ_{j=1}^{s} F(M, s_j). This measure was chosen for simplicity, although we found that robustness measurements that incorporated fitness thresholds did not appear to alter our basic findings. The vector O is obtained on-the-fly with respect to each s_i encountered. However, the optimality of the resource allocation is strictly dependent on the task-types contained within the MAS. We thus use a genetic algorithm (GA) based on deterministic crowding to evolve a population of MAS (i.e., M) towards a specific set of scenarios. Prior to the algorithm's execution, m unique agent-types (i.e., pairings of task-types) are constructed from the m = 20 task-types and stored in a set T. The initial population P, of size N = 20, is then created by sampling (with replacement) from T to obtain MAS that consist exclusively of pairwise unique or redundant agent-types. During evolution, two parents are randomly selected from the population (without replacement) and subjected to uniform crossover (element-wise probability of 0.5) with probability 1. Each resulting offspring has exactly one element (agent-type) mutated and then replaces the genotypically more similar parent if its fitness is at least as good.
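The output, fitness and robustness definitions translate directly into code (a sketch under the notation above; here the per-scenario allocation is supplied directly rather than produced by the local search, and all names are ours):

```python
def output(mas, alloc, m):
    """o_i: total resources the MAS dedicates to task type i (types numbered 1..m)."""
    o = [0.0] * (m + 1)  # index 0 unused
    for (t1, t2), (r1, r2) in zip(mas, alloc):
        o[t1] += r1
        o[t2] += r2
    return o

def fitness(mas, alloc, scenario, m):
    """F(M, s_i) = sum_j max{0, s_ij - o_j}: total unmet demand (0 is optimal)."""
    o = output(mas, alloc, m)
    return sum(max(0.0, scenario[j] - o[j]) for j in range(1, m + 1))

def robustness(mas, allocs, scenarios, m):
    """R(M, {s_1..s_s}): average fitness across all scenarios."""
    return sum(fitness(mas, a, s, m) for a, s in zip(allocs, scenarios)) / len(scenarios)

# Two agents, three task types; demand of 1 each for tasks 1 and 2:
mas = [(1, 2), (2, 3)]
alloc = [(0.5, 0.5), (1.0, 0.0)]
scenario = [0, 1, 1, 0]  # index 0 unused
print(fitness(mas, alloc, scenario, m=3))  # task 1 is undersupplied by 0.5
```

Note that F penalises only unmet demand, so oversupplying a task type carries no cost; minimising F is therefore equivalent to covering as much of the demand vector as the MAS capabilities allow.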
Mutation changes the functional capabilities of a single agent and thereby determines whether degeneracy may arise during evolution. The mutation operator has been designed with the following considerations in mind: (a) the search space is to be of the same size in all experiments; (b) in some experiments both redundancy and degeneracy can be selected for during the evolutionary process. Each position in M is occupied by a specific agent-type and the mutation operator replaces exactly one such agent-type with a new one. The agent-types available at each position are determined a priori and remain constant throughout the execution of the algorithm. In the fully restricted case (no degeneracy), the options at each position are given by the set T (which was also used to initialise the population). It follows that a purely redundant MAS remains redundant after mutation. For experiments in which the MAS can evolve degenerate architectures, each position i has a unique set of options T_i which closely resembles the set T but allows for a partial overlap in functions: each T_i contains the same task-types as T but half its members (chosen randomly) have exactly one element per task-type pairing altered randomly. The mutation operator is illustrated in Fig. 1b: agents from both system classes have access to the same number of task type pairings (mutation options are shown as faded task type pairings), hence the search space sizes are identical. In the redundant case, mutation options are defined in order to prevent degeneracy. In the degenerate case, it is evident that the agents' capabilities may be unique, redundant, or may partially overlap due to slightly altered task type pairings for each agent.
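The per-position option sets and the mutation operator can be sketched as follows (illustrative only; the text does not specify how an altered task type is chosen, so the uniform random replacement below is our assumption):

```python
import random

def build_option_sets(T, n, allow_degeneracy, rng, m=20):
    """Predetermined mutation options for each of the n MAS positions.
    Redundant case: every position shares the baseline set T, so mutation
    cannot introduce degeneracy. Degenerate case: each position gets a
    variant T_i of T in which half the agent-types (chosen at random)
    have exactly one task of the pairing altered."""
    if not allow_degeneracy:
        return [list(T) for _ in range(n)]
    options = []
    for _ in range(n):
        Ti = [list(t) for t in T]
        for idx in rng.sample(range(len(T)), len(T) // 2):
            k = rng.randrange(2)  # which task of the pairing to alter
            new = rng.randrange(1, m + 1)
            while new == Ti[idx][1 - k]:  # keep the two tasks distinct
                new = rng.randrange(1, m + 1)
            Ti[idx][k] = new  # assumption: replaced by a random task type
        options.append([tuple(t) for t in Ti])
    return options

def mutate(mas, options, rng):
    """Replace exactly one agent-type with one of its position's options."""
    child = list(mas)
    i = rng.randrange(len(child))
    child[i] = rng.choice(options[i])
    return child
```

Since |T_i| = |T| for every position, the number of mutation options per position (and hence the search space size) is identical in the redundant and degenerate configurations, matching consideration (a) above.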


4 Experimental Results
In our experiments, a MAS architecture (i.e. the specification of agent task capabilities) evolves to maximise robustness within a set of environmental scenarios. To evaluate Edelman and Gally's hypothesis, we place different restrictions on the architectural properties that can evolve in a MAS (see mutation operator in Section 3), preventing degeneracy from arising in some cases. We then evaluate if degeneracy improves adaptation properties during static and dynamic environmental conditions.

4.1 Robustness, Evolvability in Static (Heterogeneous) Environments

In Fig. 1a, for the 200 generations before the environment changes, we see that, when degeneracy is allowed to emerge, the MAS evolves higher robustness towards the set of environmental scenarios. This finding is not intuitively expected considering that: systems are the same size (and solution spaces are constrained to identical sizes), MAS are presented with the same scenarios, agents have access to the same task types and, within a noiseless environment, all MAS evolve within a unimodal fitness landscape that contains the same optimal fitness value. In our view, there are two factors that primarily contribute to these observed differences in evolved robustness: 1) evolvability within the static noisy environment (discussed below) and 2) differences in the robustness potential of a system (discussed in the networked buffering hypothesis in [17]). Conceptually, evolvability is about discovering heritable phenotypic improvements. In Fig. 1d, we record the probability distribution for the time it takes the MAS population to find a better solution. As can be seen, degenerate architectures find adaptive improvements more quickly. An analysis of improvement size vs. fitness finds this relationship is similar for the two types of MAS, thus suggesting the faster adaptation rate is largely responsible for the divergence in fitness from generation 0 to 200.
4.2 Evolvability in Dynamic Environments

In a dynamic environment, evolvability is no longer merely about a propensity for discovering improvements but is also about sustaining high fitness throughout and after environmental changes. As can be seen from Fig. 1a and c, in both redundant and degenerate MAS, robustness drops every 200 generations when environmental change is imposed. This drop reflects declines in fitness across the population. However, we can make the following noteworthy observations. When evolution of degeneracy is enabled, MAS populations can adapt better to change than MAS with purely redundant architectures. Except for the decline of fitness at generation 200, all subsequent drops (at generations 400, 600, etc.) are smaller in the degen experiments than in the redun experiments, irrespective of whether the scenario changes are moderate or drastic. With every change in the set of scenarios, MAS that cannot evolve degenerate architectures appear to drop in performance by similar amounts. The only exception is the first adaptation after a change of environmental conditions (i.e. the time period from generation 201 to 400) where there is some overall improvement when the environmental change is moderate (Fig. 1a), and some overall deterioration when the change is drastic (Fig. 1c). MAS that can evolve degeneracy, on the other hand, have some capacity to adapt to the nature of change. From



Fig. 1. (a): When degeneracy is/is not permitted in the MAS architecture, we label these as degen/redun. The main graph plots robustness evolved over time with smaller graphs of collective mean fitness (top-left) and degeneracy (for the MAS where it is allowed to emerge, top-right). Environmental changes (every 200 gen.) are moderate (see Experimental Setup). (b): Degenerate and redundant MAS. Agents (depicted as pairs of connected nodes) can perform 2 different task types. Each MAS (top: redundant; bottom: degenerate) consists of 4 agents and the faded pairings indicate the predetermined set of options the mutation operator may choose from. (c): MAS evolve in conditions where environmental changes are dramatic. (d): Histogram of the number of offspring sampled before an improvement is found (stability time length). Conditions are the same as (a) except environmental changes occur every 400 generations.

environmental change to environmental change, the drop in fitness/robustness becomes smaller. When plotting the collective mean fitness (i.e. the area under the fitness/robustness curve between two consecutive environmental changes [8]), we do not only observe this adaptation in experiments with moderately changing environments (top-left graph


in Fig. 1a) but we also see overall adaptation levels improve over time even when the environmental changes are drastically different (top-left graph in Fig. 1c). Comparing this with the amount of degeneracy integrated within the MAS architecture (top-right graphs in Fig. 1a and c), we see that the collective mean fitness improves as degeneracy is integrated into the system. Furthermore, the degeneracy-enabled capacity to adapt is better when changes in the environment are moderate or correlated; a proposed precondition for continuous adaptation in DOP (see [2]). It is admittedly difficult, however, to directly evaluate changes in the rate of adaptation (e.g. as we did for a static environment in Fig. 1d) in the dynamic case because fitness differences at the beginning of each epoch act to confound such an analysis. We note, however, that in somewhat similar MAS models, experimental conditions were established that can more clearly demonstrate an acceleration in adaptation rates during degenerate MAS evolution within a dynamic environment [12]. When we make the scale of our model larger (i.e. by increasing MAS size, T, and random walk size by the same proportion), the differences between degenerate and redundant MAS in robustness, evolvability and collective mean fitness become accentuated. Future studies guided by selected MAS application domains will aim to further investigate the generality and limitations of these findings by considering: restrictions in functional capability combinations in each agent, different classes of environment correlation, the speed of agent behavior modification, costs in agent behavior modification, and agent-agent functional interdependencies.

5 Discussion and Conclusions


In this paper, we investigated the potential for designing dynamic optimisation problem (DOP) representations that are robust to environmental conditions experienced during a solution's lifecycle and, at the same time, have the capacity to adapt to changing environments. Our investigation was motivated by a hypothesis formulated in the context of biological evolution, namely that degeneracy facilitates robustness and adaptation in time-variant environments. In simulation experiments we evolved populations of multi-agent systems (MAS) and compared the robustness and adaptation potentials of systems that could evolve degenerate architectures with those that could evolve redundant structures only. We found evidence that incorporating degeneracy into a problem's representation can improve robustness and adaptiveness of dynamic optimisation in ways that are not seen in purely redundant problem representations. While our investigation was quite abstract, we can identify several features that make degeneracy suitable for dynamic optimisation. First, degenerate systems appear to exhibit a greater propensity to adapt. While we have not reported an analysis of fitness landscape neutrality here, previous studies on the ensemble properties of similar models have shown degeneracy creates neutral regions in fitness landscapes with high access to phenotypic variety [15,16]. In light of these earlier studies, the results presented here demonstrate that the discovery of adaptations in static neutral landscapes created by degeneracy can be surprisingly rapid. Theoretical arguments have suggested that long periods of time may be needed to discover a single adaptive phenotype from a neutral network [11]; however, the rapid adaptation in Fig. 1a,d suggests that little neutrality is


ever traversed in these experiments before an improvement is discovered. As believed to also take place in biology, this fast pace of adaptation likely reflects the existence of many alternative paths to adaptive change within neutral networks created by degeneracy. This means that little of the neutral network needs to be searched before new improvements are found; thus fitness barriers are not being replaced with large entropic barriers during evolution, cf. [11]. While optimal solutions are not guaranteed, the propensity to adapt in evolved degenerate systems appears to allow such a strategy to quickly find highly fit and highly robust solutions, as needed when tackling DOP. A second desirable feature of degenerate systems is their enhanced capacity to deal with novel conditions. Compared with redundant architectures, degenerate systems have a greater potential to evolve innovative solution responses that account for small variations in environmental conditions. In a supplemental analysis of these systems we have found this robustness potential can extend to moderate degrees of environmental novelty, thus helping to explain the differences between system classes immediately after a change in the environment (Fig. 1a,c). However, a further reason that degenerate MAS exhibited highly effective responses to immediate environmental change was the emergence of population properties known in evolutionary biology as cryptic genetic variation (CGV). Many EA-based dynamic optimisation techniques aim to artificially control population convergence based on a general understanding that low genetic diversity limits a population's adaptability when it encounters a changed fitness landscape. The resulting genetic and phenotypic properties of EA populations differ significantly, however, from those observed in natural populations. Genetic diversity within natural populations is maintained in a static environment by being phenotypically and selectively hidden.
Trait differences across the population are mostly exposed only after an environment changes; a phenomenon known as cryptic genetic variation (CGV). The present study focuses on how Edelman and Gally's hypothesis is relevant when applying neutral evolution theories to the topic of evolvable problem representations. However, in [13] we also analyze the population properties from these experiments and report evidence that degeneracy generates hide-and-release mechanisms for genetic diversity that are analogous to the natural CGV phenomenon just described. This evidence of CGV is presented as a separate supplemental report in [13] due to space limitations as well as its distinctive theoretical relevance.
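The link between degenerate genotype-phenotype maps and neutrality can be illustrated with a minimal sketch (our illustrative toy, not the MAS model used in the experiments above): when each trait is read from a dedicated bit, every mutation is phenotypically visible, whereas when traits are read from overlapping, OR-ed bits — each bit contributing to two traits, a crude stand-in for degeneracy — many mutations become neutral.

```python
# Toy genotype-phenotype maps (illustrative assumptions, not the paper's model).

def direct_map(g):
    """One dedicated bit per trait: no neutrality is possible."""
    return tuple(g)

def degenerate_map(g):
    """Trait i is the OR of bits i and i+1 (wrapping), so each bit
    contributes to two different traits."""
    n = len(g)
    return tuple(g[i] | g[(i + 1) % n] for i in range(n))

def neutral_neighbours(g, gp_map):
    """Count single-bit mutations that leave the phenotype unchanged."""
    p = gp_map(g)
    count = 0
    for i in range(len(g)):
        mutant = list(g)
        mutant[i] ^= 1           # flip one bit
        if gp_map(mutant) == p:  # phenotype unchanged -> neutral mutation
            count += 1
    return count

g = [1, 1, 1, 1]
print(neutral_neighbours(g, direct_map))      # 0: every mutation is visible
print(neutral_neighbours(g, degenerate_map))  # 4: shared bits buffer each loss
```

Because each bit in the degenerate map is buffered by its neighbour, genotypes can drift along a connected neutral network before selection sees any change, which is the property the discussion above attributes to degenerate representations.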

Acknowledgements
This work was partially supported by a DSTO grant on "Fleet Designs for Robustness and Adaptiveness" and an EPSRC grant (No. EP/E058884/1) on "Evolutionary Algorithms for Dynamic Optimisation Problems: Design, Analysis and Applications".

References
1. Banzhaf, W.: Genotype-phenotype-mapping and neutral variation - a case study in genetic programming. In: Davidor, Y., Männer, R., Schwefel, H.-P. (eds.) PPSN 1994. LNCS, vol. 866, pp. 322–332. Springer, Heidelberg (1994)
2. Branke, J.: Memory enhanced evolutionary algorithms for changing optimization problems. In: Congress on Evolutionary Computation, vol. 3, pp. 1875–1882. IEEE Computer Society, Washington (1999)
3. Branke, J.: Evolutionary Optimization in Dynamic Environments. Kluwer, Dordrecht (2002)
4. Ciliberti, S., Martin, O.C., Wagner, A.: Innovation and robustness in complex regulatory gene networks. Proceedings of the National Academy of Sciences, USA 104(34), 13591–13596 (2007)
5. Edelman, G.M., Gally, J.A.: Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences, USA 98(24), 13763–13768 (2001)
6. Jin, Y., Gruna, R., Paenke, I., Sendhoff, B.: Evolutionary multi-objective optimization of robustness and innovation in redundant genetic representations. In: IEEE Symposium on Computational Intelligence in Multi-Criteria Decision Making, Nashville, USA, pp. 38–42 (2009)
7. Knowles, J.D., Watson, R.A.: On the utility of redundant encodings in mutation-based evolutionary search. LNCS, pp. 88–98. Springer, Heidelberg (2003)
8. Morrison, R.W.: Designing Evolutionary Algorithms for Dynamic Environments. Springer, Berlin (2004)
9. Rothlauf, F.: Representations for Genetic and Evolutionary Algorithms, 2nd edn. Springer, Heidelberg (2006)
10. Rothlauf, F., Goldberg, D.E.: Redundant representations in evolutionary computation. Evolutionary Computation 11(4), 381–415 (2003)
11. van Nimwegen, E., Crutchfield, J.P.: Metastable evolutionary dynamics: Crossing fitness barriers or escaping via neutral paths? Bulletin of Mathematical Biology 62(5), 799–848 (2000)
12. Whitacre, J.M.: Evolution-inspired approaches for engineering emergent robustness in an uncertain dynamic world. In: Proceedings of the Conference on Artificial Life XII (to appear)
13. Whitacre, J.M.: Genetic and environment-induced innovation: complementary pathways to adaptive change that are facilitated by degeneracy in multi-agent systems. In: Proceedings of the Conference on Artificial Life XII (to appear)
14. Whitacre, J.M.: Degeneracy: a link between evolvability, robustness and complexity in biological systems. Theoretical Biology and Medical Modelling 7(1), 6 (2010)
15. Whitacre, J.M., Bender, A.: Degenerate neutrality creates evolvable fitness landscapes. In: WorldComp 2009, July 13-16 (2009)
16. Whitacre, J.M., Bender, A.: Degeneracy: a design principle for achieving robustness and evolvability. Journal of Theoretical Biology 263(1), 143–153 (2010)
17. Whitacre, J.M., Bender, A.: Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems. Theoretical Biology and Medical Modelling 7(20) (to appear 2010)
18. Yu, T., Miller, J.F.: Neutrality and the evolvability of boolean function landscape. In: Proceedings of the 4th European Conference on Genetic Programming, pp. 204–217. Springer, London (2001)

The Origins of Cancer Robustness and Evolvability


Tianhai Tian, Sarah Olson, James M. Whitacre*, Angus Harding*
*To whom correspondence should be addressed

Insight Box
Despite an intense global research effort, most adult cancers remain incurable. The challenge facing researchers is that cancer is a complex disease, displaying many trait properties that drive tumor progression. One such trait property is therapy resistance, widely regarded as the greatest obstacle preventing long-term patient survival. Here we integrate findings from mathematical models, experimental systems and clinical studies to provide an updated schema for the evolution of cancer therapy resistance. In this new paradigm, selectively filtered cellular and sub-cellular heterogeneity provides cancer with the crucial property of degeneracy, rendering tumors both robust and evolvable. We then explore the latest generation of conceptual and computational models that, by directly attacking tumor evolvability, have proposed new therapeutic paradigms that may help reduce or overcome therapy resistance in tumors.

Summary
Unless diagnosed early, many adult cancers remain incurable diseases. This is despite an intense global research effort to develop effective anti-cancer therapies, calling into question the use of rational drug design strategies in targeting complex disease states such as cancer. A fundamental challenge facing researchers and clinicians is that cancers are inherently robust biological systems, able to survive, adapt and proliferate despite the perturbations resulting from anti-cancer drugs. It is essential that the mechanisms underlying tumor robustness be formally studied and characterized, as without a thorough understanding of the principles of tumor robustness, strategies to overcome therapy resistance are unlikely to be found. Degeneracy describes the ability of structurally distinct system components (e.g. proteins, pathways, cells, organisms) to be conditionally interchangeable in their contribution to system traits and it has been broadly implicated in the robustness and evolvability of complex biological systems. Here we focus on one of the most important mechanisms underpinning tumor robustness and degeneracy, the cellular heterogeneity that is the hallmark of most solid tumors. Based on a combination of computational, experimental and clinical studies we argue that stochastic noise is an underlying cause of tumor heterogeneity and particularly degeneracy. Drawing from a number of recent data sets, we propose an integrative model for the evolution of therapy resistance, and discuss recent computational studies that propose new therapeutic strategies aimed at defeating the adaptable cancer phenotype.

Introduction
Although modern therapies have increased patient lifespan, the majority of adult cancers remain terminal diseases 1. This is because anti-cancer drugs generally lose efficacy due to the emergence of therapy resistance within tumors, which remains a significant obstacle to long-term patient survival. Some cancers, such as acute myeloid leukemia and ovarian and breast cancers, show an initial response to chemotherapeutics but invariably relapse, with the recurrent cancer often resistant to any further therapeutic intervention 2. Other cancers such as melanoma and pancreatic and colon cancers contain

fewer proliferating cells during therapy, but the tumor mass nonetheless remains stable within the patient throughout treatment 2. Tumors utilize many mechanisms to avoid and/or overcome chemotherapeutics. The diversity of drug evasion mechanisms observed in tumors, combined with the challenge of effective in vivo drug delivery, renders the identification and targeting of therapy-resistance mechanisms difficult. Trait robustness is a ubiquitous and fundamental property at all organizational scales in biology and is prevalent, for instance, in gene expression, protein folding, metabolic flux, physiological homeostasis, development, and organismal fitness 3, 4. Here we define robustness as a property that allows a system to maintain its function despite internal and external perturbations 5. Robustness requires the maintenance of system function as opposed to simply maintaining a stable state 4, and biological systems often achieve this robustness through adaptation, a principle dramatically illustrated by the anhydrobiosis of tardigrades, which can suspend their metabolism under conditions of extreme dehydration, surviving for years in a dormant state 6. Adaptive change is not a unique property of extremophiles: cancers, an inherently robust disease system, are able to adapt to and accommodate many different physiological insults, such as low oxygen and metabolic stress 7. In this review we argue that selection occurring over cellular trait heterogeneity is one of the fundamental causes of cancer robustness. In cancer, cell trait heterogeneity originates from under-regulated stochastic processes at the genetic, epigenetic and protein expression levels. Studies of tumor heterogeneity have revealed extensive cellular trait variation within tumors with respect to size, morphology, antigen expression and membrane composition 8.
Individual tumor cells also display diverse functional behaviors in terms of proliferation rate, cell-cell interactions, metastatic potential and sensitivity to therapy 8. Sequencing studies have demonstrated surprising levels of genetic diversity between individual patient tumors of the same type 9. Heritable intra-tumoral heterogeneity increases the probability of tumors harboring a therapy-resistant phenotype and has been hypothesized to endow tumors with the necessary adaptability to survive and recur after treatment 7, 10, 11. More generally, we propose that tumor heterogeneity provides the phenotypic diversity necessary for the rapid evolution that occurs in many cancers. Evolvability is defined as the capacity to generate heritable, selectable phenotypic variation 12. Since the seminal paper from Nowell in 1976 describing cancer as an evolutionary system 13, many studies support the idea that tumors are indeed evolving systems 14. In this paradigm individual cancer cells become the reproductive units within the population, and those cells that have acquired a survival advantage through random genetic or heritable epigenetic change are selected through multiple rounds of clonal expansion, during which they acquire further alterations that combine to produce a malignant phenotype 15. For evolution to occur there must be some form of selection pressure combined with sufficient heritable variation within the population. Many such selection pressures exist for tumor cells in vivo, such as limited nutrients, oxidative stress, and competition for space, as well as extrinsic factors such as immuno-surveillance and anti-cancer therapeutics 14. In melanoma 16, colon cancer 17 and esophageal cancer 18, an increasing number of genetic mutations characterize different phases of neoplastic progression, suggesting a model of sequential mutation acquisition during tumor evolution. 
This has been best characterized in adenocarcinomas of the large intestine, in which the number of oncogenic mutations correlates with tumor grade and stage 19. Notably, when isolating different stages of neoplasia in the same tumor specimen, Vogelstein and colleagues demonstrated that although identical ras mutations were present in both regions, the more aggressive carcinomatous regions contained at least one mutation that was absent in the less aggressive adenomatous region 17. This sequential model of tumor progression supports the idea that successive mutation enhances the fitness of the tumor cells, followed by positive selection and clonal expansion.

However, as with any evolutionary process, the sequential progression observed in these studies is likely providing a selection-biased perspective on an underlying stochastic process involving genetic drift, hitchhiking and other dynamic population properties 8, 20. More direct evidence for tumor evolution was provided recently in breast cancer. By comparing all somatic coding mutations in a metastatic breast cancer relative to the original primary tumor resected nine years previously, Shah et al revealed that only five of 32 coding mutations found in the metastases were present in the original tumor, demonstrating that evolution had occurred during disease progression 21. A recent genetic analysis of breast tumors has so far provided the clearest picture of tumor evolution in vivo. By studying the cellular composition of breast tumors, Wigler and colleagues identified multiple clonal subpopulations 22. The observation that all subpopulations within a single tumor shared many of the same chromosome breakpoints provides additional supporting evidence for an evolutionary model of tumor growth, with new clones evolving out of pre-existing tumor cells 22. Further support for the idea of evolution driving tumor development comes from studying the emergence of tumor therapy resistance. In lung cancer, mutant clones containing point mutations within the epidermal growth factor (EGF) receptor drive tumor recurrence that is resistant to further EGF receptor inhibition 23. Likewise, resistance to Glivec in chronic myeloid leukemia is often due to mutant clones with point mutations within the BCR-ABL tyrosine kinase 24, 25. In the case of Glivec resistance in chronic myeloid leukemia, there is evidence that resistant mutants are present in patient tumors before drug treatment 26, suggesting that cancer therapies select for pre-existent resistant mutants within tumors.
This is reminiscent of natural selection acting on standing genetic variation which is believed to occur in many biological contexts 27 such as in bacterial populations, where preexisting resistant mutants drive the evolution of bacterial phage resistance 28. Given these findings, we propose that a better understanding of the principles of tumor evolvability will allow for the design of new therapeutic paradigms that minimize or inhibit tumor evolution, and thus prevent the emergence of therapy resistance. As heritable phenotypic variation is a prerequisite of tumor evolution, we first focus our attention on four mechanisms thought to be responsible for generating tumor heterogeneity in patients: genetic instability, epigenetic instability, stochastic protein dynamics and tumor micro-environments. Next we provide a brief analysis of the crucial relationship between robustness and evolvability. We then present an integrative model of tumor evolution in which we propose how heterogeneity at different biological scales can facilitate evolution and the generation of complex tumor properties. Finally, we explore how a systems biology approach may be used to help overcome robust, evolvable tumors in patients.

Sources of heterogeneity

1. Genomic instability


Heterogeneity within tumor genomes
The Cancer Genome Atlas has published DNA sequences from numerous cancers, providing many fundamental insights into tumor biology. The identification of mutant genes driving transformation was the primary goal of these sequencing studies 29. The systematic characterization of cancer genomes revealed the unexpected finding that most cancer types display significant inter-tumor mutational heterogeneity 15. Most solid tumors contain on average 50 non-silent mutations in the coding regions of different genes, with only a small fraction of these genes being mutated across tumors 15. For example, when Greenman et al sequenced 518 protein kinases in 210 tumors of different origins and identified 1007 likely driver mutations 30, very few commonly mutated genes emerged. This finding also holds true within individual tumor types, as sequencing of brain, pancreatic and colon cancers has revealed that only a few common mutations exist for each tumor type 29, 31-34. Studies in breast and colon cancer have confirmed that the diverse mutational heterogeneity within these tumors underpins their immunological heterogeneity 35-37. Sequencing technologies are now sufficiently advanced to allow researchers to begin assessing intra-tumor genomic heterogeneity. Recent sequencing of a primary breast tumor using next-generation sequencing confirmed that intermediate grade breast tumors do indeed contain clonal subpopulations 21. By developing a new technology termed sector-ploidy-profiling (SPP), Navin et al 22 revealed that primary breast carcinomas consist of either a single major clonal population or several primary clonal subpopulations 22. Given that this technology currently lacks the sensitivity to detect uncommon clones within populations 22, this first pass almost certainly underestimates the true level of genetic heterogeneity present in solid tumors. Nevertheless these initial studies provide the important proof-of-principle that intra-tumor genetic heterogeneity exists, and begin to shed light on the evolutionary dynamics occurring within tumors 21, 22.

The mutator phenotype model
What is driving genomic heterogeneity within tumors? The mutator model of tumor initiation posits that mutations that increase genomic instability (the so-called mutator phenotype) drive tumorigenesis by allowing tumor cells to rapidly acquire the portfolio of mutations required for cellular transformation through an increase in random mutation events.
This idea, pioneered and championed by Loeb and colleagues, was initially based on the analysis of DNA polymerases and DNA repair enzymes 38, but has subsequently been expanded to include other sources of genomic instability common to tumors 39-41. For example, chromosomal instability generates many chromosomal defects in tumors, including aneuploidy, translocations, inversions, interstitial deletions, amplifications and loss-of-heterozygosity 42. Multiple mechanisms drive chromosome instability, including activation of key oncogenes known to drive cellular transformation (reviewed in 42, 43), providing tumors with ample capacity to generate genomic instability. A firm theoretical foundation supporting the mutator model has been provided by recent modeling studies. The first study, undertaken by Beckman and Loeb, assumed that all potential mechanisms of tumorigenesis are in operation. The major insight of this work was the novel idea that the mechanisms that produce malignant lineages most efficiently should be considered the most likely to generate clinical cancers 44. In this model, efficiency is defined as the number of malignant lineages generated in a given time. This schema allowed the first direct comparison of the relative efficiencies of mutator and non-mutator pathways in cancer lineage production 44, and demonstrated that mutator mutations increase the efficiency of tumorigenesis under many realistic simulation conditions 44. Most compellingly, the mutator phenotype became increasingly important as the number of mutations required for cellular transformation rose 44. Thus, for tumors requiring only two mutations, the mutator phenotype offers no advantage 44. However, when the number of mutations required for transformation exceeds six, the mutator phenotype imparts a significant advantage in the efficiency of tumorigenesis 44.
As stated by Jarle Breivik: "Each random mutation may be regarded as a bet, and the odds are always unfavorable, simply because there are more ways to damage a genome than improve it. As for roulette, you may get a lucky strike, but the more bets you make, the more certain it is that you will lose" 45. Clonal extinction due to reduced fitness is known as negative clonal selection and is one of the most serious criticisms leveled against the mutator model 45. To directly address whether negative clonal selection negates the mutator hypothesis, Beckman recently developed an updated model that incorporated the mutator's reduced fitness within its underlying assumptions 46. This model revealed that even with negative clonal selection, mutator cells still provide the most efficient route to tumorigenesis 46. The model made an important new prediction: that there exists an optimal mutation rate for tumors, above which the deleterious effects of reduced fitness do lead to negative clonal selection and tumor collapse 46. Support for this idea has recently been generated using bacterial competition models, where Loeb and colleagues experimentally confirmed that mutator strains of E. coli displaying a high mutation rate invariably suffer negative clonal selection and die out, whereas those mutator strains that fall within a narrow optimal range of mutation rates consistently prevail and survive in evolutionary competition assays 47. Importantly, these results suggest that mutator cells are not necessarily doomed to extinction due to reduced fitness 47. Beckman's updated model also confirmed two predictions of the earlier model: that mutator mutations are most likely to occur early in tumorigenesis, and that the mutator phenotype becomes increasingly important as more oncogenic mutations are required for transformation 44, 46. In an independent study, Zhou et al explored evolutionary dynamics in silico using a numerical model based on Highly Optimized Tolerance (HOT) 48, a mathematical paradigm used to understand how selectively acquired robustness can lead to the evolution of complexity 49.
In this model, mutators played a primary role in adaptation, whereas low-mutators preserved well-adapted phenotypes 48. Taken together, these three models support the hypothesis that mutator phenotypes both increase the efficiency of tumorigenesis (and thus increase the probability of tumor initiation) and drive tumor adaptation throughout disease progression. The correlation between tumor incidence and advanced age has been used to estimate that the minimum number of genetic mutations that drive oncogenesis is five to six 50-52. Pediatric tumors such as retinoblastoma require significantly fewer mutations for transformation 53-55, whereas late-onset adult tumors may require as many as 10-12 events 56. Experimental models initially suggested that 3-4 mutations were sufficient to generate transformed cells 57. However, recent experimental data indicate that more mutations are in fact required. Mahale et al explored the efficiency of transformation using a combination of four oncogenes in a human fibroblast model of transformation 58. Using this established experimental system they made the striking observation that tumorigenicity significantly increased after serially passaging tumor cells either in vitro or in vivo. The observed increase in tumorigenicity correlated with the selection of dominant clones, suggesting that malignant transformation is a stochastic process initiated by the four defined oncogenes, but that full transformation involves clonal selection of tumors harboring further transforming mutations 58. Nicholson and Duesberg then extended these analyses to reveal fundamental differences in the karyotypes and phenotypes of clones derived from a single parent cell with four oncogenes, providing direct evidence that evolution had indeed occurred during expansion both in vitro and in vivo 59.
These authors went on to demonstrate that individual clones evolve further during serial passaging in culture, generating either enhanced tumorigenicity or drug resistance in vitro 59. These findings are consistent with recent reports showing a correlation between genomic instability and drug resistance 60, 61, supporting the idea that the rate of tumor evolution, including the acquisition of therapy resistance, is significantly enhanced by genomic instability. Taking a complementary approach and working in parallel, the systematic and comprehensive analysis of Ye et al showed that tumorigenicity is positively associated with genomic diversity in five independent models of tumor progression 62. These combined data sets provide experimental evidence for a direct causal relationship between genomic instability and cancer evolution.
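The efficiency argument behind the mutator model can be caricatured with back-of-envelope expected waiting times (a crude sketch under assumed rates, not Beckman and Loeb's actual model): a non-mutator lineage waits for k driver mutations at the baseline rate, whereas a mutator lineage first waits for the mutator mutation itself and then collects drivers at an elevated rate, so its relative advantage grows with k.

```python
# Illustrative rates only; the real models also include fitness costs,
# population dynamics and negative clonal selection.
MU = 1e-6    # assumed baseline per-generation probability of fixing a driver
MU_M = 1e-3  # assumed elevated rate once a mutator mutation is acquired

def nonmutator_generations(k, mu=MU):
    """Expected generations to fix k drivers sequentially at rate mu
    (sum of k geometric waiting times, each with mean 1/mu)."""
    return k / mu

def mutator_generations(k, mu=MU, mu_m=MU_M):
    """First wait for the mutator mutation (rate mu), then fix the
    k drivers at the elevated rate mu_m."""
    return 1.0 / mu + k / mu_m

for k in (2, 6, 12):
    ratio = nonmutator_generations(k) / mutator_generations(k)
    print(f"k={k:2d} drivers: mutator pathway ~{ratio:.1f}x faster")
```

With these toy numbers the mutator pathway's speed-up is modest when only two drivers are needed but grows roughly linearly with the number of required drivers, mirroring the qualitative prediction that the mutator phenotype matters most when transformation requires many mutations.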

Direct confirmation of this relationship came recently using a mouse model of genomic instability 63. Sotillo et al induced lung tumor formation in mice by doxycycline-mediated expression of oncogenic Ras either alone or in combination with Mad2, a known mediator of chromosomal instability 63. The expression of oncogenic Ras alone generated lung tumors, whereas Mad2 expression alone did not. However, the addition of chromosome instability through Mad2 co-expression markedly enhanced Ras tumorigenicity, as revealed by a more rapid disease onset and mortality, a more aggressive tumor phenotype, and a two-fold increase in the size of the Ras+Mad2 tumors compared to tumors expressing Ras alone 63. The increased aggressiveness of Ras+Mad2 tumors correlated with increased aneuploidy (a marker of chromosomal instability) and a rise in the diversity of Ras+Mad2 tumor sub-types compared to the Ras-only control 63. When the oncogenic protein expression was ablated by doxycycline withdrawal, both the Ras and the Ras+Mad2 tumors collapsed, consistent with the idea of oncogenic Ras expression driving tumor growth. Ras-only tumors never recurred after doxycycline withdrawal. In striking contrast, approximately half of the Ras+Mad2 tumors returned, with recurrent tumors displaying both increased aneuploidy and new signal transduction pathway activation 63. The most plausible interpretation of these results is that the increased genomic instability of the Ras+Mad2 tumors allowed the evolution of Ras-independent tumor cells, which then drove tumor recurrence despite the loss of oncogenic Ras expression.

2. Epigenetic instability
It has been cogently argued that, as a single genome is capable of generating the diversity of cell phenotypes present in metazoan organisms, the same mechanisms that underpin normal cell diversity may also drive tumor heterogeneity and contribute to tumor evolution 64. Cell diversity in somatic organisms is regulated through epigenetic mechanisms. By epigenetics we mean the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained by changes in DNA sequence 65. This fulfils the biologist's definition of "a stably heritable phenotype resulting from changes in a chromosome without alterations in the DNA sequence" 66 while accommodating the existence of pseudo-stable states driven by network dynamics and long-lived stochastic fluctuations.

Deregulated chromatin structure as a source of epigenetic heterogeneity
The structure of chromatin determines how genetic information is organized within the cell 67. Chromatin consists of repeating units of nucleosomes, which in turn are made up of ~146 bp of DNA wrapped around an octamer of four core histone proteins (H3, H4, H2A and H2B) 68. The organization of the genome into discrete structures provides mechanisms for regulating whether genes are active or silent. Epigenetic mechanisms used to modify chromatin structure, and hence control gene expression, can be divided into three categories: DNA methylation, covalent histone modification, and non-covalent mechanisms such as incorporation of novel histone variants (reviewed in 69). These modifications work together to alter the structural dynamics of chromatin to create an epigenetic profile that sits atop the genetic base, shaping the output of the mammalian genome to regulate developmental stages and define discrete cell types 69. This epigenomic profile is extensively distorted in cancers 69.
Cancers display global changes in DNA methylation, covalent modifications of histones, extensive non-covalent changes, and altered expression profiles of chromatin-modifying enzymes 67. These epimutations can silence tumor-suppressing genes and activate oncogenes, and are likely to be functionally equivalent to genomic cancer mutations 70. Indeed, we now know that hundreds of genes are either silenced or activated in cancers due to epigenetic changes, and the list continues to grow 67, 71. Although epigenetic alterations are reversible, they are mitotically heritable, and therefore available to natural selection and able to actively participate in tumor evolution 70.

Intriguingly, multiple epigenetic aberrations have been directly linked to genomic instability (reviewed in 70). DNA hypomethylation at repeat sequences increases genomic instability by promoting chromosomal rearrangements 72. Hypomethylation of transposable elements can lead to their activation and translocation, increasing genomic instability 73. Genomic instability can also be caused by epigenetic inactivation of genes encoding DNA repair proteins 74, 75. Indeed, the silencing of the DNA repair protein MGMT and global DNA demethylation are thought to be early initiating events in tumor formation 76-78. Thus epigenetic modifications are well placed to contribute directly to genomic instability and tumor heterogeneity, thereby enabling tumor evolution.

Deregulated regulatory RNA as a source of epigenetic heterogeneity
It is now clear that regulatory RNAs are directly linked to tumorigenesis and progression by acting as either oncogenes or tumor suppressors 79. The best characterized regulatory RNAs contributing to cancers are microRNAs (miRNAs), small RNAs that are generally 22 residues long and form the sequence-specific part of the RNA-induced silencing complex (RISC) that binds to mRNAs and inhibits their translation and stability 79. Shortly after their discovery in mammals, a series of studies revealed that a significant number of miRNAs display altered expression in various tumors 80-85. It has now been experimentally confirmed that some miRNAs induce or accelerate tumorigenesis, acting as oncogenes 86, 87. Other miRNAs function as tumor suppressors, displaying anti-proliferative or pro-apoptotic functions in experimental models 88, 89. One of the primary functions of miRNAs in normal development is the stabilization of cellular phenotypes 79. Regulatory loops play a central role in maintaining robust phenotypic reproducibility of developmental programs 90.
This type of system control comprises a collection of feedback loops that monitor and quantitatively regulate the output of signaling networks 91. Negative feedback loops embedded within signaling networks are prevalent and are known to stabilize pathway dynamics 91. Experimental evidence suggests that an important role of miRNAs is to impose stabilizing negative feedback loops during development 92, 93. In line with this function, miRNA expression is reduced in late-stage tumors and correlates with tumor aggressiveness, which is thought to be due to an increase in tumor heterogeneity resulting from destabilized pathway dynamics 79.

Deregulated network dynamics as a source of epigenetic heterogeneity
Normal lineage commitment and differentiation are regulated through complex regulatory networks. Networks containing large numbers of mutually regulating components can generate multiple stable equilibrium states, called attractors 94, 95. These stable states have been proposed to correspond to different differentiated cell types within an organism by driving cell-type-specific gene expression patterns 94. A pure attractor is a stable state driven by the balance between regulatory loops within a genetic network and is not the result of covalent modifications. In reality, attractors are likely to result from a combination of regulatory loops within kinase cascades, non-coding RNA networks, genetic networks and chromatin remodeling. An epigenetic landscape contains all the possible stable attractor states and the unstable transition states 96. Normal cellular differentiation is governed by growth factors, cellular contextual cues, cell cycle regulators and complex regulatory loops, which define the attractors in normal tissue differentiation. In cancer, although many of these regulatory signals are deregulated, there remains an epigenetic landscape littered with pathological attractors that represent cancer cell states.
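The idea that a network of mutually regulating components can generate multiple attractors can be made concrete with the textbook two-gene toggle switch (an illustrative sketch; the update rules are assumptions, not taken from the studies cited above): exhaustively iterating a mutual-repression network under synchronous Boolean updating recovers its stable states.

```python
# Two-gene mutual-repression network: each gene is ON only when the
# other is OFF. This is a stand-in for the multi-component regulatory
# networks discussed in the text.

def step(state):
    x, y = state
    return (int(not y), int(not x))  # synchronous Boolean update

def attractor_from(state):
    """Iterate until a state repeats; return the cycle reached."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

# Enumerate the attractors reachable from every initial condition.
attractors = {attractor_from((x, y)) for x in (0, 1) for y in (0, 1)}
print(sorted(len(a) for a in attractors))  # [1, 1, 2]
```

The two fixed points, (x ON, y OFF) and (x OFF, y ON), play the role of distinct stable "cell states"; the size-2 cycle is an artifact of synchronous updating. Which attractor the network settles into depends only on the initial condition, illustrating how one set of regulatory rules can sustain multiple heritable states without any change in the underlying "genome".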
Tumor cells therefore have access to a variety of connected attractor states, allowing tumors to display some of the characteristics of a complex developmental phenotype 96.

The cancer stem-cell model in light of the epigenetic landscape

An epigenetic landscape populated with diverse attractor states provides an integrative view of cancer that accommodates many disparate observations, making it a powerful paradigm for understanding tumor biology. For example, some of the controversies about the cancer stem cell model of cancer (see 97) may be reconciled using the epigenetic landscape paradigm. The cancer stem cell hypothesis states that a subset of tumor cells with stem cell-like properties drives tumor initiation, progression, and recurrence 97. These cancer stem cells have the purported ability to self-renew indefinitely and to generate rapidly cycling progenitor cells that differentiate into all cell types within the tumor, thereby generating both the tumor bulk and its intercellular trait heterogeneity. In effect, tumors represent a pathological simulacrum of normal tissue growth, with chaotic tumor differentiation a parody of the controlled differentiation program that occurs in healthy tissue. Tumor progression arises from the metastatic spread of cancer stem cells and, importantly, disease recurrence is then thought to be due to the accelerated repopulation of cancer stem cells that are inherently therapy resistant 98. The generality of this hypothesis has, however, recently been questioned 99, 100. A possible explanation for the disparate results in the literature regarding the existence or otherwise of cancer stem cells may lie in what could be called the stability of the epigenetic landscape of individual tumors. Cancers with a stable epigenetic landscape have correspondingly stable attractors, locking cancer cells into defined states. Here, cancer cells occupying attractors with stem cell properties are predicted to play a central role in driving the tumor biology, including tumor initiation and therapy resistance, even though they are genetically identical to their non-stem cell counterparts.
From an epigenetic landscape perspective, the cancer stem cells would sit atop a hierarchy of connected attractors that radiate outward to stable attractors representing distinct cell fates. This statement follows from the assumption that the initial epigenetic landscape has been carefully shaped over evolutionary time to establish distinct cell-fate pathways, and that the tumor has maintained some of this developmental architecture. However, very aggressive tumors with high levels of genetic and epigenetic instability would be expected to display a progressive deregulation of the epigenetic landscape, resulting in attractors that become increasingly less stable and that no longer connect with the same clear hierarchical architecture (see Figure 1). As a result of this deregulation, the cancer stem cell attractor should be less critical in driving disease, because cancer cells are able to move more freely between different attractor states, including transitions to what might once have been a cancer stem cell phenotype. Recent experimental evidence from malignant melanoma provides support for this model 101. Roesch et al. identified a slow-cycling subpopulation of cells within melanoma tumors that functions as the tumor-maintaining cell population 101. These slow-cycling tumor-maintaining cells express JARID1B, a histone H3 lysine 4 (H3K4) demethylase that in healthy organs is highly expressed in regenerative tissues 102-104. In melanoma, JARID1B is associated with negative regulation of the cell cycle 105, 106. Knockdown of JARID1B initially stimulated tumor growth; however, growth could not be maintained in the absence of JARID1B 101, revealing the JARID1B-positive cell subpopulation as crucial for maintaining continual tumor growth. Intriguingly, JARID1B expression appears to be dynamic, with JARID1B-negative cells able to spontaneously generate JARID1B-positive cells and vice versa 101.
Together these results and other findings reviewed in 107 argue against a hierarchical cancer stem cell model, instead suggesting that melanoma tumor-initiating cells are generated spontaneously or induced by environmental cues within the melanoma tumor bulk, consistent with the idea of pseudo-stable attractor states driving tumor growth in aggressive cancers. Unstable epigenetic states add another layer of complexity to tumor biology: the phenomenon of transient therapy resistance. The existence of a transiently drug-resistant state within tumors was first proposed based on the observation that some drug-resistant tumors become responsive after a break
from treatment 108-110. A recent study has provided experimental confirmation that transient drug resistance does occur in lines derived from multiple cancers, driven by epigenetic modification 111. Transient treatment of tumor cells with various chemotherapeutics has identified a small fraction of quiescent cells that are ~500-fold less sensitive to anti-cancer drugs than their parental cells 111. In clonally derived populations, drug tolerance emerges de novo and is reversible, although it can become stabilized over time 111. Transient drug resistance is driven by activation of the insulin growth factor 1 (IGF-1) receptor and an altered chromatin state requiring the histone demethylase RBP2 111. Importantly, the transiently drug-resistant subpopulation can be ablated using inhibitors of IGF-1 or chromatin-modifying agents, suggesting an avenue of therapeutic redress for future studies 111. In combination with clinical studies 108-110, the study of Sharma et al. supports the existence of transient drug-resistant attractor states within a deregulated epigenetic landscape. Moreover, with each cell displaying subtly different epigenetic landscape properties and a different position within the landscape, tumors could be endowed with an enormous repertoire of transient cell responses that enhances the tumor's overall robustness in the face of therapy. This idea is well established in bacterial populations, where phenotypic outliers contribute to population fitness; one relevant example is the so-called persisters in bacterial populations, which express an increased resistance to penicillin that can then be inherited 112-115.

3. Stochastic protein dynamics


Recent work has focused attention on the role of stochastic fluctuations in protein expression in generating trait heterogeneity within clonal populations 116. When analyzed by flow cytometry, protein abundance in clonal mammalian cell populations can vary by as much as three orders of magnitude due to stochastic fluctuation 64, 116. This variation imparts several important characteristics to the clonal population. First, the outliers of the population can display very different biological properties, showing that purely stochastic effects can generate functionally diverse subpopulations within a clonal group of cells 116. In mammalian cells, such stochastic fluctuations in protein expression can be reasonably long lived, lasting up to 11 days in culture 116, meaning that they can impart phenotypic variety over clinically relevant timeframes. Recent work supports the idea that these types of stochastic fluctuations do indeed afford tumors protection from anti-cancer drugs: Cohen and colleagues discovered significant cell-to-cell variability in the temporal behavior of drug-induced protein expression, which correlated with the ability of cells to resist drug-induced apoptosis 117. This finding, together with the study of Sharma et al. described in the previous section, provides the first evidence that transient non-genetic phenotypic states contribute to therapy resistance in tumors, and may explain historical studies showing that tumors which repopulate after a drug treatment can sometimes remain sensitive to that drug 118. Even though transient states are, by definition, relatively short lived and therefore invisible to natural selection, transient resistant states can potentially contribute to the evolution of therapy resistance. While several possible pathways to inheritance exist, one plausible mechanism would involve clonal expansions originating from mutations that alter regulation of the epigenetic landscape and stabilize the drug-resistant phenotype.
This model fits with the recent observation of transient drug resistant states becoming stabilized over time 111, and may help explain the biology of chronic myeloid leukemia (CML), where a rare subpopulation of quiescent CML cells resists conventional therapy and is thought to drive disease relapse 119.
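The qualitative behavior described above, broad log-scale variation that relaxes only slowly, can be sketched with an Ornstein-Uhlenbeck process on log protein abundance. All parameters below are illustrative guesses chosen to echo the reported scales (roughly a three-orders-of-magnitude spread and week-scale memory), not fitted values:

```python
import math
import random

random.seed(0)

# Illustrative parameters only (not fitted to any dataset): log protein
# abundance relaxes toward the clone mean with ~1-week memory, echoing
# the week-scale persistence of expression states reported in culture.
TAU_DAYS = 7.0     # relaxation time of fluctuations (days)
SIGMA = 1.15       # stationary std dev of ln(abundance) -> ~3 log10 range
DT = 0.1           # integration step (days)
N_CELLS = 500

theta = 1.0 / TAU_DAYS
kick = SIGMA * math.sqrt(2.0 * theta * DT)   # per-step noise amplitude

# Start every cell at the stationary distribution of log abundance.
x0 = [random.gauss(0.0, SIGMA) for _ in range(N_CELLS)]
x = list(x0)
for _ in range(int(2.0 / DT)):               # advance the clone by 2 days
    x = [xi - theta * xi * DT + random.gauss(0.0, kick) for xi in x]

levels = [math.exp(xi) for xi in x]
spread = max(levels) / min(levels)           # fold-range across the clone

# Persistence: correlation of each cell's log level with itself 2 days ago.
m0, m1 = sum(x0) / N_CELLS, sum(x) / N_CELLS
cov = sum((a - m0) * (b - m1) for a, b in zip(x0, x)) / N_CELLS
var0 = sum((a - m0) ** 2 for a in x0) / N_CELLS
var1 = sum((b - m1) ** 2 for b in x) / N_CELLS
corr = cov / math.sqrt(var0 * var1)
```

With these settings the clone spans roughly three orders of magnitude in abundance at any instant, yet individual cells keep most of their rank two days later (theoretical autocorrelation exp(-2/7) is about 0.75), sketching how purely stochastic fluctuations can create outlier subpopulations that persist over clinically relevant times.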

4. The tumor micro-environment

Genetic, epigenetic and stochastic protein-dynamic sources of heterogeneity do not act in isolation, as important cross-scale interactions also take place within tumors. Tumor cells continually interact with the surrounding tumor microenvironment, a relationship that has crucial roles in tumor initiation and progression. One relevant example is the regions of low oxygen (hypoxic regions) commonly found within solid tumors, which result from an imbalance between the supply and consumption of oxygen 120. Many cancers, including squamous cell carcinoma of the uterine cervix, head and neck cancers, breast cancers and brain tumors, have regions of low oxygen in contrast to normal adjacent tissue 121, 122. Patients with hypoxic tumors have significantly lower overall survival 121, and hypoxia is an independent prognostic factor for poor clinical outcome in many tumors 118, indicating that hypoxic regions play an active role in tumor malignancy. Hypoxic regions within tumors contribute to tumor heterogeneity in at least three ways. First, tumor cells in hypoxic environments display reduced expression of DNA repair genes and correspondingly increased levels of genomic instability 120, 123. Thus hypoxia can contribute directly to the mutator phenotype and might enhance tumor evolution. Consistent with this idea, hypoxic tumor cells display increased resistance to radiation and drugs 124, 125 as well as an increased incidence of both apoptosis-resistant 126 and invasive clones 127, supporting the hypothesis that hypoxic environments drive tumor cell evolution towards more aggressive phenotypes. Second, hypoxic environments can directly regulate the epigenetic state of tumor cells. Cells in hypoxic regions are dependent on anaerobic glycolytic metabolism 128, which in turn acidifies the hypoxic region through the generation of lactic acid 128.
The combination of low oxygen and low pH triggers tumor cell cycle arrest and quiescence 120, 128, increasing phenotypic heterogeneity and rendering tumor cells insensitive to many anti-cancer therapies, as described above. A third confounding factor is that the low vascularization responsible for hypoxic regions also reduces drug concentrations within those regions 129, a condition known to favor the selection of drug-resistant clones 130. The tumor microenvironment is composed of many non-transformed cell types such as endothelial cells, fibroblasts, and immune cells, all of which interact with tumor cells and modulate the tumor microenvironment 131. There is a large body of research demonstrating that tumorigenesis is strongly influenced by the non-malignant cells within the tumor microenvironment (reviewed in 132). It is likely that tumor cells co-evolve with their microenvironments, and that during the course of disease progression changes in the microenvironment create local differences in selection pressure, thereby driving some of the heritable differences observed across cancer cells within a single tumor. From this perspective, at least some of the phenotypic, genetic and epigenetic diversity observed at the cell population level is likely to be a natural consequence of tumor-microenvironment interactions. These ideas are discussed in detail in recent reviews 14, 131, 133.

Heterogeneity and degeneracy as an enabler of tumor robustness and evolvability


Above we explored how tumor heterogeneity provides the phenotypic variation upon which natural selection acts, thereby increasing tumor evolvability. Next we briefly examine how tumor heterogeneity may enhance tumor robustness and evolvability by endowing tumors with the system property of degeneracy. Degeneracy is the ability of structurally dissimilar system components to perform the same function or generate the same output 134. Like robustness, degeneracy is a ubiquitous property of biological systems 134. It is important to note that degeneracy is distinct from the simpler design principle of redundancy. In redundant systems, multiple identical components are present within the system, one important example being the multiplicity of pacemaker cells that robustly regulate the heartbeat. Redundancy is common in both engineering and biology, where it provides
robustness in response to very specific perturbations, e.g. compensating for the loss or failure of an identical component. In contrast to redundant components, degenerate components perform similar functions within certain contexts but distinct and separate functions in others. For degeneracy to arise, system components must display functional plasticity, i.e. context-sensitivity in the different functional responses generated by each component. Recent analyses indicate that the conditionally overlapping functionality of degenerate components plays a fundamental role in reconciling requirements of robustness and evolvability in nature 135. For instance, recent in silico simulation experiments have revealed that networks composed of redundant multi-functional proteins (i.e. proteins having either identical or completely unique functions) are robust but do not provide a system with mutational access to very many distinct heritable phenotypes 135. Allowing multi-functional proteins to partially overlap in functionality (i.e. protein degeneracy) resulted in networks that were both exceptionally robust and exceptionally evolvable 135. This relationship between degeneracy, robustness and evolvability appears to arise at many different scales in biological systems but has yet to be fully understood 134. On the one hand, having diversity amongst functionally similar components will enhance robustness in a manner that is straightforward to understand. If components are somewhat different, they also have somewhat different weaknesses: a perturbation or attack on the system is less likely to present a risk to all components at once 136. Edelman and Gally have documented numerous biological examples of this relationship between degeneracy and robustness 134. One clinically important example of this relationship between degeneracy and tumor cell robustness comes from receptor tyrosine kinase (RTK) coactivation in tumor cells 137. 
The epidermal growth factor receptor (EGFR) is amplified, mutated or rearranged in over 40% of Glioblastoma multiforme (GBM) tumors 138, 139, the most common and aggressive primary brain cancer in adults. Nevertheless most GBM patients whose tumors are driven by oncogenic EGFR fail to respond to specific EGFR inhibitors, despite the fact that oncogenic activation of the EGFR is a crucial transforming event for these tumors 140. It was discovered that multiple RTKs are co-activated in GBM tumors, and as many RTKs share common downstream components, co-activation of multiple RTKs allows GBM tumors to maintain robust signaling simply by switching RTK usage in the presence of specific inhibitors 141, 142. Combinations of RTK inhibitors were required to overcome degenerate RTK usage in GBM tumor cells 141, 142. The RTK coactivation strategy has subsequently been observed in other tumor types, suggesting that degenerate RTK usage may represent a general pathway by which tumor cells evade targeted therapies (reviewed in 137). The example above describes how robustness can be achieved through direct functional compensation. While this mechanism is intuitively obvious, degenerate components might also collectively contribute to the stability of many traits simultaneously, distributing robustness throughout a system 143. As reviewed in 136, 143 there is some evidence to suggest that degenerate components can allow systems to establish networked compensatory pathways whose inherent versatility in resource usage enables buffering against a much greater variety of perturbations than can be accounted for by direct functional compensation alone 135. In some respects, the theoretical relationship between degeneracy and evolvability is also straightforward to understand. Because degenerate components are only conditionally similar, circumstances can arise where the components display unique functions and these can contribute to measurable trait differences 134. 
At the molecular level, this is observed in the conditional silence of single-nucleotide changes within protein-coding genes. Conditional similarity affords synonymous codons mutational access to amino acids that are the same for some mutations but
different for others. For example, of the synonymous arginine codons CGG and CGT, the former can access the amino acids {Leu; Pro; Gly; Gln; Trp} through single point mutation, while the latter can reach {Leu; Pro; Gly; His; Ser; Cys}. On the one hand, this provides synonymous codons with mutational robustness. On the other hand, by drifting over silent mutations, it also increases mutational access to amino acid residues. It has recently been shown that this conditional silence can be exploited to enhance the evolvability of bacterial cell lines 144. Degeneracy might also facilitate evolvability in more complex and less direct ways. For instance, it has been proposed that the compensatory actions of degenerate proteins can lead to cryptic differences between cell states that only become realized as measurable trait differences at some later time, e.g. when thresholds for trait stability are crossed 143. In either scenario, it is the conditional similarity amongst degenerate components that is believed to afford robustness while providing the requisite variety of distinct phenotypes necessary for evolvability 143. While these theoretical developments and supporting studies appear promising, the complexity of biological systems has so far precluded a thorough experimental assessment of this proposed role of degeneracy in facilitating robustness and evolvability. However, the functional divergence of redundant genes in many organisms, combined with large-scale gene deletion studies in yeast, worms and plants, provides compelling support for the role of degeneracy as an enabler of robustness and evolution (reviewed in 145). Degeneracy within a cancer cell appears to play an important role in tumor robustness, as seen in the ability of tumor cells to co-activate multiple RTKs 141, 142.
However, individual cancerous cells within a heterogeneous tumor are also likely to express both distinct and overlapping functional outputs thereby establishing degeneracy at a higher organizational level within the tumor. This idea is supported by the ability of tumor cells to stochastically switch from one cell state to another, such as alternating between tumor-initiating versus proliferative cell states 101, or adopting transient drug-resistant states 111, 117. These studies provide the first evidence that individual tumor cells can functionally replace other cell types within the tumor. Cells that switch to a new cell state will not necessarily be identical to those cells being replaced, and therefore could harbor heritable trait differences with survival characteristics, such as therapy resistance. In this way degeneracy within the cell population could directly facilitate tumor evolvability, and may provide a general explanation for the evolution of therapy resistance in aggressive cancers.
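The synonymous-codon example quoted earlier in this section (CGG versus CGT) can be checked mechanically against the standard genetic code. The sketch below is purely illustrative, using a compact hard-coded codon table:

```python
# Standard genetic code, bases ordered T, C, A, G (compact hard-coded table).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {b1 + b2 + b3: AMINO[16 * i + 4 * j + k]
               for i, b1 in enumerate(BASES)
               for j, b2 in enumerate(BASES)
               for k, b3 in enumerate(BASES)}

def point_mutation_neighbors(codon):
    """Amino acids reachable from `codon` by one nucleotide substitution,
    excluding stop codons and the codon's own amino acid."""
    own = CODON_TABLE[codon]
    reachable = set()
    for pos in range(3):
        for base in BASES:
            if base != codon[pos]:
                mutant = codon[:pos] + base + codon[pos + 1:]
                aa = CODON_TABLE[mutant]
                if aa not in ("*", own):
                    reachable.add(aa)
    return reachable
```

Both CGG and CGT encode arginine, yet `point_mutation_neighbors("CGG")` returns {'L', 'P', 'G', 'Q', 'W'} while `point_mutation_neighbors("CGT")` returns {'L', 'P', 'G', 'H', 'S', 'C'} (one-letter codes), matching the sets quoted above: the two synonymous codons share three mutational neighbors but diverge in the rest.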

The relationship between robustness and evolvability in tumors


A cohesive paradigm characterizing the intimate relationship between robustness and evolvability, which is fundamental to our understanding of biology, has until recently eluded theoreticians 12, 143, 146-148. While evolvability is repeatedly seen to support the robustness of higher-level traits, it is not clear that robustness always supports evolvability. For instance, it is apparent that evolvability increases a cell population's robustness by enabling the population to adapt 5, 12. On the other hand, trait robustness within the cell seems to oppose evolvability, as cells that are robust to mutational change would be expected to have difficulty discovering the distinct heritable phenotypes that allow for adaptation to environmental change 149. A series of computational studies has resolved this tension by showing how robustness and evolvability arise at different timescales, and furthermore showing phenotypic robustness to be a precondition for evolution 147, 150-152. Robust phenotypes allow a population to accumulate neutral mutations, increasing genotypic diversity 147. Because many of these neutral genotypes harbor distinct, phenotypically consequential sensitivities to further genetic modification, mutational robustness enhances access to phenotypic diversity over time, facilitating evolution 147. The idea that robustness
facilitates evolution has strong experimental support. Bloom et al. found that only robust (thermostable) protein variants could tolerate the destabilizing mutations needed to confer novel activities, whereas non-robust (thermosensitive) proteins could not evolve new activities 153. Measuring the evolution of thermotolerance in an RNA virus, McBride and co-workers found that populations derived from robust clones evolved greater resistance to heat shock than populations founded by non-robust clones 154. These studies provide direct empirical evidence that robustness can facilitate evolvability. Chaperone proteins such as Hsp90 function to buffer the expression of genetic and epigenetic variation, increasing an organism's robustness to mutation 155. Chaperones have been shown to function as enablers of evolution in both eukaryotes 156 and prokaryotes 157. Tumors make good use of the buffering ability of chaperones: Hsp90 buffers tumor cells against mutations that impinge on signaling 158, 159 or are lethal 160, 161. These combined studies support the hypothesis that molecular and cellular robustness facilitates tumor evolvability.

An integrative model of tumor robustness and evolvability


1. Increasing evolutionary potential through tumor degeneracy

The studies presented in this review emphasize several research trends that we propose could form the basis of a new integrated model of tumor robustness and evolvability. First, a tumor precursor cell that acquires a mutator phenotype within a narrow optimal range has a reasonable probability of acquiring sufficient transforming mutations to become a mature, malignant cancer cell before accumulating deleterious mutations and suffering negative clonal selection. The mutator phenotype is also predicted to generate a destabilized epigenome, allowing cells to transiently adopt multiple discrete cell fates and further increase genomic instability (positive feedback). Stochastic protein dynamics within individual cells can in some cases help to further enhance the diversification of cell states within the tumor, amplifying the phenotypic heterogeneity generated through (epi)genomic instability. The net effect of this three-tiered destabilization is the generation of a high degree of cellular trait heterogeneity within the tumor, which affords the cancer a greater repertoire of responses to the perturbations it encounters during growth, thus rendering it a more adaptable and robust system. For instance, with individual tumor cells able to transiently adopt a variety of cell fates, individual cells of one type can functionally replace other cell types through environment-induced trait plasticity. On the other hand, these compensatory cell transitions do not result in identical cell states (the cells are degenerate) and cells of a similar type will display unique strengths and weaknesses that play out in a competitive environment to the overall benefit of tumor robustness. With many traits having a heritable basis, transient resistance can transform into persistent tumor properties under sustained selective pressure, i.e. genetic assimilation 162, 163.
In short, we propose that enhanced tumor robustness and evolvability are conferred through the development of degenerate selective repertoires that arise naturally in cell populations subject to genetic instability, epigenetic instability, and stochastic protein dynamics. These proposed relationships between tumor robustness, degeneracy, and evolvability are summarized in Figure 2. While the studies reviewed in this article support the model described above, there are aspects of the proposed model that could be modified or elaborated upon and still be supported by the accumulated evidence within these studies. For instance, under the mutation-selection balance that constrains the likelihood of initial disease onset within the mutator hypothesis, a mutator phenotype would not be highly maladaptive if preceded by (selectively passive) mutations that elevate mutational robustness (i.e. attenuate the phenotypic effects of mutations) through so-called capacitance or genetic buffering of a pre-cancerous cell. Examples of common tumor mutations that can increase genetic capacitance
include p53 loss-of-function (loss of the apoptotic response to DNA damage), constitutive PI3-kinase signaling (pushing cells into a proliferative, anti-apoptotic state) and increased chaperone expression (a direct increase in genetic buffering). Stochastic fluctuations in protein expression may also serve as a genetic buffering system by compensating for loss-of-function mutations or suppressing deleterious protein expression. With elevated levels of genetic buffering, a clonal population could subsequently acquire a mutator phenotype under nearly neutral conditions, thereby enhancing the overall likelihood of the mutator phenotype pathway. Even in the absence of elevated genetic buffering, mutational robustness is exceptionally high in eukaryotic genomes compared to the more compact genomes of viruses and bacteria 164, and this genetic neutrality should increase the plausibility of a mutator phenotype pathway beyond the conditions suggested by the simulation studies reviewed earlier.

2. Increasing evolutionary potential through cryptic heritable variation in tumors

Counter-intuitively, high levels of mutational robustness 163 within cancer cells may also have a direct positive impact on a tumor's ability to evolve therapy resistance. While heritable phenotypic diversity is a precondition for evolutionary adaptation, the competitive environment within a tumor will suppress trait differences that are deleterious to cell survival and fecundity, and will thereby impose some constraints on the type and amount of phenotypic heterogeneity that can arise, both within a cell population's microenvironment and across the entire tumor. However, due to the cells' inherent genetic and epigenetic capacitance, mutations can readily accumulate in a cell population that appear phenotypically cryptic (or selectively hidden) under stabilizing selection but that become expressed or released under perturbed (e.g. stressful) environmental conditions.
This conditional exposure of trait variation is often discussed as a phenomenon known as cryptic genetic variation (CGV) or hidden reaction norms (for reviews see 165, 166). CGV describes heritable phenotypic variation that is hidden under normal conditions but released in the presence of novel alleles or novel environments. Given the high mutational robustness of the human genome, a mutator phenotype should facilitate an accelerated accumulation of high levels of this cryptic genetic variation, which can be subsequently (partially) released in ways that depend on the stressful environments encountered. This scenario, inherent mutational robustness combined with elevated (epi)genetic instability, is particularly promising because it would provide the necessary fuel for tumor adaptation under new stressful environments while bypassing the negative clonal selection that limits phenotypic variability under stable conditions. While cryptic genetic variation has been observed in a variety of species and cell populations 165, 166, the origins of CGV are not fully understood, making it unclear when or how genetic buffering mechanisms will permit CGV to arise. Very recent simulation studies have found that each of the hallmark features of CGV is readily observed in biological simulations when mutational robustness is achieved through biomolecular degeneracy 167. Because degeneracy is ubiquitous at the protein, complex assembly, and molecular pathway levels within the cell, it seems plausible that CGV can accumulate in tumor cell populations. Moreover, under the mutator phenotype scenario, CGV should accumulate rapidly and strongly influence the evolvability of cancer.

Using a systems biology approach to attack robust and evolvable tumors


Keeping firmly in mind Horrobin's fear that biomedical research is becoming a 'glass bead game' with little contact with reality 168, we now focus on the design of new therapeutic strategies that combat tumor evolvability in an effort to mitigate therapy resistance. Our overarching hypothesis is that an
understanding of the principles of tumor evolvability will allow the design of general therapeutic paradigms that minimize tumor evolution, in the hope of preventing or delaying the emergence of therapy resistance. A significant body of research has been devoted to developing mathematical models of the evolution of therapy resistance, with the aim of deriving general dosing strategies that inhibit tumor evolution 169-173. Below we focus on three recently published modeling approaches that illustrate how combining simulations, theory, and empirical evidence could help in the development of therapeutic strategies that overcome the evolution of therapy resistance. As even a small number of resistant cells at the start of therapy can prevent a cure, Foo and Michor recently modeled the worst-case scenario of the inevitable emergence of therapy resistance due to a single (epi)genetic mechanism 174. Importantly, they took into account the effects of drug toxicity and side effects 174 in an approach designed to give the best outcome for the patient, comparing continuous and pulsed therapy regimes by the maximal time before tumor recurrence occurs 174. The assumptions used in this analysis are consistent with the high probability of resistance experienced in clinical trials. Foo and Michor found that strategies involving drugs delivered in high-dose pulses, effectively slowing the net growth of resistant cells, provided the best outcome for patients in silico with respect to delay of tumor recurrence and drug toxicity 174. This high-dose-pulse approach may be useful in identifying optimal therapy schedules to avoid or delay resistance driven by a single (epi)genetic event 174. In their recent manuscript, Silva and Gatenby have taken an ecological approach in their war on cancer 175.
Inspired by successful biological control of pest species in ecosystems, they seek to stabilize rather than cure patient tumors, thereby avoiding the introduction of strong selective forces that drive the evolution of therapy resistance. Fundamental to their model is the assumption, based on experimental data sets and historic modeling results, that resistant cells are present within the tumor at low numbers due to their reduced fitness compared to sensitive tumor cells 175. The aim of their therapeutic approach is to maintain a sufficient number of the rapidly proliferating, sensitive cells throughout therapy to compete with and suppress the emergence of the slower-cycling resistant clones. An additional insight in the work of Silva and Gatenby is the incorporation of the role of the tumor microenvironment, specifically hypoxic regions, in modulating drug accessibility to resistant tumor cells within the hypoxic zone. To overcome the hypoxic barrier to therapy, Silva and Gatenby took advantage of tumor cell dependence on glycolytic metabolism by using the glucose competitor 2-deoxy-glucose to target resistant cells within the hypoxic tumor core. This was combined with a standard chemotherapy that targeted sensitive, proliferating cells on the tumor edge. How well do patients do on this adaptive therapy strategy compared to traditional therapy regimes? In silico simulations suggest that the patients would survive significantly longer when the two therapies were administered as separate doses, with the best results obtained when the resistant cells were first targeted with 2-deoxy-glucose then sensitive cells attacked with the chemotherapy 175. This approach managed to eradicate the resistant subpopulation, heralding the possibility of tumor elimination and patient remission 175. A potential criticism of this work is the untested biological assumptions that underpin the model. 
However, previous work by the same group has shown that adaptive therapy maintains a significantly lower tumor burden than conventional therapy in an established animal tumor model [176], providing promising experimental support for the efficacy of this approach. Our model of tumor evolution, introduced in the previous section, highlights important relationships arising in natural evolution that could inform the development of new therapeutic paradigms. For instance, the cgv pathway outlines a process by which tumor adaptation arises through drug-therapy-induced traits that are otherwise selectively hidden within the extant genetic and epigenetic diversity. Even with the high (epi)genetic instability associated with a mutator phenotype, any accumulation of cgv will take time, and this imposes important restrictions on the adaptive response capabilities of a tumor. For instance, if drug therapies cause the release of cgv under directional selection, this would also act to momentarily reduce cgv and transiently lower the tumor's evolvability toward additional stresses. Assuming the cgv pathway contributes significantly to tumor evolution, we propose that a drug regimen that cycles through drug therapy sequences with a timing that maximizes the rate of cgv release could drive tumors into a more fragile state and help lead to their ultimate demise. While the perspectives on tumor evolution proposed in our model all rely upon the onset of degenerate heritable phenotypes through genetic and epigenetic destabilization, there are differences in the timing and conditions of trait heterogeneity expression that could have significant implications for therapeutic strategies. Because robustness arises from the presence of multiple partially overlapping pathways for the establishment and maintenance of traits, we predict that this confers a predisposition toward single-target resistance, because suppressed pathways are compensated for by degenerate pathways. For polygenic traits that present a large and distributed mutational target, directional selection (under new stress conditions) is more likely to evolve cells with enhanced degenerate pathways [134, 135], which according to one study could potentially have multiplicative effects on cellular robustness over time [143]. While degeneracy at the cell population level (cgv) can in theory be eliminated using the sequential drug strategy suggested above, this would be less effective against late-stage cancers if a mutually supporting network of new degenerate pathways were to become fixed within the cancer genome.
In these circumstances of newly adapted cellular robustness, multi-target therapies acting on complementary pathways might provide the only promising avenue for complete eradication.
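The intuition behind the cycled-drug proposal above can be sketched with a deliberately simple bookkeeping model: a scalar "cgv reserve" accumulates between challenges, and each switch to a new drug exposes cryptic variation to directional selection, purging most of the reserve. The update rule and all rates are our own illustrative assumptions, not parameters from any fitted tumor model.

```python
# Illustrative sketch of the drug-cycling intuition: cryptic genetic
# variation (cgv) builds up between challenges, and each new drug exposes
# and purges a fraction of it. All rates are assumed for illustration.

def residual_cgv(switch_interval, horizon=360, accumulation=1.0, purge=0.8):
    """Track a scalar cgv reserve over `horizon` days. The reserve grows
    linearly between drug switches; each switch releases cgv under
    directional selection, removing the fraction `purge`."""
    cgv, t = 0.0, 0
    while t < horizon:
        cgv += accumulation                    # per-day buildup of hidden variation
        if t > 0 and t % switch_interval == 0:
            cgv *= (1.0 - purge)               # new drug exposes and purges most cgv
        t += 1
    return cgv

# Faster cycling leaves the tumor a smaller hidden adaptive reserve.
for interval in (30, 90, 360):
    print(interval, round(residual_cgv(interval), 1))
```

In this caricature, switching drugs faster than cgv can rebuild keeps the residual reserve low, which is the fragile state the proposed regimen aims for; the open biological question is whether purge rates and rebuild times in real tumors permit clinically feasible cycling intervals.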

Conclusion
Cancer is a complex disease, displaying emergent properties that are driven by an evolvable (epi)genome fueled by stochastic noise and by the contextual, dynamic interactions that occur within tumor environments. One such emergent property is therapy resistance, widely regarded as the greatest obstacle to long-term patient survival. Recent studies using mathematics, cell biology, animal models and clinical data have started to unravel the mechanisms underpinning evolvability in tumors. These ideas have in turn inspired the development of mathematical models that, by integrating an understanding of the mechanisms of tumor robustness, therapy resistance and tumor evolvability, provide a new tool for identifying novel dosing strategies that may help to delay or prevent the emergence of therapy resistance in human cancer patients. By viewing cancer as a robust, evolvable system, a number of researchers are now concluding that single therapeutic targets might be fundamentally unsuitable as a general treatment strategy, because the inherent heritable variation in cancer makes it a moving and elusive target. As emphasized in [107], targeted therapy approaches are likely to fail if the molecular targets are present in only a subset of proliferating cancer cells. Instead, we propose that directly attacking the origins of cancer evolvability, using therapeutic strategies that reduce heritable variation, could provide a rational alternative approach.

Acknowledgements: TT and AH are supported by the Australian Research Council. SO and AH are supported by the Australian National Health and Medical Research Council (NHMRC). AH is an NHMRC CJ Martin Research Fellow. JW is supported in part by the Australian Defence Science and Technology Organization (DSTO). AH thanks Prudence Donovan and Rohan Tweedale for their insightful comments and revisions.

Figure 1: (Left panel) An illustration of hierarchy within an epigenetic landscape for a single transition pathway from stem cell (C) to final cell type (A). (Right panel) Deregulation and modification of the epigenetic landscape as a consequence of genetic instability and the mutator phenotype. The illustration is modified from [177] and was originally created to illustrate the recently discovered existence of hierarchy within the conformational landscape of large proteins.

Figure 2: Three tiers of noise (genetic instability, epigenetic instability and stochastic protein dynamics), together with the feedback between these tiers, provide a strong source of divergence in the internal and external properties of cancer cells, i.e. the mutator phenotype. Cellular robustness achieved (in part) through degeneracy allows high levels of heritable heterogeneity to accumulate in a cell population. While the mutator phenotype introduces new heritable variants at a rapid rate, canalization will hide, and selection will filter, the phenotypic diversity that is actually observed, in a microenvironment-dependent fashion. Other factors such as genetic drift and tumor expansion can also influence the speed and extent to which heritable variation accumulates. When presented with novel environmental conditions, such as the administration of a new drug therapy, directional selection will then act on any of the standing genetic variation that is expressed as selectively relevant phenotypic differences. Some of this phenotypic variation is pre-existing and some is conditionally exposed by the new therapy. The overall extent of heritable phenotypic variation will influence the propensity to evolve persistent therapy resistance and thus impact the robustness of the cancer.

References

1. B. Tran and M. A. Rosenthal, J Clin Neurosci, 17, 417-421.
2. S. Rottenberg and J. Jonkers, Drug Resist Updat, 2008, 11, 51-60.
3. J. A. de Visser, J. Hermisson, G. P. Wagner, L. Ancel Meyers, H. Bagheri-Chaichian, J. L. Blanchard, L. Chao, J. M. Cheverud, S. F. Elena, W. Fontana, G. Gibson, T. F. Hansen, D. Krakauer, R. C. Lewontin, C. Ofria, S. H. Rice, G. von Dassow, A. Wagner and M. C. Whitlock, Evolution, 2003, 57, 1959-1972.
4. H. Kitano, Molecular Systems Biology, 2007, 3, 137.
5. H. Kitano, Nat Rev Genet, 2004, 5, 826-837.
6. J. H. Crowe and L. M. Crowe, Nature Biotechnology, 2000, 18, 145-146.
7. H. Kitano, Nature Reviews, 2004, 4, 227-235.
8. A. Marusyk and K. Polyak, Biochimica et Biophysica Acta, 2010, 1805, 105-117.
9. L. L. Campbell and K. Polyak, Cell Cycle, 2007, 6, 2332-2338.
10. J. A. Tischfield and C. Shao, Nature Genetics, 2003, 33, 5-6.
11. B. Baisse, H. Bouzourene, E. P. Saraga, F. T. Bosman and J. Benhattar, International Journal of Cancer, 2001, 93, 346-352.
12. M. Kirschner and J. Gerhart, Proceedings of the National Academy of Sciences of the United States of America, 1998, 95, 8420-8427.
13. P. C. Nowell, Science, 1976, 194, 23-28.
14. L. M. Merlo, J. W. Pepper, B. J. Reid and C. C. Maley, Nature Reviews, 2006, 6, 924-935.
15. J. J. Salk, E. J. Fox and L. A. Loeb, Annual Review of Pathology, 2010, 5, 51-75.
16. G. B. Balaban, M. Herlyn, W. H. Clark, Jr. and P. C. Nowell, Cancer Genetics and Cytogenetics, 1986, 19, 113-122.
17. B. Vogelstein, E. R. Fearon, S. R. Hamilton, S. E. Kern, A. C. Preisinger, M. Leppert, Y. Nakamura, R. White, A. M. Smits and J. L. Bos, The New England Journal of Medicine, 1988, 319, 525-532.
18. M. T. Barrett, C. A. Sanchez, L. J. Prevo, D. J. Wong, P. C. Galipeau, T. G. Paulson, P. S. Rabinovitch and B. J. Reid, Nature Genetics, 1999, 22, 106-109.
19. E. R. Fearon and B. Vogelstein, Cell, 1990, 61, 759-767.
20. M. F. Greaves, J. Rao, G. Hariri, W. Verbi, D. Catovsky, P. Kung and G. Goldstein, Leuk Res, 1981, 5, 281-299.
21. S. P. Shah, R. D. Morin, J. Khattra, L. Prentice, T. Pugh, A. Burleigh, A. Delaney, K. Gelmon, R. Guliany, J. Senz, C. Steidl, R. A. Holt, S. Jones, M. Sun, G. Leung, R. Moore, T. Severson, G. A. Taylor, A. E. Teschendorff, K. Tse, G. Turashvili, R. Varhol, R. L. Warren, P. Watson, Y. Zhao, C. Caldas, D. Huntsman, M. Hirst, M. A. Marra and S. Aparicio, Nature, 2009, 461, 809-813.
22. N. Navin, A. Krasnitz, L. Rodgers, K. Cook, J. Meth, J. Kendall, M. Riggs, Y. Eberling, J. Troge, V. Grubor, D. Levy, P. Lundin, S. Maner, A. Zetterberg, J. Hicks and M. Wigler, Genome Research, 2010, 20, 68-80.
23. S. Kobayashi, T. J. Boggon, T. Dayaram, P. A. Janne, O. Kocher, M. Meyerson, B. E. Johnson, M. J. Eck, D. G. Tenen and B. Halmos, The New England Journal of Medicine, 2005, 352, 786-792.
24. M. E. Gorre, M. Mohammed, K. Ellwood, N. Hsu, R. Paquette, P. N. Rao and C. L. Sawyers, Science, 293, 876-880.
25. N. P. Shah, J. M. Nicoll, B. Nagar, M. E. Gorre, R. L. Paquette, J. Kuriyan and C. L. Sawyers, Cancer Cell, 2002, 2, 117-125.

26. C. Roche-Lestienne and C. Preudhomme, Seminars in Hematology, 2003, 40, 80-82.
27. R. D. Barrett and D. Schluter, Trends Ecol Evol, 2008, 23, 38-44.
28. S. E. Luria and M. Delbruck, Genetics, 1943, 28, 491-511.
29. S. Jones, X. Zhang, D. W. Parsons, J. C. Lin, R. J. Leary, P. Angenendt, P. Mankoo, H. Carter, H. Kamiyama, A. Jimeno, S. M. Hong, B. Fu, M. T. Lin, E. S. Calhoun, M. Kamiyama, K. Walter, T. Nikolskaya, Y. Nikolsky, J. Hartigan, D. R. Smith, M. Hidalgo, S. D. Leach, A. P. Klein, E. M. Jaffee, M. Goggins, A. Maitra, C. Iacobuzio-Donahue, J. R. Eshleman, S. E. Kern, R. H. Hruban, R. Karchin, N. Papadopoulos, G. Parmigiani, B. Vogelstein, V. E. Velculescu and K. W. Kinzler, Science, 321, 1801-1806.
30. C. Greenman, P. Stephens, R. Smith, G. L. Dalgliesh, C. Hunter, G. Bignell, H. Davies, J. Teague, A. Butler, C. Stevens, S. Edkins, S. O'Meara, I. Vastrik, E. E. Schmidt, T. Avis, S. Barthorpe, G. Bhamra, G. Buck, B. Choudhury, J. Clements, J. Cole, E. Dicks, S. Forbes, K. Gray, K. Halliday, R. Harrison, K. Hills, J. Hinton, A. Jenkinson, D. Jones, A. Menzies, T. Mironenko, J. Perry, K. Raine, D. Richardson, R. Shepherd, A. Small, C. Tofts, J. Varian, T. Webb, S. West, S. Widaa, A. Yates, D. P. Cahill, D. N. Louis, P. Goldstraw, A. G. Nicholson, F. Brasseur, L. Looijenga, B. L. Weber, Y. E. Chiew, A. DeFazio, M. F. Greaves, A. R. Green, P. Campbell, E. Birney, D. F. Easton, G. Chenevix-Trench, M. H. Tan, S. K. Khoo, B. T. Teh, S. T. Yuen, S. Y. Leung, R. Wooster, P. A. Futreal and M. R. Stratton, Nature, 2007, 446, 153-158.
31. D. W. Parsons, S. Jones, X. Zhang, J. C. Lin, R. J. Leary, P. Angenendt, P. Mankoo, H. Carter, I. M. Siu, G. L. Gallia, A. Olivi, R. McLendon, B. A. Rasheed, S. Keir, T. Nikolskaya, Y. Nikolsky, D. A. Busam, H. Tekleab, L. A. Diaz, Jr., J. Hartigan, D. R. Smith, R. L. Strausberg, S. K. Marie, S. M. Shinjo, H. Yan, G. J. Riggins, D. D. Bigner, R. Karchin, N. Papadopoulos, G. Parmigiani, B. Vogelstein, V. E. Velculescu and K. W. Kinzler, Science, 2008, 321, 1807-1812.
32. The Cancer Genome Atlas Research Network, Nature, 2008, 455, 1061-1068.
33. T. Sjoblom, S. Jones, L. D. Wood, D. W. Parsons, J. Lin, T. D. Barber, D. Mandelker, R. J. Leary, J. Ptak, N. Silliman, S. Szabo, P. Buckhaults, C. Farrell, P. Meeh, S. D. Markowitz, J. Willis, D. Dawson, J. K. Willson, A. F. Gazdar, J. Hartigan, L. Wu, C. Liu, G. Parmigiani, B. H. Park, K. E. Bachman, N. Papadopoulos, B. Vogelstein, K. W. Kinzler and V. E. Velculescu, Science, 2006, 314, 268-274.
34. L. D. Wood, D. W. Parsons, S. Jones, J. Lin, T. Sjoblom, R. J. Leary, D. Shen, S. M. Boca, T. Barber, J. Ptak, N. Silliman, S. Szabo, Z. Dezso, V. Ustyanksky, T. Nikolskaya, Y. Nikolsky, R. Karchin, P. A. Wilson, J. S. Kaminker, Z. Zhang, R. Croshaw, J. Willis, D. Dawson, M. Shipitsin, J. K. Willson, S. Sukumar, K. Polyak, B. H. Park, C. L. Pethiyagoda, P. V. Pant, D. G. Ballinger, A. B. Sparks, J. Hartigan, D. R. Smith, E. Suh, N. Papadopoulos, P. Buckhaults, S. D. Markowitz, G. Parmigiani, K. W. Kinzler, V. E. Velculescu and B. Vogelstein, Science, 2007, 318, 1108-1113.
35. R. T. Prehn and J. M. Main, Journal of the National Cancer Institute, 1957, 18, 769-778.
36. N. H. Segal, D. W. Parsons, K. S. Peggs, V. Velculescu, K. W. Kinzler, B. Vogelstein and J. P. Allison, Cancer Research, 2008, 68, 889-892.
37. R. J. Leary, J. C. Lin, J. Cummins, S. Boca, L. D. Wood, D. W. Parsons, S. Jones, T. Sjoblom, B. H. Park, R. Parsons, J. Willis, D. Dawson, J. K. Willson, T. Nikolskaya, Y. Nikolsky, L. Kopelovich, N. Papadopoulos, L. A. Pennacchio, T. L. Wang, S. D. Markowitz, G. Parmigiani, K. W. Kinzler, B. Vogelstein and V. E. Velculescu, Proceedings of the National Academy of Sciences of the United States of America, 2008, 105, 16224-16229.
38. L. A. Loeb, C. F. Springgate and N. Battula, Cancer Research, 1974, 34, 2311-2321.
39. Y. Ionov, M. A. Peinado, S. Malkhosyan, D. Shibata and M. Perucho, Nature, 1993, 363, 558-561.

40. R. Fishel, M. K. Lescoe, M. R. Rao, N. G. Copeland, N. A. Jenkins, J. Garber, M. Kane and R. Kolodner, Cell, 1993, 75, 1027-1038.
41. C. Lengauer, K. W. Kinzler and B. Vogelstein, Nature, 1998, 396, 643-649.
42. J. M. Schvartzman, R. Sotillo and R. Benezra, Nature Reviews, 2010, 10, 102-115.
43. S. L. Thompson, S. F. Bakhoum and D. A. Compton, Curr Biol, 2010, 20, R285-295.
44. R. A. Beckman and L. A. Loeb, Proceedings of the National Academy of Sciences of the United States of America, 2006, 103, 14140-14145.
45. J. Breivik, Seminars in Cancer Biology, 2005, 15, 51-60.
46. R. A. Beckman, PLoS One, 2009, 4, e5860.
47. E. Loh, J. J. Salk and L. A. Loeb, Proceedings of the National Academy of Sciences of the United States of America, 2010, 107, 1154-1159.
48. T. Zhou, J. M. Carlson and J. Doyle, Journal of Theoretical Biology, 2005, 236, 438-447.
49. J. M. Carlson and J. Doyle, Phys Rev Lett, 2000, 84, 2529-2532.
50. P. Armitage and R. Doll, British Journal of Cancer, 1954, 8, 1-12.
51. P. Armitage and R. Doll, British Journal of Cancer, 2004, 91, 1983-1989.
52. P. Armitage and R. Doll, International Journal of Epidemiology, 2004, 33, 1174-1179.
53. D. Duke, J. Castresana, L. Lucchina, T. H. Lee, A. J. Sober, W. P. Carey, D. E. Elder and R. L. Barnhill, Cancer, 1993, 72, 3239-3243.
54. A. G. Knudson, Journal of Cancer Research and Clinical Oncology, 1996, 122, 135-140.
55. A. G. Knudson, Nature Reviews, 2001, 1, 157-162.
56. M. J. Renan, Molecular Carcinogenesis, 1993, 7, 139-146.
57. A. Rangarajan, S. J. Hong, A. Gifford and R. A. Weinberg, Cancer Cell, 2004, 6, 171-183.
58. A. M. Mahale, Z. A. Khan, M. Igarashi, G. J. Nanjangud, R. F. Qiao, S. Yao, S. W. Lee and S. A. Aaronson, Cancer Research, 2008, 68, 1417-1426.
59. J. M. Nicholson and P. Duesberg, Cancer Genetics and Cytogenetics, 2009, 194, 96-110.
60. C. Swanton, B. Nicke, M. Schuett, A. C. Eklund, C. Ng, Q. Li, T. Hardcastle, A. Lee, R. Roy, P. East, M. Kschischo, D. Endesfelder, P. Wylie, S. N. Kim, J. G. Chen, M. Howell, T. Ried, J. K. Habermann, G. Auer, J. D. Brenton, Z. Szallasi and J. Downward, Proceedings of the National Academy of Sciences of the United States of America, 2009, 106, 8671-8676.
61. S. E. McClelland, R. A. Burrell and C. Swanton, Cell Cycle, 2009, 8, 3262-3266.
62. C. J. Ye, J. B. Stevens, G. Liu, S. W. Bremer, A. S. Jaiswal, K. J. Ye, M. F. Lin, L. Lawrenson, W. D. Lancaster, M. Kurkinen, J. D. Liao, C. G. Gairola, M. P. Shekhar, S. Narayan, F. R. Miller and H. H. Heng, J Cell Physiol, 2009, 219, 288-300.
63. R. Sotillo, J. M. Schvartzman, N. D. Socci and R. Benezra, Nature, 2010, 464, 436-440.
64. A. Brock, H. Chang and S. Huang, Nat Rev Genet, 2009, 10, 336-342.
65. V. E. A. Russo, R. A. Martienssen and A. D. Riggs, Cold Spring Harbor Laboratory Press, Woodbury, 1996.
66. S. L. Berger, T. Kouzarides, R. Shiekhattar and A. Shilatifard, Genes & Development, 2009, 23, 781-783.
67. P. A. Jones and S. B. Baylin, Cell, 2007, 128, 683-692.
68. K. Luger, A. W. Mader, R. K. Richmond, D. F. Sargent and T. J. Richmond, Nature, 1997, 389, 251-260.
69. S. Sharma, T. K. Kelly and P. A. Jones, Carcinogenesis, 2010, 31, 27-36.
70. E. S. McKenna and C. W. Roberts, Cell Cycle, 2009, 8, 23-26.
71. M. F. Fraga, E. Ballestar, A. Villar-Garea, M. Boix-Chornet, J. Espada, G. Schotta, T. Bonaldi, C. Haydon, S. Ropero, K. Petrie, N. G. Iyer, A. Perez-Rosado, E. Calvo, J. A. Lopez, A. Cano, M. J. Calasanz, D. Colomer, M. A. Piris, N. Ahn, A. Imhof, C. Caldas, T. Jenuwein and M. Esteller, Nature Genetics, 2005, 37, 391-400.
72. A. Eden, F. Gaudet, A. Waghmare and R. Jaenisch, Science, 2003, 300, 455.

73. G. Howard, R. Eiges, F. Gaudet, R. Jaenisch and A. Eden, Oncogene, 2008, 27, 404-408.
74. F. V. Jacinto and M. Esteller, Mutagenesis, 2007, 22, 247-253.
75. C. Sawan, T. Vaissiere, R. Murr and Z. Herceg, Mutation Research, 2008, 642, 1-13.
76. M. Esteller, S. R. Hamilton, P. C. Burger, S. B. Baylin and J. G. Herman, Cancer Research, 1999, 59, 793-797.
77. K. Suzuki, I. Suzuki, A. Leodolter, S. Alonso, S. Horiuchi, K. Yamashita and M. Perucho, Cancer Cell, 2006, 9, 199-207.
78. A. P. Feinberg, R. Ohlsson and S. Henikoff, Nat Rev Genet, 2006, 7, 21-33.
79. P. M. Voorhoeve, Biochimica et Biophysica Acta, 2010, 1805, 72-86.
80. G. A. Calin, C. D. Dumitru, M. Shimizu, R. Bichi, S. Zupo, E. Noch, H. Aldler, S. Rattan, M. Keating, K. Rai, L. Rassenti, T. Kipps, M. Negrini, F. Bullrich and C. M. Croce, Proceedings of the National Academy of Sciences of the United States of America, 2002, 99, 15524-15529.
81. G. A. Calin, C. Sevignani, C. D. Dumitru, T. Hyslop, E. Noch, S. Yendamuri, M. Shimizu, S. Rattan, F. Bullrich, M. Negrini and C. M. Croce, Proceedings of the National Academy of Sciences of the United States of America, 2004, 101, 2999-3004.
82. J. Lu, G. Getz, E. A. Miska, E. Alvarez-Saavedra, J. Lamb, D. Peck, A. Sweet-Cordero, B. L. Ebert, R. H. Mak, A. A. Ferrando, J. R. Downing, T. Jacks, H. R. Horvitz and T. R. Golub, Nature, 2005, 435, 834-838.
83. N. Yanaihara, N. Caplen, E. Bowman, M. Seike, K. Kumamoto, M. Yi, R. M. Stephens, A. Okamoto, J. Yokota, T. Tanaka, G. A. Calin, C. G. Liu, C. M. Croce and C. C. Harris, Cancer Cell, 2006, 9, 189-198.
84. S. Volinia, G. A. Calin, C. G. Liu, S. Ambs, A. Cimmino, F. Petrocca, R. Visone, M. Iorio, C. Roldo, M. Ferracin, R. L. Prueitt, N. Yanaihara, G. Lanza, A. Scarpa, A. Vecchione, M. Negrini, C. C. Harris and C. M. Croce, Proceedings of the National Academy of Sciences of the United States of America, 2006, 103, 2257-2261.
85. S. Ambs, R. L. Prueitt, M. Yi, R. S. Hudson, T. M. Howe, F. Petrocca, T. A. Wallace, C. G. Liu, S. Volinia, G. A. Calin, H. G. Yfantis, R. M. Stephens and C. M. Croce, Cancer Research, 2008, 68, 6162-6170.
86. L. He, J. M. Thomson, M. T. Hemann, E. Hernando-Monge, D. Mu, S. Goodson, S. Powers, C. Cordon-Cardo, S. W. Lowe, G. J. Hannon and S. M. Hammond, Nature, 2005, 435, 828-833.
87. S. Costinean, N. Zanesi, Y. Pekarsky, E. Tili, S. Volinia, N. Heerema and C. M. Croce, Proceedings of the National Academy of Sciences of the United States of America, 2006, 103, 7024-7029.
88. H. Hermeking, Cancer Cell, 2007, 12, 414-418.
89. P. P. Medina and F. J. Slack, Cell Cycle, 2008, 7, 2485-2492.
90. J. Stelling, U. Sauer, Z. Szallasi, F. J. Doyle and J. Doyle, Cell, 2004, 118, 675-685.
91. M. Freeman, Nature, 2000, 408, 313-319.
92. N. J. Martinez, M. C. Ow, M. I. Barrasa, M. Hammell, R. Sequerra, L. Doucette-Stamm, F. P. Roth, V. R. Ambros and A. J. Walhout, Genes & Development, 2008, 22, 2535-2549.
93. J. Tsang, J. Zhu and A. van Oudenaarden, Molecular Cell, 2007, 26, 753-767.
94. S. Kauffman, Nature, 1969, 224, 177-178.
95. S. A. Kauffman, Journal of Theoretical Biology, 1969, 22, 437-467.
96. S. Huang, I. Ernberg and S. Kauffman, Seminars in Cell & Developmental Biology, 2009, 20, 869-876.
97. M. H. Tomasson, Journal of Cellular Biochemistry, 2009, 106, 745-749.
98. A. Trumpp and O. D. Wiestler, Nature Clinical Practice, 2008, 5, 337-347.
99. P. N. Kelly, A. Dakic, J. M. Adams, S. L. Nutt and A. Strasser, Science, 317, 337.
100. E. Quintana, M. Shackleton, M. S. Sabel, D. R. Fullen, T. M. Johnson and S. J. Morrison, Nature, 2008, 456, 593-598.

101. A. Roesch, M. Fukunaga-Kalabis, E. C. Schmidt, S. E. Zabierowski, P. A. Brafford, A. Vultur, D. Basu, T. Vogt and M. Herlyn, Cell, 2010, 141, 583-594.
102. T. Vogt, M. Kroiss, M. McClelland, C. Gruss, B. Becker, A. K. Bosserhoff, G. Rumpler, T. Bogenrieder, M. Landthaler and W. Stolz, Lab Invest, 1999, 79, 1615-1627.
103. A. Roesch, B. Becker, S. Meyer, C. Hafner, P. J. Wild, M. Landthaler and T. Vogt, Mod Pathol, 2005, 18, 565-572.
104. A. Roesch, B. Becker, S. Meyer, P. Wild, C. Hafner, M. Landthaler and T. Vogt, Mod Pathol, 2005, 18, 1249-1257.
105. A. Roesch, B. Becker, W. Schneider-Brachert, I. Hagen, M. Landthaler and T. Vogt, J Invest Dermatol, 2006, 126, 1850-1859.
106. A. Roesch, A. M. Mueller, T. Stempfl, C. Moehle, M. Landthaler and T. Vogt, International Journal of Cancer, 2008, 122, 1047-1057.
107. M. Greaves, Seminars in Cancer Biology, 2010, 20, 65-70.
108. S. Cara and I. F. Tannock, Ann Oncol, 2001, 12, 23-27.
109. T. Kurata, K. Tamura, H. Kaneda, T. Nogami, H. Uejima, G. Asai Go, K. Nakagawa and M. Fukuoka, Ann Oncol, 2004, 15, 173-174.
110. S. Yano, E. Nakataki, S. Ohtsuka, M. Inayama, H. Tomimoto, N. Edakuni, S. Kakiuchi, N. Nishikubo, H. Muguruma and S. Sone, Oncology Research, 2005, 15, 107-111.
111. S. V. Sharma, D. Y. Lee, B. Li, M. P. Quinlan, F. Takahashi, S. Maheswaran, U. McDermott, N. Azizian, L. Zou, M. A. Fischbach, K. K. Wong, K. Brandstetter, B. Wittner, S. Ramaswamy, M. Classon and J. Settleman, Cell, 2010, 141, 69-80.
112. A. C. Dean and C. Hinshelwood, Proceedings of the Royal Society of London. Series B, Containing Papers of a Biological Character, 1954, 142, 45-60.
113. N. Q. Balaban, J. Merrin, R. Chait, L. Kowalik and S. Leibler, Science, 2004, 305, 1622-1625.
114. W. J. Blake, G. Balazsi, M. A. Kohanski, F. J. Isaacs, K. F. Murphy, Y. Kuang, C. R. Cantor, D. R. Walt and J. J. Collins, Molecular Cell, 2006, 24, 853-865.
115. M. C. Smith, E. R. Sumner and S. V. Avery, Molecular Microbiology, 2007, 66, 699-712.
116. H. H. Chang, M. Hemberg, M. Barahona, D. E. Ingber and S. Huang, Nature, 2008, 453, 544-547.
117. A. A. Cohen, N. Geva-Zatorsky, E. Eden, M. Frenkel-Morgenstern, I. Issaeva, A. Sigal, R. Milo, C. Cohen-Saidon, Y. Liron, Z. Kam, L. Cohen, T. Danon, N. Perzov and U. Alon, Science, 2008, 322, 1511-1516.
118. J. J. Kim and I. F. Tannock, Nature Reviews, 2005, 5, 516-525.
119. D. J. Barnes and J. V. Melo, Cell Cycle, 2006, 5, 2862-2866.
120. P. Vaupel, A. Mayer and M. Hockel, Methods in Enzymology, 2004, 381, 335-354.
121. P. Vaupel and A. Mayer, Cancer Metastasis Rev, 2007, 26, 225-239.
122. Y. Kim, Q. Lin, P. M. Glazer and Z. Yun, Current Molecular Medicine, 2009, 9, 425-434.
123. R. S. Bindra, M. E. Crosby and P. M. Glazer, Cancer Metastasis Rev, 2007, 26, 249-260.
124. E. K. Rofstad, K. Sundfor, H. Lyng and C. G. Trope, British Journal of Cancer, 2000, 83, 354-359.
125. M. Wartenberg, F. C. Ling, M. Muschen, F. Klein, H. Acker, M. Gassmann, K. Petrat, V. Putz, J. Hescheler and H. Sauer, Faseb J, 2003, 17, 503-505.
126. T. G. Graeber, C. Osmanian, T. Jacks, D. E. Housman, C. J. Koch, S. W. Lowe and A. J. Giaccia, Nature, 1996, 379, 88-91.
127. S. J. Lunt, N. Chaudary and R. P. Hill, Clin Exp Metastasis, 2009, 26, 19-34.
128. R. A. Gatenby and R. J. Gillies, Nature Reviews, 2004, 4, 891-899.
129. J. Lankelma, H. Dekker, F. R. Luque, S. Luykx, K. Hoekman, P. van der Valk, P. J. van Diest and H. M. Pinedo, Clin Cancer Res, 1999, 5, 1703-1707.

130. W. S. Dalton, B. G. Durie, D. S. Alberts, J. H. Gerlach and A. E. Cress, Cancer Research, 1986, 46, 5125-5130.
131. K. A. Rejniak and L. J. McCawley, Experimental Biology and Medicine, 235, 411-423.
132. M. Hu and K. Polyak, Current Opinion in Genetics & Development, 2008, 18, 27-34.
133. K. Polyak, I. Haviv and I. G. Campbell, Trends Genet, 2009, 25, 30-38.
134. G. M. Edelman and J. A. Gally, Proceedings of the National Academy of Sciences of the United States of America, 2001, 98, 13763-13768.
135. J. Whitacre and A. Bender, Journal of Theoretical Biology, 2010, 263, 143-153.
136. J. M. Whitacre and A. Bender, Theoretical Biology & Medical Modelling, 2010, 7, 20.
137. A. M. Xu and P. H. Huang, Cancer Research, 2010, 70, 3857-3860.
138. M. Nagane, F. Coufal, H. Lin, O. Bogler, W. K. Cavenee and H. J. Huang, Cancer Research, 1996, 56, 5079-5086.
139. A. El-Obeid, E. Bongcam-Rudloff, M. Sorby, A. Ostman, M. Nister and B. Westermark, Cancer Research, 1997, 57, 5598-5604.
140. J. N. Rich, D. A. Reardon, T. Peery, J. M. Dowell, J. A. Quinn, K. L. Penne, C. J. Wikstrand, L. B. Van Duyn, J. E. Dancey, R. E. McLendon, J. C. Kao, T. T. Stenzel, B. K. Ahmed Rasheed, S. E. Tourt-Uhlig, J. E. Herndon, 2nd, J. J. Vredenburgh, J. H. Sampson, A. H. Friedman, D. D. Bigner and H. S. Friedman, J Clin Oncol, 2004, 22, 133-142.
141. P. H. Huang, A. Mukasa, R. Bonavia, R. A. Flynn, Z. E. Brewer, W. K. Cavenee, F. B. Furnari and F. M. White, Proceedings of the National Academy of Sciences of the United States of America, 2007, 104, 12867-12872.
142. J. M. Stommel, A. C. Kimmelman, H. Ying, R. Nabioullin, A. H. Ponugoti, R. Wiedemeyer, A. H. Stegh, J. E. Bradner, K. L. Ligon, C. Brennan, L. Chin and R. A. DePinho, Science, 2007, 318, 287-290.
143. J. M. Whitacre, Theoretical Biology & Medical Modelling, 2010, 7, 6.
144. G. Cambray and D. Mazel, PLoS Genet, 2008, 4, e1000256.
145. A. Wagner, Bioessays, 2005, 27, 176-188.
146. J. M. Carlson and J. Doyle, Proceedings of the National Academy of Sciences of the United States of America, 2002, 99 Suppl 1, 2538-2545.
147. A. Wagner, Proceedings, 2008, 275, 91-100.
148. L. S. Yaeger, HFSP Journal, 2009, 3, 328-339.
149. J. A. Draghi, T. L. Parsons, G. P. Wagner and J. B. Plotkin, Nature, 463, 353-355.
150. M. A. Huynen, P. F. Stadler and W. Fontana, Proceedings of the National Academy of Sciences of the United States of America, 1996, 93, 397-401.
151. S. Ciliberti, O. C. Martin and A. Wagner, PLoS Computational Biology, 2007, 3, e15.
152. B. C. Daniels, Y. J. Chen, J. P. Sethna, R. N. Gutenkunst and C. R. Myers, Current Opinion in Biotechnology, 2008, 19, 389-395.
153. J. D. Bloom, S. T. Labthavikul, C. R. Otey and F. H. Arnold, Proceedings of the National Academy of Sciences of the United States of America, 2006, 103, 5869-5874.
154. R. C. McBride, C. B. Ogbunugafor and P. E. Turner, BMC Evolutionary Biology, 2008, 8, 231.
155. S. L. Rutherford and S. Lindquist, Nature, 1998, 396, 336-342.
156. L. E. Cowen and S. Lindquist, Science, 2005, 309, 2185-2189.
157. N. Tokuriki and D. S. Tawfik, Nature, 2009, 459, 668-673.
158. Y. Xu and S. Lindquist, Proceedings of the National Academy of Sciences of the United States of America, 1993, 90, 7074-7078.
159. S. F. Falsone, S. Leptihn, A. Osterauer, M. Haslbeck and J. Buchner, Journal of Molecular Biology, 2004, 344, 281-291.
160. S. Takayama, J. C. Reed and S. Homma, Oncogene, 2003, 22, 9041-9047.
161. D. D. Mosser and R. I. Morimoto, Oncogene, 2004, 23, 2907-2918.

162. M. J. West-Eberhard, Proceedings of the National Academy of Sciences of the United States of America, 2005, 102 Suppl 1, 6543-6549.
163. M. Pigliucci, C. J. Murren and C. D. Schlichting, J Exp Biol, 2006, 209, 2362-2367.
164. R. Sanjuan and S. F. Elena, Proceedings of the National Academy of Sciences of the United States of America, 2006, 103, 14402-14405.
165. G. Gibson and I. Dworkin, Nat Rev Genet, 2004, 5, 681-690.
166. C. D. Schlichting, Ann N Y Acad Sci, 2008, 1133, 187-203.
167. J. M. Whitacre, ALIFE XII, 2010, in press.
168. D. F. Horrobin, Nat Rev Drug Discov, 2003, 2, 151-154.
169. A. J. Coldman and J. H. Goldie, Bulletin of Mathematical Biology, 1986, 48, 279-292.
170. A. J. Coldman and J. M. Murray, Mathematical Biosciences, 2000, 168, 187-200.
171. R. S. Day, Cancer Research, 1986, 46, 3876-3885.
172. Y. Iwasa, F. Michor and M. A. Nowak, Proceedings, 2003, 270, 2573-2578.
173. M. P. Little, Biology Direct, 2010, 5, 19; discussion 19.
174. J. Foo and F. Michor, PLoS Computational Biology, 2009, 5, e1000557.
175. A. S. Silva and R. A. Gatenby, Biology Direct, 5, 25.
176. R. A. Gatenby, A. S. Silva, R. J. Gillies and B. R. Frieden, Cancer Research, 2009, 69, 4894-4903.
177. A. Kurakin, Theoretical Biology & Medical Modelling, 2009, 6, 6.
