
New Generation Network Architecture AKARI Conceptual Design

AKARI Project. Original publication (Japanese): April 2007. English translation: October 2007. Copyright 2007 NICT.

AKARI Project Members: Masaki Hirabaru, Masugi Inoue, Hiroaki Harai, Toshio Morioka, Hideki Otsuki, Kiyohide Nakauchi, Sugang Xu, Ved Kafle, Hiroko Ueda, Masataka Ohta, Fumio Teraoka, Masayuki Murata, Hiroyuki Morikawa, Fumito Kubota, and Tomonori Aoyama This document presents the conceptual design of a new generation network architecture. It is based on the discussions at 14 meetings and 2 seminars, which were held during an 11-month period beginning in May 2006 and attended primarily by the Network Architecture Group of the New Generation Network Research Center of the National Institute of Information and Communications Technology (NICT).

What the name AKARI indicates: the codename for New Generation Network R&D in NICT, "a small light in the dark pointing to the future."

AKARI Conceptual Design Summary


AKARI Project Goals and Conceptual Design
The future holds a computing environment characterized by embedded and pervasive computing and networking that will benefit society worldwide, not just the current state in which computers and networks are proliferating widely. The current Internet, which was not designed with this kind of pervasive information-networked society in mind, cannot handle this societal transition, leaving it unable to further mankind's potential. To realize this kind of information-networked society envisioned for the next two or three decades, a new generation network must be created before the current Internet reaches its limits. This new generation network must seamlessly integrate real-world computing and networking with virtual space. The primary goal of the AKARI Project is to design a network of the future. The AKARI Project aims to implement a new generation network by 2015, developing a network architecture and creating a network design based on that architecture. Our philosophy is to pursue an ideal solution by researching new network architectures from a clean slate without being impeded by existing constraints. Once these new network architectures are designed, the issue of migration from today's conditions can be considered using these design principles. Our goal is to create an overarching design of what the entire future network should be. To accomplish this vision of a future network embedded as part of societal infrastructure, each fundamental technology or sub-architecture must be selected and the overall design simplified through integration. The AKARI Project, which was launched one year ago, identifies a list of societal requirements and the design principles needed to support them. It also introduces future basic design technologies and associated design principles, and it includes conceptual design examples of several key portions based on the design principles as well as requirements for testbeds that must be built for verifying them. Some parts of Chapters 2 and 4 are extracted and introduced below. These parts include societal and design requirements of the new generation network era (Chapter 2) and basic design principles for a new generation network architecture and network architecture design based on an integration of science and technology (Chapter 4).

Societal Considerations and Design Requirements of the New Generation Network Era
Network requirements and considerations for the Internet of tomorrow include:
(1) Peta-bps class backbone network, 10 Gbps FTTH, e-Science
(2) 100 billion devices, machine to machine (M2M), 1 million broadcasting stations
(3) Principles of competition and user-orientation
(4) Essential services (medical care, transportation, emergency services), 99.99% reliability
(5) Safety, peace of mind (privacy, monetary and credit services, food supply traceability, disaster services)
(6) Affluent society, disabled persons, aged society, long-tail applications
(7) Monitoring of global environment and human society
(8) Integration of communication and broadcasting, Web 2.0
(9) Economic incentives (business-cost models)
(10) Ecology and sustainable society
(11) Human potential, universal communication
To deal with these societal requirements, our goal is to contribute to human development by designing a new generation network architecture based on the following design principles.
(1) Large capacity. Increased speed and capacity are required to satisfy future traffic needs, which are estimated to be approximately 1000 times current requirements in a decade.
(2) Scalability. The devices that are connected to the network will be extremely diverse, ranging from high-performance servers to single-function sensors. Although little traffic is generated by each small device, their number will be enormous, and this will affect the number of addresses and states in the network.
(3) Openness. The network must be open and able to support appropriate principles of competition.
(4) Robustness. High availability is crucial because the network is relied on for important services such as medical care, traffic light control and other vehicle traffic services, and bulletins during emergencies.
(5) Safety. The architecture must be able to authenticate all wired and wireless connections. It must also be designed so that it can exhibit safety and robustness according to its conditions during a disaster.
(6) Diversity. The network must be designed and evaluated based on diverse communication requirements without assuming specific applications or usage trends.
(7) Ubiquity. To implement pervasive development worldwide, a recycling-oriented society must be built. A network for comprehensively monitoring the global environment from various viewpoints is indispensable for accomplishing this.
(8) Integration and simplification. The design must be simplified by integrating selected common parts, not by just packing together an assortment of various functions. Simplification increases reliability and facilitates subsequent extensions.
(9) Network model. To enable the information network to continue to be a foundation of society, the network architecture must have a design that includes a business-cost model so that appropriate economic incentives can be offered to service providers and businesses in the communications industry.
(10) Electric power conservation. As network performance increases, its power consumption continues to grow, and as things stand now, a router will require the electrical power of a small-scale power plant. The information-networked society of the future must be more Earth friendly.
(11) Extendibility. The network must be sustainable. In other words, it must have enough flexibility to enable the network to be extended as society develops.

Basic Design Principles for a New Generation Network Architecture


We identified the following three principles as our core design principles for creating a new generation network architecture: KISS (Keep It Simple, Stupid), Sustainable and Evolutionary, and Reality Connection.
(1) KISS principle
The KISS principle is an important guide for increasing Internet diversity, expandability, and reliability, thereby reducing possible complications that can easily arise. We have chosen the following design principles to support the KISS principle.
End-to-End: A basic principle of Internet architecture is that a network should not be constructed based on a specific application or with the support of a specific application as its objective.
Crystal Synthesis: When selecting from among many technologies and integrating them in order to enable diverse uses, simplification is the most important principle. The design must incorporate "crystal synthesis," a kind of simplification of technologies to reduce complexity even when integrating functions.
Common Layer: In a network model with a layer structure, each layer's independence is maintained. Each layer is designed independently and its functions are extended independently. However, one of the reasons for the success of the Internet is that the IP layer is a common layer. If we assume that the network layer exists as a common layer, other layers need not have the functions that are implemented in that common layer. Therefore, we concluded that the design of the new generation network architecture will have a common layer and will eliminate redundant functions in other layers to degenerate functions in multiple layers.
(2) Sustainable and Evolutionary principle
The new generation network architecture must be designed as a sustainable network that can evolve and develop in response to changing requirements. It is important for the network to have a simple structure and for service diversity to be ensured in end or edge nodes. To accomplish this, the following network control or design methods must be followed to enable a sustainable network to be continuously developed over 50 or 100 years.
Self-* properties: To construct a sustainable network that can be continuously developed, that network must be adaptive. Therefore, the network must be designed so that individual entities within the network operate in a self-distributed manner and so that the intended controls are achieved overall. In other words, a self-organizing network must be designed. Also, the hierarchical structure of the network will continue to be an important concept in the future from the perspectives of function division and function sharing. A network must be designed having an adaptable control structure for upper and lower layer states without completely dividing the hierarchy as is traditionally done. In other words, a self-emergent network must be designed.
Robust large-scale network: As the scale or complexity of a system increases, multiple simultaneous breakdowns normally occur, rather than single independent failures. In addition, there are more opportunities for software bugs to be introduced, and human error is more likely to occur in operations management. The new generation network architecture must be designed to handle the simultaneous or serious failures that may occur.
Controls for a topologically fluctuating network: In a mobile network or P2P network, communication devices are frequently created, eliminated, or moved. It is essential for mobility to be taken into consideration when designing a network. For example, when the topology frequently changes, controls for finding resources on demand are more effective than controls for maintaining routes or addresses. However, since the overhead for on-demand control is high, it is important to enable routing to be implemented according to the conditions of topology fluctuation.
Controls based on real-time traffic measurement: Failures become more commonplace as the scale of a network increases. As a result, precision-optimized real-time traffic measurements over the time scale required for control are important, and these must be applied to routing. Also, to pursue more autonomous actions in end hosts, it is important to actually measure or estimate the network status.
Scalable, distributed controls: To sufficiently scale controls even in large-scale or topologically varying networks, it is important to introduce self-organizing controls or pursue autonomous actions at each node.
Openness: Providing openness to users to facilitate the creation of new applications is also important to the network.
(3) Reality Connection principle
Internet problems occur because entities in the space on the network are disassociated from real-world society. To smoothly integrate relationships between these entities and society, addressing must be separated into physical and logical address spaces, mappings must be created between them, and authentication or traceability requests based on those mappings must be satisfied.
Separation of physical and logical addressing: We must investigate the extent to which physical and logical addressing should be separated. Various problems have been caused on the Internet by the appearance of new types of host connection scenarios that had not previously existed, such as mobility or multi-homing scenarios, and by handling physical and logical addresses in the same way (a small illustrative sketch follows at the end of this section).
Bi-directional authentication: A network should be designed so that bi-directional authentication is always possible. Also, authentication information must be located under the control of the particular individual or entity.
Traceability: Individuals or entities must be traceable to reduce attacks on the network. Traceability must be a basic principle when designing addressing and routing as well as transport over them. To reduce spam, systems must be traceable from applications to actual society.
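As a minimal illustration of the separation of physical and logical addressing described above, the following Python sketch keeps a logical identifier fixed while its physical locator changes; the identifier format, locator format, and mapping service are hypothetical examples, not part of the AKARI design.

```python
# Minimal sketch of separating a stable logical identifier from a changeable
# physical locator (all names and formats here are hypothetical examples).

class MappingService:
    """Resolves a stable logical identifier to its current physical locator."""
    def __init__(self):
        self._table = {}  # logical identifier -> current physical locator

    def register(self, node_id, locator):
        self._table[node_id] = locator

    def resolve(self, node_id):
        return self._table[node_id]

mapping = MappingService()
mapping.register("host-42", "net1.addr.0x1a")   # initial attachment point

peer = "host-42"                                 # sessions are bound to the identifier
print(mapping.resolve(peer))                     # -> net1.addr.0x1a

# The host moves or becomes multihomed: only the locator changes.
mapping.register("host-42", "net7.addr.0x9c")
print(mapping.resolve(peer))                     # -> net7.addr.0x9c; the identifier is unchanged
```

Under this kind of separation, authentication and traceability information could be bound to the stable identifier rather than to a locator that may be reassigned.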

Network Architecture Design Based on an Integration of Science and Technology


To build a new generation network architecture, it is important to design the network architecture by integrating technological techniques and theoretical (scientific) techniques. Setting up the architecture technologically based on properties that were obtained by scientific methods is the essence of architecture construction. Specifically, the following procedure is required.
(1) One architecture that can be entirely optimized and can flexibly adopt new functions is constructed.
(2) Then, to refine that architecture, a model is created based on network science, and its system properties are discovered according to mathematical analysis or actual inspections.
(3) Specific methods for achieving further global optimization (such as moderate interactions between layers or moderate interactions between different modules in the same layer) are created and new functions are adopted. This causes the network system to grow.
(4) The entire process in which new properties for that system are discovered from a scientific standpoint and new technologies are adopted is repeatedly executed.
In other words, network development can be promoted through a feedback loop containing repeated scientific and technological processes. Network science provides basic theories and methodologies for network architectures. However, the network system itself must be understood. New discoveries or principles can be obtained and system limitations can be learned by understanding system behavior through basic theories and methodologies. These theories and methodologies can also help clarify what makes good protocols or control mechanisms. When a network architecture is designed through network science research, whether or not the architecture is truly useful is clarified and implementation is promoted based on the following five criteria.
(1) Has a new design policy been developed?
(2) Has a new communication method been implemented?
(3) Was a new abstraction, model, or tool conceived?
(4) Were results commercialized and accepted by the user community?
(5) Were solutions given for real-world problems?

Summary
The AKARI Conceptual Design is a first step towards implementing a new generation network architecture. As mentioned earlier, this paper introduces societal considerations, future basic technologies, and design principles to be used when designing a new network architecture. It also includes conceptual design examples of several key portions based on the design principles as well as requirements for testbeds that must be built for verifying them. Our approach is to focus our energy on continuing to design a new generation network and to use testbeds to investigate and evaluate the quality of that design. Therefore, the existence of design principles is crucial to achieving a globally optimized, stabilized architecture. Until the final design is completed, even the design principles themselves are not fixed, but can be changed according to feedback through repeated design and evaluation. The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component technologies. Its role is to maximize the quality of life for the entire networked society and to provide it with sustainable stability. A new sustainable design must support human development for 50 or 100 years, not just 2 or 3 decades, as it functions as the information infrastructure underlying our society. This new architecture must avoid the same dangers confronting the current Internet.

CONTENTS
Preface
Chapter 1 Goals of the New Generation Network Architecture Design Project AKARI
1.1 AKARI Project Objective
1.2 AKARI Project Targets
1.3 AKARI Project Themes
1.4 Network Architecture Definitions and Roles
1.5 Opportunity for Redesigning Network Architecture from a Clean Slate
1.6 Conceptual Positioning of New Generation Network and Its Approach
1.7 Two Types of NGN: NXGN and NWGN
1.8 Comparison of NXGN and NWGN
Chapter 2 Current Problems and Future Requirements
2.1 Internet Limitations
2.2 Future Frontier
2.3 Traffic Requirements 10 Years Into the Future
2.4 Societal Requirements and Design Requirements
Chapter 3 Future Enabling Technologies
3.1 Optical Transmission
3.2 New Optical Fiber
3.3 Wavelength and Waveband Conversion
3.4 Optical 3R
3.5 Optical Quality Monitoring
3.6 Optical Switch
3.7 Optical Buffer
3.8 Silicon Photonics
3.9 Electric Power Conservation
3.10 Quantum Communication
3.11 Time Synchronization
3.12 Software-Defined Radio
3.13 Cognitive Radio
3.14 Sensor Networks
3.15 Power Conservation for Wireless Communications in the Ubiquitous Computing Era
Chapter 4 Design Principles and Techniques
4.1 Design Principles for a New Generation Network
4.2 Network Architecture Design Based on an Integration of Science and Technology
4.3 Measures for Evaluating Architectures
4.4 Business Models
Chapter 5 Basic Configuration of a New Network Architecture
5.1 Optical Packet Switching and Optical Paths
5.2 Optical Access
5.3 Wireless Access
5.4 PDMA
5.5 Transport Layer Control
5.6 Addressing and Routing
5.7 Layering
5.8 Security
5.9 QoS Routing
5.10 Network Model
5.11 Robustness Control
5.12 Overlay Network
5.13 Layer Degeneracy
Chapter 6 Testbed Requirements
Chapter 7 Related Research
7.1 NewArch
7.2 GENI / FIND
7.3 Euro-NGI / Euro-FGI
Chapter 8 Conclusions
Appendix Definitions of Terms

Preface
Packet switching was invented over 40 years ago. This technology, which gave rise to the Internet, is the information foundation of society today. About a century before the invention of packet switching, the telephone was invented as an improvement over the telegraph, and the telephone network based on circuit switching came to occupy a firmly entrenched position within society. Through the failure of Asynchronous Transfer Mode (ATM), the telephone network became the Next Generation Network (NGN), and an attempt is now being made to absorb it into a network based on packet switching. Through the transition from a simple network for connecting telephones to an information network for connecting computers, the network not only has supported societal aims, but has also become an indispensable part of our world today. In the ubiquitous computing society of the future, an information network will permeate our society and its terminals will be processing devices that are neither telephones nor computers. As the complexity and diversity of human society increases in the future and people and information become more closely interconnected, the network itself cannot help but reflect this diversity and complexity. Computers and networks will be ubiquitous and information networks will be embedded in the real world to benefit society en masse. The information network that supports the diversification of human life will give birth to a new culture and science. The network will enable real-world society to incorporate virtual space so that the two spaces are integrated seamlessly and people will be unaware of passing back and forth between these spaces. The current Internet, which was not designed with this kind of pervasive information network-oriented society in mind, cannot handle this societal transition, leaving it unable to further mankind's future potential. Actually, we are already experiencing problems associated with the gap between the real world and virtual space. To realize this kind of information network-oriented society envisioned for the next two or three decades, we must have a new generation network that can integrate the real world and virtual space and deal with them seamlessly. Improvements have often been made to the Internet by the Internet Engineering Task Force (IETF), its standards organization. Because of improvements that were made spanning dozens of years, its protocols have become more complex. Also, innovative ideas are not accepted in Internet technologies that have already been established. IPv6 simply broadens the address space, and we cannot expect the IETF to produce a new network architecture. Our vision is that we must create this new generation network before the Internet reaches its limits. The aim of new generation network research is to create a network for people of the next generation, not to create a network based on next generation technologies. A network architecture, which is a set of design principles for designing a network, is consistent with the general rules of human society. The Internet architecture was developed along with competition based on market principles and globalization, which the Internet supported. Both the rules of society and the Internet are now facing turning points. A sustainable society increasingly demands not only liberalization, but also peace of mind and safety.
To apply technologies that will be available in the future to resolve both social problems that cannot be resolved by modifying the current network as well as problems that are expected to become serious in the future, we must select, integrate, and simplify techniques and technologies based on a network architecture designed according to new design principles. The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component technologies. Its role is to maximize the quality of life of the entire network-oriented society and to provide it with sustainable stability. New generation network research must design the network from a clean slate regardless of current technologies. A new sustainable design must support human development for the following 50 or 100 years. We should design an ideal network that can be realized at a future point in time and then consider the issue of migration from existing conditions later. We must not improve the current technology without looking at future courses of action. This conceptual design is a collection of techniques and technologies that were selected and simplified based on design principles conforming to its concepts. Since the techniques and technologies that are included have not yet been evaluated, they are only suggestions to be included in a new generation network and act simply as guidelines indicating the first step in advancing our research. This conceptual design is organized as follows. Chapter 1 introduces the aims of the new generation network architecture design project AKARI. To clarify the current problems and future requirements, Chapter 2 describes the design requirements that are called for in this conceptual design. Chapter 3 describes future component technologies that can be used by the new generation network. Chapter 4 discusses design principles and techniques that are used in this conceptual design. Chapter 5 deals with the basic configuration of the new generation network architecture and various related technical areas. Chapter 6 describes the requirements for testbeds to be used as prototypes for verifying the new generation network architecture. Chapter 7 introduces related research and Chapter 8 presents conclusions.

Chapter 1. Goals of the New Generation Network Architecture Design Project AKARI [Hirabaru, Otsuki, Aoyama, Kubota]
This chapter initially describes the objectives and targets of the AKARI Project. Then, to clarify the aims of the project, it describes the importance of the network architecture definitions and roles, the conceptual positioning and approach of the AKARI Project, and the differences between a next generation and new generation network.

1.1. AKARI Project Objective


The objective of the AKARI Project is to design the network of the future. The AKARI Project aims to implement a new generation network by 2015 by establishing a network architecture and creating a network design based on that architecture. Our motto is "a small light (akari in Japanese) in the dark pointing to the future." Our philosophy is to pursue an ideal solution by researching new network architectures from a clean slate, without being impeded by existing constraints. Then the issue of migration from existing conditions can be considered. Our goal is to create an overarching design of what the entire future network should be. To accomplish this vision of a future network embedded as part of societal infrastructure, each fundamental technology or sub-architecture must be selected and the overall design simplified through integration.

1.2. AKARI Project Targets


The targets of the AKARI project are to develop a new generation network architecture and to design a new generation network based on it. The design will take into consideration the various design requirements discussed in Chapter 2 and present assessment evidence. Our first year goal is to create a conceptual design and present the initial design principles. These initial design principles will be revised to create a more detailed design in the second year. A development plan will be determined in the third year, a prototype will be developed in the fourth year, and demonstration experiments will be conducted and evaluated in the fifth year to show the effectiveness of this design.

1.3. AKARI Project Themes


The AKARI project will create a blueprint for a new generation network to be incorporated throughout Japan. This network will be based on future leading-edge technologies and will act as a foundation for supporting all communication services. The blueprint not only will be a design of the entire new generation network, but it will also indicate the directions of next generation network technologies for the industrial world with which the network will be interacting. The AKARI Project will evaluate the network using testbeds through cooperation with universities and industries and lead the way towards future standardization. To accomplish this, we identified the following guidelines:
- Lead by indicating future actions and ensuring neutral innovations for competitive industries
- Design based on basic principles that are common overall, not local improvements of efficiency or progress in specific component technologies
- Create an overarching vision of what the future network should be for more than a decade hence and utilize established design capabilities based on practical experience

1.4. Network Architecture Definitions and Roles


A network architecture is a set of abstract design principles. These design principles become criteria for making decisions when confronted with choices from among many design alternatives. Expressed in another way, a network architecture is a fusion of science and technology. Although we can evaluate whether or not a specific network satisfies certain requirements, there is no general methodology for designing a network that satisfies these requirements. Therefore, a network architecture aims to assist the design process so that the requirements are met more satisfactorily through repeated trials or more stable results are obtained. It is conceptually positioned at an intermediate location to match user requests with the development of component technologies. An excellent network architecture fills the gaps between these requests and developments to bring about a more optimized and stable network.

Fig. 1.4.1. Roles of the Network Architecture (the network architecture is positioned between future requirements from diverse users and society, which call for global optimization and sustainable stability of the information infrastructure, and evolving future fundamental technologies; it must be flexible enough to adopt new user requirements, avoid vertical division, act as a common infrastructure, and enjoy fundamental technology advances)

Fig. 1.4.2. Network Architecture Research and Development Process (many component technologies and sub-architectures are selected, integrated, and simplified by design principles into an architecture; protocol engineering then yields the new generation network, with feedback from proof-of-concept simulation and testbeds)

1.5. Opportunity for Redesigning Network Architecture from a Clean Slate


When the Internet was first conceived and its original users were members of the research community, it functioned well. However, with the commercialization of the Internet, optimizations for commercial purposes decreased reliability, and inconsistent functions, such as NAT, IPsec, and MPLS, were added based on case-by-case circumstances. The addition of a bundle layer is also being investigated. Functions were added on top of each other for almost 30 years, and the number of layers continued to increase. Adding more functions is already troublesome, and it is difficult to ensure reliability for the entire incompatible, complex system. Therefore, in its current state, the Internet cannot provide the services envisioned for our future world. Projects have been started in various countries to fundamentally solve the problems facing the Internet and to conduct research and development for designing a future Internet that will replace the current one. Although these projects use different names, such as the future Internet or new generation network, common goals of their research include long-term planning and "design from scratch," or "beginning from a clean slate." In the US, the NewArch project started in 2000, and the Future Directions in Network Architecture (FDNA) workshop was held at SIGCOMM a while later; this line of work was linked with the NSF projects named FIND and GENI. In Europe, the Euro-NGI and Autonomic Communication projects began in 2003, and Future Internet Research and Experimentation (FIRE) is being planned by the Seventh Framework Programme (FP7). In Japan, the UNS Strategic Programs, which were released in 2005 by the Ministry of Internal Affairs and Communications, described a new generation network based on innovative new concepts using photonic network technology to extend into the post-IP era. They also stated that a network integration architecture will be created with a long-term perspective into the future. The National Institute of Information and Communications Technology (NICT) is also conducting research and development on future networks. This organization has inaugurated the New Generation Network Research Center, which is mainly carrying out research on network architectures.

Fig. 1.5.1. Problems with the Internet Architecture (functions such as NAT, IPsec, MPLS, GMPLS, mobility, multicast, anycast, flow labels, and hierarchical addressing were rapidly added on top of the original Internet architecture, and layers such as L2.5: MPLS, L3.5: Mobile IP, L4.5: Platform, and LX.5: Overlay/Bundle were rapidly inserted, raising questions about universal communication, small devices, dependability, authentication, and guaranteed service)

Fig. 1.5.2. Initiatives for Recreating a Network Architecture from a Clean Slate (a 2000-2009 timeline including NewArch (DARPA), the 100x100 Clean Slate Project (NSF), the SIGCOMM FDNA workshop, the GENI initiative, FIND (NSF), Euro-NGI (EU), Autonomic Communication (EU), the UNS Strategic Programs (JP), and the new generation network architecture work at NICT)

1.6. Conceptual Positioning of New Generation Network and Its Approach


We must have a vision for the network of the future. Although it is difficult to predict what it will be like 10 or 15 years in the future, there should be an ideal network-oriented society, and research and development should be conducted concerning the network for implementing it. This network should only be accountable to the ideal future that it aims to achieve and should not be tied to network systems that are currently in use in our present-day society or to the technological assets involved in those systems. The new generation network will not be able to be implemented immediately, but will act as a reference for future research and development and point to a course of action for research and development in this field. There are concerns that if research and development is performed based on current technologies, the direction taken by the development process for the network-oriented society will reflect corporate interests or be reduced to local optimizations. In addition, a large gap may occur between research and development based on current technologies and the next generation technologies when the limits of the current Internet are reached. However, we believe that milestones for current network research and development projects can be determined and steps towards the future can be taken with an ideal solution in mind. Many current network research and development projects end up adhering to piecemeal improvements of Internet technologies or the spread of the Internet. There is a strong tendency to carry out development with the current Internet in mind, which inhibits movement towards new innovation. It is our philosophy that network research and development that is linked to future innovations is only possible by starting from a clean slate with no concept of the current Internet in mind.

Fig. 1.6. Conceptual Positioning of New Generation Network (starting from the past and present networks, the Next Generation Network (NXGN) is reached around 2010 through modification and continues as a revised NXGN, while the New Generation Network (NWGN) is reached around 2015 through a new paradigm)

1.7. Two Types of NGN: NXGN and NWGN


We propose that the next generation network that is based on IP be referred to as NXGN and that the new generation network of at least a decade into the future, which will not necessarily adhere to IP, be referred to as NWGN. In addition, we want to point out that a new paradigm is likely to be introduced in NWGN. Next Generation Network (NXGN): The basic architecture and service conditions of the Internet are maintained and quadruple-play services (telephone, data, broadcasting, and mobile devices) are implemented. New Generation Network (NWGN): Future ubiquitous services are conceived of in a form that differs from current Internet architecture and services, using a new paradigm called the New Paradigm Network (NPN). The NGN that is the focus of the ITU-T, a typical example of an NXGN, is a short-term research and development project covering the next five years with an aim to improve current technologies. On the other hand, AKARI, a typical example of an NWGN, is a long-term research and development project that begins with a clean slate and aims to design a network for more than a decade into the future. The following figure explains the typical configuration of an NWGN. At the center is the common network (shaded portion in the figure), which is a common layer that will be newly developed to replace IP. An underlay network below this will have several technologies and will provide diverse means of transmission or access. On the other hand, an overlay network above the common network will provide a flexible, customizable layer on which applications will run. A cross-layer control mechanism will operate among the layers to enable the layers to cooperate and to provide users with services in the appropriate layer such as A, B, and C in the figure.

Fig. 1.7. Conceptual Diagram of the New Generation Network Configuration (users A, B, and C are served through a universal-access user interface and applications running on a flexible, customizable overlay network; beneath it, a common network replaces IP; an underlay network provides photonic, mobile, sensor, broadband, ubiquitous, scale-free, and secure means of transmission or access; a cross-layer control mechanism coordinates the layers)
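To make the layering described in this section easier to picture, the toy sketch below arranges an underlay, a thin common layer, an overlay, and a cross-layer control mechanism as cooperating software components; every class and method name here is invented for illustration and is not part of any AKARI specification.

```python
# Toy sketch of the NWGN layering described above; all class and method names
# are invented for illustration and are not part of any AKARI specification.

class UnderlayNetwork:
    """One concrete means of transmission or access (photonic, mobile, sensor, ...)."""
    def __init__(self, kind):
        self.kind = kind

    def transmit(self, payload):
        return f"sent {len(payload)} bytes over the {self.kind} underlay"

class CommonLayer:
    """Thin common layer that replaces IP and hides the diversity of underlays."""
    def __init__(self, underlays):
        self.underlays = underlays

    def deliver(self, payload, underlay_index=0):
        return self.underlays[underlay_index].transmit(payload)

class OverlayNetwork:
    """Flexible, customizable layer on which applications run."""
    def __init__(self, common):
        self.common = common

    def send(self, message, underlay_index=0):
        return self.common.deliver(message.encode(), underlay_index)

class CrossLayerControl:
    """Lets the layers cooperate, e.g., by steering traffic to a suitable underlay."""
    def __init__(self, common):
        self.common = common

    def pick_underlay(self, requirement):
        kinds = [u.kind for u in self.common.underlays]
        return kinds.index("photonic") if requirement == "broadband" else 0

common = CommonLayer([UnderlayNetwork("photonic"),
                      UnderlayNetwork("mobile"),
                      UnderlayNetwork("sensor")])
overlay = OverlayNetwork(common)
control = CrossLayerControl(common)

# User A needs broadband service: cross-layer control selects the photonic underlay.
print(overlay.send("user A data", control.pick_underlay("broadband")))
# User C runs an ordinary application over the default underlay.
print(overlay.send("user C application message"))
```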

1.8. Comparison of NXGN and NWGN


NGN is the next generation network architecture for which the ITU-T is conducting standardization work. The core of this architecture consists of a function architecture called the service stratum and a transmission network called the transport stratum, which are linked by IP. NGN aims to create a carrier network architecture that can not only provide multi-media and mobile network services from telephone services by using the conventional IP network as infrastructure and adding security, authentication, and QoS functions but can also provide new services extending into the future. The goal for implementation is roughly by 2010, and the targeted services are triple-play services that encompass existing telephone and IMS-based multi-media services. Session management based on service definitions in the service stratum will be performed for all NGN services. IMS-based multi-media services will also be session-based services. Since the infrastructure is an IP network, services that are implemented on the Internet are expected to be fully supported by the transport stratum. However, since a carrier network is assumed, the degree to which it will interconnect with the Internet as infrastructure is still not known. Also, one technology that is gaining attention is application services via the Application Network Interface (ANI). This not only enables services to be provided by carriers, but also enables users to receive extended services. However, since this technology does not directly control the infrastructure, its prospects will not only depend on ANI functions that may be created in the future, but will also be significantly affected by the degree to which they are made publicly available. Although the possibility of future growth of NGN as a carrier network is anticipated and it is expected to be used as infrastructure instead of traditional communication networks, the following concerns cannot be ignored.

QoS: IP network limits could be reached by using IP for QoS tasks. In particular, it is difficult to guarantee QoS. Although applications are preferentially controlled for each class, bandwidth is clearly difficult to guarantee.
Scalability and capacity: Since all services undergo session management, scalability is a concern. There are also uncertainties concerning the transaction management required for authentication of terminals and individuals to ensure security. Management information search scalability in the location information databases used for mobility is also uncertain. These kinds of uncertainties are worrisome because control is centralized even though these services take a distributed form using IP. Since existing terminals and applications are integrated, a tera-bps to peta-bps class network is probably required in terms of capacity. However, we will be unable to create greater capacity if these kinds of scalability uncertainties are not resolved. This is a major concern for future implementation technologies.
Electric power: Since the infrastructure is based on IP routers, router performance is directly related to QoS and network performance. If we consider peta-bps class processing based on high-end IP routers, several hundred routers consuming kilowatts of power per node are required, resulting in megawatt-class power requirements (a rough estimate is sketched after Fig. 1.8).
Flexibility, robustness, and sustainability: The possibility of future growth may be inhibited by the ANI implementation and non-technical limitations. On the other hand, robustness can be ensured since security will be under the strict control of business. Also, support for emergency calls is obligatory, and these calls will be processed with high priority. Since the replacement of existing services by services that will guarantee a certain degree of flexibility is a worthy goal, sustainability (exceeding 50 or 100 years) is not a primary goal.

Fig. 1.8. NGN Configuration (the service stratum and the transport stratum are linked by IP; users connect via the UNI, other networks via the NNI, and application providers via the ANI)
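To make the megawatt-class estimate in the electric power item above concrete, here is a rough back-of-the-envelope calculation; the per-router throughput and power figures are illustrative assumptions, not values taken from this document.

```python
# Rough estimate of the power needed for peta-bps class processing with
# high-end IP routers (the per-router figures are illustrative assumptions).

target_capacity_bps = 1e15        # peta-bps class total switching capacity
router_capacity_bps = 2.5e12      # assume roughly 2.5 Tbps per router node
router_power_watts = 5_000        # assume roughly 5 kW consumed per router node

routers_needed = target_capacity_bps / router_capacity_bps
total_power_mw = routers_needed * router_power_watts / 1e6

print(f"{routers_needed:.0f} routers, about {total_power_mw:.1f} MW")
# -> 400 routers, about 2.0 MW: several hundred nodes and megawatt-class power.
```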

Table 1.8. Differences between the Next Generation Network (NGN) and the New Generation Network (NWGN).

Assumed implementation time. NGN: by 2010. NWGN: 2015 or later.
Creation method. NGN: add QoS and authentication to existing IP. NWGN: create a new network without being committed to IP.
Trunk line capacity. NGN: O-E-O conversion, less than peta-bps capacity. NWGN: all-optical, greater than peta-bps capacity.
Assumed terminals and applications. NGN: integration and creation of advanced versions of existing terminals and applications, such as triple- or quadruple-play services. NWGN: unknown but highly diverse, ranging from devices acting in conjunction with massive information servers to tiny communication devices such as sensors.
Power consumption. NGN: several megawatts (transformer substation scale). NWGN: power conservation by a factor of at least 1/100 through multi-wavelength optical switching.
Security. NGN: successive violations of principles such as firewalls, IPSec, and IP traceback. NWGN: control of spam and DoS attacks by address tracing and end-to-end and inter-network security.
Robustness. NGN: supported by enhancement of the management function by businesses. NWGN: robustness is provided by the network itself.
Routing control. NGN: distributed centralized control following IP, MPLS required for high-speed rerouting, long fault detection time. NWGN: introduction of complete distributed control, increase in failure resistance and adaptability, inclusion of sensor nets or ad hoc nets.
Relationship between users and the network. NGN: although there are some constraints on openness stipulated by UNI, ANI, and NNI, reliability is increased. NWGN: provides openness from a neutral standpoint, and users can bring new services.
Quality assurance. NGN: priority control for each class by using IP. NWGN: quality assurance that includes bandwidth for each flow, using packet switching or paths appropriately.
Layer configuration. NGN: thick layer structure. NWGN: layer degeneracy and cross-layer control centered around a thin common layer.
Integration model. NGN: vertical integration orientation. NWGN: vertical or horizontal integration possible.
Basic principles. NGN: set from a business standpoint while using IP. NWGN: set from a clean slate to match future requirements.
Sustainable evolution. NGN: has limitations due to IP. NWGN: has sustainable evolution capability that can adapt to a changing society.
Access. NGN: up to 1 Gbps for each user. NWGN: over 10 Gbps for each user.
Wired-wireless convergence. NGN: under investigation. NWGN: context aware.
Mobile. NGN: IMS. NWGN: ID locator separation.
Number of terminals. NGN: up to 10 billion. NWGN: over 100 billion.

References
[1-1] David Clark, et al., NewArch Project: Future-Generation Internet Architecture, http://www.isi.edu/newarch/, 2003.
[1-2] Larry Peterson, et al., GENI: Global Environment for Network Innovations, http://www.geni.net/, 2006.
[1-3] Daniel Kofman, et al., Euro NGI, http://eurongi.enst.fr/, 2006.
[1-4] Mikhail Smirnov, et al., Autonomic Communication, http://www.autonomiccommunication.org, 2006.
[1-5] The Telecommunications Council, Research and Development for Ubiquitous Network Society, UNS Strategic Programs, http://www.soumu.go.jp/snews/2005/pdf/050729_7_2.pdf, July 29, 2005.
[1-6] Hirabaru et al., Network Architecture Group, http://nag.nict.go.jp/, 2006.


Chapter 2. Current Problems and Future Requirements [Ohta, Hirabaru, Nakauchi, Aoyama, Morikawa, Inoue, Kubota]

2.1. Internet Limitations


Loss of transparency on the Internet is often attributed to the widespread use of Network Address Translation (NAT) because of an insufficient number of IP addresses. However, this is not the only problem. Many parts of the Internet are already breaking down. When a new protocol is introduced and an attempt is made to use it together with existing protocols or with other newly introduced protocols whose interactions are unknown, the new protocol may gradually become incompatible with protocols that previously worked together efficiently. To overcome these Internet limitations, relationships between protocols must be reassessed and protocols must be redesigned without regard to past usage. This not only seems to be occurring in lower layers, but is also seen in upper layers. For example, Session Initiation Protocol (SIP), which is supposed to be used to match media formats between end users in NGN, will also be used for reserving resources between network providers. However, if a lower layer is designed appropriately, either an upper layer can easily be redesigned or, in many cases, the protocol will be unnecessary. For example, SIP need not be used between business users if resources can be reserved appropriately in a lower layer (transport layer). Therefore, this section focuses entirely on lower layer limitations.

2.1.1. Multicast Routing Limitations


The limitation of multicast routing is an obvious example of Internet protocol limitation. The original concept (grand design) of multicast routing followed unicast routing and permitted various types of routing methods within a domain. The routes of multiple multicast groups were aggregated to curb the growth of routing tables, and an attempt was made to integrate these control methods in a common inter-domain multicast routing protocol. Various types of routing protocols (DVMRP, MOSPF, CBT, PIM-DM, PIM-SM, etc.) that are available within a domain fail when the domain grows larger or the number of groups increases because the number of route advertisements increases dramatically. However, it is generally impossible to aggregate the routes of multiple multicast groups. For unicast routing, when only a region having a certain address range is used, routing table entries are conserved at a distant location from that region by using the same route for all addresses in that address range. However, for multicast routing, the transmission destination is not a host, but a set of hosts spanning the entire Internet. Therefore, separate routing tables are required for different members even if the set of destinations are similar or many members are common. Moreover, the similarity of multicast destinations and similarity of multicast addresses are generally unrelated just like viewers receiving adjacent channels of a TV are generally not alike. The aggregation of routes of multiple multicast groups is generally impossible. In other words, the grand design of multicast routing has been a limitation from the start, and the inter-domain multicast routing protocol BGMP, which was proposed to aggregate routes, has not accomplished its goal.
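The contrast drawn above between unicast prefix aggregation and per-group multicast state can be made concrete with a small sketch; the addresses and group memberships below are arbitrary illustrative examples.

```python
# Sketch: why unicast routes aggregate but multicast routes generally cannot.
# The addresses and group memberships are arbitrary illustrative examples.
import ipaddress

# Unicast: at a distant router, one aggregated entry for 203.0.113.0/24 covers
# every destination inside that prefix.
unicast_table = {ipaddress.ip_network("203.0.113.0/24"): "interface-A"}
destination = ipaddress.ip_address("203.0.113.57")
entry = next(iface for net, iface in unicast_table.items() if destination in net)
print(entry)                 # -> interface-A (one entry serves 256 destinations)

# Multicast: each group is a set of receivers spread over the whole network, and
# similar group addresses say nothing about how similar those sets are, so a
# separate entry is needed per group even when most members are shared.
multicast_state = {
    "group-1": {"receiver-A", "receiver-B", "receiver-C"},
    "group-2": {"receiver-A", "receiver-B", "receiver-D"},  # nearly the same members
    "group-3": {"receiver-X"},
}
print(len(multicast_state))  # -> 3 entries; none of them can be merged
```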


Currently, only PIM-SM, for which the number of advertisements does not increase even if the domain gets larger, is used. However, it is limited to use within each domain through static configuration so that the number of advertisements does not increase dramatically even if the number of groups increases. Although the resource reservation protocol RSVP was designed to support all of the multicast routing protocols, which have now failed, it contains the same problems as those protocols. Another limitation of multicast routing has been the introduction of IGMP. IGMP was introduced so that end terminals could be supported only by IGMP without regard to the multicast routing method. However, this not only made the multicast routing method (which only a router understands) unnecessarily complicated, but also moved functions that the terminal should have into the network, which is an overt violation of the end-to-end principle. Actually, IGMP has been functionally extended twice because of the introduction of new multicast routing methods, although it was supposed to be unrelated to individual multicast routing methods. IGMP has obviously failed.

2.1.2. ATM Limitations


At one time, even part of the Internet community expected ATM to be the foundation of telecommunication of the future. However, guaranteeing QoS on ATM was equally as complex as on a packet-switched network such as the Internet, and ATM failed in a similar manner as RSVP. Since its average packet (cell) length was 1/10 that of the Internet and it produced a speed that was also only approximately 1/10 that of the Internet, it is hardly used anymore. However, former trials in which ATM coexisted with the Internet placed strains on Internet protocols. Some of these were broadcast avoidance and the excessive expectations for multicasting. When an ATM network is selected as the data link layer and the Internet runs on top of it, the assumption of the basic model (CATENET model) of the Internet (the data link consists of a small number of devices) is not realized. Therefore, an IP broadcast for the data link (that is, for the ATM network), for example, is certainly not realistic. However, point to multi-point (P to MP) communications using ATM are realistic, and since this is misunderstood as being equivalent to IP multicasting, multicasting is mistakenly considered to be realistic. This results in excessive expectations for multicasting. Actually, a method should have been used that did not break down the CATENET model and broadcasting should have been simulated by virtually building a small data link on the ATM network. If multicasting is used excessively, IGMP traffic is uselessly generated, and if the IGMP query interval is extended, support for changes in group members having reduced IGMP traffic is delayed. Therefore, it is effective to only use broadcasting in an environment where most terminals only use IP as is done today.

2.1.3. Inter-Domain Routing Limitations


BGP, which is an inter-domain routing protocol, selects an alternate path when a failure occurs. Although most users are not aware that recovery takes a long time (often in terms of minutes), this is clearly a limitation where mission-critical uses are concerned. The reasons that recovery takes so long are that free policies are permitted at each AS and that there are too many ASs. Another limitation lies in the large number of global routing table entries, currently exceeding 200,000 entries. Since multihoming is currently performed using routing, a multihomed site requires individual independent entries in BGP global routing tables. Most of the advertised global routing entries are used for multihoming. As long as multihoming depends on routing, the number of global routing table entries is expected to continue to increase quickly in the future. Attempting to perform inter-domain routing using BGP leads to another limitation. For example, a method in which the MPLS path assignment depends on a BGP advertisement causes a significant increase in BGP advertisement information.
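A small sketch of why multihoming through routing inflates the global table follows; the prefixes are taken from documentation address space and are purely illustrative.

```python
# Sketch: a multihomed site's prefix cannot be absorbed into either provider's
# aggregate, so it needs its own slot in every BGP global routing table.
# The prefixes below are documentation addresses used purely for illustration.
import ipaddress

provider_a = ipaddress.ip_network("2001:db8:a000::/36")
provider_b = ipaddress.ip_network("2001:db8:b000::/36")
site_prefix = ipaddress.ip_network("2001:db8:a0ff::/48")   # assigned out of provider A

# Single-homed: the /48 is covered by provider A's aggregate, so no extra entry.
print(site_prefix.subnet_of(provider_a))    # True

# Multihomed: the site must also be reachable via provider B, whose aggregate
# cannot cover it, so the /48 is advertised separately as an independent entry.
print(site_prefix.subnet_of(provider_b))    # False
global_table = {provider_a, provider_b, site_prefix}
print(len(global_table))                    # 3 entries instead of 2
```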

2.1.4. Network Layer-Specific Time Interval Limitations


One common factor shared by many Internet limitations is the introduction of time into the network layer. Since the network layer is constructed using connectionless IP, which does not have the concept of a timeout, introducing the concept of time into it is inefficient as well as a violation of the original Internet design principles. On the other hand, the data link layer or transport layer often relies on timeouts for resending packets in response to packet loss and for detecting a failure based on the lack of a response from the destination, and it has had the concept of time from the start. The property of not having time is well maintained by the IP protocol itself. Although the TTL in IPv4 originally indicated a number of seconds, since it was actually used as a number of hops, it lost its meaning as time. Also, the concept of time was officially eliminated from the TTL in IPv6. However, because of inappropriate protocol design or operation related to layers above or below the network layer, time is introduced everywhere in the network layer, which has caused limitations to occur. The most striking example is NAT. To recover addresses in an attempt to forcibly share addresses among terminals, addresses that are unused for a long time are recovered according to a network-layer timeout without regard to any transport-layer timeout. Therefore, the transport layer, which may still be active, may end up being disconnected. Another example is a routing protocol timeout. To verify network-layer connectivity, a routing protocol generally also monitors the data link layer. Since its timeout must differ according to the data link, the routing protocol generally should not have a fixed timing. On a slow-speed data link, monitoring packets cannot be transmitted very frequently for detailed monitoring, while on a high-speed data link, monitoring packets should be frequently exchanged to increase monitoring precision. Also, on a long distance data link, the wait for a response must be longer than the RTT. However, since the current mainstream routing protocols were designed for a time when data links were slow, either a timeout cannot be set according to the data link or, if it can be set, it often can only be set in terms of seconds. Therefore, connectivity cannot be frequently verified. Similarly, routing advertisements also can only be spaced in terms of seconds or longer intervals. This causes failure recovery by changing routes to be unnecessarily delayed.
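The NAT failure mode described above can be modeled in a few lines: a network-layer idle timer reclaims a mapping even though the transport-layer connection behind it is still alive. The timer value and table format below are invented for illustration.

```python
# Toy model of a NAT whose network-layer idle timer reclaims a mapping while
# the transport-layer connection is still alive (all values are illustrative).

NAT_IDLE_TIMEOUT = 300   # seconds of inactivity before a mapping is reclaimed

nat_table = {}           # (private address, private port) -> (public port, last_seen)

def nat_forward(flow, now):
    """Translate an outgoing packet, creating or refreshing the flow's mapping."""
    public_port, _ = nat_table.get(flow, (40_000 + len(nat_table), now))
    nat_table[flow] = (public_port, now)
    return public_port

def nat_expire(now):
    """Network-layer cleanup: drop mappings that have been idle too long."""
    for flow, (_, last_seen) in list(nat_table.items()):
        if now - last_seen > NAT_IDLE_TIMEOUT:
            del nat_table[flow]

flow = ("10.0.0.5", 51234)
nat_forward(flow, now=0)   # connection established, then silent for a while
nat_expire(now=1_000)      # NAT reclaims the mapping after the idle timeout
print(flow in nat_table)   # False: the still-open transport connection is now broken
```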

2.1.5. IPSec Limitations


IPSec was an attempt to provide a common security method for various types of protocols. However, since the functions that were required for the security method varied according to the application and the applications were ignored when standardizing those functions, IPSec contained inconsistencies from the beginning. Security cannot be standardized by concentrating on a specific layer, but must be implemented in an appropriate layer according to application requirements. IPSec also contains public key encryption limitations. Generally, to implement security, it must be theoretically impossible for secret information that must be shared between specific parties to be shared by an unknown third party. However, in attempting to resolve this problem according to public key encryption, an additional third party called a certification authority (CA) was also introduced without taking the reliability of the CA into consideration (although the CA can be trusted to the same degree as the ISP, if the ISP can be trusted from the start, then IPsec would be unnecessary). This is inconsistent. The IPSec protocol that was actually defined by a compromise is unsuitable for most applications, and is of little use.

2.1.6. IPv4 Limitations


Since the IPv4 address length is only 32 bits, the number of Internet devices was quickly recognized to be limited to approximately 4 billion. In this sense, this was clearly a failing of IPv4. Therefore, various means of creating address hierarchies to use the limited number of addresses more efficiently were designed. Since addresses were being held back at the same time, the address conservation technique, NAT, prevailed, and the end-to-end transparency of the Internet was considerably compromised. A drastic solution to this problem was to extend the address length, which was implemented as IPv6. However, since IPv6 and the protocol group related to it have directly inherited the many limitations of the current Internet, the introduction of IPv6 alone will not prevent the eventual collapse of the Internet.
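The address-space figures mentioned here follow directly from the address lengths, as the short calculation below shows.

```python
# Address space sizes implied by the 32-bit (IPv4) and 128-bit (IPv6) address lengths.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128
print(f"IPv4: {ipv4_addresses:,}")     # 4,294,967,296, i.e., about 4 billion
print(f"IPv6: {ipv6_addresses:.2e}")   # about 3.40e+38
```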

2.1.7. IPv6 and ND Limitations


IPv6 increased the number of address bits as a successor to the IPv4 protocol. It directly followed most of the other conventions of IPv4 and added several "improvements" such as neighbor discovery (ND). As a result, it is a protocol group that carries on the limitations of the existing Internet protocol group. Many of those limitations are plainly manifested in ND, the "standard" protocol for linking the IP layer to lower layers. One of the advantages of IP is that, being simple, it can run on a great variety of lower layers; consequently, a means of implementing IP must be devised according to the special characteristics of each lower layer. Although ND was designed as a universal protocol for implementing IP on all lower layers, only Ethernet, PPP, and ATM were actually assumed as data link layers, and only the conventional methods of using IP on them were assumed. As a result, running IPv6 over other types of data links with ND gives rise to new kinds of limitations. As the result of a misguided investigation that took ATM into consideration, IPv6 provides no link broadcasting, only multicasting. Multicasting causes IPv6 to directly inherit IPv4's IGMP protocol along with its limitations. ND therefore uses multicasting frequently, since it cannot use broadcasting. Because IPv6 relies not only on IGMP but also on ND-specific timeouts, it is forced to use timeout values denominated in seconds, which ignores the special characteristics of the data link. The upper and lower limits of the ND timeout value, which were determined without any particular justification, make high-speed handover impossible. This is the most recently recognized limitation of IPv6. Although the specifications were changed for just this part, the change was merely an improvement of unimportant details.

For example, in a wireless LAN, multicast and broadcast frames are not retransmitted when a collision occurs, so they are less reliable than Ethernet broadcasting or wireless-LAN unicasting. Congestion causes processing performance to drop significantly, and this problem has not been solved. With ND, an attempt was made to have unicast routing (not just multicast routing) distinguish between simple terminals and routers so that a simple terminal would not need to understand the routing protocol. However, reducing terminal functions and relying on routers is a violation of the end-to-end principle.

IPv6 also differs from IPv4 in that the minimum Maximum Transmission Unit (MTU) has been significantly increased. For many upper-layer technologies it is sufficient to use the standard MTU; the value actually required by the upper layers is the Path MTU (PMTU), the minimum MTU along a path spanning multiple hops. PMTU discovery is an IPv6 option. However, the PMTU changes as the route changes, so it must be monitored at a suitable interval. PMTU handling was thus implemented in the network layer, and timeouts and the concept of time were introduced there as well. In practice, PMTU discovery cannot currently be relied upon.

The increase in the number of global routing table entries for interdomain routing had been recognized at the initial stage of IPv6 development as a problem no less important than the pressure on the address space, and address structures and address assignment methods that would suppress the number of global routing table entries were proposed. However, since the multihoming problem has not been solved, multihoming requests from ISPs cannot be resisted, and the unbounded growth of the global routing table is unlikely to stop. Although there have been attempts to make IPv6 deal with the multihoming problem, many of them introduce even more timeouts into the network layer, much like NAT, which only worsens the situation.

IPSec has also been integrated as a standard feature of IPv6. However, no attempt has been made to resolve the key sharing problem, so IPSec does not particularly increase security.
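Because the PMTU is simply the minimum link MTU along the current route, any cached value can silently become stale when the route changes. The following minimal sketch illustrates only this point; the topology and function names are hypothetical, and it does not implement the actual ICMPv6-based discovery procedure.

```python
# Minimal illustration of the path MTU concept: the PMTU of a route is the
# minimum link MTU along it, so a cached value becomes stale after re-routing.
# Function names and MTU values are hypothetical.

def path_mtu(link_mtus):
    """The PMTU is the smallest MTU of the links that make up the path."""
    return min(link_mtus)

route_before = [9000, 1500, 4352]   # link MTUs along the original route
route_after  = [9000, 1280, 4352]   # after re-routing over a smaller-MTU link

cached_pmtu = path_mtu(route_before)      # 1500
actual_pmtu = path_mtu(route_after)       # 1280 (the IPv6 minimum)

# Packets sized to the stale cached value now exceed the real PMTU, which is
# why PMTU discovery needs periodic re-probing (the timeouts criticized above).
print(cached_pmtu, actual_pmtu, cached_pmtu > actual_pmtu)
```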

2.1.8. Avoiding New Generation Packet Network Limitations


IPv6, which was introduced to overcome the exhaustion of IPv4 address resources, is not only powerless against the other causes of limitations but, as described above, even accelerates them. As the example of sensor networks suggests, packet switching will also be required in a new generation network (whether or not to call it the "Internet" is a matter of preference). However, to avoid the limitations of packet switching technology in the new generation network, the technologies surrounding IP should be reconsidered even more radically than IP itself.


2.2. Future Frontier

2.2.1. Long Tail Applications


The Long Tail theory is an economic theory that states that high sales and profits can be obtained by small-lot production of a wide range of niche products without relying on large volume sales of hit products since an enormous number of products can be handled at low cost through online sales using the Internet. This name was chosen to represent the following situation. When a graph is drawn with sales volume on the vertical axis and products arranged in decreasing order of sales volume along the horizontal axis, then the part indicating products with small sales volumes, which stretches out over a long distance, has the appearance of a long tail. The long tail theory can be applied to research and development of information networks as follows [2-1]. Consider a graph with the number of users on the vertical axis and link speed increasing towards the right from the origin on the horizontal axis. Home users, which are the greatest number of users, use ADSL or FTTH links ranging in speed from several Mbps to 100Mbps. To the right of home users are corporate users using LANs ranging in speed from 100Mbps to 10Gbps. There are significantly fewer of those users than the numbers of general home users. Even more to the right are scientific and technical research and development groups, which have an extremely small number of users who require speeds of 10Gbps to 1Tbps. Although those speeds cannot currently be used in the computing environments of these research and development groups, those kinds of link speeds potentially will be required. Therefore, the graph has a declining hyperbolic shape.
[Figure: Two kinds of long tail (a long tail for business and a long tail for R&D), with the number of customers on the vertical axis and data speed on the horizontal axis; long-tail applications occupy the high-speed, few-user region.]

Short-term development performed in corporations cannot help but emphasize technologies that are adaptable to the area of the graph to the left in which there are many users. However, if we look at the history of ICT research and development since the dawn of the Internet, it is apparent that innovative technologies were created from research targeting the long tail part of the graph where there was an extremely small number of users and that those technologies gradually expanded into the areas to the left until they finally spread to the part with an enormous number of ordinary users. The Internet, World Wide Web, search techniques, and other technologies all started from research intended for the long tail part, which targeted an extremely small number of researchers. Of course, it takes a long time for these technologies to spread from the extremely small number of special users to the enormous number of general users. However, the corporation that accomplishes this before any others will dominate as the current ICT champion. Even when designing the new generation network architecture, it is important to emphasize variety and the ease of introducing new services from the viewpoint described above.

2.2.2. Scale Free


From ultra-high definition video applications to Web 2.0 and the sensor networks described later, bandwidth and usage frequency vary over an extremely wide range, and the system has no single characteristic or typical value. In the future, high-resolution video streaming will be delivered to each household as an evolved form of IPTV, distribution systems for uncompressed ultra-high resolution digital cinema of at least 4K will become commonplace, and the upper limits of network capacity will continue to rise. Therefore, communication methods based on circuit switching must also be investigated rather than creating a network using only packet switching as in the NGN.

[Figure: Content in the ubiquitous society, from tiny to huge (scale free). Content capacity [bit] is plotted against access frequency [page/day]: digital cinema and Cine-grid (>100 GB), DVD (~GB), HDTV/SDTV and IPTV sit at the high-capacity end; Web pages (~10 kB/page), e-commerce (B2B/B2C), P2P, and portal sites at the high-frequency end; and sensor and RFID data at the low-capacity end.]

2.2.3. Sensor Network


Sensor networks, which interconnect sensor nodes consisting of sensors equipped with signal processing functions, wireless communication functions, and a power source, will be used to measure and analyze the state of the world and of human society on a large scale. For example, by deploying sensor nodes worldwide to monitor temperature and soil pollution, sensor networks are useful for environmental preservation. Also, arranging sensor nodes throughout cultivated land to monitor weather conditions enables the provision of a safe supply of food. Equipping automobiles with sensors for measuring pollutants, temperature, and speed can be useful for environmental preservation, performance improvement, or analysis of the causes of accidents.


Dramatic Increase in Nodes


Connecting sensors to networks will cause the number of nodes connected to those networks to increase dramatically. Several application examples are given below.

Applications are being considered for coping with an aging society and the shortage of medical resources by shifting from the routine monitoring of patients after medical treatment to preventive medicine. Models have therefore been designed in which sensors monitor health conditions on an individual basis and the detected data is sent to the network. Since the world population is predicted to be 7.5 billion by 2025, the number of such sensor nodes will probably range from several billion to 10 billion.

In a model for using a sensor network to monitor all cultivated land on earth in order to eliminate food shortages and provide a safe supply of food, if sensors are distributed over the 1.4 billion hectares of cultivated land so that there is one sensor per hectare, there will be 1.4 billion nodes.

Automobiles are equipped with many sensors. By connecting these to a network and using the information obtained from them, various applications can be considered to improve automobile performance, determine whether accidents or breakdowns occur, and measure environmental conditions. The number of automobiles owned worldwide in 2003 was estimated to be 840 million, and it is expected to reach several billion by 2020, mainly due to the increase in ownership in developing countries.

Sensor networks for environmental measurement can be considered to help preserve the Earth's environment by monitoring its deterioration. For example, assume that the urban areas throughout the world are covered by sensor networks. The total area of the land surface of the Earth is 149 million sq. km., and 10% of that or 15 million sq. km. comprise urban areas. If 10 sensor nodes were deployed per sq. km., there would be 150 million nodes. When considering the increase in the number of nodes, besides the sensor networks described above, we must also take into consideration the increase in existing nodes for mobile devices, home networks, and appliances.
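The node-count figures above follow from simple back-of-envelope arithmetic; the sketch below merely restates the assumptions given in the text (one node per person, one sensor per hectare of cultivated land, 10 nodes per square kilometer of urban area, and a rough figure of several billion automobiles).

```python
# Back-of-envelope sensor node counts, using the assumptions stated above.
health_nodes = 7.5e9                 # roughly one node per person by 2025
farm_nodes   = 1.4e9                 # 1.4 billion hectares, one sensor per hectare
car_nodes    = 3e9                   # "several billion" automobiles by 2020 (rough)

land_area_km2  = 149e6               # total land surface of the Earth
urban_fraction = 0.10
urban_nodes = land_area_km2 * urban_fraction * 10   # 10 nodes per sq. km.

print(f"urban environmental nodes : {urban_nodes:.2e}")   # ~1.5e8 (150 million)
print(f"order-of-magnitude total  : {health_nodes + farm_nodes + car_nodes + urban_nodes:.1e}")
```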

2.2.4. Web 2.0


Web 2.0 is a term coined by Tim O'Reilly in an essay entitled "What is Web 2.0," published in September 2005, which refers to various phenomena that began appearing on the Internet in 2004 [2-2]. O'Reilly identified the following seven items as components of the changes in the Internet.
(1) The web as platform
(2) Harnessing collective intelligence
(3) Data is the next Intel Inside
(4) End of the software release cycle
(5) Lightweight programming models
(6) Software above the level of a single device
(7) Rich user experiences
Typical examples of such services are Google Maps, Flickr, Hatena Bookmark (a kind of social bookmark service), AJAX, Gmail, Amazon, SNS, and Wikipedia. Although there are various ideas of what Web 2.0 represents, we define Web 2.0 here as something that implements:
(1) Frameworks for collecting users' content
(2) Frameworks for collecting users' personal information

2.2.5. Frameworks for Collecting Users' Contents


Gmail is a typical example of a framework for collecting users' content. By providing users with a secure spam filter and a search mechanism, Gmail collects users' content (email) on Google servers. In other words, Gmail is a framework for collecting users' content by providing new functions or services to users. Nintendo's Wii video game machine can also be considered, in a broad sense, to provide a framework for collecting users' content. By using an ID embedded in each Wii, a secure Wii network can be constructed to implement a Wii community. Currency can also be issued in the Wii community by using a certification function based on the ID. In addition, required software can be downloaded from appropriate networks. This enables Nintendo to implement an interactive service platform.

All of these frameworks can be summarized from the viewpoint of enclosing (locking in) users. From a network viewpoint, however, they can be treated as a world in which diverse overlay networks, such as an XX network or a YY network, are freely built on the Internet. The Wii network is a typical example. To enable overlay networks to be freely constructed, a "certification mechanism" and "flexible access control" are required. One certification mechanism is the method of using IDs that are built into devices; this is the mechanism implemented by mobile phones and Wii video game machines. On the other hand, by opening the access certification mechanism provided on the NGN to third parties, similar services can also be implemented by third parties who cannot distribute IDs (this is a model in which the certification authority that distributes IDs becomes a type of telecommunication common carrier). Third parties must cooperate with certification authorities to implement a platform that enables diverse overlay networks to be constructed.

Flexible access control is a mechanism required to provide services only to limited users or terminals. The current Internet architecture, which has been constructed with the primary policy of being able to reach any IP address efficiently, is managed in a way that strongly depends on the network topology. As a result, a user can access a terminal that is connected to the Internet from anywhere at any time. On the other hand, the current Internet only has administrator-controlled frameworks, and when users use network resources, they can only use network information that is managed by administrators, such as IP addresses. This hinders uses of the Internet that may satisfy user requests. In constructing a user-oriented framework, attention should be given to the fact that services are provided to a limited set of people or terminals. Instead of considering all IP addresses to be equal and permitting anyone to access a service, a mechanism is required that enables the service to be provided only to a specific group defined by metadata such as the terminal owner or physical location. Private networks must be able to be freely constructed based on such information as the terminal owner, terminal location, or billing information.
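A minimal sketch of the kind of metadata-based access control described above, in which a service is offered only to a group defined by terminal owner and physical location rather than by mere IP reachability. The class and attribute names are hypothetical and are not part of any proposed platform.

```python
# Hypothetical sketch: granting access based on terminal metadata (owner,
# location) rather than on IP reachability alone.
from dataclasses import dataclass

@dataclass
class Terminal:
    owner: str
    location: str          # e.g. "home", "office", "movie_theater"

@dataclass
class ServicePolicy:
    allowed_owners: set
    allowed_locations: set

    def permits(self, t: Terminal) -> bool:
        # Access depends on who owns the terminal and where it is.
        return t.owner in self.allowed_owners and t.location in self.allowed_locations

policy = ServicePolicy(allowed_owners={"alice", "bob"},
                       allowed_locations={"home", "office"})
print(policy.permits(Terminal(owner="alice", location="office")))    # True
print(policy.permits(Terminal(owner="mallory", location="office")))  # False
```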

2.2.6. Frameworks for Collecting Users' Personal Information


Typical examples of frameworks for collecting users' personal information include Amazon and TiVo. Amazon accumulates users' purchase history information to provide a recommendation service that uses collaborative filtering. TiVo is a system that learns a user's preferences and automatically records TV shows that the user likes. By accumulating users' behavior patterns, these frameworks implement services suited to the users.

Most services up to now have targeted content that exists on the Internet; a typical example is a search service. From now on, however, the accumulated personal information of individual users will itself become target content. In other words, context-aware technologies will be required. Context is a word that covers a variety of meanings, including user context (user profile, location, behavior), physical context (brightness, noise, traffic conditions, temperature), computing context (network connectivity, communication cost, neighboring devices), and temporal context. As these kinds of context information circulate within the Internet, we can expect new services to be created.

The most important information among the diverse context information is position information, because the real world in which we live is often modeled based on position. For example, if temperature is to be measured by a sensor network, information indicating where the temperature is measured will be required, and if information indicating whether a person is walking or standing still also contains the location where that person is standing still, the service is more likely to provide finer details. By using information indicating whether a user is in a "movie theater" or riding on "public transportation," or information such as the number of people in a room or the number of conversations being conducted, natural communication can also be implemented without the user having to switch devices or otherwise increasing the burden on the user. In addition to directly monitoring real-world context such as the temperature, the degree of soil pollution, or engine revolutions, sensors also share the data they obtain with other sensors, and they initiate the execution of physical actions through actuators. New services will also be possible, such as distributing appropriate advertisements to users according to circumstances estimated from information obtained from acceleration sensors built into mobile devices.

User behavior modeling information can be applied to nursing support, office design, and facility systems. A nursing support system can use behavior modeling information to detect any unusual behavior and can help significantly reduce the labor needed for the nursing care of elderly patients who have cognitive disabilities. Also, by accumulating a history of contacts between people or movement paths within an office, the personal networks of individuals in the office or the flows of information that existed can be quantified. This enables a next-generation office design that can improve business processes or increase intellectual productivity. Moreover, by linking the living environment of an entire floor with a sensor network, the environment surrounding inhabitants can be optimized and the energy consumption of the entire floor reduced.

To implement these kinds of context-aware services, basic technologies such as a context acquisition mechanism, a context representation model, a distributed context database mechanism, a context filtering mechanism (privacy, security, policy), and a context estimation mechanism must be developed. In particular, context estimation technology, which estimates high-level context information from the physical information obtained from sensors, is extremely important for developing real world-oriented applications. However, although the word "context" is used without qualification, applications will certainly require various levels of context granularity. Consider position information as an example: some applications require coordinate information, while others require more abstract information such as "movie theater." Context information platforms that can appropriately provide the various granularities required by applications cannot be developed in a short time. Development must proceed while gaining experience in constructing and operating prototype systems.
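The context categories listed above (user, physical, computing, and temporal context) could be represented, for instance, by a simple structured record. The sketch below is only illustrative; the field names are hypothetical, and a real platform would also need the acquisition, filtering, and estimation mechanisms mentioned in the text.

```python
# Illustrative context record covering the categories named in the text.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Context:
    # user context
    user_id: str
    location: str                  # coarse label ("movie_theater") or coordinates
    activity: str                  # "walking", "standing_still", ...
    # physical context
    temperature_c: float
    noise_level_db: float
    # computing context
    connectivity: str              # "wlan", "cellular", ...
    nearby_devices: list = field(default_factory=list)
    # temporal context
    timestamp: datetime = field(default_factory=datetime.utcnow)

ctx = Context(user_id="u42", location="movie_theater", activity="standing_still",
              temperature_c=22.5, noise_level_db=35.0, connectivity="wlan")
print(ctx.location, ctx.activity)
```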

2.3. Traffic Requirements 10 Years into the Future


To estimate the performance of switching nodes and transmission equipment in the new generation network, we observed the growth trend of traffic at a typical Internet exchange point (IX) in Japan. This trend matches Moore's Law (the level of integration of semiconductors doubles every 18 months). Gilder's Law [2-3], which concerns bandwidth, can be expressed in terms of a similar doubling, and although the factor (the period in which doubling occurs) differs, a similar trend is known to occur [2-4]. If we estimate aggressively here and assume that traffic doubles every year, then after 10 years it will have grown by a factor of 2^10, or approximately 1000 times. We can estimate that this exchange point, which must already accommodate traffic on the order of 100 Gbps, will have to handle 100 Tbps in 10 years. If we assume that access shows the same trend, then access speed at homes will be 10 Gbps. The switching capacity of current high-end routers is at the tera-bps level, and we can estimate that peta-bps routers will be required in 2015. The data link layer speed in that case will reach 10 Tbps.
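The growth estimate follows directly from compounding: doubling every year for ten years gives a factor of 2^10 = 1024, roughly 1000. A minimal calculation using the 2005 starting points quoted above:

```python
# Traffic growth under the aggressive assumption of doubling every year.
years = 10
growth = 2 ** years                     # 1024, i.e. roughly a 1000-fold increase

ix_traffic_2005_gbps  = 100             # order-of-magnitude IX traffic in 2005
home_access_2005_mbps = 10

print(f"growth factor over {years} years: {growth}x")
print(f"IX traffic in 2015  : ~{ix_traffic_2005_gbps * growth / 1000:.0f} Tbps")
print(f"home access in 2015 : ~{home_access_2005_mbps * growth / 1000:.0f} Gbps")
```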
[Figure: Traffic growth trend at JPIX, 1999-2006, compared with Moore's Law; assuming traffic doubles per year from 2005, the 2015 backbone projection is: IX traffic volume 100 Gbps to 100 Tbps, home access speed 10 Mbps to 10 Gbps, backbone node capacity Tbps to Pbps, backbone link 10 Gbps to 10 Tbps. Source: http://www.jpix.ad.jp/jp/techncal/traffic.html]
Fig. 2.3. Traffic Forecast for 10 Years into the Future

2.4. Societal Requirements and Design Requirements


The next 10 years will not be a time of simply pursuing larger capacity. We must raise the quality of the network by creating a generous surplus of capacity, which will serve a diversifying society and act as an investment carried forward into the future.
Societal requirements:
Peta-bps backbone, 10 Gbps FTTH, e-Science
100 billion devices, M2M, 1 million broadcasting stations
Principles of competition and user orientation
Essential services (medical care, transportation, emergency services), 99.99% (four nines) reliability
Safety, peace of mind (privacy, monetary and credit services, food supply traceability, disaster services)
Affluent society, disabled persons, aged society, long-tail applications
Monitoring of global environment and human society
Integration of communication and broadcasting, Web 2.0
Economic incentives (business-cost models)
Ecology and sustainable society
Human potential, universal communication

Design requirements:
Large capacity, scalability, openness, robustness, safety, diversity, ubiquity, integration and simplification, network model, electric power conservation, extendibility

Fig. 2.4. Societal Requirements and Design Requirements

Design Requirement 1: Large Capacity


If we assume that the societal requirement for capacity is estimated to be approximately 1000 times larger than the current capacity in 10 years, then a peta-bps backbone and 10Gbps FTTH will be required, and a very rapid increase in capacity is required to satisfy these needs. Also, e-Science, which refers to scientific fields that use networks for computationally intensive research, has been proliferating, and these advanced fields are estimated to require tera-bps access capacity through direct connections to the backbone.

Design Requirement 2: Scalability


The devices connected to the network will be extremely diverse, ranging from high-performance servers to single-function sensors. Although each small device generates little traffic, their number will be enormous, and this will affect the number of addresses and the amount of state in the network. Consider the following design example: if the global population is 5 billion people and there are approximately 20 connected devices per person, then 100 billion devices must be able to be connected. In addition, besides communication between people, machine-to-machine (M2M) communication between robots or computers is also expected to increase. Furthermore, if broadcasting enters the global communication network, the number of transmission stations should be estimated at more than one million, since broadcasting areas will become borderless, broadcasting stations will be globalized, and transmissions by individuals will also increase.

Design Requirement 3: Openness


Appropriate principles of competition promote autonomous growth of both society and the network. The degree of competition in a network is affected by the network architecture. A balance between network providers and network users is important, and a high degree of control by users as well as user-oriented diversity is also required. Therefore, the network must be open and must be able to support appropriate principles of competition. Standardization of interfaces or the technologies used by them is important. The World Wide Web was invented because networks were open, and networks should have a degree of openness that brings out users' creative originality and enables networks to fully prosper. Mechanisms that enable users to provide services and control networks are required. In this case, there will be no distinction between users and service providers. Functions should be provided to enable users to easily bring services to the network.

Design Requirement 4: Robustness


To be able to rely on networks as part of our societal infrastructure, we must be able to use them for medical care, traffic light control and other vehicle services, or bulletins during emergencies. We must be able to entrust important services to networks just like we entrust our lives and well-being to doctors. The existing telephone network provides us with a benchmark of 99.99% availability. Networks must provide an even higher availability.

Design Requirement 5: Safety


Network privacy is not just the hiding of information, but the ability of the entity that owns information to control that information. On the other hand, the tracking of food or other commodities means that the recipient traces back along the information path of that commodity. Safety that enables the flow of information to be controlled or information to be traced in the reverse direction is an important network function. To enable safety to be used with monetary and credit services, certification of individuals is required as well as mutual certification, which also enables the individual to certify the communication destination such as a bank. The architecture must be able to certify all wired and wireless connections. It also should be designed so that it can exhibit safety and robustness according to its conditions during a disaster.

Design Requirement 6: Diversity


Current network design practices have pursued volume or efficiency objectives and have mainly targeted large numbers of users. In the future, an information network-oriented society that also serves smaller numbers of users should be constructed. The diversity of society will also be carried onto the network. From a technical standpoint as well, there has been a move from usage scenarios like telephony, for which traffic can be predicted, to computer-centric traffic, which cannot be predicted, and the diversity of small sensors and connected devices will also increase. A network must be able to be designed and evaluated based on diverse communication requirements without assuming specific applications or usage trends.


Design Requirement 7: Ubiquity


To achieve sustainable development worldwide, a recycling-oriented society must be built. To accomplish this, a network for comprehensively monitoring the global environment from various viewpoints is indispensable. However, monitoring the natural environment alone is not enough; human activities must also be monitored, and privacy must be taken into consideration wherever human monitoring is concerned. When designing a network, there is a tradeoff between transparency and privacy protection, and a means must be provided for controlling the balance between them.

Design Requirement 8: Integration and Simplification


The time when networks were constructed for individual applications is fading away. Information networks are shared by all applications. In addition, not only broadcasting stations, but also individuals are sending transmissions to widely scattered recipients, and a large number of data sources, including devices such as sensors, are pouring information into the network. Network design must be simplified by integrating selected common parts, not by simply packing together an assortment of various functions. Simplification increases reliability and facilitates subsequent extensions.

Design Requirement 9: Network Model


To enable the information network to continue to be a foundation of society, it should be developed in a sustainable manner. To accomplish this, appropriate economic incentives must be offered to service providers and businesses in the communications industry. In addition, the network architecture must have a design that includes a business-cost model.

Design Requirement 10: Electric Power Conservation


As network performance increases, its power consumption continues to grow, and in 2004, network power consumption reached approximately 5.5% of total power consumption [2-5]. In addition, the traffic volume is expected to increase, and if we assume that traffic volume increases at an annual rate of 40% and that there is no change in electronic technology, then by 2020, network power consumption is estimated to reach 48.7% of total power consumption [2-6]. In particular, as things stand now, a router at a traffic exchange point will require the electrical power of a small-scale power plant. The information network-oriented society of the future must be more Earth friendly.
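The 48.7% projection is taken from [2-6] and depends on that study's model; the sketch below only illustrates the underlying compounding of traffic volume at 40% per year between 2004 and 2020, which is what makes power consumption a first-class design requirement.

```python
# Illustrative compounding of traffic volume at 40% per year, 2004-2020.
# The 48.7% power-share projection quoted above comes from [2-6] and rests on
# that study's additional assumptions about electronic technology.
annual_growth = 1.40
years = 2020 - 2004
traffic_multiplier = annual_growth ** years
print(f"traffic in 2020 relative to 2004: ~{traffic_multiplier:.0f}x")   # ~218x
```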

Design Requirement 11: Extendibility


The network must be sustainable. In other words, it must have enough flexibility to be extended as society develops. A network that cannot self-reform will end up being repeatedly scrapped and rebuilt. The network will support universal communication that overcomes the obstacles of language, culture, distance, or physical ability and contributes to the creation of human "wisdom." Since it cannot easily be replaced once it is embedded in society, the network architecture must be able to be developed in a sustainable manner for 50 or 100 years.

References
[2-1] Tomonori Aoyama, Digital Musings (e-Zuiso) "Two Long Tails," Denkei Shimbun, August 14, 2006 (in Japanese).
[2-2] Tim O'Reilly, "What is Web 2.0," http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html.
[2-3] C. A. Eldering, M. L. Sylla, and J. A. Eisenach, "Is there a Moore's law for bandwidth?," IEEE Communications Magazine, Vol. 37, No. 10, pp. 117-121, Oct. 1999.
[2-4] G. Gilder, Telecosm: How Infinite Bandwidth Will Revolutionize Our World, The Free Press, NY, 2000.
[2-5] Survey Data Concerning Electric Power Conservation Techniques in Networks, Mitsubishi Research Institute, Inc., February 20, 2004 (in Japanese).
[2-6] http://innovation.nikkeibp.co.jp/etb/20060417-00.html, Nikkei BP, Emerging Technology Business, April 17, 2006 article (in Japanese).


Chapter 3. Future Enabling Technologies [Morioka, Otsuki, Harai, Inoue, Morikawa]

This chapter describes optical and wireless enabling technologies that are expected to be used in the new generation network. It also describes quantum and time-synchronization technologies that must be taken into consideration as part of the basic technologies for future networks.

3.1. Optical Transmission

3.1.1. Serial Transmission


Serial transmission technologies include electrical time division multiplexing (ETDM), which uses digital electrical multiplexing, and optical time division multiplexing (OTDM), which uses optical delay multiplexing. ETDM is a commercially deployed technology, and 40 Gbit/s systems are currently being introduced. In addition, to increase the transmission rate and use bandwidth more efficiently, research and development of multi-level modulation/demodulation technologies has been accelerating recently. As a result, transmission experiments with per-channel rates exceeding 100 Gbit/s and total capacity exceeding 10 Tbit/s using carrier-suppressed return-to-zero differential quadrature phase shift keying (CSRZ-DQPSK) or return-to-zero quadrature phase shift keying (RZ-QPSK) have been reported [3-1], [3-2], [3-3], [3-4]. If modulation rates approach 100 Gbit/s in the future, multi-level modulation/demodulation techniques may be able to implement serial transmission rates of several hundred Gbit/s.

Fig. 3.1.1. Multi-level Modulation/Demodulation Schemes

On the other hand, 100 Gbit/s transmission experiments using OTDM were reported in 1993, and 1.28 Tbit/s per wavelength (640 Gbit/s × 2 polarization division multiplexing (PDM)) experiments were reported in 2000. Although the pulse width, which will be on the order of sub-picoseconds, is easily affected by the dispersion of the transmission optical fibers, OTDM has the potential to be used on ultra-fast links exceeding several hundred Gbit/s over short and medium distances in the future.


3.1.2. Parallel Transmission (Ultra DWDM) and Number of Wavelengths


Currently, wavelength division multiplexing (WDM) is the main parallel transmission technology. The number of wavelengths currently used in commercial systems is approximately 100, and this number is expected to increase in the future. With 25 GHz spacing, even the 1.5 μm band alone can accommodate 1000 wavelengths, and if the above-mentioned multi-level modulation/demodulation techniques are used, transmission of 40 Gbit/s per wavelength, or a total capacity of 40 Tbit/s, can be achieved. However, as long as existing optical fiber is used, core melting (the optical fiber fuse effect) must be avoided by controlling the optical power input to the fiber so that it does not exceed 1 W. A 1000-wavelength field transmission experiment was performed at 2.7 Gbit/s per wavelength with 6.25 GHz spacing [3-6]. This experiment used the JGN II optical testbed consisting of installed optical fibers. In future optical path networks in which the wavelength will be widely used as an identifier, the absolute wavelength and the number of wavelengths must be controllable at will. The supercontinuum technique (Fig. 3.1.2.1), the optical frequency comb technique, and their combination are useful for controlling the absolute wavelength and the number of wavelengths. These techniques have actually been used to successfully generate 10,000 wavelengths (with a wavelength spacing of 2.5 GHz), and all of the wavelengths were stabilized with the accuracy of an optical frequency stabilized light source (8 digits) (Fig. 3.1.2.2) [3-7]. We also think that absolute wavelength control will be required for future inter-domain or inter-carrier optical direct connections.
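The aggregate-capacity figure quoted above is simply the product of the wavelength count and the per-wavelength rate; a one-line check under those assumptions:

```python
# Ultra-dense WDM aggregate capacity under the assumptions stated above.
wavelengths = 1000                 # 25 GHz spacing across the 1.5 um band
rate_per_wavelength_gbps = 40      # with multi-level modulation
print(f"total capacity: {wavelengths * rate_per_wavelength_gbps / 1000:.0f} Tbit/s")
```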

Fig. 3.1.2.1. Supercontinuum Technologies


Fig. 3.1.2.2. 10,000-Wavelength Generation Technology

3.2. New Optical Fiber


Research and development of new optical fibers that can control wavelength dispersion properties, nonlinear effects, and input power resistance properties is currently focusing on photonic crystal fiber (PCF) and photonic band gap fiber (PBF). If nonlinearity is increased, the fiber can be used as various types of nonlinear devices, and if wavelength dispersion properties and input power resistance properties are controlled, the threshold for fiber fusing, which was mentioned above, can be increased and the fiber may be able to be used for ultra-wideband transmission.

Fig. 3.2. Cross Section of (a) Photonic Crystal Fiber, (b) Photonic Bandgap Fiber


3.3. Wavelength and Waveband Conversion


Wavelength conversion is useful for preventing wavelength collision in an optical path network in which wavelength is used as an identifier, and transponders (OEO conversion) are currently used for wavelength conversion. In the future, all optical wavelength conversion or waveband (i.e., a group of wavelengths) conversion will probably also be necessary in dynamic networks in which the frame or modulation format and the transmission rate on the wavelength channel will vary. Currently, the three main types of all optical wavelength converters are optical switching, parametric wavelength conversion, and supercontinuum (SPM). Among these, only the parametric wavelength conversion type optical wavelength converter maintains optical phase information, and since this can be applied to waveband conversion, more and more research on this type of converter is being conducted. Fig. 3.3.1 shows a typical example concerning waveband switching nodes.

Fig. 3.3.1. Waveband Conversion at a Waveband Node

Research has been conducted on a quasi-phase matched lithium niobate (QPM-LN) waveguide as a material for parametric wavelength conversion (Fig. 3.3.2) [3-8]. Since experimental results showing conversion gain with little degradation have also been reported recently, further progress is expected.


Fig. 3.3.2. Principle of Optical Parametric Wavelength Conversion using QPM-LN

3.4. Optical 3R
To implement a wideband all-optical core network, the current OEO-type transponder must be replaced by an optical 3R element. For RZ signals, this can be implemented by generating an optical clock according to clock extraction and switching the optical clock by signal pulses. However, for other modulation formats, some kind of format conversion is required. Operation exceeding 100 Gbit/s has been proven by using a semiconductor or nonlinear fiber as the switch. However, since processing is required for each individual wavelength, integration with an optical add/drop multiplexer such as an AWG will probably be required. On the other hand, if 2R operations are performed, nonlinear optical effects can also be used for simultaneous wavelength operations [3-9], and a choice between 2R and 3R regeneration must also be made according to the scale of the networks.

3.5. Optical Quality Monitoring


To implement an all-optical network, quality monitoring of optical signals is also required in the optical time domain, in addition to OSNR. Synchronous and asynchronous quality monitoring techniques can be considered for these optical signals, and both of these types of techniques use ultra-fast optical sampling techniques. For an asynchronous type quality monitoring technique, the averaged Q-factor method has been proposed for estimating the Q-factor from the distribution histogram of the optical signal intensity [3-10], and a simple method in which the eye opening can be observed using high-speed asynchronous sampling has also been proposed [3-11]. Simultaneous monitoring of WDM signals will be a future topic of interest in a similar manner as optical 2R and 3R techniques.
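As background, the Q-factor itself is defined from the means and standard deviations of the sampled mark and space levels, and the corresponding bit error rate follows from a Gaussian-noise approximation. The sketch below uses this textbook definition with assumed sample statistics; it is not the averaged Q-factor algorithm of [3-10].

```python
# Textbook Q-factor and BER estimate from mark ("1") and space ("0") statistics.
# The numeric values are assumptions for illustration only.
import math

mu1, sigma1 = 1.0, 0.08     # mark level: mean and standard deviation (a.u.)
mu0, sigma0 = 0.1, 0.07     # space level: mean and standard deviation (a.u.)

q = (mu1 - mu0) / (sigma1 + sigma0)        # = 6.0
ber = 0.5 * math.erfc(q / math.sqrt(2))    # ~1e-9 for Q = 6

print(f"Q = {q:.1f}, estimated BER = {ber:.1e}")
```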

3.6. Optical Switch


Optical switches can be broadly classified into space switches (MEMS: micro-electro-mechanical systems) and optical matrix switches. MEMS switches have been implemented with a size of 256×256, and the number of ports for a matrix switch is currently on the order of 32×32 for a TO (thermo-optic) switch. In the future, the wavelength bandwidth will increase, and if the required number of ports is on the order of 100, MEMS switches with little wavelength dependency may become advantageous. In addition, research is actively being conducted on optical matrix switches using PLZT or SOA to obtain a fast switching speed (several ns). On the other hand, since an all-optical switch opens and closes gates using ultrashort optical pulses, it can operate in the 100 fs (0.1 ps) range. In particular, performance of several hundred Gbit/s has been empirically verified by using the optical Kerr effect due to pure electronic polarization or the four-wave mixing effect [3-12].

Fig. 3.6. Principles of All-optical Switching

3.7. Optical Buffer


In conventional electronic systems, buffers consist of semiconductor memory (and peripheral circuits). These buffers can read and write information at random times to or from random positions in the storage space. While there is a single established form of buffer in electronic systems, this is not the case for optical systems. Candidates for optical buffers include serial-optical-to-parallel-electrical conversion buffers, optical fiber delay line buffers, optical slow-light buffers, and optical memory buffers, listed in order of increasing difficulty of implementation. The first two may become usable within 10 or 15 years. A serial-optical-to-parallel-electrical buffer implements memory electrically: high-speed optical signals are converted to parallel signals by optical or electrical processing, and the parallel signals are stored in semiconductor memory. Reading is performed by processing in the reverse order. An optical fiber delay line buffer, which is not memory, assigns an appropriate delay to the passage of information by changing the length of the fiber that the information passes through. Since random access using an optical fiber delay line buffer requires extremely complex processing, it is not realistic. Even though random access cannot be used, an optical fiber delay line buffer can be implemented entirely within the optical system, so high-speed optical-to-electrical conversion, which tends to be costly (in power consumption, number of parts, and money), is unnecessary.

A representative application of an optical fiber delay line buffer is an optical packet switch buffer. The following figure shows a typical configuration. The figure on the left is an example of a 4-input, 1-output optical buffer consisting of an optical switch, optical fiber delay lines of different lengths, and an optical coupler; B different delay values (0, T, 2T, ..., (B-1)T) are provided. By controlling the optical switch appropriately, a delay is obtained by directing information into the fiber of the appropriate length. Even if information arrives from different input lines simultaneously, collisions can be avoided by controlling the switch appropriately. Since there is no optical logic circuit, using electronic processing for control is more realistic. To create a larger buffer without a large-scale optical switch, multiple 1×N or N×N optical switches can be combined as shown in the figure on the right. To make the system more compact in this case, it is necessary to create fiber wiring sheets or ribbons and an array of switches.
[Figure: Optical fiber delay line buffer configurations. Left: a buffer composed of an optical switch, fiber delay lines providing delays of 0, T, 2T, ..., (B-1)T, and a combiner, with packets discarded when no delay line is free. Right: a larger buffer built from 1×8 and 8×8 optical switches under a buffer manager, with delayed outputs d0-d31 merged by a 32:1 combiner.]
3.8. Silicon Photonics


Research concerning on-chip optical wiring technologies and ultra-stable photonic integrated circuits is being conducted continuously. Since the refractive index of silicon is high (refractive index: 3.5), its optical confinement region is in the submicron range, and it can bend light with a precision on the order of a micron. Also, its Raman coefficient is several orders of magnitude larger than that of silica, and its Raman amplification properties have been verified. Light emission by current injection is difficult because of silicon's indirect bandgap, so breakthroughs await future research. Applications of silicon photonics as nonlinear processing devices are expected because of their high-density optical confinement properties, and wavelength conversion, an ultra-fast all-optical switch, and supercontinuum operation have been confirmed [3-13].


Fig. 3.8 All-Optical Switch Using Silicon Photonics

3.9. Electric Power Conservation


The electric power consumption of IP routers now exceeds several kW per 1Tbps of throughput. This power consumption can be held to a fraction of the current amount by introducing ROADM, OXC, or other optical nodes using optical switches [3-14]. Electric power consumption may also be able to be further reduced by using quantum information technologies in the future [3-15].

3.10. Quantum Communication


Quantum information communication, which includes quantum encryption key distribution and quantum data communication according to quantum encoding, uses the two basic principles of quantum mechanics, namely the "uncertainty principle" and "superposition principle." Quantum encryption key distribution technology is approaching practical use, and research for increasing its speed and the distance over which it operates is currently progressing. A 96-km field test using the JGN II optical testbed has already been successful. Although quantum data communication is currently at the basic investigation stage, in the long run, it is expected to become the ultimate communication technology that exceeds the shot-noise limit based on progress in basic research related to quantum teleportation technology, quantum encoding technology, and quantum 3R technology [3-16].

3.11. Time Synchronization


The current networks in Japan are master-slave synchronization networks in which synchronization is maintained with an uncertainty of about one part in 10^11. In the future, as we move into an NGN era based on packet switching, the parts of the network that require a network-synchronized clock in the transport layer are expected to shrink, but advanced application services may require additional high-precision time or frequency technologies. Although time and frequency are currently defined by a cesium atomic standard with an uncertainty of about one part in 10^15, the establishment of absolute frequency measurement technology using an optical frequency comb, developed around the year 2000, will enable an even more stable microwave or optical frequency standard to be obtained [3-17]. A new network paradigm may be created by sharing these ultra-high-stability time or frequency standards on the network, or by having a large number of independent ultra-small atomic clocks such as chip-scale atomic clocks (CSAC).

3.12. Software-Defined Radio


Software-defined radio (SDR) is a radio communication technology featuring the ability to use software rather than hardware to freely change the radio communication signals that are to be sent or received. It has the advantage that new radio communication methods can be supported merely by installing new software. Although it originated in research for military objectives in the United States, the US Federal Communications Commission (FCC) enacted regulations for putting the same technology to civilian use, which led to increased research and development activities in Japan and other countries. Some software-defined radio functionality has been installed in existing mobile telephones and wireless LANs. At the research level, relatively high-frequency, high-speed radio communication methods such as IEEE 802.11a and W-CDMA have been implemented. Technical merits, disadvantages, and topics of research include the following:
- New radio communication methods are easily supported. This is effective for base stations, which must remain in service longer than terminals, and it also enables terminals to be used for longer periods of time. However, as with the relationship between present-day PCs and application software, a new communication method that exceeds the maximum capability of the installed hardware may not be supportable.
- High-level signal processing must be performed, and high-speed, large-capacity hardware is required. Topics of research include broadband antennas, RF front ends, broadband amplifiers, and high-speed DA/AD converters for handling ideally broadband radio signals.
- Electric power consumption is high.
- New methods are required for exchanging control signals between radio terminals and radio base stations in order to determine the radio communication method.

The following points will have an impact on the network architecture.

Increased Connectivity
Since radio terminals and radio base stations equipped with software-defined radio technology are no longer tied to a single radio communication method, the number of radio systems to which a radio terminal can connect will increase from the terminal's viewpoint. Applications can be expected at times of emergency, such as temporarily switching to a separate radio system when communication with a specific radio base station or within a certain radio area is impossible. In addition, the radio terminal is also able to access multiple radio communication methods at normal times.


Increased Efficiency
Increased overall communication efficiency will be achieved by enabling radio terminals, which had been separated for each radio system in the past, to be multiaccessible without having to select a radio system as described above.

Multi-Connection
Since software-defined radio technology enables multiple radio communications to be executed simultaneously without having to physically equip radio terminals with multiple radio communication interfaces, so-called multi-connections will be able to be widely used. By generalizing multi-connections, which had previously been a relatively special condition, more communication models or applications that assume the use of multiconnections may be created.

Necessity of a Common Control Line (Network)


Previously, even if multi-connections were possible, a so-called main circuit existed, which was determined by the network for exchanging control information between the network and terminal. In contrast, the ideas of a main circuit and a control information exchange protocol used on it do not exist for software-defined radio technology, and research is centered on the generation and reception of radio signals according to software-defined radio technology in the terminal and base station devices. The abovementioned control communication architecture is needed when multi-connections have been generalized. Investigations of this architecture have begun for cognitive radio, which is a broader communication concept based on software-defined radio technology.

3.13. Cognitive Radio


Cognitive radio is a technology in which a radio terminal or radio base station recognizes the radio environment in which it is placed and communicates by effectively using frequency resources according to the state of the environment without causing radio interference. In the United States, where software-defined radio technology is abundant as a result of military applications, the FCC is investigating radio policies that may introduce cognitive radio technologies into the UHF/VHF band, currently an analog terrestrial TV band, after the migration to digital TV. At the same time, IEEE 802.22 is standardizing methods of providing cognitive radio. Generally, to communicate stably and avoid interference, radio frequencies are divided into bands, and the uses, communication methods, users, and usage locations of each band are regulated by law. However, in some bands, the usage efficiency with respect to time or location may be extremely low. Cognitive radio has attracted attention as a technology for increasing that usage efficiency. Cognitive radio technology enables various radio communication systems to coexist at high density and be used in a range where they do not interfere with each other. Software-defined radio is expected to be its base technology. From a system viewpoint, cognitive radio differs from software-defined radio in that a database related to each radio system is established on the network. Although no common way of thinking about the entire cognitive radio system has been determined, there are many examples in which this kind of architecture has been assumed. The terminal or base station senses the radio environment and considers the result together with information in the database on the network to determine the communication method that should be used, or the time or place to use it. The following considerations will have an impact on the cognitive radio network architecture, in addition to those listed above for software-defined radio.

Increased Degrees of Freedom When Constructing Access Networks


Cognitive radio technology will increase the degrees of freedom for choosing the locations of radio base stations. Although simulations or measurement-based investigations previously had to be performed before deciding station locations, part or all of that procedure may become unnecessary. For example, in a model in which consumers can set up public radio base stations on their own initiative, much as they can freely install wireless LANs today, the development of the relevant area will differ from conventional development. Once the advantages of cognitive radio technology are more fully understood, many radio systems will be able to be constructed in a given area, and not only will the number of available radio systems increase, but services will also become more diverse.

Common Control Networks


In a similar manner as described for software-defined radio, common protocols must be determined for exchanging information indicating the recognized radio environment between the terminal, base station, and network and for delivering or verifying the communication method that was determined based on that result as well as radio methods for transmitting these protocols.

3.14. Sensor Networks


Although various types of sensors are already used in various locations, research is actively being conducted on sensor networks in which sensors are interconnected, or connected to a network, by wired or wireless links and the information detected by the sensors is ultimately delivered over the network for use. In the 1990s, a great deal of sensor network research was conducted, starting with the "Smart Dust" project, whose goal was the monitoring of conditions in enemy territory by a large number of tiny sensors dispersed from the sky like dust. This research began at UC Berkeley with financial support from the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense. Besides the dramatic increase in the number of nodes described above, the differences between IP and non-IP networks are significant for sensor networks, and the following points will have an impact on the network architecture.

Sensor node capacity: Sensor nodes are generally small, limited power is supplied from external sources, and the available electrical power and resources (hardware and software) are limited. Since it is not optimal for a sensor node with such limitations to use IP to exchange information with other sensors or with a connected network, approaches that do not use IP have been considered. If the hardware of future sensor nodes progresses sufficiently, means of using IP will also be considered (although the communication efficiency problem described next will remain).

Communication efficiency: Each sensor in a sensor network is generally thought to generate a small amount of data. Therefore, if IP is used for communication, the relative weight of the header will be too large and communication efficiency will drop. As a result, the use of a more efficient independent communication protocol rather than IP is often considered in sensor networks.

Connection type: The ways of connecting sensor nodes to a network can be broadly divided into those in which sensor nodes are connected directly to an IP network and those in which a sensor network is constructed as a non-IP network (that is, a network based on a protocol other than IP and connected to an IP network via a gateway). The first type is suitable, for example, when access from general users is widely permitted or when information is widely distributed to general users, as with current web cams. However, this type also has the problem that security is difficult to ensure when communicating only with the owner or a limited number of users. The second type is suitable, for example, when constructing a sensor network in a limited area for specific objectives of the owner or managers in charge, such as security cameras in a shopping district. However, this model does not offer a very high degree of freedom, since it assumes that the obtained information will undergo some kind of data processing before being passed to the general public via the gateway if that information is to be made publicly available to general networks or general users. In other words, there is considerable interest in a sensor network that can ensure a desired level of security while enabling information to be freely obtained and processed by general users.
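The communication-efficiency point above can be quantified with a simple overhead calculation: for a payload of a few bytes, even an uncompressed IPv6/UDP header dwarfs the sensor data itself. The header sizes below are the standard fixed sizes; the payload size is an assumption.

```python
# Header overhead for a small sensor reading carried over IPv6/UDP.
ipv6_header = 40      # bytes (fixed IPv6 header)
udp_header  = 8       # bytes
payload     = 4       # bytes, e.g. one sensor reading (assumed)

total = ipv6_header + udp_header + payload
print(f"payload share of the packet: {payload / total:.1%}")            # ~7.7%
print(f"header overhead            : {(total - payload) / total:.1%}")  # ~92.3%
```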

3.15. Power Conservation for Wireless Communications in the Ubiquitous Computing Era
In a ubiquitous computing environment, it is necessary to always know about diverse devices or services that are constantly around you. To accomplish this, not only must functions for quickly discovering the devices or services be provided, but the power consumption of mobile terminals must also be kept low. The transmission speed, however, only matters to the extent that device or services descriptions are exchanged. Also, a requirement of a wireless sensor network is that it must operate in an environment with a limited power supply and limited CPU and other resources. Based on the fact that the volume of sensor data is quite small, a low speed wireless communication technology that can reduce power consumption as much as possible is required. In addition, if the power consumption required for wireless communication is extremely small in a tag for which diverse applications such as production management, inventory management, or delivery status management are expected, an active tag with a built-in battery becomes a real possibility. Advanced tag applications can be implemented by using active tags with built-in batteries.


Currently, devices with a wireless technology such as Bluetooth or ZigBee are functionally complex because diverse applications are assumed, so their power consumption is inevitably large. These devices also take time to discover devices or services in their immediate vicinity. Since high speed is a selling point of ultra-wideband (UWB) technology, its power consumption is inherently high. Previous wireless communication research aimed at high speed and high mobility. Of course, power consumption was also investigated because of its relationship to standby time, but the flow of research and development toward third-generation mobile phones pointed towards the upper right in Fig. 3.15. In the future pervasive computing era, however, a third axis for "Power" will project outwards towards the foreground. Wireless communication technologies that occupy locations near the origin in Fig. 3.15, that is, locations corresponding to low power consumption, low speed, and low mobility, will be required from the viewpoint of device or service discovery, sensor networks, and active tags. To reduce power consumption, entire systems must be considered based on computer architectures and device technologies, not just wireless communication technologies. Since the power consumed in sending or receiving 1 bit over a distance of 10 to 100 meters is roughly equivalent to the power consumed in executing several thousand to several million instructions, wireless communication technology must play a large role in reducing power consumption. The keys to reducing power consumption include reducing communication overhead, a concise MAC protocol, and a highly efficient sleep mode, as well as lowering the transmission rate.
Fig. 3.15. The Third Axis (axes: Mobility, Data Rate, and Power)



Chapter 4. Design Principles and Techniques [Harai, Murata, Hirabaru, Ohta]

4.1. Design Principles for a New Generation Network


Previously, when discussing a new network architecture, "integrating existing communication technologies and satisfying all user communication requests" had naturally been stipulated as requirements. However, the success of the Internet and the recent emergence of IP convergence have taught us that this kind of unitary network technology can no longer exist. The main factors contributing to the success of IP are basically as follows:
(1) By aggregating all lower-layer technologies at the network layer (Internet layer), new communication technology developments can be converged on the IP layer, minimizing their effects on upper layers.
(2) Network layer functions are held to a minimum (guaranteeing packet reachability) so that new application requests can be flexibly supported.
When discussing a new generation network architecture, we must also give serious consideration to these basic principles and ensure that the various problems already identified in the current Internet architecture can be resolved. In this chapter, we will first summarize the design principles underlying the current Internet architecture and point out their problems. Then, we will summarize the design principles required in a new generation network architecture.

4.1.1. End-to-End Principle and KISS Principle


One design principle of the Internet is the End-to-End principle [4-1, 4-2]. This states that a network should not be constructed based on a specific application or with the support of a specific application as its objective. Strictly speaking, the network should devote itself to transporting bits from sending nodes to receiving nodes. This principle also conforms to the KISS principle (Keep It Simple, Stupid) [4-3], and for the Internet it means that the network layer is kept as simple as possible and that services or applications are implemented at end hosts or edge nodes. Designing the Internet architecture using these principles enabled their positive aspects to be realized in Internet development over the years. At the time a network is designed and implemented, future applications are still unknown. However, the likelihood of future application needs should be considered as a source of innovation for an information network. The development of the World Wide Web surely is an example of this. On the other hand, if a network is implemented with a specific application in mind, the functional extensions necessary for satisfying requests of applications that are extreme opposites of the original application may become immense, even if the degree of extension is kept to a minimum. Applications on the Internet are certainly not exceptions to this observation. Telephone communication on the Internet (VoIP) is a straightforward example. If an attempt is made to implement telephone communication that aims for conventional telephone network quality on an Internet that was originally intended for data communications and has different strengths and weaknesses, the mechanism for implementing this will naturally be bloated.


Also, since functions for supporting specific user requests, such as NAT or proxies, are placed in the network, network growth and new application uses are obstructed. Keeping the network layer simple is also extremely important from the standpoint of ensuring reliability and extensibility. Making a system simple is the first step in ensuring its reliability, and providing extensibility enables functions to be easily added. However, keeping a network simple can create its own problems, as the following example explains. MPLS, which places a circuit-switching-style technology immediately below the network layer and above the link layer, appeared as a technology for increasing the speed of the Internet. However, when MPLS introduces traffic engineering, it duplicates many of the roles of the network layer or link layer, such as QoS routing or measures for dealing with link failures. In other words, if the network layer is kept simple, there arises the temptation to introduce new technologies and to try to use the functions obtained from those technologies to their fullest extent. Because the network is simple, there is consequently a tendency to optimize or functionally maximize one technology without considering its consistency with other technologies or with the functions of other layers. Therefore, the simplicity of the network carries with it the risk of bringing about the breakdown of the network architecture. This point suggests that the architecture must be considered not only at the design stage but also throughout the subsequent development stage. When designing a new generation network, we must maintain the principles mentioned above and also aim for a network architecture that can support the diversity and extensibility that have continued to grow over time. Although this chapter does not give final solutions, it presents important principles and rules as design principles for a new generation network architecture.

4.1.2. Basic Design Principles for a New Generation Network Architecture


We identified the following three principles as our core design principles for designing a new generation network architecture.

4.1.2.1. KISS Principle


The success of this principle in guiding the Internet architecture leads us to believe it should be followed even more thoroughly in new generation network architecture development. As already mentioned, the KISS principle is an important guide for increasing the Internet's diversity, extensibility, and reliability, thereby reducing the complications that can easily arise. We have chosen the following design principles to support the KISS principle.

End-to-End
This is a basic principle that states that a network should not be constructed based on a specific application or with the support of a specific application as its objective. Although Internet architecture development benefited from the application of this principle, as time passed, the end-to-end principle was gradually lost.


History suggests that a new principle is required in addition to this principle so that the same mistake is not repeated.

Crystal Synthesis
When selecting from among many technologies and integrating them in order to enable diverse uses, simplification is the most important principle. However, as the history of Internet development shows, network complexity increases with time since network uses become more diverse and new inconsistent functions are added. To counter this and maintain the KISS principle, the design must incorporate "crystal synthesis," a kind of simplification of technologies to reduce complexity even when integrating functions.

Common Layer
In a network model with a layered structure, each layer's independence is maintained. Each layer is designed independently and its functions are extended independently. An example is IP, which is in charge of the network layer, and Ethernet, which is in charge of the data link layer. The functions of each protocol exist independently, and as each is extended, redundant functions arise. If we assume that the network layer exists as a common layer, the other layers need not have the functions that are implemented in that common layer. One of the reasons for the success of the Internet is that the IP layer is a common layer. Therefore, we concluded that the design of the new generation network architecture will have a common layer and will eliminate redundant functions from other layers, consolidating functions that would otherwise be duplicated across multiple layers.

4.1.2.2. Sustainable and Evolutionary Principle


The new generation network architecture must be designed as a sustainable network that can evolve and develop in response to changing requirements. It is apparent from the history of the Internet that it is impossible to predict the applications that will appear in the future and to design a network architecture suited to them. Even if a network that will satisfy current and near-future user requests is designed from scratch, the migration to that network will be difficult if its limits are encountered in less than 10 years. In other words, it is important for the network to have a simple structure and for service diversity to be ensured in end or edge nodes. To accomplish this, the following network control or design methods must be followed to enable a sustainable network to be continuously developed over 50 or 100 years. The concept of an overlay network is one means of accomplishing this goal.

Self-* properties
To construct a sustainable network that can be continuously developed, that network must be adaptive. To accomplish this, it is important for all entities within the network to operate in an adaptive, self-distributed, and self-organizing manner. For example, although current IP routing control is often described as distributed-oriented, this is not really the case. Current IP routing control is not completely distributed control. It is more accurate to describe it as distributed centralized or distributed cooperative.

For example, although OSPF, one type of IP routing control, is distributed in the sense that all routers (entities) forward packets based on independent decisions, all nodes collect the same information and perform the same operations (distributed centralized), and every node expects the others to behave in the same manner (distributed cooperative). The fact that IP routing control is not completely distributed is linked to weak network fault tolerance. In the future, the network must be designed so that the distributed orientation is advanced further, individual entities operate in a self-distributed manner, and the intended controls are nevertheless achieved overall. In other words, a self-organizing network must be designed. Also, although the hierarchical structure of the network will continue to be an important concept in the future from the perspectives of function division and function sharing, this vertically aligned hierarchy must become more flexible. That is, the network must be designed with a control structure that can adapt to the states of upper and lower layers without completely separating the layers as is traditionally done. In other words, a self-emergent network must be designed. Although the aim in conventional distributed systems had often been to improve overall performance by such means as load balancing, the distributed processing mentioned here will very likely lower resource usage efficiency instead. Therefore, the inefficient resource usage caused by the distributed processing orientation must be compensated for by end node adaptability.

Robust large-scale network


As the scale or complexity of a system increases, multiple simultaneous breakdowns normally occur rather than single independent failures. In addition, software becomes larger and more prone to bugs, and human error is more likely to occur during operation and management. The new generation network architecture must be designed to handle such simultaneous or serious failures. Although performance in the failure-free case may not be fully optimized as a result, any shortfall will be sufficiently recovered by two or three years of technological development. However, it is not easy to quantitatively evaluate this kind of robustness. Although a new network design must have functions that are aware of robustness and can deal with unexpected situations, verifying that unexpected situations can be dealt with is itself contradictory. Both a design policy for implementing a robust architecture and an evaluation technique for that policy are future topics of interest.

Controls for a topologically fluctuating network


It is important to develop a flexible network for which topology changes are also taken into consideration. In particular, in recent mobile and P2P networks, communication devices are frequently created, eliminated, or moved. For example, in a mobile environment the routers themselves may move, and in a P2P network users may disconnect computers from the network. It is essential for mobility to be taken into consideration when designing a network. For example, when the topology changes frequently, controls that find resources on demand are naturally more effective than controls that maintain routes or addresses, but their overhead is high. On the other hand, routing control based on routing tables, as used by IP, has less overhead than on-demand control.


It is important to enable routing to be implemented in a manner suited to changing conditions.

Controls based on real-time traffic measurement


Network control based on real-time traffic measurement is also important. For example, current Internet routing control determines routes by finding the lowest-cost routes using fixed costs, as seen in a link cost proportional to the number of hops in RIP or to the reciprocal of the bandwidth in OSPF. As a result, a network with stable routes that do not change frequently can be provided. However, this also becomes a disadvantage in that it is difficult to deal with network congestion that occurs suddenly. Also, failures become more commonplace as the scale of a network increases. For example, using dynamically varying parameters, such as the delay or used bandwidth on a link, as the cost can be considered. However, if this is applied naively, there is a risk of route oscillation even though congestion can be avoided early. As a result, real-time traffic measurements whose precision is matched to the time scale required for control are important, and these must be applied to routing. This kind of thinking should also be applied to network dimensioning, not just to routing control. Also, to pursue more autonomous actions in end hosts, it is important to actually measure or estimate the network status in real time.
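To make the oscillation risk concrete, the following minimal sketch (the two-path topology, the demand value, and the cost rule are hypothetical assumptions, not part of the AKARI design) simulates a rule that sets each path's cost to its most recently measured load; all traffic then flips between the two parallel paths at every measurement interval. Damping measures such as cost smoothing or hysteresis are the usual ways to suppress this behavior.

    # Naive load-based link costs on two parallel paths: the cheaper path attracts
    # all traffic, which makes it the more expensive path in the next interval.
    def simulate(rounds=6, demand=10.0):
        load = {"path_A": demand, "path_B": 0.0}      # all traffic starts on path_A
        choices = []
        for _ in range(rounds):
            cost = dict(load)                         # cost = last measured load
            best = min(cost, key=cost.get)            # every flow picks the cheaper path
            load = {p: (demand if p == best else 0.0) for p in load}
            choices.append(best)
        return choices

    print(simulate())   # ['path_B', 'path_A', 'path_B', ...]: the route flips every interval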

Scalable, distributed controls


Distributed controls that are more scalable than those currently available are also important. Since distributed controls are generally more scalable than centralized controls, the importance of distributed controls has long been recognized. Scalability is often discussed in terms of computational complexity. Conventional wisdom holds that scalability is lost because Internet routing, for example, uses distributed centralized control: the computational complexity of Dijkstra's shortest path algorithm, on which OSPF is based, is O(N²) in the number of nodes N. However, this is not necessarily a problem for the current number of routers or the current frequency of route changes. On the other hand, it quickly becomes a problem if the number of nodes keeps increasing and an attempt is made to improve network adaptability by increasing the calculation frequency. To make controls scale sufficiently even in large-scale or topologically varying networks, it is important to introduce the previously mentioned self-organizing controls or to pursue autonomous actions at each node.
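For reference, the sketch below shows the textbook O(N²) form of the shortest-path computation mentioned above; the three-node topology and link costs are purely illustrative, and this is not an OSPF implementation. With a binary-heap priority queue the complexity drops to roughly O((N + E) log N), but the per-node recomputation cost still grows with the size of the network, which is the scalability concern raised here.

    # O(N^2) Dijkstra: each of the N iterations scans all nodes for the closest
    # unvisited one, which is the source of the quadratic complexity.
    def dijkstra(adj, source):
        dist = {v: float("inf") for v in adj}
        dist[source] = 0.0
        visited = set()
        for _ in range(len(adj)):
            u = min((v for v in adj if v not in visited), key=dist.get, default=None)
            if u is None or dist[u] == float("inf"):
                break
            visited.add(u)
            for v, cost in adj[u].items():
                dist[v] = min(dist[v], dist[u] + cost)
        return dist

    topology = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
    print(dijkstra(topology, "A"))   # {'A': 0.0, 'B': 1.0, 'C': 3.0}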

Openness
Providing openness to users to facilitate the creation of new applications is also important to the network. An example of a network without openness is the telephone network, which includes telephones in the network but has no room for other applications to be connected. The NGN, in which openness is improved, has room for providing specialized functions to network services through the newly defined ANI. However, in the NGN, the network and users are basically independent, and the degree of freedom for creating new applications is unclear. Therefore, in the new generation network, although the network itself will have a simple configuration, it is important to provide openness to users and to entrust users with some of its handling. Naturally, there will be problems depending on the degree of openness. For example, the degree of openness can range from aggressive, in which users manipulate network internals (for example, routing tables), to soft, in which users merely designate the network resources that are to be used.

Future topics of interest include network modeling so that requests from users can be conveyed to the network, as well as control plane and protocol design. Network monitoring for ensuring safety is also important as the network becomes more open.

4.1.2.3. Reality Connection Principle


Internet problems occur because entities in the network's virtual space are dissociated from real-world society. To smoothly integrate the relationships between these entities and society, addressing must be separated into physical and logical address spaces, mappings must be created between them, and authentication and traceability requests based on those mappings must be satisfied.

Separation of physical and logical addressing


Investigating the extent to which physical and logical addressing should be separated is important for a new architecture. On the Internet, an IP address is assigned to each host (interface) as a physical address. In the Internet architecture, this address also has logical address functions. Therefore, various problems have been caused by the appearance of new types of host connection scenarios that did not previously exist, such as mobility and multi-homing, and by handling physical and logical addresses in the same way. For example, some people believe that this problem has been solved by applying techniques such as Mobile IP. However, resource discovery mechanisms in P2P, the coexistence of various routing schemes in ad hoc networks, and data-centric concepts in sensor networks suggest the future importance of addressing. A careful discussion is still required concerning addressing granularity, that is, the targets to which logical addresses are to be assigned.
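The following sketch is only an illustration of the separation discussed above, not the AKARI addressing design; the class name, identifier format, and locator strings are hypothetical. A logical identifier stays fixed while a mapping service tracks the physical locators it currently corresponds to, which is what lets mobility and multi-homing be handled without disturbing anything bound to the identifier.

    # Logical IDs map to one or more physical locators; mobility and multi-homing
    # only change the mapping, never the logical ID itself.
    class MappingService:
        def __init__(self):
            self._table = {}                              # logical ID -> set of locators

        def register(self, node_id, locator):
            self._table.setdefault(node_id, set()).add(locator)

        def move(self, node_id, old_locator, new_locator):
            locs = self._table.setdefault(node_id, set())
            locs.discard(old_locator)
            locs.add(new_locator)

        def resolve(self, node_id):
            return sorted(self._table.get(node_id, set()))

    mapping = MappingService()
    mapping.register("host-42", "net1:addr-a")                 # initial attachment point
    mapping.register("host-42", "net2:addr-b")                 # multi-homing: second locator
    mapping.move("host-42", "net1:addr-a", "net3:addr-c")      # mobility: locator changes
    print(mapping.resolve("host-42"))                          # ['net2:addr-b', 'net3:addr-c']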

Bi-directional authentication
Although bi-directional authentication is also performed explicitly or implicitly in real life, authentication is particularly important in a new generation network. A network should be designed so that bi-directional authentication is always possible. Also, authentication information must be located so that the particular individual or entity controls the information.

Traceability
Individuals or entities must be traceable to reduce attacks on the network. Traceability must be a basic principle when designing addressing and routing as well as transport over them. To reduce spam, systems must be traceable from applications back to actual society. Anonymity should also be provided at the same time as a means of protection. Traceability is a technological principle provided as part of the architecture, and societal rules are applied for its operation. The basic principles described above are closely related. For example, since self-organizing control is distributed, scalability is ensured. Also, how addressing is handled is an important problem in a network where the topology changes.

This discussion gives us a glimpse of how architecture is a comprehensive science.

4.2. Network Architecture Design Based on an Integration of Science and Technology

4.2.1 Conventional Network Design Techniques
Network design, including the Internet, provides many examples in which theoretical research results were applied in implementing technologies. In particular, research and development had conventionally progressed in a close relationship with various fields in applied mathematics such as queuing theory, traffic theory, game theory, and optimization theory. One example in which theoretical research had clearly promoted development of related technological fields is research concerning multiple access technologies such as ALOHA, its successor CSMA/CD, and the recent CSMA/CA. Recently, theoretical research is also actively being conducted regarding QoS technology and TCP technology. However, there are also many criticisms concerning theoretical research related to these technologies. For example, it has often been pointed out that theoretical research related to QoS technology has not produced any new technological advances. Actually, the limitations of QoS seem to have been clarified as theoretical research progressed. Looking back at the results obtained by QoS technology, we find many things that should be studied or reflected upon in furthering network architecture design. Also, TCP technology originally seemed to be an ad-hoc technique, and theoretical research just showed its predominance after the fact. However, it also seems that progress in current theoretical research can be highly significant for additional future studies. At any rate, a fundamental problem that should be mentioned here is that previous theoretical research, including research concerning the technologies mentioned above, targeted individual technologies. An architecture should essentially be produced by integrating technological and theoretical (scientific) techniques [4-4]. The separation of science and technology seemed to have been a problem previously, especially for the research and development of network architecture. To search for universal laws inherent in a system that already exists, a "scientific technique" models the target and clarifies the target's properties based on mathematical theory. On the other hand, "technology" invents, creates, and uses a specific method to implement new functions. As a result, to implement new functions based on properties that were derived from a scientific technique, it is important to consider a model and apply it to the actual system. However, conventionally, the lack of actions based on this perspective was a problem. In other words, despite the fact that the essence of architecture design is an accumulation of methods based on properties that were obtained by scientific techniques, this kind of cycle was not followed very well conventionally. The main reason that this kind of separation occurred was that the theories that had previously been used for studying networks were borrowed from applied mathematics, which was not a science that was created for information networks. In addition, the following practical problems also were encountered. Previous theoretical methods had focused on the optimization of service quality based on current and near-future technological levels. To be able to easily handle optimization problems, optimization had targeted a certain layer or certain protocol rather than the entire network system. Since an


Since an information network had a layered structure, if a lower layer had a stable structure and requests were input from upper layers, this kind of approach could be sufficiently valid. Actually, even in the Internet, various small functions had already been added, and it was possible at that time to locally search for universal laws or optimize some functions. Also, if optimization can be performed targeting a specific control method or protocol and this work is ultimately repeated across all layers, the entire architecture may be able to be evaluated. However, this assumption does not hold now because interactions between adjacent layers will be more dynamic in a future self-growing, adaptive information network architecture.

4.2.2 Network Architecture Design Based on an Integration of Science and Technology


To build a new generation network architecture, it is important to design the network architecture by integrating science and technology [4-4] according to the following procedure. Design a single architecture that can be optimized as a whole and can flexibly adopt new functions. Then, refine that architecture by creating a model based on network science, and discover its system properties through mathematical analysis or empirical observation. Specific methods for achieving further global optimization (such as moderate interactions between layers or moderate interactions between different modules in the same layer) are created and new functions are adopted. This causes the network system to grow. Although the design is naturally being done by research and development personnel, the network is self-growing as a result of this integration of science and technology. The entire process in which new properties of the system are discovered from a scientific standpoint and new technologies are adopted is repeatedly executed. In other words, network development can be promoted through a feedback loop of repeated scientific and technological processes.

To form the kind of science and technology feedback loop described above, a new network science will probably have the following kinds of requirements. Network science provides basic theories and methodologies for network architectures. However, the network system itself must be understood. New discoveries or principles can be obtained and system limitations can be learned by understanding system behavior through basic theories and methodologies. These theories and methodologies can also help clarify what makes good protocols or control mechanisms. To pursue this kind of network science, interdisciplinary science, in other words, learning about and complementing other fields, is increasingly important. Recently, scientific fields have become compartmentalized. That is, individual research projects are limited to narrow scientific or technological fields, and the knowledge obtained from them is also limited to a narrow range. A new network science must be a comprehensive science that "designs a system by starting from fundamental principles." Since networks have become a foundation of society and a daily necessity of people's lives, designing a network architecture solely by integrating science and technology will not be enough. Relationships between people, society, and economics must also be considered. Although it is not easy to create a new network science, the increased usage of such terms as power law, self-organization, self-growing, complex adaptive system, emergence, and non-equilibrium system already points to its growth. Of these, brief explanations of only "complex adaptive systems" and the "power law" are given below.


Complex Adaptive Systems


In future networks, many factors such as increasing scale, increasing complexity, self-organization, and sustainable development will be more and more entangled. Despite the fact that today's networks were artificially created, they continue to grow beyond the range that can be designed or controlled by people. Even if a network architecture were designed from scratch, the developed network would be a complex adaptive system similar to the Internet, and the power law phenomenon described below is likely to appear. Complex adaptive system science can be expected to have a significant theoretical role in explaining the behavior or design techniques of entire large-scale systems, the stability of non-linear systems, the effects of chain reactions of faults, and robustness, as well as in clarifying optimality or the speed of convergence to optimal solutions. Artificially designed and constructed networks must be made controllable through these processes. In particular, if there is an increasing demand for end hosts to become autonomous and this is taken as a prerequisite, the entire network will have to be harmonically ordered. This is exactly what is discussed in complex adaptive systems, and if the knowledge that is obtained can be practically applied, a new generation network architecture may be possible. Actually, even in the existing Internet, some of the characteristics of complex adaptive systems are seen as a result of the implementation of systems that have adaptability and ensure robustness and stability. For example, some characteristics of complex adaptive systems are given below, and the entries in parentheses are examples that have been empirically adopted in today's Internet.
(1) Feedback control (TCP)
(2) Redundancy (IP routing)
(3) Modularization (creation of protocol hierarchies and autonomous control of nodes or end hosts)
(4) Structural stability (creation of hierarchies in terms of individual AS units and self-organizing clustering of sensor networks or ad-hoc networks)
Self-organizing controls also evidently have these characteristics, and if the concepts concerning them that have already been adopted in the current Internet are carried forward, it is likely that we will arrive at a new generation network that is both a self-organizing network and a complex adaptive system.

Power Law
Recently, the power law has often been observed in various networks. In terms of network topology, this means that the probability distribution follows the relationship p(k) ∝ k^(-r), where p(k) is the probability that the nodal degree is k. This relationship is seen in the Internet for the number of AS connections, number of router link connections, number of peer connections in P2P networks, and number of Web link connections. Some reasons that are given for why the power law is observed are self-organization, dynamic growth, and interactions that occur among a large number of entities. These certainly conform to the aims that should be pursued for a new generation network, which were presented earlier, and as a result, are given here as reasons indicating the possibility that the power law will be discovered. Currently, scientific research related to the power law has been actively conducted in statistical physics, applied mathematics, sociology, economics, and biology.


To continue to apply the results of these disparate fields to the development of information network technology, it is important to integrate science and technology.
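As a small illustration of how the growth mechanisms just listed can produce such a distribution, the following sketch grows a graph by preferential attachment, in which new nodes tend to connect to already well-connected nodes; the node count, the number of links added per node, and the random seed are arbitrary assumptions, and this is a generic model rather than a model of the Internet. The resulting degree histogram falls off roughly as a power of the degree.

    # Preferential attachment: the probability of receiving a new link is
    # proportional to a node's current degree (sampled via the endpoints list).
    import random
    from collections import Counter

    def preferential_attachment(n_nodes=2000, links_per_node=2, seed=1):
        random.seed(seed)
        degree = {0: 1, 1: 1}
        endpoints = [0, 1]                      # one initial edge between nodes 0 and 1
        for new in range(2, n_nodes):
            targets = set()
            while len(targets) < links_per_node:
                targets.add(random.choice(endpoints))
            degree[new] = 0
            for t in targets:
                degree[new] += 1
                degree[t] += 1
                endpoints.extend([new, t])
        return degree

    histogram = Counter(preferential_attachment().values())
    for k in sorted(histogram)[:8]:
        print(f"degree {k}: {histogram[k]} nodes")   # counts decay roughly as a power of k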

4.2.3 Biologically Inspired Self-Organizing Network Architecture


Finally, as an example of network architecture design based on interdisciplinary science, we will describe biologically inspired network control. Our objective is to implement a network with abundant robustness and adaptability by learning from the autonomous and self-organizing abilities of biological systems. Self-organizing control is based on positive feedback, and stability is provided according to negative feedback. This kind of control is required from the start in autonomous network control. In addition, by introducing randomness in the system, a mechanism is incorporated that can discover new solutions without being trapped at local solutions. This enables adaptability to be ensured, particularly for systems that vary with time. Of course, this kind of control had been adopted empirically in the past in CSMA/CD and TCP, for example.

However, it is also important in self-organizing control to determine actions through local communication between entities. Many biological systems have incorporated this kind of scheme, and using it will make self-organizing network control possible [4-5] [4-6]. However, this will not be a matter of simply imitating a biological mechanism. Many mechanisms that are incorporated in biological systems have been modeled as nonlinear systems, and stability or parameter tuning discussions are possible based on those kinds of mathematical models. Therefore, we may be able to have an architecture design theory based on science, not simply analogy. In addition, there also are cases in which overall control has been implemented only by indirect interaction through the environment, without any direct information exchange between entities. This is a mechanism for further ensuring robustness and is referred to as stigmergy [4-5]. An example in which actions are determined through local communication of entities is a synchronization mechanism based on the luminescent synchronization of fireflies, and an example of indirect interaction through the environment is ant routing. However, further investigation is required to determine the kind of mechanism that ultimately can be applied in information networks.

Although routing in the Internet is often described as distributed, it is actually distributed centralized control. In other words, for each router to determine the route of each packet, information is exchanged in a distributed manner. However, there is a required prerequisite that the overall topological image is common. In addition, even for the same topological image, if each router does not have the same routing information, the routes will become unstable and other problems may occur. On the other hand, true distributed control, which is not implemented in the current Internet, has been seriously considered for the abovementioned mechanism.

The problem in determining whether self-organizing control actually works well is that the only systems that can currently be verified mathematically are some of the ones that have been modeled by nonlinear systems, and biologists believe and have verified experimentally that many of these systems work smoothly in living creatures. To establish a network architecture design theory in the future, it is necessary to pursue independent scientific developments in the field of information networks together with progress in biological fields. It is currently difficult for people to control biological systems, and direct investigation of controlling these systems is also difficult.


On the other hand, an information network is artificial and controllable. Therefore, if self-organizing control based on network science can be implemented, the information network itself can be used as an enormous testbed for self-organization control, which will enable feedback to be passed from the information network field to the biological field and from there to the field of complex adaptive system science. A true mutually beneficial integration of science and technology will be able to be implemented.
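As a concrete illustration of the firefly-style local interaction mentioned above, the following simplified pulse-coupled oscillator sketch shows independent nodes locking their flashing together purely through local events; the population size, coupling strength, time step, and the all-to-all coupling are illustrative assumptions, and this is not a proposal for an actual network protocol.

    # Each node advances its own phase; when it reaches 1 it "flashes", resets, and
    # nudges every other node's phase forward. Flashes gradually coalesce.
    import random

    def simulate(n=10, epsilon=0.1, dt=0.01, steps=2000, seed=3):
        random.seed(seed)
        phase = [random.random() for _ in range(n)]
        events = []                                   # (time, nodes flashing together)
        for step in range(steps):
            for i in range(n):
                phase[i] += dt                        # free-running phase advance
            fired, newly = set(), {i for i in range(n) if phase[i] >= 1.0}
            while newly:                              # a flash may push others over threshold
                fired |= newly
                for i in newly:
                    phase[i] = 0.0
                for j in range(n):
                    if j not in fired:
                        phase[j] += epsilon * len(newly)   # pulse coupling
                newly = {j for j in range(n) if j not in fired and phase[j] >= 1.0}
            if fired:
                events.append((round(step * dt, 2), len(fired)))
        return events

    events = simulate()
    print("first events:", events[:3])    # small, scattered groups at the start
    print("last events: ", events[-3:])   # large groups flashing together later on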

4.3. Measures for Evaluating Architectures


Even if a network architecture is designed through network science research, implementation will not be promoted without clarifying whether or not the design is truly useful. This section presents five criteria for evaluating a new generation network architecture.
(1) Has a new design policy been developed? For example, if we focus on QoS technology, the ultimate goal of that research is to guarantee QoS for each end-to-end flow. That goal has not been achieved yet. However, packet scheduling technologies for controlling QoS for each flow or each class and connection admission control technologies have been produced as byproducts of this research. In addition, QoS research has not only clarified communication methods that can control bandwidth in packet-switched networks but has also clarified that the ultimate goal cannot be reached in the current Internet. Also, derivative technologies such as VoIP have been introduced.
(2) Has a new communication method been implemented? On the current Internet, some new communication methods include multicasting, anycasting, and location-based services. If a single method is implemented and different research projects ranging from basic research to research on practical applications are actively conducted concerning the various problems accompanying that implementation, then those research results should be considered commendable.
(3) Was a new abstraction, model, or tool conceived? For example, CSMA/CD is widely modeled. Many kinds of derivative research have been produced through abstractions, and models have also been created in simulation packages. Research concerning the self-similarity of Internet traffic, which originated with measurements of Ethernet traffic, is also being pursued in a similar manner. The power law, mentioned above, is an example of this kind of activity currently under way. Early signs of control research using the power law, beginning with observations in network research and applications of theoretical models based on them, are beginning to be seen, and the results of that research should be considered commendable.
(4) Were results commercialized and accepted by the user community? Were ideas that were created on the network adopted in products? Did those ideas add value? Did those ideas lead to businesses?
(5) Were solutions given for real-world problems? Can real-world problems that actually confront us be solved even if the solution is not perfect?


4.4. Business Models


This section reviews current conditions and then describes basic concepts concerning business models that may be used when commercially developing a new network. Specifically, it summarizes the necessity for accounting, volume charge accounting, and a volume charge accounting model linked with routing control.

The Internet and Flat-Rate Accounting


The usage fee of an ISP, which provides users with Internet access services, is a fixed amount. The most fundamental reason is that each individual communication does not occupy resources. For example, there is no communication channel setup (signaling) before communication begins, and processing is completely independent for each individual packet. Because the destination address is included in each packet, the intermediate routers that the packet passes through only read that address and look up a routing table. The routing table itself is not prepared for each communication. Since routing table entries are finite resources consisting of memory, routes are aggregated to save resources, so these entries do not correspond one-to-one with destinations. Although each packet occupies a communication channel for a very short period, it would be an exaggeration to view such channels as occupied resources in the way they are in circuit switching. From the above, we can see why it would be unsuitable to apply volume charge accounting per individual communication.
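As a side illustration of the route aggregation mentioned above, the short sketch below collapses adjacent destination prefixes into a single routing table entry; the prefixes are documentation addresses chosen only for this example.

    # Route aggregation: two adjacent /25 prefixes collapse into one /24 entry,
    # so routing table entries do not correspond one-to-one with destinations.
    import ipaddress

    routes = [
        ipaddress.ip_network("192.0.2.0/25"),
        ipaddress.ip_network("192.0.2.128/25"),    # the adjacent half of the same /24
        ipaddress.ip_network("198.51.100.0/24"),
    ]
    print(list(ipaddress.collapse_addresses(routes)))
    # [IPv4Network('192.0.2.0/24'), IPv4Network('198.51.100.0/24')]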

Significance of Accounting for Implementing QoS


Internet services are best-effort services, which means that no quality differentiation functions are provided. However, to be able to provide diverse services, a new network must provide a QoS mechanism. QoS-guaranteed communication is obviously dealt with preferentially. Since QoS assurance is meaningless if everyone demands it, some kind of scheme for avoiding this situation is required. Accounting is an appropriate mechanism for accomplishing this. Paying for a high-quality service is an accepted practice, and everyone willing to pay the fee is offered the prioritized service. Specifically, fees are charged according to the amount of resources occupied, such as the occupied bandwidth, occupied time, or occupied communication channel length. For example, with the telephone, for which the occupied bandwidth is a uniform fixed amount, fees have long been charged according to the occupied time and the occupied communication channel length (region). An example of a QoS assurance method that is not based on accounting is priority handling according to law, such as for emergency communications. However, the users and the timing with which this service can be received are restricted.

Accounting and Routing


The rest of this section describes accounting and the routing that makes it function. If the fee charged to a user located at the edge of a network is unrelated to the packet destination (a flat fee), the income of the ISP to which the user subscribes is also unrelated to the destination. The ISP is therefore likely to select the least expensive trunk line. However, since the user entrusts trunk line selection to that ISP, no disadvantage arises from a monetary standpoint. Because trunk line cost is generally proportional to the communication channel capacity or distance, using an inexpensive trunk line may reduce the quality of service.


However, since users will leave an ISP whose quality of service deteriorates, the quality is maintained. This situation is sufficiently embodied by routing based on the current border gateway protocol (BGP). However, it is difficult for a network whose design is oriented to the use of inexpensive trunk lines to include a QoS mechanism.

Volume Charge Accounting and Routing


On the other hand, what would happen if volume charge accounting based on the communication channel length were introduced? In this case, the income of the ISP to which the user subscribes increases as the communication channel gets longer. This is a structure in which the ISP's income increases if it selects the most expensive trunk line. If the user can select an ISP, this will not be a problem. However, if the ISP is a regional monopoly, there will be no room for user choice. Kickbacks may also occur in addition to ordinary business fees. Considering these drawbacks, unlike the flat rate accounting system, a volume charge accounting system would be difficult to operate based on the current BGP. However, if the routing mechanism is changed, a volume charge system can also be utilized. Specifically, QoS can be provided by assigning the right for selecting the trunk line route to the user and introducing volume charge accounting.

Route Selection by Users


One method for allowing the user to select the trunk line route is to register the ISP in advance when a long distance or international line is to be used, as is done with telephone service. However, this is not realistic when there are a great number of ISPs, and if registrations were fixed in advance, it would be difficult to avoid failures or congestion. Therefore, a new function is required to enable the user to select the route by a different method than is used for telephone service. To provide this function, each ISP should distribute route information carrying QoS information to the users that access it. Of course, if the volume of information that the user receives is large, the amount of information flowing on the network and the amount of processing at user terminals will increase, and scalability problems will occur. Therefore, the amount of information that is given cannot be very large. A routing method must therefore be devised that creates a multilayer hierarchy in which the highest layer is kept small. On the other hand, an ISP may not want to show internal routing information to outside parties. Providing extensive information is unnecessary; abstracted information such as transit service charges, bottleneck bandwidth, or delay should be disclosed instead. If a better value than the actual charge, bottleneck bandwidth, or delay were disclosed, a user request that trusted that value might not be satisfied, and signaling such as a trackback might be repeated without limit or become more complex, requiring more time. Therefore, when abstracting, it is important to set values equal to or worse than the actual performance. The tail-end ISP should also disclose similar abstracted information to the users that belong to it. If the information disclosed to the users at both ends is integrated, the end-to-end charge, bottleneck bandwidth, and delay can be estimated. Since it is difficult for a user to use a communication channel for which the price or available (residual) bandwidth is unknown, this degree of disclosure is to be expected. On the other hand, taking payment from users without disclosing at least this information would be excessive.
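Combining the disclosed values in the way described above amounts to simple arithmetic, as the sketch below shows; the function name, the segment structure, and all the figures are hypothetical. Charges and delays add along the path, while end-to-end bandwidth is limited by the bottleneck segment, and because each ISP is expected to disclose values equal to or worse than its actual performance, the result is a conservative bound.

    # Combine per-ISP abstracted disclosures into an end-to-end estimate.
    def end_to_end_estimate(segments):
        return {
            "charge": sum(s["charge"] for s in segments),                  # charges add up
            "bandwidth_mbps": min(s["bandwidth_mbps"] for s in segments),  # bottleneck
            "delay_ms": sum(s["delay_ms"] for s in segments),              # delays add up
        }

    disclosed = [
        {"charge": 2.0, "bandwidth_mbps": 1000, "delay_ms": 3},   # access ISP (user side)
        {"charge": 5.0, "bandwidth_mbps": 400,  "delay_ms": 20},  # transit ISP
        {"charge": 1.5, "bandwidth_mbps": 800,  "delay_ms": 4},   # tail-end ISP
    ]
    print(end_to_end_estimate(disclosed))
    # {'charge': 8.5, 'bandwidth_mbps': 400, 'delay_ms': 27}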


References
[4-1] J. H. Saltzer et al., End-to-End Arguments in System Design, ACM Transactions on Computer Systems, 1984.
[4-2] R. Bush and D. Meyer, Some Internet Architectural Guidelines and Philosophy, IETF RFC 3439, December 2002.
[4-3] D. S. Isenberg, The Rise of the Stupid Network, Computer Telephony, pp. 16-26, August 1997.
[4-4] Masayuki Murata, Network Architecture and the Direction of Future Research, IEICE Technical Report on Photonics Networks (PN2005-110), pp. 63-68, March 2006 (in Japanese).
[4-5] M. Murata, Biologically Inspired Communication Network Control, Proc. of SELF-STAR: Int'l Workshop on Self-* Properties in Complex Information Systems, (Forli), 31 May - 2 June 2004.
[4-6] Naoki Wakamiya and Masayuki Murata, Biologically Inspired Communication Network Technologies, Transactions of the IEICE, Vol. J89-B, No. 3, pp. 316-323, March 2006 (in Japanese).


Chapter 5. Basic Configuration of a New Architecture


This chapter describes the basic configuration of a new-generation network architecture based on design principles described in Chapter 4. Although the elements described here do not cover every component of the new architecture, these elements are the ones that we believe will be particularly important.

5.1. Optical Packet Switching and Optical Paths [Harai, Ohta]

5.1.1. Optical Packet Switching
Definition of Optical Packet Switching
Optical packet switching technology is based on the concept that packets, which had conventionally been switched by making full use of electronic processing, are handled as optical packets and switched using optical technology. When circuit speeds exceed 1Gbps, nodes such as routers or Ethernet switches, which are the packet switches used on the Internet, are often connected by optical fiber. In this case, although the packets are optical signals on the transmission channel, arriving optical signals are temporarily converted to electrical signals at each node and converted back to optical signals when the packets are placed on the next transmission channel (O/E/O). With optical packet switching, on the other hand, the optical signals on the transmission channel are not converted to electrical signals at the nodes; packets are processed and switched at the optical level and placed on the next transmission channel (O/O/O).
Fig. 5.1.1.1. Optical Packet Switching and Optical Packet Generation/Reception (conventional O/E/O forwarding of L3 packets on 1/10/40 Gbps links contrasted with O/O/O optical packet switching on links well above 40 Gbps; nodes perform optical packet generation/reception and switching, optical packet switching only, or optical packet generation/reception only, and all nodes are equipped with functions that enable routing control information to be sent and received)

Internal Functions of Optical Packet Switching


An optical packet switch consists of a switching module through which data passes, a buffer module, a header processing module for controlling packets, a buffer control module, a routing module, and other maintenance and operation management functions. Optical packet switching means that the packet payload is transferred directly as optical signals and that the switching module and buffer module are built from optical technologies. In many other parts, electronic processing may be performed by converting optical signals to electrical signals. NICT is conducting trials in which part of the header processing is handled as optical signals [5-1-1].


Fig. 5.1.1.2. Internal Functions of Optical Packet Switching (functional blocks: routing, which builds the routing table used for forwarding; forwarding, which determines the output port from the routing table; operation, administration, management, and monitoring; scheduling, which avoids packet collisions and applies priority control; buffering, which delays packets for an appropriate time; and switching, which directs each packet to the appropriate output port; the data path stays optical while control blocks may be electrical or optical)

Optical packet switching is an indispensable element in a new generation network for increasing the capacity and lowering the cost of current packet switching, which makes full use of electronic processing. This conceptual design does not stop at a primitive design for simply switching optical packets, but instead describes design concepts for an optical packet switching node that take into consideration processing up to network layer routing. In other words, an optical packet switching node also performs network control; the router becomes an all-optical device. To perform optical packet switching, a function for generating optical packets and a function for switching optical packets are required. A node having only the generating function, like a label switch, can be referred to as an optical packet edge node, and a node having only the switching function can be referred to as an optical packet core node. However, in the optical packet switching discussed here, each node also performs network layer processing. Therefore, to avoid confusion with a lower layer closed network such as in MPLS, we will not define core and edge nodes. When we actually reach the practical application phase, we will consider a single router to have not just an interface for generating and switching optical packets but also an Ethernet or SONET/SDH interface.

Optoelectronic Integration
A high-performance optical packet switch should be designed by comprehensively investigating the physical scale, electrical power consumption, ease of use, and other factors. For example, since optical packets are extremely fast, the number of physical parts and the power consumption for optoelectric conversion increase accordingly. Therefore, it is desirable for the optical packet payload to be transferred towards the next node without being converted to electrical signals. However, for packets such as routing packets, in which the payload contains information that must be read at the local node, the payload is converted to electrical signals, and an optical packet switch will provide an input/output port for this purpose. Since headers must be overwritten or matched against a large volume of addresses in routing tables, they are temporarily converted to electrical signals for processing; optical processing of addresses will be integrated to an appropriate degree once it becomes practical at a meaningful scale. To avoid packet collisions, a buffer management method with small computational complexity (i.e., a small worst-case computation time) is required. This method must take into consideration the properties of an optical buffer called a feed-forward type buffer, which is described later.
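To give a feel for what small computational complexity means for such a buffer, the following sketch assigns each arriving packet the shortest fixed fiber delay that keeps the output port free; the delay granularity, the number of delay lines, the packet duration, and the arrival times are all assumed values, and this is a generic feed-forward fiber-delay-line model rather than the buffer design described later.

    # A feed-forward fiber delay line buffer offers only a few fixed delays, so
    # collision avoidance is a simple scan: pick the smallest delay that works.
    DELAY_UNIT_NS = 50
    AVAILABLE_DELAYS = [k * DELAY_UNIT_NS for k in range(5)]   # 0, 50, ..., 200 ns

    def schedule(arrivals, duration_ns=100):
        output_free_at = 0
        plan = []
        for t in sorted(arrivals):
            for d in AVAILABLE_DELAYS:                # worst case: a short, fixed scan
                if t + d >= output_free_at:
                    plan.append((t, d))
                    output_free_at = t + d + duration_ns
                    break
            else:
                plan.append((t, None))                # no delay fits: the packet is lost
        return plan

    print(schedule([0, 30, 60, 400]))
    # [(0, 0), (30, 100), (60, 200), (400, 0)]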


The following sections describe design concepts related to the bit rate, communication range, guard time, optical packet format, asynchronous variable-length packet handling, routing, buffers, and the physical signal format while taking into consideration the ability to obtain performance surpassing that of a packet switch that only performs electronic processing, the elimination of redundant optoelectric conversion, use over a wide area, the KISS principle, and the ability to implement routing.

Bit rate
Developing a bit rate of at least 100Gbps per packet is a primary goal. Routers having 10Gbps interfaces are often seen on backbone networks, and some equipped with 40Gbps interfaces have also begun to appear. However, if this speed is exceeded, costs increase when optical signals are converted to electrical signals. Greater cost advantages can be expected by appropriately configuring 500Gbps class optical packets instead of just 100Gbps ones and switching them in the optical domain. For example, the number of components will be reduced by using a single optical switch to control broadband data. Also, since packets having the same data size have a smaller spread over time as the speed increases, the fiber length of an optical fiber delay line buffer can be shortened and a compact optical buffer can be configured.

Communication Range
To deploy optical packet switches over a wide area, our goal is to implement optical packet transmission over the range of a wide area network (>> 100 km) rather than being limited to the range of a metropolitan area network (< 200 km). For example, if optical packets are configured with a single wavelength, single modulation, and a high bit rate, then, depending on the cost of high-speed optoelectric conversion and modulation, such packets may be sufficient for use over the range of a metropolitan area network. However, to perform long-distance communication, costs are entailed not only for compensating wavelength dispersion but also for introducing optical 3R regeneration for signal reshaping or optical power management for controlling nonlinear effects such as self-phase modulation. On the other hand, if a single optical packet is created by using multiple wavelengths in DWDM, then long distance transmission is possible with a simpler configuration. Use of multiple wavelengths in a single packet is possible with mature O/E- and E/O-conversion technologies. Another important consideration is that the cost of converting part of the optical packet to electrical signals for header processing or routing can be reduced. Of course, the propagation speed difference between wavelengths must be compensated for at each link and at the node where the optical signal is ultimately converted to an electrical signal. However, the advantages that long distance transmission can be performed by using multi-wavelength packets and that O/E- and E/O-conversion costs can be reduced are significant.

Guard Time (Minimum Packet Interval)


If guard time is reduced, line utilization efficiency can be increased. For example, when data is sent using 10 Gigabit Ethernet (physical speed = 10.3125Gbps) in which 46-byte IP packets are modulated using 64B/66B encoding, the line utilization efficiency is approximately 53% (calculated with an 8-byte preamble, 64-byte Ethernet frame, and 12-byte inter-frame gap). For 1500-byte packets, the line utilization efficiency is approximately 94%. With optical packet switching, the network must be designed so that the maximum data transmission rate (the absolute value obtained by multiplying the line speed by the utilization efficiency) exceeds these figures. The guard time is determined by factors such as the wavelength conversion time and the switching time of the optical switch. Fig. 5.1.1.3 shows the relationship between the guard time and link utilization efficiency. For example, when the guard time for a 64-byte packet at a link speed of 500Gbps is 1 nanosecond, the utilization efficiency is approximately 50%, and the effective speed is approximately 250Gbps. On the other hand, for a 1500-byte packet, even when the guard time is 10 nanoseconds, the utilization efficiency is approximately 70% and an effective speed of 350Gbps is obtained. To create an optical packet switch that can enjoy this kind of increase in line speed, the optical switch must have nanosecond-order switching performance. Besides the switching performance, the effect of wavelength dispersion will increase if the communication range is extended. If dispersion increases, the actual interval between consecutive packets will become narrower (shorter) than it was when the packets were sent out, as shown in Fig. 5.1.1.4. Therefore, for a link to efficiently accommodate optical packets that are to be transmitted over a long distance, the link must be designed to compensate for wavelength dispersion, and dispersion must be taken into consideration when assigning the optical packet guard time.
[Figure: link utilization efficiency (vertical axis, 0 to 1) versus guard time (horizontal axis, 0.01 to 100 ns) for 64-byte and 1500-byte packets at line speeds of 40, 100, and 500 Gbps.]

Fig. 5.1.1.3. Link Utilization Efficiency Versus Guard Time
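The curves in this figure are consistent with a simple model in which the utilization efficiency is the packet transmission time divided by the transmission time plus the guard time. The following Python sketch uses this assumed model (ignoring header overhead and dispersion) and reproduces roughly the figures quoted above: about 50% efficiency and 250Gbps effective speed for 64-byte packets with a 1 ns guard time at 500Gbps, and about 70% and 350Gbps for 1500-byte packets with a 10 ns guard time.

def utilization(packet_bytes, line_rate_bps, guard_time_s):
    """Assumed model: efficiency = transmission time / (transmission time + guard time)."""
    tx_time = packet_bytes * 8 / line_rate_bps
    return tx_time / (tx_time + guard_time_s)

if __name__ == "__main__":
    for size, rate, guard in [(64, 500e9, 1e-9), (1500, 500e9, 10e-9)]:
        eff = utilization(size, rate, guard)
        print(f"{size:5d} B at {rate / 1e9:.0f} Gbps, guard {guard * 1e9:.0f} ns: "
              f"efficiency {eff:.2f}, effective speed {eff * rate / 1e9:.0f} Gbps")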


[Figure: (a) the minimum packet interval between consecutive packets as originally transmitted on each wavelength; (b) inter-wavelength timing skew narrowing the minimum packet interval.]

Fig. 5.1.1.4. Interval Between Consecutive Packets

Optical Packet Format


Decisions concerning the optical packet format should be made based on the communication ranges over which the cost benefits of using light can still be enjoyed and on affinity with electrical processing. It is desirable for one optical packet to be configured using multiple wavelengths (left side of Fig. 5.1.1.5), with each wavelength modulated and transmitted at a speed that can be optoelectrically converted (from 10Gbps to 40Gbps). If the optical packet header, which may be rewritten by each optical packet switch, consists of one wavelength and is too long in the time-axis direction, shortening the header by configuring it using multiple wavelengths can also be considered (right side of Fig. 5.1.1.5). On the other hand, the payload part, which uses several dozen wavelengths, is configured with speeds exceeding 100Gbps per packet. Since this enables optoelectric conversion to be simplified at the receiving node and allows switching to be performed directly using light, the benefits of using light can be gained. Also, for a set of channels that need not be disassembled at an intermediate stage, multilevel modulation within a range that does not exceed the abovementioned wavelength band is also effective for increasing the speed.
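As a concrete illustration of such a format, the following sketch describes a hypothetical multi-wavelength packet descriptor in Python; the field names and the example numbers (4 header wavelengths, 32 payload wavelengths, 40Gbps per wavelength) are assumptions chosen for illustration, not project specifications.

from dataclasses import dataclass

@dataclass
class OpticalPacketFormat:
    header_wavelengths: int          # wavelengths carrying the (rewritable) header
    payload_wavelengths: int         # wavelengths carrying the payload
    per_wavelength_rate_gbps: float  # modulation rate per wavelength (10 to 40 Gbps)

    @property
    def payload_rate_gbps(self) -> float:
        """Aggregate payload rate switched in the optical domain."""
        return self.payload_wavelengths * self.per_wavelength_rate_gbps

    def header_duration_ns(self, header_bytes: int) -> float:
        """Length of the header on the time axis when striped over its wavelengths."""
        bits_per_wavelength = header_bytes * 8 / self.header_wavelengths
        return bits_per_wavelength / self.per_wavelength_rate_gbps  # bits / Gbps = ns

fmt = OpticalPacketFormat(header_wavelengths=4, payload_wavelengths=32,
                          per_wavelength_rate_gbps=40)
print(fmt.payload_rate_gbps)       # 1280 (Gbps), well above 100 Gbps per packet
print(fmt.header_duration_ns(64))  # 3.2 (ns), shortened by dividing the header over wavelengths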


Asynchronous Variable-Length Packet Handling


Certain high-end routers that make full use of electronic processing technologies perform internal processing very quickly by dividing packets into equal lengths and performing synchronous processing. For example, iSLIP, which is a fast scheduler implementation technique, performs its processing on fixed-length synchronous packets. On the other hand, making optical packets have a fixed length inside an optical packet switch is not realistic when the cost benefit is taken into consideration. This is because dividing optical packets into multiple packets within an optical packet switch requires redundant processing, such as inserting guard times within the optical packet or performing optoelectric conversion to divide and reassemble the packet. Another method that can be considered is to divide optical packets into fixed lengths at the entry to a network of consecutive optical packet switches and process fixed-length packets at the optical packet switches. However, in this case, a lower layer such as MPLS must be created. If the forms of packets sent from end hosts can be maintained, in other words, if a situation in which variable-length packets flow on the network can be maintained, excess processing will not occur at the nodes, and the network architecture will be simple. Even at present, an optical packet switch has been developed for processing asynchronous variable-length optical packets [5-1-2]. Further development of this technology is desirable.

[Figure: an optical packet with the payload striped across multiple wavelengths; left, a single-wavelength header; right, the header divided on the wavelength axis to shorten it in time.]

Fig. 5.1.1.5. Optical Packet Format

Routing
To perform routing, complex processing such as searching for shortest routes and creating routing tables is required, and converting optical packets to electrical signals for electronic processing is the easiest way to implement it. For nodes on a packet-switched network to exchange routing information, the information must be carried in the payload part of the packets. For example, in an IP routing packet, routing information is stored in the payload part. For routing packets, if the same wavelength as in the optical packet header is used, the packet length increases in the time direction. Optoelectric conversion of the payload wavelength band is also required. To accomplish this simply and inexpensively, schemes for reducing the number of wavelengths used in routing packets and for facilitating reception of the wavelengths constituting the payloads that are to undergo optoelectric conversion are important. In addition, when optoelectric conversion increases costs, a scheme such as reducing the number of optoelectric conversion interfaces to the extent that the decrease in throughput does not affect network performance is also important [5-1-3].
[Figure: (a) wavelength use by a general optical packet (CWDM) and by optical packets received by an optical router; (b) an optical multiplexed packet generation circuit using an all-wavelength WDM light source, a wavelength-to-time conversion circuit, a broadband amplifier, a broadband modulator, and a fast broadband optical switch to combine CWDM header and DWDM payload wavelengths.]

Fig. 5.1.1.6. (a) Wavelength use by an intermediate optical router in an optical backbone network (MTU measure required), (b) Optical multiplexed packet generation circuit

Optical Buffer Configuration


To avoid optical packet collisions, the optical fiber delay line buffer described in Chapter 3 will be used. Fast serial-to-parallel conversion, the most realistic electronic alternative, requires high power consumption. Slow light, whose speed can be regulated, is functionally almost the same as an optical fiber delay line buffer, but the frequency band that can be handled is not very large, and optical RAM technology is still realistically far in the future. On the other hand, fiber delay line buffers have been proven to assign 3 types of delay (0, 1, and 2) to 160Gbps packets without optoelectric conversion, and 31 types of delay have been assigned at 10Gbps [5-1-4]. Since this buffer does not require optoelectric conversion and does not change the light characteristics, there is almost no frequency band limitation. Conventionally, recirculating-type (feedback-type) and non-recirculating-type (feed-forward-type) optical fiber delay line buffers have been proposed. With the recirculating type, the optical signal degrades because recirculation overlaps, the degree of signal degradation differs from packet to packet because the number of recirculations varies, and optical signal processing becomes more difficult. On the other hand, with the non-recirculating type, the degree of signal degradation can be made nearly uniform, and optical signals are easy to process. However, the fiber delay line utilization efficiency under ideal conditions is not as good as for the recirculating type. A non-recirculating-type optical buffer mechanism must be established that reduces the required number of optical switches or fiber delay lines while controlling the packet loss rate.

Optical Packet Switch Configuration


This optical buffer is placed at the output side. The output buffer method has better delay and throughput characteristics than the input buffer method, in which head-of-line (HOL) blocking occurs. On the other hand, since a large switch operating at N times the bus speed is required, implementation is believed to be more difficult than with the input buffer method. However, if the switch module is configured by bundling together N 1xN switches, a single output port is equipped with N lines. This enables performance equivalent to that obtained when a large switch with N times the bus speed is used. Also, since a 1xN optical switch is simpler than an NxN optical switch, we expect that it can be implemented sooner.

Optical Buffer Management


To hold optical packets in an optical buffer, the delay time, that is, the fixed time from when an optical packet arrives at the packet switch until it enters the output buffer, must be determined for each optical packet. Therefore, to process all optical packets that arrive consecutively, buffer management that handles at most N packets within an interval corresponding to the packet length is required. When simple round-robin scheduling is used, the computational complexity is O(N), and processing will no longer keep up if the number of ports increases. A simpler buffer management method is required. For example, with the parallel pipeline processing mechanism described in reference [5-1-5], the processing performed by each processor is simplified and high throughput can be achieved, even though the circuit configuration becomes more complex.
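As a minimal illustration (not the parallel pipeline scheduler of [5-1-5]), the following Python sketch manages a feed-forward fiber delay line buffer on a single output port: each arriving packet is assigned the smallest delay from the discrete set {0, D, 2D, ..., (B-1)D} such that its departure does not overlap the previously scheduled departure, and it is dropped if no such delay exists. The delay granularity and the number of delay lines are example values.

def assign_delay(arrival, duration, port_free_at, granularity, num_delays):
    """Return (chosen_delay, new_port_free_at); chosen_delay is None if the packet is dropped."""
    for k in range(num_delays):
        delay = k * granularity
        departure_start = arrival + delay
        if departure_start >= port_free_at:   # output port is idle when the packet emerges
            return delay, departure_start + duration
    return None, port_free_at                 # no delay line is long enough: packet is lost

if __name__ == "__main__":
    port_free_at = 0.0
    granularity, num_delays = 50e-9, 32       # 50 ns delay steps, 32 delay lines (example values)
    for arrival, duration in [(0.0, 120e-9), (10e-9, 120e-9), (20e-9, 60e-9)]:
        delay, port_free_at = assign_delay(arrival, duration, port_free_at,
                                           granularity, num_delays)
        outcome = "dropped" if delay is None else f"delay {delay * 1e9:.0f} ns"
        print(f"packet arriving at {arrival * 1e9:.0f} ns -> {outcome}")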

Physical Signal Format


Conventionally, in the OSI seven-layer model, the link layer and physical layer are defined below the network layer. With IP, Ethernet, ATM, and token ring networks, packets (frames, cells, tokens) had been converted to bit strings in the physical layer. In other words, encoding and decoding are performed so that 0 and 1 information is distributed randomly and uniformly (mark ratio = 0.5), and 0 or 1 signals physically flow not only where there are packets, naturally, but also where there are no packets. However, this is not the case for optical packet switching. In many research projects, optical signals flow when packets are sent (regardless of whether or not modulation/demodulation is performed) and signals do not flow in intervals where there are no packets. This is because the header and payload are formed using different modulation or light characteristics to make it easier to separate the header from the payload. However, since the receiver will no longer be able to perform clock recovery if the no-signal interval continues, a high-performance receiver will be required. In the future, we must determine which of the following methods is best: (1) develop a high-performance receiver; (2) develop a technology in which other signals may be entered in the header part without problems instead of performing modulation/demodulation in a similar manner as with electrical communications; or (3) if both of the previous methods are difficult, perform modulation in parts with no packets at the link entry and drop these signals without performing switching at the next node.

5.1.2. Lightpath Network


Definition of a Lightpath
The concept of a lightpath is the polar opposite of that of optical packet switching. In a network that performs optical packet switching, intermediate nodes determine the packet destination by looking at information within the header constituting a packet. On the other hand, in a network that sets up lightpaths, intermediate nodes do not determine the destination for each packet; instead, they determine the data destination in advance by linking their own input/output lines to establish connections based on control information that was sent in advance. Then, when data (not limited to packets) arrives, the data is forwarded towards the appropriate output port according to that connection. When optical signals are entered at the input, they are neither converted to electrical signals nor is the logical information within the optical signals changed until the signals reach the exit. In other words, since the set of circuits that constitutes the data channel, that is, the path, consists of light, this data communication channel is called a lightpath.
[Figure: lightpaths from source hosts S1-S3 to destination hosts D1-D3 established across wavelength switches (optical cross-connects with wavelength multiplexers and demultiplexers), with each lightpath occupying a wavelength from end to end.]

Fig. 5.1.2.1. Lightpaths and Wavelength Switches
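To make the contrast with packet switching concrete, the following Python sketch models a wavelength switch as a simple cross-connect table that is configured in advance by control signaling and then forwards data purely on the (input port, wavelength) combination on which it arrives, without reading any header; the class and method names are illustrative assumptions.

class WavelengthSwitch:
    def __init__(self):
        # (in_port, in_wavelength) -> (out_port, out_wavelength)
        self.cross_connect = {}

    def setup_lightpath(self, in_port, in_wl, out_port, out_wl):
        """Executed once per path by the control plane, before data flows."""
        self.cross_connect[(in_port, in_wl)] = (out_port, out_wl)

    def forward(self, in_port, in_wl, data):
        """Data path: a pure mapping, independent of the data's contents."""
        out_port, out_wl = self.cross_connect[(in_port, in_wl)]
        return out_port, out_wl, data

oxc = WavelengthSwitch()
oxc.setup_lightpath(in_port=1, in_wl="lambda1", out_port=3, out_wl="lambda1")
print(oxc.forward(1, "lambda1", b"any bit stream, not necessarily packets"))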

Lightpath Uses
Lightpaths are used to provide services that cannot be sufficiently provided by packet switching alone. For example, they directly connect paths between hosts for network users who cannot tolerate information loss or who want to communicate using a special protocol. This enables true bandwidth guarantees to be provided and supports application innovation. Lightpaths seem promising as a new circuit switching technology together with the development of wavelength multiplexing technologies such as DWDM. In the near future, lightpaths are expected to be used as a means of traffic engineering. In other words, they will be used in a form in which paths are connected between intermediate edges rather than between hosts (Fig. 5.1.2.2 (a)). However, this is a scenario for the NXGN in which optical packet switching cannot be used, and it is a typical design principle violation that diminishes the advantages of both packet switching and circuit switching and makes the network more complex. In the NWGN, it is important to cultivate lightpath network technologies based on the end-to-end principle, which assumes that a path is provided up to the host or, more aggressively, up to an application on the host (Fig. 5.1.2.2 (b)).
[Figure: (a) edge-to-edge lightpath switching, with packet switching and a control network at the edges; (b) host-to-host lightpath switching.]

Fig. 5.1.2.2. Lightpath Provision

To provide lightpaths between hosts or applications, the current degree of multiplexing, which is several dozen waves, is insufficient. Wavelength resources or fiber resources are finite, and even in the future, for example, the number of lightpath service users will be far more limited than the number of packet-switched service users. However, lightpaths will be effective when it is easy to predict the required bandwidth and quality assurance is required, such as for network broadcast services using lightpaths. On the other hand, high-density, highly multiplexed WDM such as 1000-wave multiplexing has been demonstrated, and we believe that lightpaths will continue to provide a technological foundation that can be used by end users.

Distributed Control of Paths


In a conventional circuit-switching network (telephone network or SONET/SDH), paths had been set up by centralized control. However, with centralized control, delays may occur because of load concentration, and delays may increase depending on the location of the centralized control server. Naturally, control interruptions due to a server failure or the failure of a link to the server may also occur. If paths are directly connected up to end hosts, far more nodes may be involved than the number of ASs in the current Internet or the number of routers targeted by OSPF, and it is doubtful whether the network will be able to function using the current centralized form of distributed control. Therefore, a means of distributed control for setting up lightpaths should be considered. For example, consider the situation in which users acquire wavelengths. If every request probed all wavelengths, simultaneous requests would tend to contend for the same wavelengths and some users would be unable to reserve one; to reduce such contention, the number of wavelengths probed to find free wavelengths is kept to a minimum. This limited probing prevents users from simultaneously reserving the same wavelengths better than the method of probing all wavelengths. However, since the advantage of probing fewer wavelengths is lost if none of the probed wavelengths is free, a free-wavelength detection mechanism based on a learning function is required in each host. Introducing a distributed control mechanism for uniformly assigning paths to all users and for differentiating service quality is also desirable.
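The following toy sketch (in Python) illustrates the limited-probing idea described above: each setup request probes only a small random subset of wavelengths and reserves the first free one it finds. The parameters, the random subset selection, and the simple blocking model are assumptions made for illustration only.

import random

def try_setup(free_wavelengths, total, probe_count):
    """Probe a random subset of wavelengths and reserve the first free one found."""
    probed = random.sample(range(total), probe_count)
    for wl in probed:
        if wl in free_wavelengths:
            free_wavelengths.remove(wl)       # reserve this wavelength
            return wl
    return None                               # blocked: retry later, or learn a better subset

if __name__ == "__main__":
    random.seed(0)
    total_wavelengths = 64
    free = set(range(total_wavelengths))
    blocked = 0
    for _ in range(80):                       # 80 setup requests against 64 wavelengths
        if try_setup(free, total_wavelengths, probe_count=4) is None:
            blocked += 1
    print(f"blocked requests: {blocked} of 80, wavelengths still free: {len(free)}")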


Exchange of Lightpath Control Information and Service


To provide lightpaths over a wide range, the paths must be set up across multiple domains rather than only within a single domain. In this case, the problem of determining how much routing information to advertise between neighboring domains arises, in a similar manner as in a packet-switched network. Not only are domain administrators reluctant to disclose all routing information, but detailed information is also more troublesome to process at the receiving side, so it is appropriate to use information abstracted to some degree. Since user requests for paths will become more demanding, it is desirable for QoS information (remaining bandwidth or utilization rate between border nodes, maximum and minimum delay, etc.) to also be included. To reserve wavelengths when setting up lightpaths, free wavelengths must be found by probing via signaling. At that time, domain administrators may not wish to notify neighboring domains of the availability of wavelengths that were not ultimately used. Therefore, there are three possibilities: (1) advertise the availability of only one wavelength, (2) advertise the availability of all probed wavelengths, and (3) advertise something between these two extremes. It is difficult to decide which of these options is appropriate. However, if an administrator prevents more than the required information from being disclosed and there are no free resources, there will be repeated attempts to set up a path, which will take time. The end user could feel dissatisfied and end up being alienated. To avoid this pattern, instead of reducing the wavelength availability information that is advertised each time signaling is performed, it is preferable to advertise a lot of routing information or prepare abundant resources in advance, and not to limit this information to signaling notifications. Control information should be provided not only to neighboring domains but also to end users. When internal conditions change frequently, passing abstracted information to users allows the users to take QoS into consideration when using the network.

Wavelength Conversion
According to past research, wavelength conversion has clearly been somewhat effective for improving lightpath setup performance wherever it has been used. However, its introduction depends on cost (initial cost or power cost). Therefore, some points concerning the effects of introducing wavelength conversion are presented below.

(1) Boundary between the end user and network: The user provides the end host interface. To control costs, the user will provide the minimum required wavelength multiplexing. On the other hand, the carriers also reduce costs by controlling the number of optical fibers that are used, and since the economy of scale of accommodating traffic from multiple users is expected, wavelength multiplexing will increase. As a result, the number of wavelengths will differ significantly between the access link and the carrier network. From a performance standpoint, placing a wavelength conversion function at the boundary between the end user and the network has a significant effect. In addition, the same type of wavelength interface can be used at every host, resulting in a cost benefit from a production standpoint.

(2) Boundary between domains: Locating wavelength conversion at a domain boundary is important in that it simplifies network management. For example, it can reduce the advertisements of wavelength usage information described earlier and also reduce the volume of routing information advertisements.

Although the transparency of light is lost, one wavelength conversion method temporarily converts the signal back to an electrical signal. This method has the advantage that signal smoothing (i.e., regeneration, reshaping, and retiming) can be performed. On the other hand, since the transparency of light is lost and costs tend to rise, there are many future research topics regarding the introduction of wavelength conversion. Another method, performed in the optical domain, is the collective conversion of a wavelength group. This method is particularly effective when all hosts are furnished with wavelength groups in the same band.

5.1.3. Integration of Optical Packets and Lightpaths


The difference between packets and paths is nothing more than the difference in means of providing information to end users. Since each of these methods of providing information has separate control media and data transfer media, this is wasteful from the standpoint of service providers and infrastructure providers, and resource sharing is desirable. To implement the integration of packets and paths, both physical resources and control mechanisms must be shared. Unifying the control mechanisms of the contradictory switching principles of packets and paths will necessarily simplify the network. Regardless of the physical resources or control mechanism, simplifying the network will make it easier for the network to support new services. Also, infrastructure resources can be assigned more flexibly according to service usage conditions.
[Figure: O/E/O optical packet switches at the network edge and O/O/O optical packet switches in the core carrying both packet traffic and path control information or data over shared links; packet and path services coexist on the same infrastructure.]

Fig. 5.1.3.1. Multiplexing Path Control Signals on Packet-Switched Links

One technique for sharing the control mechanism is to multiplex packets for path control signals on packet-switched links. Fig. 5.1.3.1 shows the multiplexing of path control signals on packet-switched links. After the O/E/O optical packet switch for generating optical packets in Fig. 5.1.3.1 receives path control signals at the electrical interface, it converts those control signals to optical packets. Although packet-switched packets and path control packets flow on the same link, higher priorities can be set for path control packets (particularly signaling packets) to prevent situations from occurring too often in which paths cannot be set.
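As a minimal illustration of this priority handling, the following Python sketch gives path control (signaling) packets strict priority over ordinary data packets on a shared packet-switched link; the two queue classes and their names are illustrative assumptions rather than a specification.

from collections import deque

class SharedLinkScheduler:
    def __init__(self):
        self.control = deque()   # path signaling / routing packets
        self.data = deque()      # ordinary packet-switched traffic

    def enqueue(self, packet, is_path_control):
        (self.control if is_path_control else self.data).append(packet)

    def dequeue(self):
        """Strict priority: path control packets first, then data packets."""
        if self.control:
            return self.control.popleft()
        if self.data:
            return self.data.popleft()
        return None

link = SharedLinkScheduler()
link.enqueue("data-1", is_path_control=False)
link.enqueue("PATH-SETUP", is_path_control=True)
print(link.dequeue())   # PATH-SETUP leaves the node before data-1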


[Figure: an integrated packet and path node with a control plane (traffic monitor, routing engine, forwarding table look-up, handling packet headers and control packets for both packets and paths) and a data plane in which an optical packet switch (OPS) with an optical buffer handles packet payloads and a MEMS-based optical circuit switch (OCS) handles paths, using parallel wavelength transmission for control and data.]

Fig. 5.1.3.2. Example of the Internal Configuration of an Integrated Packet and Path Node Based on an O/O/O Optical Packet Switch

Fig. 5.1.3.2 shows an example of the internal configuration of an integrated packet and path node based on an O/O/O optical packet switch. Lightpath services are provided by using some of the lines (i.e., wavelengths) shown in blue in Fig. 5.1.3.2. Optical packet services are provided by using the lines (wavelengths) shown in red and green. Control packets communicate with the appropriate control module via the OPS regardless of whether they are for packet switching or path switching. Another technique that can be used for sharing the control mechanism is the distribution of path control. On the Internet, a centralized type of distributed control is performed. To also be able to support future conditions in which the number of nodes increases dramatically and the topology varies, this distribution of control should be carried forward. On the other hand, although the topology does not vary and the number of nodes does not increase dramatically on a lightpath network, there will be an increase in the number of host nodes, and control must also take scalability in the wavelength multiplexing direction into consideration. In addition, with inter-domain control, sufficient information is not necessarily obtained, and self-organized control will be required for performance increases or maintenance in that case. Therefore, the benefits of carrying forward distributed control will be significant.


References
[5-1-1] Naoya Wada, Hiroaki Harai, and Fumito Kubota, "Optical Packet Switching Network Based on Ultra-Fast Optical Code Label Processing," IEICE Transactions on Electronics, Vol. E87-C, No. 7, pp. 1090-1096, July 2004.
[5-1-2] S. J. Ben Yoo, Fei Xue, Yash Bansal, Julie Taylor, Zhong Pan, Jing Cao, Minyong Jeon, Tony Nady, Gary Goncher, Kirk Boyer, Katsunari Okamoto, Shin Kamei, and Venkatesh Akella, "High-Performance Optical-Label Switching Packet Routers and Smart Edge Routers for the Next-Generation Internet," IEEE Journal on Selected Areas in Communications, Vol. 21, No. 7, pp. 1041-1051, September 2003.
[5-1-3] Masataka Ohta, "Efficient Composition and Decomposition of WDM-based Optical Packet Multiplexed Packets," Technical Reports of the IEICE (PN2006-76), January 2007.
[5-1-4] Hideaki Furukawa, Hiroaki Harai, Naoya Wada, Naganori Takezawa, Kenichi Nashimoto, and Tetsuya Miyazaki, "A 31-FDL Buffer Based on Trees of 1x8 PLZT Optical Switches," in ECOC 2006 Technical Digest, September 2006 (Paper No. Tu4.6.5).
[5-1-5] Hiroaki Harai and Masayuki Murata, "High-Speed Buffer Management for 40Gb/s-Based Photonic Packet Switches," IEEE/ACM Transactions on Networking, Vol. 14, No. 1, pp. 191-204, February 2006.

5.2. Optical Access (Harai, Morioka)

5.2.1. FTTH (Fiber-To-The-Home)


In a broad sense, FTTH indicates an environment in which optical fiber is installed to each home to provide high-speed communication lines or services. Related terms include FTTC (C = curb) and FTTB (B = building). FTTH service is spreading rapidly in Japan. Since FTTH service began in 2001, the number of users has grown rapidly: by the end of 2003, there were more than 1,000,000 users, and by the end of 2006, approximately 7,940,000 users had subscribed. In addition, infrastructure is in place that will potentially enable more than 40,000,000 users to subscribe.

5.2.2. Current FTTH: Single Star, Double Star, PON (Passive Optical Networks)
As currently considered, FTTH is divided into three types, as shown in Fig. 5.2.2: SS (single star), PDS (passive double star), and ADS (active double star). In every type, an OLT (optical line terminal) is located at the central office side (towards the network), an ONU (optical network unit) is located at the subscriber side, and optical fiber is laid between them. A single star configuration provides a dedicated optical fiber between the OLT and each ONU; its name derives from the star configuration of the wiring from the OLT to the ONUs. A double star configuration uses a topology in which a relay point is placed between the OLT and ONUs, and optical fiber is wired in a star configuration centered on that relay point. Since multiple relay points can be placed along the way from the OLT to create a two-stage star configuration, this is called a double star configuration. The type of FTTH that splits light into several optical signals or combines optical signals by using a passive optical coupler at the relay point is PDS, which is often known as a PON (passive optical network). On the other hand, the type of FTTH that performs active processing such as switching at the relay point is ADS.

In Japan, FTTH service is provided by using SS- and PON-type networks. They both have different advantages and disadvantages based on their respective topologies. For example, the advantages of SS-type networks are that they can provide dedicated access lines to users and can provide or upgrade different services separately. On the other hand, the advantages of PON-type networks are that they can conserve the number of fibers that are laid and enable downstream broadcast communications (from the OLT towards the ONUs) to be performed.

To facilitate FTTH service, standardization has been performed for PON-type networks. For example, the ITU-T has standardized G-PON (ITU-T G.984, 1.25Gbps or 2.4Gbps, accommodates up to 64 ONUs, supports distances up to 20 km). The IEEE has standardized GE-PON (IEEE 802.3ah Ethernet PON, 1.25Gbps, accommodates at least 16 ONUs, supports distances up to 10 km or 20 km). Currently, in Japan, many services providing up to 100Mbps to each home have been introduced on both SS- and PON-type networks.
[Figure: three current FTTH topologies, each with an OLT on the backbone network side and ONUs on the subscriber side: SS (single star) with a dedicated fiber to each ONU, PON (passive double star) with a star coupler at the relay point, and ADS (active double star) with a switch at the relay point.]

Fig. 5.2.2. Current Types of FTTH

5.2.3. Next-Generation PON


Good prospects for the next-generation PON include 10 Giga-Ethernet, in which the transmission speed of communication frames has been upgraded from 1Gbps to 10Gbps, and WDM-PON, in which WDM functions have been added to transmission/reception devices. These methods can increase the bandwidth available to users tenfold or by a multiple of the number of wavelengths by just changing the OLTs and ONUs and not making any changes to the existing optical infrastructure. When WDM is used, by assigning one wavelength to each user, separate bandwidths can be occupied and used by each user as in an SS-type network. For example, in Korea, a Fast Ethernet WDM-PON network featuring 32 wavelengths with 100GHz spacing and 125Mbps per wavelength has been investigated [5-2-2]. Also, from the service providers' viewpoint, there is also movement towards extending networks over longer distances, not just adding bandwidth. However, there are also some PON-specific problems. Even if 10GE-PON is introduced, the greater the number of branches or distance is, the smaller the bandwidth that can be sent/received per ONU will be. For example, if 10GE-PON branches 32 times, it will only provide approximately 300Mbps. Even though this is not as much as is currently available, all subscribers will have to upgrade at the same time if a 10Gbps interface is installed.

5.2.4. New-Generation Optical Access


We believe that in the future, at least a 10Gbps-class access line will be required per user. At that time, sufficient bandwidth will not be available using only the type of PON described above, and either numerous upgrades will be required or PON itself will have to be significantly improved. The following sections describe various means that can be considered for new-generation optical access.

5.2.4.1. WDM-PON
WDM-PON enables the bandwidth to be increased by using existing PON fiber and couplers. It enables multiple wavelengths to be sent to one user, or either GE-PON or 10 Gigabit-PON to be provided using one wavelength. Therefore, future upgrades will be even easier than with a PON that provides only one channel. However, with the existing access infrastructure, wavelength multiplexing may be limited due to aerial wiring or branching loss. In addition, depending on the degree of multiplexing and the communication speed, new optical amplifiers may have to be introduced in a WDM-PON to compensate for branching loss. Fig. 5.2.4.1 shows the average available bandwidth per user (approximation) versus the number of multiplexed wavelengths and the number of branches, where the transmitted light from the OLT is +27dBm, the received light for each wavelength at the ONU is -24dBm, the input loss at the coupler filter is 2dB, the transmission loss (including loss by the splicer) is 0.35dB/km, the transmission distance is 20 km, there is no amplifier, and the system margin is 5dB. With 16 users, the number of wavelengths can be increased to 256 wavelengths, and 160Gbps can be received per user. However, when there are 64 users, the number of wavelengths can be increased to at most 64 wavelengths, and the bandwidth per user in that case is 10Gbps.
[Figure: average available bandwidth per user (Gbps) as a function of the number of multiplexed wavelengths (8 to 256) and the number of branches/subscribers (16 to 64), for +27 dBm transmitted and -24 dBm received power per wavelength.]

Fig. 5.2.4.1. Average Available Bandwidth per User Versus Number of Multiplexed Wavelengths and Number of Branches
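A back-of-the-envelope sketch of the power budget behind this figure is given below in Python. It uses the parameters listed above (+27dBm launch, -24dBm per-wavelength sensitivity, 2dB coupler loss, 0.35dB/km over 20 km, 5dB margin) together with several assumptions of our own: the launch power is shared equally among the wavelengths, the splitter loss is an ideal 10*log10(N) dB, each wavelength carries 10Gbps, and the wavelength count is rounded down to a power of two. Under these assumptions it reproduces roughly 160Gbps per user for 16 branches and 10Gbps per user for 64 branches, as quoted above.

import math

LAUNCH_DBM, SENSITIVITY_DBM = 27.0, -24.0
COUPLER_DB, LOSS_DB_PER_KM, DISTANCE_KM, MARGIN_DB = 2.0, 0.35, 20.0, 5.0
RATE_PER_WAVELENGTH_GBPS = 10.0        # assumed rate per wavelength

def max_wavelengths(num_branches):
    """Largest power-of-two wavelength count that still closes the power budget."""
    budget = LAUNCH_DBM - SENSITIVITY_DBM                    # 51 dB in total
    fixed = COUPLER_DB + LOSS_DB_PER_KM * DISTANCE_KM + MARGIN_DB
    split_db = 10 * math.log10(num_branches)                 # ideal splitter loss
    share_db_max = budget - fixed - split_db                 # allowed 10*log10(W)
    w = int(10 ** (share_db_max / 10))
    return 2 ** int(math.log2(w)) if w >= 1 else 0

if __name__ == "__main__":
    for branches in (16, 32, 64):
        w = max_wavelengths(branches)
        per_user = w * RATE_PER_WAVELENGTH_GBPS / branches
        print(f"{branches:2d} branches: up to {w:3d} wavelengths, ~{per_user:.0f} Gbps per user")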

5.2.4.2. Single Star


As described earlier, with an SS-type network, the interval between the OLT and ONU is transparent, the bandwidth can be flexibly increased, and this type of network is excellent with respect to expandability. Although it has been pointed out that the cost is high for providing the current 100Mbps-class service, once the fiber infrastructure has been laid, upgrade costs are limited only to individual ONUs and related OLTs, and migration is easy. In addition, since signal loss is small compared with PON networks, SS is also suited to long-distance use. Therefore, the advantages of using SS configurations for access lines in the future are clear.

5.2.4.3. WDM-Direct
WDM-Direct is a concept in which WDM is directly connected at a host (ONU). For example, a user connects to Internet service with a certain wavelength and connects to a bandwidth assurance service with a separate wavelength. One wavelength can also be used for broadcasting. In addition, multiple wavelengths can be used for sending/receiving data. Simultaneous upgrades like with PON will also be unnecessary. Either WDM-PON or SS can be used as an implementation topology. However, although this was also pointed out for wireless access, signal processing for determining the wavelength to use for connecting to the network must be provided as an additional function.

References
[5-2-1] 2006 White Paper on Information and Communications in Japan, http://www.soumu.go.jp/ (in Japanese; an English version is also available).
[5-2-2] Soo-Jin Park, Chang-Hee Lee, Ki-Tae Jeong, Hyung-Jin Park, Jeong-Gyun Ahn, and Kil-Ho Song, "Fiber-to-the-Home Services Based on Wavelength-Division-Multiplexing Passive Optical Network," Journal of Lightwave Technology, Vol. 22, No. 11, pp. 2582-2591, November 2004.

5.3. Wireless Access [Inoue]


Fig. 5.3 shows network images in the vicinity of users or communication devices, where various types of sensors and personal communication devices are interconnected wirelessly. On the wireless access network side, base stations will be interconnected using radio links. This makes it possible to place them more densely and to increase the speed of access links while consuming less power. The sensors and communication devices will form networks such as ad-hoc networks, wireless multihop networks, and personal area networks. These networks can be wirelessly interconnected with each other, enabling the sensors and communication devices to be connected to optical core networks to implement global communications. Since the numbers of base stations, sensors, and communication devices will increase significantly in the future, effective frequency utilization technologies are required for efficient and scalable communication while preventing interference. Cognitive radio is expected to be one such technology.


[Figure: an optical core network surrounded by multiple wireless access networks, with an overlay control layer providing user authentication and mobility support; home/office networks, ad-hoc networks, and sensors connect via multiple access links with optimal route selection despite topology change and intermittent connectivity.]

Fig. 5.3. Image of New Generation Wireless Mobile Access Network

The network must support communication with traffic patterns and traffic volumes that differ from the conventional ones for such devices as sensors, which are expected to initiate small traffic volumes, and high-resolution video or 3D video applications, which are expected to generate enormous traffic volumes, in addition to the current voice communications and data access. From the viewpoint of guaranteeing communications during emergencies, attention must also be focused on assuring communication stability and reliability. In addition, energy-efficient communication (low power consumption) is also essential for communication devices and sensors, which are mainly battery powered.

5.3.1. High-density Arrangement of Base Stations


To implement higher speed mobile communications with reduced power consumption, base stations must be arranged with higher density so that the distance between user terminals and base stations is shortened. An effective means of accomplishing this is to not only connect base stations to core networks by wired connections as is conventionally done, but to also wirelessly connect base stations to each other or to the core network. This increases the degrees of freedom for locating base stations and enables configuration of tiny cells called nano cells, pico cells, and femto cells. However, effective implementation of such densely placed base stations is not feasible unless wireless relay link bandwidths can be sufficiently allocated. Applications of wireless multihop network or ad-hoc network technologies, which have been widely researched, must be further investigated.


One method for arranging base stations with high density is to make practical use of various fixed networks. For example, small base stations that form tiny cells can be located in homes and connected to a mobile core network via broadband links. Another effective method is to connect base stations to the CATV networks that are installed around commercial or residential areas [5-3-1]. A 6MHz frequency bandwidth allocated for just one TV channel can allow a relatively high volume of communications at data rates of several dozen Mbps.

5.3.2. Interconnection of Heterogeneous Networks and Heterogeneous Devices


Wireless access networks that are directly connected to the infrastructure network will continue to use various wireless access technologies as they have in the past. Consequently, wireless access will become more diverse. In addition, wireless base stations using wireless multihop connections will be arranged with high density as described above. The evolution and maturing of personal communication devices will gradually lead to the implementation of a genuine personal area network in the vicinity of a user. A personal area network will also need to be connected to the infrastructure network via another personal area network or a communication device located outside of the personal area network. The interconnection of these kinds of diverse wireless access networks, personal area networks, ad-hoc networks, and personal communication devices will enrich the so-called ubiquitous network environment. Diverse communication services and application services can be expected to emerge, and the ability to continue communications during disasters and other emergencies can also be increased. Research is also being conducted regarding a communication technology called a Delay or Disruption Tolerant Network (DTN) for delivering information in a network where a long-term delay or interruption may occur frequently. The new generation network architecture should fully support and include DTN as its integral part. A body area network (BAN) is a network that is one step smaller than a personal area network. It is a network in which sensors surrounding a human body for medical uses are interconnected using wireless technologies with other sensors that are implanted in the body by embedding or ingestion. The sensed data that in-body sensors provide to outside sensors are used for human health monitoring and medical treatment. The IEEE 802.15.BAN has already been formed as an organization for standardizing wireless technologies for these purposes.

5.3.3. Increase in Speed of Access Links to User Homes and Expansion of Areas Using High-Speed Links
Optical links are widely used for business in urban areas, especially in Japan. On the other hand, the choices for residential-oriented data access for consumers in apartment buildings or detached houses are currently xDSL using telephone lines, CATV networks, and optical links. To respond to demands for higher communication speeds in the future, the current access speeds of optical links (up to 100Mbps or 1Gbps shared by up to 32 users) must be increased. Increasing the speed of the optical link sharing technologies known as G-PON or GE-PON, or increasing the speed by using other technologies, will be future research goals. In areas that cannot be serviced by high-speed links, wireless broadband circuits will have to be provided. The use of wireless access technologies based on fixed wireless access (FWA), such as WiMAX, and mesh network technologies based on wireless LANs can be considered. Japan is scheduled to complete the conversion of its terrestrial television broadcasting from the current analogue system to a digital system by 2011. To enable the digital system to cover all the areas that have been covered by the current analogue system, transmitting television signals over telecommunication lines is essential. Therefore, the optimum technology should be selected for future access links by considering the unification of communications and broadcasting and taking cost into consideration.

5.3.4. Communication and Position Detection Technologies


When user convenience is taken into consideration, wireless technology is the most actively applied for indoor networks in homes or offices. Wireless LANs with speeds exceeding 500Mbps will be available in the near future. Wireless USB using Ultra Wide Band (UWB) technology is replacing wired USB around personal computers (PCs). Energy-conserving, high-speed, short-distance communications can be performed by using sensor-network-oriented low-speed UWB technology. Similarly, visible light communication technology using white Light Emitting Diodes (LEDs) is also expected to be implemented. Although this has the disadvantage that it cannot be used in locations where darkness is required, it can be used for inexpensive high-speed communication in daylight. Another technology that plays an important role together with wireless is power line communication technology. As line speeds increase, the transmission distance tends to get shorter and the area in which communications can be performed tends to get smaller; this is exemplified by the inability of a wireless LAN to cover an entire house because of a narrow frontage or hallway or a complicated room layout. This problem can be overcome by using the previously installed electrical wires for a communication network. Although this method has some problems such as interference due to leakage current or interference passing through the transformer, it is expected to be widely used in the future because it overcomes the limitations of wireless access.

Currently, no technology is perfect for detecting or monitoring the positions of users or communication devices; every technology has advantages and disadvantages. Devices equipped with methods based on detecting wireless LAN signals are cheaper but have low precision and are unstable; to achieve higher precision, these devices require implementing expensive, special functions in a general-purpose wireless LAN module. Passive tags are inexpensive, but they require detectors to emit electrical power to collect information from the tags. Active tags may be applicable for goods that do not move. However, to detect a moving item, the active tag requires frequent transmission of radio waves, which increases power consumption. White LED communication, which was described earlier, can also be used to detect position. However, since the receiver must be positioned within an unobstructed direct line of sight from the transmitter because light travels in straight lines, this method is not widely applicable. A method of affixing sheet-shaped pressure sensors onto a floor can be used. However, even if pressure is detected, the individual who applied the pressure cannot be identified. Another method of installing multiple cameras indoors and using image recognition has also been tried, not only to determine the position of a user, but also to estimate who it is and what he or she is doing. Although there are a variety of methods as described above, since each method has its merits and demerits, a detection system with high precision must be created by appropriately combining multiple methods while holding down costs.

5.3.5. Context-Aware Network of the Future


If the communication infrastructure mentioned above is built, the huge amount of information related to entities that exist in and constitute the real world such as people, objects, and the environment will be obtained from the network. This information can be used to develop context-aware communication services. In addition, currently implemented location-aware services that provide information according to the user's location and services that provide information about presumed user preferences from data such as a user profile and purchase history will evolve further in the future. Services will be provided in which a time axis is also introduced and the system will recognize or assume what the user currently wants, or the current mainstream searches based on keywords will evolve to services that accurately answer more complex or ambiguous inquiries from users. Networks currently recognize communication devices that are owned by a user and connected to the network based on the connection address or model of computer rather than the user identifier. At the service level, user-specific contents, profiles, or behavior histories linked to user names are recognized. In the future, the network will recognize and understand the user himself by obtaining and processing more varied and detailed user-related information in the network system. This will enable the system to evolve to an advanced information communication system that more accurately assists human activities. A search service, which is typified by Google, focuses on obtaining and organizing all information of the real world. Important information can be obtained from the real world, which consists of people, objects, and the environment. Therefore, in the future, it will be important to obtain a massive amount of primary, accurate information about the real world. In this regard, access networks that include sensor networks, which will play this role, will be increasingly important.

References
[5-3-1] "Expansion of PHS Service by Using Cable TV Networks: Successful in Actual Experiments," NICT press release, http://www2.nict.go.jp/pub/whatsnew/press/h18/070312/070312.html, March 12, 2007 (in Japanese).

5.4. PDMA [Ohta]


Packet division multiple access (PDMA) is a paradigm for packet network-oriented cellular communications. Conventional cellular networks, which centered on telephone use, had been designed to suit the communication characteristics of telephones. In the future, the mobile communication infrastructure will continue to be concentrated on the Internet, and telephone traffic will occupy only a negligible volume that can be ignored. The network layer of the Internet is connectionless, and there is generally no relationship between individual packets. For example, if a large number of sensors send out packets intermittently, a steady flow of packets such as is expected with telephone usage is unlikely. PDMA is a paradigm suitable for cellular communication that takes into consideration this kind of traffic in a general packet network and is suited to its communication characteristics.


In a wireless LAN, CSMA/CA is used for packet multiplexing. PDMA also uses CSMA/CA for packet multiplexing in a limited frequency band, which is effectively shared by the upstream and downstream traffic generated in all cells. In addition, precise allocation of different frequencies to neighboring cells is not necessary, because interference between cells is automatically regulated by CSMA/CA and the available frequency band in any specific vicinity is assigned nearly uniformly to each user. Since inter-cell interference between different wireless carriers is also regulated, all carriers can share the same frequency band without mutual adjustment, and a limited frequency band can be used even more effectively. With the PDMA concept, the cellular communication network is being redesigned for use as a computer network, which is represented by the Internet. Introducing PDMA to satisfy Internet requirements is a first step towards creating a new generation network for the cellular network.

5.4.1. Telephones and the Internet


The conventional mobile communication protocols were designed mainly for use with telephones. However, in the future, as the Internet becomes the one and only societal information and communication infrastructure, it will subsume the telephone networks. The relative importance of the data system (which is currently represented by web access) will increase, and telephone traffic will be transmitted as a part of that system. Therefore, future mobile communication protocols should be designed by considering only the Internet.

The communication characteristics of telephones and the Internet differ significantly. The communication characteristics of the Internet can be expressed in a word: connectionless. Communication on the Internet is carried out in terms of packets. Generally, there is no relationship between different packets, and individual packets are handled completely independently in the network. Although they use the same packet-based communications, connection-oriented communication methods such as X.25, which establish a virtual circuit (VC) in advance, have significantly different qualities than the telephone networks. Connectionless communications can be imitated in a connection-oriented network by introducing a connectionless server (CLS). However, this only imitates connectionless communications and is inefficient. While the information and communication infrastructure as a whole is migrating to the Internet, the lower layer protocols should use a communication method that directly resembles the Internet's own.

A major problem when a connection-oriented mobile communication method is used on the Internet is that frequency bands end up being allocated to individual users. Although allocated frequency bands may be fixed, as with telephones, or may vary according to congestion conditions or the communication volume, it is assumed that the communication volume does not vary significantly in the short term. However, in the connectionless Internet, the traffic volume often varies significantly in the short term. As a result, communication capacity is not used optimally: allocated bandwidth is frequently wasted when the traffic volume is low, and sufficient bandwidth cannot be allocated when a momentarily large volume of traffic must be supported.


Even in the current Internet, connection-oriented protocols such as TCP exist in the transport layer. However, TCP is a protocol that uses up the entire allocated bandwidth as long as there is data to be sent; it makes no attempt to define the bandwidth that it requires. Also, since the TCP state exists only at the end terminals, intermediate network devices cannot accurately infer the amount of bandwidth required by a flow. The interposition of intermediate network devices would be a violation of the Internet's end-to-end principle, and certain Internet characteristics are significantly harmed if it is forcibly introduced. For example, terminals can freely use a transport protocol other than TCP on top of IP by implementing the protocol in the terminals only. However, if some function in the network is specialized to TCP, this freedom available at the terminal is lost.

Another problem when a communication method suited for telephones is used on the Internet is that communication in the Internet is generally not bi-directional or continuous. With telephones, voice data flows bi-directionally and can be divided into very tiny time intervals. If appropriate control signals are inserted into this kind of data flow, control information can be quickly exchanged between communicating parties. However, with the Internet, data flows are generally intermittent and one-way. The bandwidths and packet counts of downstream flows are not the same as those of upstream flows, even for TCP, which is more or less bi-directional. If a higher-level application uses one TCP connection intermittently, TCP will often have no packets flowing at all for certain periods. In addition, it is generally impossible for intermediate network devices to predict this kind of behavior.

Note that ADSL and CATV Internet providers have assumed usage patterns in which residential clients download web contents, and believe that the downstream flow requires large bandwidth while the upstream flow requires small bandwidth. However, this characteristic arises only in the special circumstances of the client-server model, in which servers can be maintained only by network providers or large companies; it is not a characteristic of the Internet itself. With communications according to the peer-to-peer model, such as when a web server is placed in a home or video images are transmitted from a home or mobile terminal to another home or mobile terminal, even if the bandwidths required by access terminals are momentarily asymmetric, both the upstream and downstream flows will require large bandwidths of the same order. On the other hand, a fixed bandwidth of the same order is allocated for upstream and downstream flows on a backbone, and no problem occurs since temporal variations in the symmetry of individual communications are averaged out.

5.4.2. Internet Support of Wired Communications


For wired communications, the communication capacity of Internet backbones both in Japan and worldwide has exceeded the communication capacity of telephone backbones for quite a while, and Internet support is steadily advancing. Although it had formerly been natural to establish a virtual circuit (VC) for each communication as in X.25, switched virtual connection (SVC) service for business in ATM, which is an extension of that technology, did not work well and has practically disappeared. The technology that arose in ATM's place is Ethernet based on CSMA/CD. Also, ADSL and CATV Internet have replaced ISDN based on X.25. Some vestiges of ATM remain in the internal specifications of these technologies. Although there exists central office equipment for which the external interface is ATM, this is meaningless for Internet access, and nowadays most external interfaces have become Ethernet.


In addition, time division multiplexing (TDM) such as SONET/SDH is unnecessary on the Internet, since individual packets contain complete information in their headers. In the Internet, multiplexing should be performed in terms of packets, and devices or protocol headers for time division multiplexing are wasteful. Although the 10Gbps Ethernet standards also have features that support SONET/SDH framing, which is helpful when Ethernet traffic flows on existing SONET/SDH networks, when new Internet backbones are created it will be most efficient to directly link routers using dark fiber and directly use Ethernet. Although Internet support does not currently require QoS assurance, it does not prohibit it. QoS assurance is difficult to obtain in a CSMA/CD environment. However, with the current Ethernet, the physical layer is normally a full-duplex one-to-one connection, and if priority control is performed on the data link layer by using IEEE 802.1p, QoS assurance is theoretically possible. PDMA applies these kinds of concepts to mobile wireless communications.

5.4.3. PDMA Overview


PDMA, which applies packet multiplexing concepts to mobile wireless communications, performs multiplexing in terms of individual packets. However, multiplexing is more difficult for packets in mobile wireless communications than in optical fiber communications since a wireless network is essentially a many-to-many physical layer, unlike optical fiber, which is a full-duplex one-to-one simple physical layer in which the sending side handles all controls. First, since individual packets are completely independent and the traffic volume is also impossible to estimate theoretically, a communication slot must be allocated each time to each packet. An appropriate multiplexing technology for accomplishing this is CSMA/CA, and its effectiveness has been proven by the success of IEEE 802.11. In addition, since the required bandwidths for upstream and downstream flows vary asymmetrically with time and a wide bandwidth is required even for the upstream flow, a communication slot should be allocated each time to each packet, without even separating upstream and downstream transmission channels for duplexing. CSMA/CA technology can be used directly to accomplish this. Even for the transmission channel between cells, if the entire bandwidth is allocated to one channel and shared by all cells without explicitly using a separate channel at a neighboring cell, communication slots are cooperatively assigned among cells by CSMA/CA. Moreover, this is effective when communication demand is concentrated at one cell since the maximum available band in an individual cell is greater than when separate channels are assigned to individual cells. In addition, if cells share a channel and communication slots between cells are assigned completely automatically and dynamically by CSMA/CA, similar assignments can even be made between cells that belong to different wireless carriers. Therefore, when multiple carriers use the same channel, adjustments will be made completely automatically and dynamically, and separate channels no longer must be assigned to each carrier. Also, relaying of traffic from regions that cannot be reached by radio waves such as indoor areas can be performed collectively for all wireless carriers.


Although these benefits can also be obtained to a certain degree when CDMA performs dynamic channel assignment, the main difference between CDMA and PDMA is the speed of traffic variation that can be accommodated. Since PDMA, like CDMA, enables simultaneous communication with multiple cells by using the same RF circuit, make-before-break style smooth handover can be achieved by using only one RF circuit.

5.4.4. PDMA Performance Analysis


First, let us ignore the CSMA/CA overhead and make a very rough comparison of PDMA performance with that of the conventional method. Let the available communication bandwidth be denoted by B and assume that if this is divided properly into N channels, which are assigned to each cell, then the distance between cells that are assigned the same channel is sufficiently far and interference between the cells can be ignored. If a fixed bandwidth channel is assigned to a cell, then the available band in each cell is B/N. Even in PDMA, if steady communications of the same volume are performed in each cell, then the available band in each cell will be B/N if the CSMA/CA overhead is ignored. In other words, PDMA performance is equivalent to that of the conventional method even under conditions that are favorable to a voice communication paradigm in which steady communications of the same volume are performed in each cell. If we consider that the conventional method cannot support the kinds of differences between upstream and downstream communication volumes that vary over short intervals as seen on the Internet, differences between cells, or differences between wireless carriers, then it is apparent that PDMA is a much better method. Next, let us evaluate the performance when CSMA/CA overhead is taken into consideration. We assume here that the CSMA/CA parameters are equivalent to those of IEEE 802.11a (the packet spacing is the minimum when no packet is retransmitted because packet collision or loss is detected), the cell radius is 500 m (the cell radius of a PHS (personal handyphone system) base station), and the physical layer bandwidth is 100Mbps, and obtain the communication speed when the packet length is 1500 bytes (the maximum packet length of conventional Ethernet) and the data link overhead is 34 bytes (as for IEEE 802.11) per packet. The expected value of the gap between packets for CSMA/CA control in IEEE 802.11a, averaged when no packets are retransmitted at normal priority, is 110.5 μs. However, since the propagation delay between senders/receivers in IEEE 802.11a is assumed to be much smaller than 1 μs, if a propagation delay of 1.66 μs at a distance of 500 m is added, the gap between packets will be 128 μs. On the other hand, since it takes 122.7 μs to transmit a 1534-byte packet at a speed of 100Mbps, the communication speed seen at the network layer will be 47Mbps, approximately half of the physical layer bandwidth of 100Mbps. If we consider that communication slots are assigned completely dynamically, this is reasonably efficient. In addition, if different parameters than those of IEEE 802.11a are used, these numbers can be improved even more.
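The throughput figure quoted above can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes the parameter values given in the text (1500-byte packets, 34 bytes of link-layer overhead, a 100Mbps physical rate, and a roughly 128 μs per-packet gap once the CSMA/CA spacing and the propagation delay for a 500 m cell are combined); the exact gap accounting depends on IEEE 802.11a slot details and is simplified here.

PHY_RATE = 100e6        # physical layer rate [bit/s]
PAYLOAD = 1500          # packet length [bytes]
OVERHEAD = 34           # data link overhead per packet [bytes]
GAP = 128e-6            # per-packet gap: ~110.5 us CSMA/CA spacing plus propagation delay [s]

tx_time = (PAYLOAD + OVERHEAD) * 8 / PHY_RATE   # ~122.7 us to send a 1534-byte packet
goodput = PAYLOAD * 8 / (tx_time + GAP)         # throughput seen at the network layer

print(f"transmission time: {tx_time * 1e6:.1f} us")
print(f"network-layer throughput: {goodput / 1e6:.1f} Mbps")   # roughly 47-48 Mbps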


5.4.5. Emergency Communications and QoS Assurance


Since CSMA/CA, unlike CSMA/CD, inserts slight random delays before transmitting packets, certain packets can be sent preferentially relative to other packets by making the amount of delay slightly less than that of ordinary packets. IEEE 802.11 uses this mechanism to preferentially transmit control packets. This mechanism can also be used in a similar manner to preferentially transmit communications containing important public service announcements during emergencies or to send QoS-assured packets. If the bandwidth consumed by QoS-assured communications were kept from exceeding a fixed fraction of the total bandwidth, control packets and communications containing important public service announcements during emergencies would not be obstructed. Bandwidth can be guaranteed for QoS-assured communications by forcibly interrupting other best-effort communications. The spectrum usage fee for terminals using PDMA should be free, as it is for wireless LANs, or flat-rate, since no bandwidth is occupied by best-effort communications. However, collecting pay-for-use fees is appropriate for QoS-assured communications.
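The preferential-access mechanism described above can be illustrated with a toy contention model: stations draw a random pre-transmission delay, and the station with the smallest delay wins the slot. Giving priority traffic a slightly smaller delay range lets it win access most of the time. This is a conceptual sketch only; the delay ranges and station count are assumptions, not IEEE 802.11 EDCA parameters.

import random

# Toy contention model: each station draws a random pre-transmission delay and
# the smallest delay wins the slot. Priority traffic draws from a smaller
# range, so it is transmitted preferentially.
def priority_win_ratio(n_best_effort=10, trials=100_000):
    wins = 0
    for _ in range(trials):
        priority_delay = random.uniform(0.0, 0.5)
        best_effort_delays = [random.uniform(0.3, 1.0) for _ in range(n_best_effort)]
        if priority_delay < min(best_effort_delays):
            wins += 1
    return wins / trials

print(f"priority packet wins the slot in {priority_win_ratio() * 100:.1f}% of trials")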

5.5. Transport Layer Control [Murata]


This section describes a TCP congestion control method based on the Lotka-Volterra competition model as an example of biologically-inspired self-organizing control. A great deal of research has already been conducted concerning TCP, and recently, research is continuing on standardizing High Speed TCP, which can transmit data even on gigabit-class lines. However, the basic operations are the same as in the TCP Reno version, in which the window size is sequentially increased as long as there is no packet loss and the window size is decreased if there is packet loss. Since packet loss suggests that congestion is occurring in the network, this is an excellent control that conforms to the end-to-end principle. However, neglecting congestion while increasing the window size until packet loss occurs creates a significant problem, especially in a high-speed network. In other words, when packet loss occurs, packets must be retransmitted, and as the retransmitted volume becomes enormous, a significant decline in throughput occurs. The basic idea of TCP Symbiosis, which is introduced here, is that the resource usage conditions in the lower layer (the available bandwidth) are monitored and the window size is changed to match that value rather than relying only on packet loss to detect congestion. In other words, the window size N(t) changes according to the following equation.
\[ \frac{dN(t)}{dt} = \varepsilon \left( 1 - \frac{N(t)}{K} \right) N(t) \]

where ε represents the natural growth rate within a species and K is the available bandwidth. The above equation is a logistic curve that can be adapted to high-speed lines, since the window size increases exponentially in the initial phase. On the other hand, since the available bandwidth K becomes smaller as the window size becomes larger, the window size increment rate slows down. A logistic curve is often used in mathematical ecology. A logistic curve states that the population of a species increases explosively in the initial state since there are many resources, but the growth rate is suppressed because the resources decrease as the population becomes larger.

[Figure: species population over time in the Lotka-Volterra competition model — when a new species appears in the environment, the populations of Species 1 and Species 2 converge equitably toward the carrying capacity of the environment.]

TCP Symbiosis requires the available bandwidth to be known and uses an inline network measurement technique to do so [5-5-1]. This is a new traffic measurement technique that uses burst transmissions within the TCP window size for measuring available bandwidth while sending TCP data. A logistic curve is not used as a simple analogy from mathematical ecology but rather because a great deal of related research concerning the Lotka-Volterra competition model has proved its effectiveness in TCP congestion control. First of all, a network can be thought of as providing its resources to an unspecified large number of users who basically are in a competitive relationship. In particular, the end-to-end principle suggests that mediation of resource contention according to controls within the network should be avoided. However, in a state of unrestricted competition, resources are clearly squandered by excessive competition even in the TCP example. TCP Symbiosis is a solution to smoothly implement a coexistent relationship among users. Consider TCP Symbiosis when there are two connections. This can be represented by the following equations:
\[ \frac{dN_1(t)}{dt} = \varepsilon \left( 1 - \frac{N_1(t) + \gamma N_2(t)}{K} \right) N_1(t) \]
\[ \frac{dN_2(t)}{dt} = \varepsilon \left( 1 - \frac{N_2(t) + \gamma N_1(t)}{K} \right) N_2(t) \]

where γ represents the decrease in the growth rate due to inter-specific competition. If this were based on mathematical ecology, it would indicate inter-specific competition for a certain resource. However, the following is clearly stated in Reference [5-5-2], for example: the effect of competition within a species is stronger than the effect received from another species with which it is in competition. In other words, stability occurs when the self-suppression effect of population control within one's own species is greater than the suppression effect received from another species. Even when there is competition, species can coexist only when each competing species has strong internal competition. In other words, a relationship that can also be referred to as competitive symbiosis holds. Another important point is that fairness must be guaranteed. Since implementation of the end-to-end principle does not allow intervention by a mediator, fair control must be included in the protocol between end terminals; in other words, a mechanism for guaranteeing fairness between connections must be included in TCP.


TCP increases the window size each time it receives an ACK (confirmation of delivery between end terminals). As a result, differences occur in the method of increasing the window size according to differences in the RTT (round trip time) between end terminals, and throughput is determined depending on the RTT. With TCP Symbiosis, if the rate of change of the window size is normalized with respect to the RTT, fair bandwidth allotment can be expected regardless of differences in the RTT. The effectiveness of TCP Symbiosis is shown in [5-5-3]. Importantly, properties of the proposed method such as stability, extendibility, and parameter characteristics are being clarified using mathematical analysis techniques that extend past research on the Lotka-Volterra competition model.
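The two-connection model above can be checked numerically with a few lines of code. The sketch below integrates the competition equations with a simple Euler scheme; the values ε = 1.0, γ = 0.9, K = 100, the step size, and the initial window sizes are illustrative assumptions, not parameters prescribed by TCP Symbiosis.

# Euler integration of the two-connection Lotka-Volterra competition model.
# With gamma < 1 (intra-species competition stronger than inter-species
# competition), both window sizes converge to the same fair share K / (1 + gamma).
def simulate(eps=1.0, gamma=0.9, K=100.0, dt=0.01, steps=5000):
    n1, n2 = 1.0, 30.0                      # deliberately unequal starting windows
    for _ in range(steps):
        d1 = eps * (1 - (n1 + gamma * n2) / K) * n1
        d2 = eps * (1 - (n2 + gamma * n1) / K) * n2
        n1 += d1 * dt
        n2 += d2 * dt
    return n1, n2

n1, n2 = simulate()
print(f"N1 = {n1:.2f}, N2 = {n2:.2f}")      # both approach 100 / 1.9, about 52.6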

References
[5-5-1] C.L.T. Man, G. Hasegawa, and M. Murata. An Inline Measurement Method for Capacity of End-to-End Network Path, Proceedings of the 3rd IEEE/IFIP Workshop on End-to-End Monitoring Techniques and Services (E2EMON 2005), May 2005.
[5-5-2] Ei Teramoto. Mathematical Ecology, February 1997 (in Japanese).
[5-5-3] G. Hasegawa and M. Murata. TCP Symbiosis: Congestion Control Mechanisms of TCP Based on Lotka-Volterra Competition Model, Proceedings of the Workshop on Interdisciplinary Systems Approach in Performance Evaluation and Design of Computer & Communications Systems (Inter-Perf 2006), CD-ROM, October 2006.

5.6. Addressing and Routing [Teraoka]

5.6.1. Internet Addressing Architecture


Fig. 5.6.1 shows the addressing architecture in the Internet. Each node in the Internet is identified by a fully qualified domain name (FQDN). An FQDN describes all hierarchical labels (strings) from the root (".") in the hierarchical structure of the domain name system (DNS) [5-6-1, 5-6-2]. For example, "www2.nict.go.jp" is an example of an FQDN. Since the DNS naming hierarchy is a single tree structure, an FQDN is unique in the Internet. Also, since the DNS naming hierarchy is a logical structure, the FQDN of a certain node does not depend on the connection (or topological) location of the corresponding node within the Internet. Therefore, the FQDN can be called a node identifier. An FQDN is converted to an IP address by a name server. An IP address is assigned to a node interface by IPv4, which has a 32-bit space, or IPv6, which has a 128-bit space. Since a node that has multiple interfaces has multiple IP addresses, the relationship between an FQDN and IP addresses is generally 1-to-N. IPv4 addresses [5-6-3] and IPv6 addresses [5-6-4] both consist of a network prefix and an interface identifier. The terms used in IPv4 and IPv6 differ slightly; we will use IPv6 terms in the subsequent discussion. The network prefix is a number for identifying the subnet to which the node is connected uniquely within the Internet, and the interface identifier is a number for identifying the interface uniquely within the subnet. Therefore, the IP address identifies the node interface uniquely within the Internet. Since the network prefix of an IP address

changes if the node moves to another subnet, the IP address of a certain node (interface) depends on the connection location of that node within the Internet. Therefore, the IP address can be called a node (interface) locator. To use an Internet application, the user specifies the target node by using the FQDN, which is a character string. The application accesses a name server to convert the FQDN to an IP address. In this way, the application handles the IP address, which is a locator, as a node identifier. Let us consider the case when the application communicates by using TCP as the transport layer protocol. The application establishes a TCP connection between sockets at its own node and the destination node. A socket is a set consisting of an IP address and port number. The TCP connection is identified by a set of four pieces of information consisting of the IP address and port number of the application's own node and the IP address and port number of the destination node. In this way, TCP also handles the IP address as a node identifier. TCP requests packet transmission by indicating the destination node's IP address to IP, which is the network layer protocol. IP forwards the packet based on the network prefix of the destination node's IP address (destination address) to deliver it to the target subnet. Within the target subnet, the packet is delivered to the target node based on the interface identifier of the target IP address. In this way, IP forwards the packet based on a locator.
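The role of the IP address as both identifier and locator can be seen directly at the socket interface. The short sketch below, using Python's standard socket library (the host name is only an example), resolves an FQDN to an IP address and prints the 4-tuple that identifies the resulting TCP connection.

import socket

# The application names the peer by an FQDN; the DNS maps it to an IP address,
# and from that point on the transport and network layers use only the address.
fqdn = "www.example.org"          # example host name; substitute any resolvable FQDN
addr = socket.getaddrinfo(fqdn, 80, proto=socket.IPPROTO_TCP)[0][4][0]

sock = socket.create_connection((addr, 80))
local_ip, local_port = sock.getsockname()[:2]
remote_ip, remote_port = sock.getpeername()[:2]

# The TCP connection is identified by this 4-tuple; if either IP address
# changes (mobility, multihoming), the connection can no longer be maintained.
print((local_ip, local_port, remote_ip, remote_port))
sock.close()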
[Figure content: (a) identifier and locator in each layer — in the application layer, the node identifier is the FQDN (e.g., nwgn.nict.go.jp), which is mapped to an IP address (e.g., 133.243.3.35); in the transport layer, a node is specified by the IP address used as a node identifier; in the network layer, a packet is routed based on the IP address used as an interface locator. (b) structure and meaning of the IP address — an IP address consists of a network prefix and an interface identifier.]

Fig. 5.6.1. Internet Addressing Architecture


5.6.2. Internet Addressing Problems


As described in the previous section, although the IP address is handled as a node identifier in the application layer and transport layer, the IP address essentially is a locator that depends on the connection position of the node within the Internet. In other words, the IP address has two meanings in the Internet, as an identifier and as a locator. This problem is referred to as the "duality of the IP address." The duality of the IP address causes the following kinds of problems regarding mobility, multihoming, and security [5-6-5].

Mobility Problem
When a node moves from the subnet where it was originally connected (the home link) to another subnet (an external link), its IP address changes. Fig. 5.6.2.1 shows a situation in which this node's IP address changes from IP-A to IP-B when the node moves. If the user attempts to communicate by using the FQDN to specify this node, the packets will end up being delivered to the home link, and the user will ultimately be unable to communicate because the FQDN is converted to the IP address that this node had when it was connected to the home link (IP-A). Similarly, if the node moves during communications, packets will end up being routed to the connection point before the move (IP-A), and communications cannot continue. Even if packets are forwarded to the connection point after the move, the TCP connection cannot be maintained after the node moves because the TCP connection is identified by the IP addresses of both ends.
Fig. 5.6.2.1. Problems on Mobility

Multihoming Problem
Multihoming is a concept in which a node or site is connected to the global network by multiple routes. Fig. 5.6.2.2 (a) shows a situation in which a node is connected to the global Internet through multiple interfaces that each use separate routes. This is called node multihoming. On the other hand, Fig. 5.6.2.2 (b) shows a situation in which a certain organization is connected to the global Internet via multiple access networks. This is called site multihoming. Multihoming provides the benefits of fault tolerance whereby communications can continue by using another route even if a failure occurs on one route, route selection whereby routes can be used for different purposes according to communication properties, and load distribution whereby load concentration on a specific route can be prevented. In Fig. 5.6.2.2 (a), let us assume that the multihomed node is using interface-A to establish a TCP connection. At this time, let us assume that the source address of packets sent by this node is IP-A. Let us consider the case in which

communications continue by switching to interface-B because a failure occurs in the route using interface-A during communications. Once this switch is made, the source address of the packets sent by this node becomes IP-B. Since the TCP connection is identified by the IP addresses and port numbers of both nodes, when the IP address changes as described in the above fault tolerance-related example, the TCP connection can no longer be maintained.

[Figure content: (a) node multi-homing — a node is connected to the global Internet through both ISP-A (Prefix-A) and ISP-B (Prefix-B); (b) site multi-homing — Site-A is connected to the global Internet via ISP-A (Prefix-A) and ISP-B (Prefix-B).]


Fig. 5.6.2.2. Multi-homing connection

Security Problem
On the Internet, IPsec [5-6-6] is provided as a security protocol in the network layer. IPsec implements source node authentication, packet tampering prevention, and packet encryption. When IPsec is used, a relationship called a security association (SA) is established by negotiating encryption algorithms and keys that are to be used between both nodes in advance. An SA is identified by the destination node's IP address and a number called the security parameter index (SPI). Therefore, if the destination node moves to another subnet during communications using IPsec, the SA must be reestablished or updated because the destination node's IP address has changed.

5.6.3. Addressing Architecture in AKARI


As described earlier, the Internet has various problems that are caused by the duality of the IP address. The simplest effective method of solving these kinds of problems is to separate identifiers and locators [5-6-5]. Assume that an identifier is information that is not dependent on the node's connection point within the network. An identifier, which is unique in the entire network, is assigned to a node. For the user, the representation format of the identifier should be a string such as an FQDN. However, within protocol processing, the identifier should be a fixed-length numerical value. On the other hand, a locator is information representing the node's connection point within the network. A


locator, which is unique in the entire network, is assigned to an interface. Since AKARI is a network architecture for large-scale networks, a locator should have a hierarchical structure if scalability is taken into consideration. As Fig. 5.6.3 shows, a node is identified by an identifier from the application layer to the transport layer. When a packet is sent, the source identifier and destination identifier are converted to locators when they are passed from the transport layer to the network layer, and the network layer forwards packets based on the destination locator. When a packet is received, the source locator and destination locator are converted to identifiers when they are passed from the network layer to the transport layer, and the transport layer and application layer identify the communication destination according to the source identifier. By separating identifiers and locators in this way, the abovementioned problems concerning mobility, multihoming, and security are solved as described below.
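A minimal sketch of the identifier/locator separation described above is shown below. It assumes a simple in-memory mapping table and hypothetical names (ID-A, Loc-A, Loc-B); it illustrates the translation performed at the transport/network boundary, not the actual AKARI mechanism.

# Identifier/locator separation in miniature: the transport layer addresses
# peers by identifier; the network layer converts identifiers to locators on
# send and locators back to identifiers on receive.
id_to_loc = {"ID-A": "Loc-A"}     # maintained by a registration/resolution system
loc_to_id = {"Loc-A": "ID-A"}

def send(dst_identifier, payload):
    # Transport hands down an identifier; packets are routed on the locator.
    return id_to_loc[dst_identifier], payload

def receive(src_locator, payload):
    # The network layer hands up an identifier, whatever locator was used.
    return loc_to_id[src_locator], payload

# Node ID-A moves to another subnet: only the mapping changes; the identifier,
# and hence any transport session keyed on it, stays the same.
id_to_loc["ID-A"] = "Loc-B"
loc_to_id["Loc-B"] = "ID-A"
print(send("ID-A", b"hello"))      # ('Loc-B', b'hello')
print(receive("Loc-B", b"hello"))  # ('ID-A', b'hello')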
[Figure content: (a) identifier and locator in each layer — in the application layer, the FQDN (character string) is mapped to an identifier (bit string); in the transport layer, a node is specified by the identifier; in the network layer, the identifier is mapped to a locator and a packet is routed based on the locator. (b) meaning of identifier and locator — the identifier uniquely identifies the node, and the locator uniquely identifies the interface.]


Fig. 5.6.3. Addressing architecture of AKARI

Mobility
Consider a situation in which the node identified by Identifier-A moves from the home link to an external link. Assume that Locator-A is assigned to this node at the home link and Locator-B is assigned at the external link. Even if the locator changes because the node moves, since Locator-A and Locator-B are both converted to Identifier-A at the


destination node, communications can continue in the transport layer and application layer after the move.

Multihoming
Assume that a multihomed node had been communicating by using the Locator-A route. Consider a situation in which a failure occurs on this route and this node attempts to continue communicating by using the Locator-B route. Although the source locator of packets sent by this node changes from Locator-A to Locator-B after the failure, both Locator-A and Locator-B are converted to the same identifier at the destination node when the packets are passed from the network layer to the transport layer. Therefore, communications can continue in the transport layer and application layer after the route change.

Security
Assume that a security protocol such as IPsec is also used in the network layer in AKARI. The IPsec SA is established by using the identifier. Even if the locator changes in a mobility or multihoming environment, the IPsec SA can be maintained after the move or route change since the identifier is unchanged.

References
[5-6-1] P.V. Mockapetris. Domain Names - Concepts and Facilities, RFC 1034, November 1987.
[5-6-2] P.V. Mockapetris. Domain Names - Implementation and Specification, RFC 1035, November 1987.
[5-6-3] J. Postel. Internet Protocol, RFC 791, September 1981.
[5-6-4] R. Hinden and S. Deering. IP Version 6 Addressing Architecture, RFC 4291, February 2006.
[5-6-5] Masahiro Ishiyama, Mitsunobu Kunishi, Keisuke Uehara, Hiroshi Esaki, and Fumio Teraoka. LINA: A New Approach to Mobility Support in Wide Area Networks, IEICE Transactions on Communications, Vol. E84-B, No. 8, pp. 2076-2086, August 2001.
[5-6-6] S. Kent and K. Seo. Security Architecture for the Internet Protocol, RFC 4301, December 2005.

5.7. Layering [Teraoka]


In a protocol layering architecture, the layer (N) abstracts the layers below it and provides (N)-service to layer (N+1) via the (N)-SAP (service access point). Layer (N) also uses the (N-1)-service provided by layer (N-1) via the (N-1)-SAP. In a layering architecture, lower layer functions are abstracted for upper layers in this way, and as long as the interfaces between layers are followed, layer independence is achieved so that no problem will occur even if the method of implementing a layer's functions is changed.


5.7.1. Conventional Layering Architecture Problems


Although the ability to design each layer independently by obeying a layering architecture is an advantage, negative consequences may also result in a network where the environment dynamically changes significantly. An example can be seen in a mobility environment. Fig. 5.7.1.1 shows the handover procedure in IPv6. In this example, a mobile node (MN) is connected to an access router (AR) by a wireless LAN based on IEEE 802.11. The access router is located in the Internet at a boundary between wired and wireless parts.

Fig. 5.7.1.1. Handover process in IPv6

Handover processing is performed in the following order.
1. When the link layer (L2) detects a deterioration of communication quality, it scans the available wireless channels to determine the AR that the mobile node should connect to after the handover. The wireless channel scan requires an interval on the order of seconds.
2. By switching the channel that it will use, L2 switches the AR to which it is connected (end of L2 handover processing). Switching the wireless channel takes approximately several milliseconds.
3. The network layer (L3) awaits the reception of a router advertisement (RA) message sent from the AR. In the IPv6 neighbor discovery specification [5-7-1], the minimum RA interval is 3 seconds; in the Mobile IPv6 specification [5-7-2], the minimum RA interval is 30 milliseconds.
4. By receiving the RA message, L3 learns that the L2 handover occurred. It then generates a new IPv6 address and executes duplicate address detection (DAD). DAD processing takes several seconds.
5. If the address is not a duplicate, L3 sends a signaling message to the location server and receives its confirmation response (end of L3 handover).
In the handover processing described above, the mobile node is unable to communicate from the beginning of step (1) until the end of step (5). If the link layer and network layer were organically linked, handover processing could be performed faster as follows (see Fig. 5.7.1.2).



Fig. 5.7.1.2. Handover process based on cross-layer architecture

1. L2 regularly scans only the wireless channels that are being used by neighboring ARs and selects a candidate handover destination AR in advance.
2. If the communication quality deteriorates to a certain degree, L2 asynchronously reports this to L3.
3. L3 begins the handover preparation process as soon as it gets information related to the new AR from L2. It generates the post-handover IPv6 address and finishes duplicate address detection.
4. L3 directs L2 to start execution of the L2 handover.
5. L2 executes the L2 handover process by switching to the wireless channel that is being used by the AR indicated by L3 (end of L2 handover).
6. When the L2 handover process ends, L2 asynchronously reports this to L3.
7. When L3 receives the end-of-L2-handover report from L2, it executes the signaling process (end of L3 handover).
In the handover process described above, since the incommunicable interval covers only steps (5) and (6), it can be dramatically shortened compared with the conventional handover procedure [5-7-3]. This kind of architecture in which control information is exchanged between layers is generally called a cross-layer architecture.
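The difference in communication disruption can be seen with a rough back-of-the-envelope comparison. The values below are illustrative orders of magnitude taken from the text (channel scan: seconds; channel switch: a few milliseconds; RA wait and DAD: up to several seconds), not measured figures.

# Rough comparison of the incommunicable interval in the two handover
# procedures; all durations are illustrative orders of magnitude in seconds.
conventional = {
    "channel scan": 1.5,
    "channel switch": 0.005,
    "RA wait": 1.5,
    "DAD": 1.0,
    "signaling": 0.05,
}
cross_layer = {
    # Scan, address generation, and DAD are done before switching, so only the
    # switch itself and the handover-completion notification interrupt traffic.
    "channel switch": 0.005,
    "handover-completion notification": 0.001,
}

print(f"conventional disruption: ~{sum(conventional.values()):.2f} s")
print(f"cross-layer disruption:  ~{sum(cross_layer.values()):.3f} s")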

5.7.2. Cross-Layer Architecture in AKARI


In the previous section, we used handover processing as an example to describe the advantages of a cross-layer architecture in which the link layer (L2) and network layer (L3) are naturally linked. However, a cross-layer architecture does not only link L2 and L3 and does not only have advantages in a mobile environment. The following kind of general-purpose cross-layer architecture is introduced in AKARI (see Fig. 5.7.2.1) [5-7-3]. First, it assumes that control information can be exchanged between any layers rather than only between adjacent upper and lower layers. For example, TCP, which is the transport layer protocol, can use link layer information,


or the application protocol can use network layer information. This architecture takes into consideration the exchange of control information between any layers and introduces the Inter-Layer System (ILS), which passes vertically through each layer (see Fig. 5.7.2.1). Control information of each layer is exchanged via the ILS between any layers.


[Figure content: layers (N) through (N+m+1), each containing a protocol entity (PE) and an abstract entity (AE), are connected by the Inter-Layer System, which passes vertically through all layers and carries the four primitives (1) Request, (2) Confirm, (3) Indication, and (4) Response.]

Fig. 5.7.2.1. Cross-layer architecture of AKARI

If the control information that is exchanged between layers were specific to each protocol or device that is used, then each time a new protocol or device was added, the existing system would have to be adapted to it, and system maintenance management would be inefficient. Therefore, in the AKARI cross-layer architecture, the control information that is exchanged between layers is assumed to be abstracted information that does not depend on the protocol or device. In the OSI reference model, a protocol entity (PE) that executes protocol processing exists in each layer. To abstract control information, the AKARI cross-layer architecture introduces an abstract entity (AE) in addition to the PE (see Fig. 5.7.2.1). There is a one-to-one correspondence between PEs and AEs. To send control information to another layer, the AE abstracts PE-specific information, and to receive control information from another layer, the AE converts the abstracted information to PE-specific information. Next, let us consider interactions between protocol layers. Providing the following three interaction types is considered sufficient for natural interactions between protocol layers.
(1) A layer issues a request for acquiring information from another layer (information acquisition interaction)
(2) A layer notifies another layer of the occurrence of an asynchronous event (event notification interaction)
(3) A layer directs another layer to perform an action (action directive interaction)
An information acquisition interaction is used to get control information from a certain layer. For example, for the link layer, it is used to get information about the link layer protocol, the media type (such as 1Gbps Ethernet or IEEE 802.11b wireless LAN), or the current communication quality. An event notification interaction is used to notify another layer of an event that occurs asynchronously in a certain layer. For example, it might be used by the link layer to report that the connection to an access point was lost during wireless LAN communication. An action directive interaction is used to perform a specific action in a certain layer. For example, it might be used to direct the link layer to switch to a specific wireless channel. The following four primitives are introduced for the above interactions (see Fig. 5.7.2.1).
(1) Request: a request conveyed by a certain layer to another layer
(2) Confirm: a confirmation response to a request
(3) Indication: notification of an asynchronous event by a certain layer to another layer
(4) Response: a confirmation response to an indication
Information acquisition, event notification, and action directive interactions are implemented as follows by using these primitives, as shown in Fig. 5.7.2.2.

Fig. 5.7.2.2. Primitives: (a) information request type, (b) event notification type, (c) action request type

Information acquisition type: request → confirm
Event notification type: request → confirm, (event occurrence), indication → response
Action directive type: request → confirm
For example, assume that L2-LinkType is defined as a primitive for getting the link layer protocol or media type. For the network layer to get the link layer protocol or media information, the network layer sends L2-LinkType.request via the ILS to the link layer. Parameters such as the target network interface are stored in this primitive. In response to this request, the link layer stores the requested information in L2-LinkType.confirm and


returns it to the network layer via the ILS. Next, when the link layer is a wireless LAN, assume that L2-LinkUp is defined as a primitive for conveying a notification that the connection to an access point is completed. To get this event notification, the network layer first sends L2-LinkUp.request to the link layer to register an event notification in advance. In response to this request, the link layer returns L2-LinkUp.confirm to convey to the network layer that it received the request. When the event (connection to an access point) actually occurs later, the link layer conveys L2-LinkUp.indication to the network layer. When the network layer receives this indication, it returns L2-LinkUp.response to the link layer as a confirmation response. Finally, assume that L2-LinkConnect is defined as a primitive for causing the link layer to switch the wireless LAN channel. To direct the link layer to perform this action, the network layer sends L2-LinkConnect.request to the link layer. In response to this request, the link layer returns L2-LinkConnect.confirm to the network layer to convey that it received the request.
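The primitive exchanges described above can be sketched in a few lines of code. The example below models the hypothetical L2-LinkUp primitive (as named in the text) with a tiny publish/subscribe structure; it illustrates the Request/Confirm/Indication/Response pattern only and is not an AKARI implementation.

# Minimal sketch of the four ILS primitives using the L2-LinkUp example:
# request registers interest, confirm acknowledges it, indication reports the
# asynchronous event, and response acknowledges the indication.
class LinkLayer:
    def __init__(self):
        self.subscribers = []

    def request(self, primitive, subscriber):
        if primitive == "L2-LinkUp":
            self.subscribers.append(subscriber)     # register the event notification
            return "L2-LinkUp.confirm"
        raise ValueError(f"unknown primitive: {primitive}")

    def link_came_up(self):
        # Asynchronous event: deliver an indication to every registered layer.
        for layer in self.subscribers:
            assert layer.indication("L2-LinkUp") == "L2-LinkUp.response"

class NetworkLayer:
    def indication(self, primitive):
        print(f"L3 received {primitive}.indication")
        return f"{primitive}.response"

l2, l3 = LinkLayer(), NetworkLayer()
print(l2.request("L2-LinkUp", l3))   # L2-LinkUp.confirm
l2.link_came_up()                    # indication delivered to L3, response returned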

References
[5-7-1] T. Narten, E. Nordmark, and W. Simpson. Neighbor Discovery for IP Version 6 (IPv6), RFC 2461, December 1998.
[5-7-2] D. Johnson, C. Perkins, and J. Arkko. Mobility Support in IPv6, RFC 3775, June 2004.
[5-7-3] Kazutaka Gogo, Rie Shibui, and Fumio Teraoka. An L3-driven Fast Handover Mechanism in IPv6 Mobility, Proceedings of SAINT 2006, IPv6 Workshop, January 2006.

5.8. Security [Teraoka]

5.8.1. AAA: Authentication, Authorization, and Accounting


As with the current Internet, the new generation network is considered to be a distributed management type network in which many management domains are interconnected, rather than a centrally managed network. A management domain is a range of networks that are managed under the same management policy, such as an Internet service provider (ISP), a corporate network, or a campus network. In this kind of multi-management-domain configuration, a user normally enters into a contract with one management domain (for example, ISP-A) to access the global network via ISP-A. In the new generation network, the region in which one management domain provides network access services will also be limited, and worldwide network access services will be implemented through cooperative service contracts concluded between management domains. When a user requests a network access service, the ISP wants to confirm who the user is and what kind of rights he or she has, and wants to determine the amount of resources that are used. On the other hand, the user wants to confirm that the communication lines currently being accessed (such as a wireless LAN) are provided by a legitimate ISP. The functions for answering these questions are referred to as Authentication, Authorization, and Accounting (AAA). Even in the new generation network, AAA functions will be required with respect to various services, not just network access services.


AAA Architecture Designed by the IETF


Fig. 5.8.1 shows the AAA architecture designed by the IETF. The IETF AAA architecture is broadly divided into a front end and a back end. The front end protocol stipulates communication between the end user and the network's (ISP's) AAA servers, and the back end protocol stipulates communication between the AAA servers of each ISP. The Protocol for Carrying Authentication for Network Access (PANA) [5-8-1] is being standardized as the front end protocol and may become a proposed standard in 2007. The Diameter Base Protocol [5-8-2] has become a proposed standard for the back end protocol. In Fig. 5.8.1, a user who has entered into a contract with ISP-B is attempting to access the Internet via ISP-A. ISP-A and ISP-B are assumed here to have concluded contracts concerning user roaming. The PANA client (PaC) is running on the user's computer. The PaC requests authentication and delegation of authority from the PANA authentication agent (PAA) of ISP-A. Since the PAA of ISP-A does not have this user's authentication and rights information, it issues a processing request to the AAA back end. Specifically, it transmits an authentication and delegation of authority request to the Diameter client AAAc. The AAAc of ISP-A transfers the authentication and delegation of authority request to the Diameter server AAAh of ISP-B, with whom this user has entered into a contract, via the Diameter relay server AAAr of its own ISP. AAAh confirms the user's authentication and rights and returns an authentication and delegation of authority response. The authentication and delegation of authority response is transferred to the PaC by traveling in the reverse direction along the transfer path of the authentication request. If this user is confirmed to be a proper user and to have the right to use ISP-A, then this user will be able to access the Internet via ISP-A.

Fig. 5.8.1. AAA architecture in IETF


AKARI AAA Architecture


Consider an AKARI AAA architecture that assumes that the new generation network is a distributed management wide-area network based on the interconnection of a large number of management domains and that each user is registered in one management domain (or a small number of management domains) of his or her own choice. In this case, each user's registration information will be managed in a distributed manner by the individual contracting ISPs. In other words, the management configuration of the entire new generation network and the relationships between users and network operators will probably be very similar to the current ones. The new generation network will probably be operated using a cooperative distributed management system that excludes centralized management. If so, an AAA architecture like the one designed by the IETF will also be appropriate in the new generation network.
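A minimal sketch of the relay idea described above is shown below: the visited domain extracts the user's home realm from the identity (user@realm) and relays the authentication request toward that realm's home server. The realm table, identities, and return strings are hypothetical, and this illustrates only the routing of requests between domains, not the Diameter or PANA protocols.

# Toy relay of an authentication request across management domains:
# AAAc (visited ISP) -> AAAr (relay) -> AAAh (user's home ISP).
HOME_SERVERS = {"isp-b.example": "AAAh@ISP-B"}     # hypothetical realm table

def authenticate(visited_isp, user_identity):
    user, realm = user_identity.split("@", 1)
    if realm not in HOME_SERVERS:
        return "reject: unknown home realm"
    home_server = HOME_SERVERS[realm]
    # The home server confirms the user's credentials and rights; the response
    # travels back along the same path to the visited domain.
    return f"accept: {user} authenticated by {home_server}, roaming granted in {visited_isp}"

print(authenticate("ISP-A", "alice@isp-b.example"))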

References
[5-8-1] D. Forsberg, Y. Ohba, B. Patil, H. Tschofenig, and A. Yegin. Protocol for Carrying Authentication for Network Access (PANA), Internet Draft (work in progress), March 2007.
[5-8-2] P. Calhoun, J. Loughney, E. Guttman, G. Zorn, and J. Arkko. Diameter Base Protocol, RFC 3588, September 2003.

5.9. QoS Routing [Ohta]

5.9.1. QoS Routing Problems


QoS routing is a routing scheme that advertises the state of properties such as the currently available bandwidth or delay on each link and searches for the optimum route that satisfies QoS conditions concerning bandwidth and delay. However, various problems are known to exist concerning the following items, and these problems are closely interrelated.
(1) Inability to aggregate routes
(2) Route oscillation
(3) NP completeness of optimum route calculation
(4) Advertisement scalability
(5) Drop in reliability of routing information when a hierarchy of routing information is created
(6) Loss of routing information for destination locality when a hierarchy of routing information is created
(7) Optimum selection of inter-domain routes
(8) Combinations of resource constraints with multicasting
First, problem (1) concerns the inability to aggregate routes. When the terminals in a certain address range are distributed in locations that are topologically close to each other, a hierarchy of routing information can be created for best-effort packet routing, and, when viewed from locations far away, the same route is applicable to all terminals in that address range.


However, the route for QoS routing is not determined using only addresses. Generally, various parameters such as bandwidth, delay, or price are used for the QoS conditions in QoS routing of communications, and since the route that is to be selected differs according to these QoS conditions, the route must be calculated for each individual communication. For example, even communications having identical QoS conditions will not necessarily take the same route, since the communication that reserved a route later may not be able to use the same route because the communication that reserved that route earlier used bandwidth resources. Even if the routes that are to be used by multiple communications unexpectedly happen to be identical as a result of the route calculations, the routes cannot easily be aggregated since there is also a possibility that different routes will be used because of later changes in the environment. For example, even if the routes were aggregated, the route signaling message must generally transport individual communication port numbers or different QoS conditions for each communication, and since the signaling message length or processing time is proportional to the number of communications, no reduction is achieved with respect to the order of the computational complexity. When the amount of routing information in a large network is enormous and a hierarchy is created, even if multiple communications appear to travel along the same route from a higher level, different routes may be taken internally because of problem (5). In other words, when QoS routing is performed, the routes of individual communications may not be able to be aggregated, and even if they are forcibly aggregated, the order of the computational complexity or routing table entries will not decrease. QoS routing must be performed for each individual communication, and each individual communication must also have a routing table. This may also be a problem with respect to scalability.

For problem (2), which concerns route oscillation, let us consider bandwidth, for example. Assume that the current available bandwidth of each link is advertised. If a communication uses a link, the remaining bandwidth of that link decreases, and in some cases, falls below the bandwidth required by that communication. In this case, if the route for that communication is recalculated for some reason (such as a detour because an intermediate link failed), a situation will occur in which that link is judged to be unavailable and either a separate route will be selected or no route will be found. If, as a method of avoiding this problem, the amount of resources that are being used by each communication is advertised, not only will problem (4) worsen but the amount of information will no longer be able to be reduced by creating a hierarchy, since each communication is advertised individually. As another method of avoiding this problem, if the route is determined according to centralized calculations by a PCE, sender, or receiver, the self-reserved route and its effect can be tentatively known at the route-determination site. However, if a hierarchy of routing information is created, the effect of self-reservation on the internal state will no longer be known because of problem (5) or (6) (Fig. 5.9.1.1). In Fig. 5.9.1.1 (a) and (b), as a result of the reservation of a bandwidth of 5, an available bandwidth of 10 is advertised when information is simplified at a higher level of the hierarchy. However, because of the internal state, the original bandwidth of 15, which is the case when the bandwidth of 5 is not reserved, is inconsistent with the advertised bandwidth of 10. In other words, the original available bandwidth when a bandwidth of 5 is not reserved is not known from the information that is obtained at the higher level of the hierarchy, and oscillation can no longer be prevented even by performing centralized calculations. Although there is also a method in which the route is not recalculated, this is just a functional limitation, and it is unlikely that a


user will agree to continue to use a high-cost route when a less expensive route becomes available under a volume-charge accounting system.
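The ambiguity illustrated in Fig. 5.9.1.1 can be reduced to a few lines of arithmetic. In the sketch below, a domain advertises only the bottleneck (minimum) bandwidth of the path through it; the link values and the min() aggregation rule are assumptions made for this illustration, following the numbers used in the figure.

# Two internally different situations produce the same aggregated advertisement
# after a reservation of 5, so the value "without my own reservation" can no
# longer be recovered from the advertisement alone.
def advertised(link_bandwidths):
    return min(link_bandwidths)            # simple bottleneck aggregation

# Case (a): the reservation of 5 sits on the bottleneck link (15 -> 10).
case_a_now     = advertised([20, 15 - 5])  # advertised: 10
case_a_without = advertised([20, 15])      # would be 15 without the reservation

# Case (b): the reservation of 5 sits on a non-bottleneck link (20 -> 15).
case_b_now     = advertised([20 - 5, 10])  # advertised: 10
case_b_without = advertised([20, 10])      # still 10 without the reservation

print(case_a_now, case_a_without)          # 10 15
print(case_b_now, case_b_without)          # 10 10
# Both cases advertise 10, yet releasing the reservation restores 15 in one
# case and only 10 in the other -- the ambiguity that leads to oscillation.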
[Figure content: (a) example in which the simplified (aggregated) available bandwidth changes because of a new reservation of a bandwidth of 5; (b) example in which the simplified available bandwidth does not change because of a new reservation of a bandwidth of 5.]

Fig. 5.9.1.1. Loss of Internal Available Bandwidth Information Due to Simplification Accompanying the Creation of a Hierarchy

Problem (3) concerns the NP completeness of the optimum route calculation. When there are multiple additive constraints on QoS that are added each time a link or router is traversed, such as price or delay, the calculation of a route that simultaneously keeps these constraints below a certain value or minimizes one constraint while holding another constraint at a certain value is NP complete, which is a degree of difficulty in which a polynomial time solution method with respect to the number of links or routers cannot be found. To avoid this problem, it is necessary to reduce the number of links or routers within each layer by creating an aggressive hierarchy of routing information or to minimize the sum of appropriate functions of price and delay.

Problem (4) concerns the increase in routing information that flows because the network scale increases. To prevent this, it is necessary to aggregate the routing information of a network of a certain scale and create a hierarchy of routing information

so that it appears to be a simpler topology from the outside. In addition, when the network scale increases, a multi-layer hierarchy must be created. However, creating a hierarchy of routing information will cause problem (5) or (6) to occur. Problem (5) occurs when a hierarchy of routing information is created. Information is lost because the information of lower layers is aggregated and the resulting information may no longer be reliable. If routing information is not reliable and a situation actually occurs in which QoS can no longer be satisfied during signaling, route reservations will fail and the failures will repeatedly occur even when retries are attempted. Although a method called crankback, which remembers the point of failure and avoids it, can be used, if crankback is repeated multiple times, processing will become extremely complicated, signaling time may also increase boundlessly, and the route that is found will not necessarily be the optimum one. An increase in signaling time also becomes a significant impediment for dynamic route recalculation. Problem (6) also occurs when a hierarchy of routing information is created. However, for the transit QoS when traversing a certain area, routing information can generally be aggregated to a certain degree of accuracy even though problem (5) occurs. Also, the receiving end can obtain routing information of all layers in its immediate vicinity, and the sending end can similarly obtain routing information of all layers in its immediate vicinity. However, only the outermost layer information of the routing information in the vicinity of the sending end can be seen from the receiving end. Therefore, although various routes from the receiving end to the outermost layer of the sending end can be calculated, the aggregated routing information that the receiving end can obtain does not enable it to know whether there is actually a route among them that can reach the sending end while satisfying the QoS conditions, and if there are multiple candidates, which among them is the optimum route (Fig. 5.9.1.2). For the hierarchy shown in Fig. 5.9.1.2, when routes are calculated from the receiver to the sender, since there exists a route to the sender that satisfies QoS conditions from point B while the QoS conditions are not satisfied at point C, which is ahead of point A, the receiver should select a route that uses point B. However, even though the receiver receives route information advertisements of each layer around itself and knows that the route from the receiver to either point A or point B satisfies QoS conditions, it does not know whether or not the route from point A or from point B to the sender satisfies QoS conditions unless it has routing information that is advertised in the immediate vicinity of the sender. Nevertheless, information for the vicinity of the sender cannot be advertised in the vicinity of the receiver because of the creation of hierarchies. Although crankback can deal with this problem for the time being, this is certainly not a satisfactory solution.



Fig. 5.9.1.2. Routing Information Hierarchies

Problem (7) concerns the method of calculating the optimum route among multiple routes for which different intermediate ISPs (or communications carriers) can be selected when a volume-charge accounting system is used. For a flat-rate accounting system, route selection may be left to the ISPs since the optimum route for ISPs is the lowest cost route. However, for a volume-charge accounting system, the optimum route for the ISP is the route providing the maximum profit, and this route is usually the one with the highest cost to the user.

Problem (8) concerns combinations of resource reservations with multicasting. When multicasting is performed, since the routes to the receiving ends, which may be extremely numerous, are impossible to calculate from the sending end, the routes are calculated at the receiving ends, and routes towards the sending end are merged. However, if the route selection policies differ at each receiving end, a tree with the sending end as the root may not be well formed, and loops may be produced. Since many multicast routing methods cannot perform inter-domain routing well, inter-domain QoS routing like in problem (7) seems hopeless.

5.9.2. Elimination of QoS Routing Problems


Various types of problems are associated with QoS routing as described above. Since these problems are closely interrelated and cannot be brought under control by attempting to deal with individual problems sequentially, measures for solving them all simultaneously are required. The following points must be recognized to accomplish this.
(1) Route aggregation for QoS communication is impossible
(2) Multicast routing includes QoS routing
(3) Exaggerated advertisement by ISPs must not be allowed
(4) QoS routing information required for each communication can be carried by the signaling message of that communication
(5) Route selection is entrusted to the user


First, as stated in (1), we must clearly recognize that it is impossible to aggregate the routes of different communications in QoS routing. At first glance, this seems to imply that there will be no scalability. However, QoS-guaranteed communications occupy resources. A volume-charge accounting system should be used according to the amount of resources that are occupied and the time those resources are occupied. Network providers can increase the bandwidth or router processing speed according to their revenues. At the same time, network providers should increase route calculation capabilities or routing tables according to their revenues, and no scalability problem will actually occur.

The next point that we must recognize is that route aggregation is also impossible for multicast routing. A multicast receiver generally is a group of terminals rather than an individual terminal, and a multicast address is allocated to a group of terminals. There is no relationship between the closeness of multicast addresses and the similarity of destinations. Even if an attempt is made to allocate similar addresses to similar groups, multicast receivers change dynamically, and a meaningful similarity cannot be defined for group similarity with respect to routing. Therefore, route aggregation is impossible in multicast routing, and an individual routing table entry is occupied for each multicast address. In other words, as stated in (2), multicast communications occupy finite resources of the routing table entries, and it is apparent that at least part of the problem of inter-domain multicast routing is the same as the inter-domain QoS routing problem. Of course, this means that the multicast routing protocol must be unified with the QoS routing protocol, not that existing multicast routing protocols will be able to be used by anyone.

A meaningful situation involving point (3) is preventing network providers from advertising unreachable QoS conditions. For example, if a certain network provider sets the delay or price to zero and the bandwidth to infinity when a hierarchy of routing information is created, many users can be attracted to that network provider, and if that network provider can actually achieve the requested QoS at the requested price, the revenue of that network provider will increase. However, not only will crankback be necessary if the requested QoS cannot be achieved, but even if the requested QoS can be achieved, if there exists another network provider that can achieve the requested QoS at a lower price, it will be the users' loss (that is, the selected network provider's gain). If this situation is neglected in an environment where there is competition between network providers, every provider will advertise that the delay or price is set to zero and the bandwidth is set to infinity to attract customers to itself. As a result, retries will randomly occur while crankback is eventually performed for all paths, and this situation can no longer be characterized as QoS routing. Therefore, if we impose the constraints that when each network provider issues an advertisement, the advertisement must be greater than or equal to the achievable delay or price and less than or equal to the achievable bandwidth of that network provider, then the advertisement will be reliable. If routes are selected according to advertisements, then the reservations will always succeed and crankback will be unnecessary except when the achievable cost or QoS changed because of multiple simultaneous reservations.
If another reservation is made at the same time and a reservation fails, the failed reservation should be retried from the start. However, in hierarchical routing, the information that can be advertised is the QoS information when a relevant provider's network is passed through, and it is impossible to advertise the QoS to all internal destinations. Therefore, if a communication destination


is within a certain provider's network, the route up to the entrance of that provider's network can be calculated while taking QoS into consideration. However, whether or not the QoS conditions can be satisfied beyond that point is unknown, and even if there is a route that satisfies the QoS conditions, the entrance from which the QoS conditions can be satisfied is unknown. To solve this problem without performing crankback, advertisement information from the vicinity of the communication destination should be individually sent to the destination. If this information is sent in advertisements, no hierarchy will be created and the advertisement volume will increase boundlessly. However, no problem will occur if that portion of the information is carried in the signaling messages, as described in (4). Similarly, if the amount of resources that would be available when each reservation does not exist is carried in the signaling message of that reservation while the current amount of resources is advertised, route oscillation can be reduced while holding down the increase in the amount of advertisements.

An idea concerning point (5) is as follows. Although inter-domain route selection is determined according to a policy, the policy must be determined by the user rather than the network provider, in a similar manner to the selection of a long-distance or international carrier for telephones. Although the user must know sufficient routing information in order to determine the policy, since routing information can be reduced according to hierarchical routing based on the concepts discussed for (3) or (4), this will not particularly become a burden for the user. When multicasting is performed, to prevent mismatches with the policies of the receiving sides, the sending side should determine the policy, and the sending side's policy should be transferred to the receiving sides according to signaling messages when necessary. The above methods can be used to eliminate QoS routing problems, and multiple hierarchies can be created to enable QoS routing to operate even in a large-scale network or inter-domain environment.
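The advertisement constraint discussed in point (3) amounts to a simple check, sketched below. The metric names, the example values, and the rule itself are illustrative assumptions made for this example.

# An aggregated advertisement is "honest" if it never promises less delay or
# price, or more bandwidth, than the provider can actually achieve.
def advertisement_is_honest(advertised, achievable):
    return (advertised["delay"] >= achievable["delay"]
            and advertised["price"] >= achievable["price"]
            and advertised["bandwidth"] <= achievable["bandwidth"])

achievable = {"delay": 20, "price": 5, "bandwidth": 100}
print(advertisement_is_honest({"delay": 25, "price": 5, "bandwidth": 80}, achievable))    # True
print(advertisement_is_honest({"delay": 0, "price": 0, "bandwidth": 10**9}, achievable))  # False: exaggerated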

5.10. Network Model [Otsuki, Morioka, Morikawa]


In this section, we will first discuss network models in a general sense and will then take the NGN as an example to consider the kind of NGN that will be possible and what that NGN should be like if AKARI design principles are applied. We will deal with the network model from a standardization viewpoint while also keeping in mind that the AKARI design will be reflected in NGN Release 3 and following releases in the future.

5.10.1. Network Model Definition


A network model is built by extending inputs, outputs (results), and the changes in a network (effects assigned within the network) into schematic diagrams, function descriptions, and component technologies. It varies according to the required conditions of network connections. Network model creation is also referred to as an architectural top-down approach. In addition, by calculating output results for certain inputs by using numerical expressions or by simulating output results for certain inputs, it can also be used to evaluate internally used technologies or designs to a certain degree. It is indispensable for network science, which was described earlier. The OSI seven-layer model is the most widely used abstract model. It treats the network as a layered structure and indicates the functional roles in each layer.

99

5.10.1.1 shows the OSI reference model in which Internet protocols are applied as an example. However, recent network circumstances cannot be explained by a simple layered model. For example, PPP, which had often been used as a technology prior to ADSL also has a protocol stack for the telephone network. This means that the network layer and higher layers of the Internet protocol stack are stacked as applications on top of the telephone network's protocol stack. A similar situation is true for overlay networks. They can be considered as extensions in the vertical direction. However, it is important to keep in mind that lack of support in a lower layer can prevent a service from being provided in an ideal form in a higher layer. On the other hand, even in terminals and controls within a network, which are called routing protocols or signaling, protocols such as DNS or wireless access controls each have separate protocol stacks, which exhibit a horizontal extent that spans multiple networking technologies rather than just networks using the same technology. These can extend horizontally according to the number of applications. Various application services can be provided by combining these protocol stacks appropriately. Cooperation between layers has also been important so far. The reasons why cooperation between various layers is required will be explained later.
[Figure: the OSI reference model with example protocols at each layer: application-level protocols such as HTTP, SMTP, SNMP, FTP, Telnet, NetBIOS, and PAP at layers 5-7; TCP, UDP, SPX, and NetBEUI at the transport layer; IP, ARP, RARP, ICMP, DHCP, and IPX at the network layer; Ethernet, token ring, PPP, and frame relay at the data link layer; and RS-232, telephone lines, UTP, wireless, and optical cable at the physical layer.]

Fig. 5.10.1.1. OSI Reference Model
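As a minimal illustration of using a network model to calculate outputs for given inputs, the following sketch evaluates a single link as an M/M/1 queue. The link speed and offered load are assumptions chosen only for this example; a real evaluation would of course use a model matched to the design under study.

```python
def mm1_mean_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time (s) of an M/M/1 queue: a toy 'network model' that maps
    an input (offered load) to an output (delay) so a design can be evaluated."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Example: a 1 Gbps link serving 1250-byte packets (service rate 100,000 pkt/s)
# carrying 80,000 pkt/s has a mean per-packet delay of 50 microseconds.
print(mm1_mean_delay(80_000, 100_000))  # 5e-05 s
```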

[Figure: protocol stacks extend in the vertical direction (layering) and in the horizontal direction (multiple stacks coexisting across different networking technologies).]

Fig. 5.10.1.2. Protocol Stacks

A network model is a method of representing the network, and user requests must be implemented in the form of applications that include the network. Realistically, requests are implemented as functional parameters. In many cases, however, what is really being requested is the technological design of the network that should be implemented. Although user requests indicate what users want, requests such as a desire for no delays, for always being connected, or for never being cut off are practically impossible to achieve. Moreover, since the representation itself is already in terms of functional parameters, there is a risk that, in seeking a realistic solution, the value received by users will end up being quantified in isolation. Therefore, we propose the AKARI value model, which is based on people, as shown in Fig. 5.10.1.3. All applications are manifested because users (people) exist. Applications or contents exist in the network, and computers or conversational partners may also exist through the network in the traditional way. Even if virtual spaces are implemented, media exist between the network and people, and various kinds of information (including perceptions) are exchanged. Although it is impossible to directly transmit all the perceptions people have by using current technologies, services such as Web 2.0 have already begun to carry part of this exchange through computers.
[Figure: the AKARI value model places people at the center. People exchange voice (hearing), images (sight), text (language), personal information, time, labor, emotions, trust, a sense of security, and a feeling of satisfaction with applications and contents such as phone calls, e-mail, the Web, blogs, animation, music, publications, dictionaries, translation, maps, shopping, auctions, banking, storage, games, and virtual spaces. These applications are realized by implementing the societal and design requirements of Chapter 2 (peta-bps class backbone network, 100 billion devices, essential services, safety and peace of mind, scalability, robustness, evolvability, and so on) on the network infrastructure and by further combining them organically.]

Fig. 5.10.1.3. AKARI Value Model (Proposed)

5.10.2. UNI, NNI, and ANI and Host and Router


UNI and NNI are terms that originally came from the realm of telephone networks. They indicate demarcation points (boundaries) of functions in a network. UNI denotes a user-network interface, and NNI denotes a network-network interface. In a telephone network, end terminals and network nodes clearly differ functionally, and inside the network, control signals and information transmission signals are completely separated. Recently, the demarcation point of responsibility between an Internet terminal and the Internet, or between a home router and the service provider, is also referred to as a UNI. Although a UNI is often used simply to mean the demarcation point between a terminal and the network, and an NNI the demarcation point between networks, they are functionally sets of many interface boundaries. A logical interface structure is required that ranges from the physical signal rules for transmitting information up to the application layers implemented in the network. Also, the more branches the functions cross, the more numerous the types of protocols used for control become. At an NNI, the required signals are stipulated for an information transmission connection that spans different networks. However, even with exactly the same network configuration, if a limitation arises for some reason, practical interconnectivity will be lost. The possible causes range from operational differences to differences in the functions implemented in the network. An ANI, or application-network interface, is an interface through which an application served by the network operates. Its purpose is to enable the network to be controlled by the application when some of the network control functions are exposed. However, since the exposed functions can control the network, it is not realistic to expose all of them, and the freedom of the application is often significantly limited by the tradeoff between freedom and operational stability, which are difficult to reconcile. Even just enabling the user to select an API that provides a predetermined sequence of operations is expected to make a large difference in this freedom. This ANI was stipulated in the NGN that was standardized in autumn 2006. However, whether it is provided to the user or to the application service provider depends on the telecommunications carriers that implement the NGN. A node located at an end is often called a host. A router is a node that provides internetworking in the Internet. Although the functional difference between a host and a router is not very great, from a role standpoint a router has many more network interfaces and is specialized for routing protocol processing. The term host covers more diverse network terminals, and its scope continues to expand to include all devices connected to a network, from large-scale computer systems to PCs, mobile terminals, and sensors. Of course, the functions of these devices tend to differ enormously. The network must not only absorb these differences in terms of scalability, but must also incorporate and support the diverse functions. In the AKARI architecture, the positioning of a node changes according to its role. If we consider a path-packet integrated architecture, the part that should be observed most closely is a host acting as a UNI or end terminal. In other words, it is meaningless for a user (person) to explicitly request path setup or packet switching; the choice should be made according to the conditions and network characteristics. The method used in AKARI should allow the delivery means (path or packet) to be selected within the protocol stack of the terminal, rather than, for example, requiring the Web application to be modified as in the conventional approach or leaving the decision entirely to the application.


[Figure: in the path-packet integrated architecture, a communication request from the application is passed to a circuit-type decision within the terminal's protocol stack, which selects either packet switching or path setup (via path signaling) as the delivery means.]

Fig. 5.10.2.1. Path-Packet Integrated Architecture
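The following Python sketch illustrates one conceivable form of the circuit-type decision shown in Fig. 5.10.2.1, made inside the terminal's protocol stack so that the application itself is unchanged. The thresholds and field names are assumptions made only for this example, not values taken from the AKARI design.

```python
from dataclasses import dataclass

@dataclass
class FlowRequest:
    expected_duration_s: float   # how long the flow is expected to last
    peak_bandwidth_mbps: float   # peak rate the application expects to send

# Hypothetical thresholds; in a real stack these would reflect network conditions.
PATH_MIN_DURATION_S = 10.0
PATH_MIN_BANDWIDTH_MBPS = 100.0

def choose_delivery_means(req: FlowRequest) -> str:
    """Circuit-type decision inside the terminal's protocol stack: long-lived,
    high-bandwidth flows are mapped onto a path (set up via path signaling);
    everything else is sent as packets. The application itself is unchanged."""
    if (req.expected_duration_s >= PATH_MIN_DURATION_S
            and req.peak_bandwidth_mbps >= PATH_MIN_BANDWIDTH_MBPS):
        return "path"    # trigger path signaling before data transfer
    return "packet"      # hand the data directly to packet switching

print(choose_delivery_means(FlowRequest(600.0, 1000.0)))  # path
print(choose_delivery_means(FlowRequest(0.2, 1.0)))       # packet
```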

5.10.3. Open NGN

5.10.3.1. Background for the Appearance of the NGN and NGN Features

Factors promoting the appearance of the NGN include problems of the current Internet and problems of telecommunications carriers. The problems of the current Internet concern security and communication quality. By designing and building a network so that anyone can use it safely and securely, the Internet can become a foundation of society. Also, by equipping it with quality control functions, new services can be provided, such as enterprise networks or emergency communications. Three subjects of concern to telecommunications carriers are the reduction of operational costs, the seamless linking of services, and the creation of new revenue sources. Operational costs are reduced by integrating fixed-line telephone networks, mobile communication networks, data networks, and broadcasting networks. Services surrounding subscribers are seamlessly linked by implementing bundled services of fixed-line telephone, mobile telephone, data, and broadcasting. A conversion from a connection-fee model to a function-usage-fee model is achieved, and new sources of revenue are created, by raising network functionality. The NGN that emerges from this background will be based on the Internet and will incorporate the good aspects of fixed-line and mobile telephones. The NGN will be based on the Internet because it will inherit the superior features of the Internet, which are "low cost" and the "guarantee of autonomy for applications or services." High reliability and high quality, which are features of the telephone network, will be guaranteed by introducing authentication technologies and communication quality control technologies. Also, the mobility of mobile telephones will be guaranteed by introducing mobility-support technologies. In other words, the NGN can be considered as the Internet with added functions that are required for implementing highly reliable, high-quality services. Two of the functions that will be added deserve special mention: "access line authentication" and "communication session management." Without exaggeration, the NGN will be the Internet equipped with functions for authenticating access lines and managing communication sessions. Note that calling something the NGN does not mean that new technologies have been introduced; the technologies supporting the NGN are being discussed in the IETF (Internet Engineering Task Force).

Access Line Authentication


An advantage of telecommunications carriers is that they can authenticate access lines (access line identification in a fixed-line network and SIM card authentication in a mobile network). If subscriber information that enables the terminal to be identified can be obtained by access line authentication, security can be firmly guaranteed. A private network that cannot be accessed by a third party can be built without requiring the user to have a special device. In addition, strong authentication can easily be implemented for building customer-oriented electronic payment systems. Problems such as spam or phishing in the current Internet are caused by the fact that authentication is not conducted reliably. Using the access line authentication function provided by the NGN will enable a first step to be taken towards constructing an environment in which anyone can access the network safely and securely.

Communication Session Management


By managing communication sessions, a voice call being provided on a desktop PC can be handed over to a mobile phone while maintaining the session, and a video call on a desktop PC can be switched to a voice call on a mobile phone. Seamless access can be implemented, and diverse usage styles will flourish. Also, linking with presence server information will make it possible to select the optimum means of communication according to the schedule or whereabouts of the communication partner. Moreover, communication session management will be an essential function for communication quality control. An on-demand VPN service (a VPN service in which the connection destination or bandwidth is specified at the time of use), which can be implemented by managing communication sessions and performing QoS control for each session, has the power to completely change corporate networks.
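A minimal sketch of this kind of session management is given below, assuming a purely hypothetical session manager: the session identifier survives while the terminating device and media type change. It is meant only to illustrate the concept, not any standardized NGN interface.

```python
class SessionManager:
    """Minimal sketch of NGN-style communication session management: a session
    keeps its identifier while the terminating device and media type change."""
    def __init__(self):
        self.sessions = {}   # session id -> {"device": ..., "media": ...}

    def establish(self, session_id, device, media):
        self.sessions[session_id] = {"device": device, "media": media}

    def handover(self, session_id, new_device, new_media=None):
        s = self.sessions[session_id]           # the session survives the transfer
        s["device"] = new_device
        if new_media is not None:
            s["media"] = new_media              # e.g., video call -> voice call
        return s

mgr = SessionManager()
mgr.establish("sess-42", device="desktop-pc", media="video")
print(mgr.handover("sess-42", new_device="mobile-phone", new_media="voice"))
```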

5.10.3.2. Openness
Three types of interfaces are defined in the NGN: the ANI (application-network interface), NNI (network-network interface), and UNI (user-network interface). The NGN-specific interface, which differs from those of the telephone network, is the ANI for using functions in the service stratum. In existing networks, third parties cannot provide services by using network functions; services are provided by the telecommunications carriers. In contrast, by using an ANI, a third party will be able to provide users with new services that use functions such as authentication, session control, and bandwidth control. This holds the possibility of completely changing the business model not only of third parties but also of telecommunications carriers. By providing network functions to outside parties, a telecommunications carrier can convert from a line-usage-fee model to a function-usage-fee model and implement a platform business. Third parties will be able to build diverse services by using network functions that were previously reserved solely for the telecommunications carrier.


The advantage of the Internet lies in the end-to-end design principle, which enables diverse services to be built on IP. Similarly, in the NGN, by opening up NGN functions to outside parties and entrusting various players to build new services, innovative services that could not be built on the existing Internet will be able to flourish on the NGN.
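As a sketch of how a third party might build a service purely from functions exposed through an ANI, the fragment below strings together hypothetical authentication, session control, and bandwidth control calls to realize the on-demand VPN mentioned earlier. The API names and the admission rule are invented for the example; they do not correspond to any actual NGN specification.

```python
# Hypothetical ANI client; function names are illustrative, not a real NGN API.
class ANIClient:
    """Sketch of a third-party service invoking service-stratum functions
    (authentication, session control, bandwidth control) through an ANI."""
    def authenticate_line(self, line_id: str) -> bool:
        # The carrier would verify the access line; here we accept any non-empty id.
        return bool(line_id)

    def create_session(self, src: str, dst: str) -> str:
        return f"session:{src}->{dst}"

    def reserve_bandwidth(self, session: str, mbps: float) -> bool:
        # The carrier's QoS control would admit or reject the request.
        return mbps <= 100.0

def on_demand_vpn(ani: ANIClient, line_id: str, dst: str, mbps: float) -> str:
    """A third-party 'on-demand VPN' built purely from ANI calls."""
    if not ani.authenticate_line(line_id):
        raise PermissionError("access line authentication failed")
    session = ani.create_session(line_id, dst)
    if not ani.reserve_bandwidth(session, mbps):
        raise RuntimeError("bandwidth reservation rejected")
    return session

print(on_demand_vpn(ANIClient(), "line-0123", "branch-office", 50.0))
```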

[Figure: in the ITU-T NGN architecture, end users attach at the UNI, other networks attach at the NNI, and applications attach at the ANI. Above the ANI sit an application platform (content authentication, pricing, agent, and portal functions) and network businesses such as content delivery. The service stratum (e.g., service control functions) provides network services including communication user authentication and pricing, and the transport stratum (e.g., transport functions) provides the communication service and Internet access.]

Fig. 5.10.3.2. Open Interfaces

Making network functions, previously unavailable to outsiders, publicly available to third parties and entrusting service development to them suggests a change to a business model that is similar to the MVNO (mobile virtual network operator) model of a mobile communications network. An MVNO provides services by using the network functions of a mobile telecommunications carrier. Various levels of MVNO can be considered according to the types of functions that are used. There is a model in which the MVNO performs location management or billing together with customer support, and there is also a model in which the MVNO devotes itself to customer support or marketing and entrusts location management or billing to the mobile telecommunications carrier.
[Figure: the telecommunications carrier releases network functions to third parties and consigns service development to them. The service stratum offers functions such as session management, authentication, location management, pricing, bandwidth control, presence, and customer support on top of the carrier's transport stratum, and third-party applications are built using these functions.]

Fig. 5.10.3.3. MVNO and NGN


Various business models can also be considered for the NGN according to the level to which telecommunications carriers make network functions publicly available to third parties. By designing the NGN in this way, we establish a model that produces gains for both the telecommunications carriers and the third parties. By entrusting the development of services to third parties, we can expect that diverse services will flourish and that the NGN will contribute to a world in which users, too, receive significant returns.

5.11. Robustness Control [Murata]


When considering the tendency of systems to increase in scale and complexity, information networks are no exception. Various approaches have been considered for resolving these problems. One such approach alters the conventional signaling technique itself. Signaling is used to synchronize communication states in sending and receiving nodes or routing nodes. The use of soft state communications is a technique that aims to increase system robustness with respect to measures for dealing with node failures or changes in topology due to the addition or removal of nodes [5-11-1]. With soft state communications, control packets are regularly exchanged to maintain states. For example, with RSVP, end nodes regularly send out control messages to maintain the flow states in RSVP routers, that is, to keep reserved resources from being released. If a link or intermediate router fails, the control messages will no longer be delivered to an RSVP router, and in that case, the RSVP router releases the resources according to a timeout and the state is initialized. State management according to this scheme is simple. With conventional hard state communications, the state is maintained between sending and receiving nodes based on the assumption that control messages are always delivered. However, soft state communications have the advantage that control messages basically can also be implemented on a best-effort basis. Since state transitions are simplified, there is a small margin for software bugs to be introduced.
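A minimal sketch of the soft-state idea, in the spirit of RSVP though not an implementation of it, is shown below: state survives only as long as best-effort refresh messages keep arriving, and state left behind by failures disappears by timeout. The lifetime value is an assumption for the example.

```python
import time

class SoftStateTable:
    """Minimal soft-state sketch: each reservation must be refreshed periodically;
    entries that are not refreshed are silently removed, so router state converges
    after failures without any explicit teardown message."""
    def __init__(self, lifetime_s: float = 30.0):
        self.lifetime = lifetime_s
        self.entries = {}            # flow id -> expiry time

    def refresh(self, flow_id: str):
        """Called whenever a (best-effort) refresh message arrives."""
        self.entries[flow_id] = time.monotonic() + self.lifetime

    def expire(self):
        """Called periodically: drop reservations whose refreshes stopped arriving."""
        now = time.monotonic()
        for flow_id in [f for f, t in self.entries.items() if t <= now]:
            del self.entries[flow_id]   # resources released by timeout

table = SoftStateTable(lifetime_s=30.0)
table.refresh("flow-1")   # refresh messages keep the reservation alive
table.expire()            # nothing expires yet; after 30 s without refreshes it would
print(list(table.entries))
```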
[Figure: schematic comparison of the message exchanges in hard-state and soft-state communication between a sender node and a receiver node. The diagram was created based on the paper "On the Robustness of Soft-State Protocols" [5-11-1].]

Fig. 5.11.1. Robustness Control

However, if information networks continue to become larger and more complex in the future, soft state communications alone will probably be insufficient for maintaining robustness. When the scale of a system increases, multiple simultaneous failures, rather than single failures, become commonplace and are no longer unexpected events. As a result, the margin for introducing software bugs steadily increases, and human error is also more likely to occur during operation management. When designing the new generation network architecture, a fundamentally new design technique is needed rather than just introducing soft state communications. Conventionally, network design was first carried out for normal conditions without taking failures into consideration, and robustness was then added, often by assuming single failures; detour control is a typical example of this approach. As a result, if multiple simultaneous failures occur, robustness deteriorates suddenly and a non-communicable state arises. In the new generation network architecture, it is important for design techniques to take into consideration multiple simultaneous failures as well as traffic fluctuations caused by traffic patterns that differ from normal operation, such as DDoS attacks. Both survivability (packet deliverability and end-to-end path connectivity) and sustainability (the maintenance of network functions) are required even when failures are widespread. If users can recognize, in a variety of ways, that the network guarantees that its functions will be maintained even as the extent of failures increases, then users will have peace of mind. This is the essence of a "safe and secure network (dependability)." Previously, information network design often focused on increasing performance under normal conditions with no failures. However, with this kind of design, which should be thought of as pursuing efficiency for efficiency's sake, the performance that is achieved is often immediately overtaken by developments in communications technologies. This is where the limitations of previous research techniques become apparent. It is also linked to the complaint that previous theoretical research has made little contribution to information network development. One control technique for guaranteeing robustness is self-organization. Basically, entities in the network perform control only according to local communications, and the target functions of the entire system, viewed macroscopically, emerge as a result. If distributed control can be implemented using only local communications, its simplicity is directly linked to a guarantee of robustness. Soft state communications should be used for the local communications. If this idea is carried further, an approach can be considered in which each entity interacts only with its environment; this approach is called stigmergy [5-11-2]. In addition, these techniques are naturally self-distributed, and as a result, scalability and adaptability to changes in the communication environment, including failures, can be achieved. Although convergence to an optimal solution during normal operation is slower and the performance of this technique is not necessarily higher than that of conventional design techniques, the various advantages described above compensate for these shortcomings.
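As one illustrative instance of self-organized control through purely local interaction, the sketch below uses an ant-colony-style rule: next hops that work are reinforced locally, and all state slowly evaporates, so the macroscopic routing pattern emerges without any global view. The evaporation and reinforcement constants are assumptions chosen only for the example.

```python
import random

# Illustrative ant-colony-style route selection: each node keeps only local
# "pheromone" values per next hop; good routes are reinforced, every route
# evaporates, and the overall routing pattern emerges without global state.
pheromone = {"next_hop_A": 1.0, "next_hop_B": 1.0}
EVAPORATION = 0.1     # fraction lost each round (assumed value)
REINFORCEMENT = 0.5   # deposit added when a probe succeeds (assumed value)

def choose_next_hop() -> str:
    """Pick a next hop with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for hop, level in pheromone.items():
        acc += level
        if r <= acc:
            return hop
    return hop  # fallback for floating-point rounding

def update(hop: str, probe_succeeded: bool):
    for h in pheromone:
        pheromone[h] *= (1.0 - EVAPORATION)      # evaporation keeps state adaptive
    if probe_succeeded:
        pheromone[hop] += REINFORCEMENT          # local reinforcement only

for _ in range(100):
    hop = choose_next_hop()
    update(hop, probe_succeeded=(hop == "next_hop_A"))  # pretend A works, B fails
print(max(pheromone, key=pheromone.get))  # converges toward "next_hop_A"
```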


[Figure: conceptual graph of performance versus the number of simultaneous failures, the severity of failures, and the degree of change in the environment. A conventional network tuned for peak performance is quickly overtaken by technological developments and its performance collapses as conditions worsen, whereas the new network maintains survivability, sustainability, and dependability over a wide range of conditions.]

References
[5-11-1] J. C. S. Lui, V. Misra, and D. Rubenstein, "On the Robustness of Soft State Protocols," Proc. of the 12th IEEE Intl. Conf. on Network Protocols (ICNP'04), pp. 50-60, 2004.
[5-11-2] Naoki Wakamiya and Masayuki Murata, "Biologically Inspired Information Network Architecture," Transactions of the IEICE, Vol. J89-B, pp. 316-323, March 2006 (in Japanese).
[5-11-3] Masayuki Murata, "Biologically Inspired Information Network Architecture," IEICE, Conference of Self-Growing and Repairing Networks Due to Complexity, December 2006 (in Japanese).
[5-11-4] Masayuki Murata, "Network Architecture and Future Research Directions," IEICE Technical Report on Photonic Networks (PN2005-110), pp. 63-68, March 2006 (in Japanese).

5.12. Overlay Network [Nakauchi, Murata]


Overlay network technology is a virtualization technology for building a virtual network on an upper layer in order to conceal the diversity or limitations of lower layers. An overlay network is formed by application technology on a layer above the network layer. Specifically, a logical network consisting of a set of end nodes (hosts) is formed on top of a physical IP network consisting of a set of routing nodes (routers). Generally, "overlay network" refers either to the concept of such a virtual network or to the virtualization technology itself. It is by no means a new concept, as peer-to-peer networks illustrate. Recently, however, overlay networks have been receiving attention again from a network architecture viewpoint since PlanetLab [5-12-1] began in the US in 2002. An overlay network has the following two advantages. First, it provides a way to quickly develop diverse network services in an upper layer without taking into consideration the variety or limitations of the lower layers. Examples of such network services include content replication, content delivery (CDN), content discovery, content sharing (P2P), multi-site content distribution (application-layer multicasting), and overlay routing for highly reliable routing, QoS routing, and routing for emergency communications [5-12-2]. Second, it enables new network architecture experiments to be conducted without changing the underlying physical network. For example, when a new communications protocol is developed, an experiment can normally be executed only in a laboratory-scale environment built solely of nodes on which that protocol is installed. If overlay network technology is used, on the other hand, a new protocol experiment can be conducted globally so that it coexists with actual traffic on an actual wide-area network. Therefore, an overlay network is essential as an experimental environment for the new generation network architecture. When an overlay network is viewed as a basic component of the new generation network architecture, it has two mutually contradictory aspects. On the one hand, an overlay network is a "solution technology" for avoiding the limitations or problems of lower layers in order to provide higher-quality, highly reliable network services. On the other hand, when the aim is to establish design principles for an ideal architecture from the start through a scrap-and-build approach, as in this project, an overlay network that basically targets such workarounds should be unnecessary. Nevertheless, overlay network technology is considered to fulfill a role as a basic component with respect to the following two points. The first is the "sustainable evolution" of the new generation network. An overlay network supports the basic principle of being sustainable and evolutionary, which was indicated in Section 4.1.2. Since application requirements change with the times, it is difficult to expect a single network technology to be sustained in its current form for two or three decades. Because the Internet was not self-evolutionary, it ended up as an incompatible architecture that is full of patches. Considering this past experience, we plan to create a network architecture that has sustainable evolution as a design principle so that it can flexibly deal with changes in application requirements. Some technical requirements for implementing sustainable evolution are as follows.

Function Migration Mechanism


For each function in a routing node, develop a function migration mechanism or node operating system that enables the network architecture to switch dynamically between basic functions and overlay functions.

Migration Policy
Determine an appropriate policy for selecting either basic functions or overlay functions as part of the network architecture.

The second point is the provision of "user controllability" in the new generation network. An overlay network should satisfy the design requirement of openness, which was indicated in Section 2.4. When APIs for user or network services are provided at a lower level, such as the network level, rather than at an upper level such as the socket APIs of end terminals, the user can directly write programs for routing nodes. This controllability can be implemented by developing resource management technology in an overlay network. The advantages of overlay network technology can be utilized by introducing a mechanism that enables the user to directly control a fixed share of the resources of routing nodes. For example, if the resources of each routing node are divided in half, we can configure a network that combines the functions of a production-quality network and a testbed network, so that the conventional network continues operating using one half of the resources while new network functions are examined using the remaining half. Some technological requirements for implementing user controllability are as follows.

Advanced Resource Management Mechanism


Develop a resource management mechanism or node operating system to enable user resources to be provided while suppressing the performance deterioration of forwarding/routing at routing nodes.

Resource Linking Mechanism


Develop a resource linking mechanism that enables information to be exchanged/shared between forwarding/routing resources and user resources at routing nodes.

Routing Node APIs


Determine resource definitions and APIs to be provided to users, and develop APIs that enable users to directly specify the function migration described above. However, principles such as KISS and controlled transparency, as well as the causes that prevented the proliferation of active networks, which aimed to develop programmable routers, must be studied carefully. No matter what kind of network architecture is designed, the coexistence of an overlay network can always be considered from the viewpoint of the sustainability and development of the network architecture, because adding new functions to the network architecture necessitates first verifying those functions on an overlay network. Based on those verification results, specific criteria must be determined for deciding when migration to the new network architecture should be performed, and an installation process for that network architecture must be established. Overlay network technology was described above as a solution technology that enables diverse network services to be quickly developed on an upper layer. However, it is not just a solution technology; it can also be used to inform the design of a new network architecture. For example, overlay routing that assumes the existence of emergency communications is likely to be perceived as a solution technology, since it assumes current Internet use and such communications occur rarely. Yet when the routing methods of a new generation network are to be designed from scratch, overlay routing concepts are extremely useful. Emergency communications during disasters also continue to be an important problem from the viewpoint of building a safe and secure network. Basically, the following three concerns need to be addressed: (1) What should be done to guarantee the continuation of communications in disaster-stricken areas? (2) What should be done to guarantee communications resources related to disasters? (3) How should a network that is robust to disasters be constructed? Solution strategies that have been discussed include ad hoc network technology for (1) and traffic control technology for (2). For (3), node or line duplexing technology is the central topic of research, but this is an extension of conventional technology, and cost problems will likely be a barrier. IP routing was developed from the start to have excellent fault tolerance. If the objectives of conventional IP routing are considered, plans based on IP routing with new functions added to it will also appear. The use of a routing overlay, described here, is a complementary attempt to solve the problems of IP routing, which continue to materialize as the scale of the Internet increases. It may enable packets to be delivered over a path with better performance than when packet delivery is entrusted to IP routing. In other words, a routing overlay explores approaches that differ from simply enhancing the IP network as a common foundation.
[Figures: in Fig. 5.12.1, a fault occurs on the shortest path according to IP routing (150 msec), and packets are instead relayed through an overlay node at end node C over two 50 msec links; in Fig. 5.12.2, even when there are no faults, the overlay path through end node C (50 msec + 50 msec) is shorter than the 150 msec path chosen by IP routing.]

Fig. 5.12.1. Passing Through Overlay Nodes

Fig. 5.12.2. Passing Through Overlay Nodes When There Are No Faults

Although a routing overlay is a routing technology that applies overlay network technology, it has also been pointed out recently that a routing overlay may find a better path even during normal times when there are no faults [5-12-3]. This depends on the existence of a path that is shorter than the one used by IP routing. Fig. 5.12.2 shows an example in which packet transfer delay is used as the performance measure: the path that passes through end node C has a shorter delay than the 150 msec path chosen by IP routing. The functions implemented by a routing overlay are summarized as follows.

Implementation of Highly Reliable Routing


Faults are detected, and recovery is performed, faster than with current IP routing.

Implementation of Routing that Takes QoS into Consideration


Appropriate QoS is guaranteed for applications by linking application requests and routing.

Implementation of Routing Based on a Policy


As long as it remains closed within a service, routing based on various operational policies is also easier to implement. To show the effectiveness of routing overlays, several measurement results from the references are presented below. The following results were reported in reference [5-12-4]. (1) Between certain end nodes, while the packet loss rate was 30% when paths followed IP routing, the loss rate was held to 10% by a routing overlay. (2) Between certain end nodes, a routing overlay achieved better packet transfer delay than IP routing for 82% of the measurements, and the average packet delay improved from 97 msec to 68 msec.

The following results were reported in reference [5-12-3]. (1) Whereas the border gateway protocol (BGP) required an interval on the order of minutes to discover a new route in response to a network fault, a routing overlay discovered a new route in 18 seconds on average. (2) In the experiments, a detour route could be discovered in all cases by using the routing overlay. Although the above experimental results were obtained using university-centered experimental networks, similar results have already been reported for commercial networks [5-12-5]. Better results have also been shown for emergency communications, which remained sufficiently effective even during faults caused by network attacks [5-12-6]. As with other overlay networks, an end node functions as a routing node in the overlay, and a routing overlay [5-12-3, 5-12-4] delivers packets according to its own route search even when a fault occurs. Since IP routing targets all nodes, when a fault occurs, time is required until a bypass route is determined. In a routing overlay, on the other hand, the number of target nodes is limited, so a problem such as a fault can be dealt with immediately. This is what makes a routing overlay applicable to emergency communications. For more details, see reference [5-12-2].
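The one-hop relay idea behind Figs. 5.12.1 and 5.12.2 can be sketched as follows; the delay values are the illustrative figures from those figures, and the node names are hypothetical.

```python
# Illustrative one-hop routing overlay: end nodes measure pairwise delays and
# relay through another end node when that beats the direct IP path.
delay = {
    ("src", "dst"): 150.0,   # shortest path according to IP routing (msec)
    ("src", "C"): 50.0,      # source to end node C
    ("C", "dst"): 50.0,      # end node C to the destination
}
overlay_nodes = ["C"]

def best_overlay_path(src: str, dst: str):
    """Return (delay_msec, relay); relay is None when the direct IP path is best."""
    best = (delay[(src, dst)], None)
    for relay in overlay_nodes:
        via = delay[(src, relay)] + delay[(relay, dst)]
        if via < best[0]:
            best = (via, relay)
    return best

print(best_overlay_path("src", "dst"))  # (100.0, 'C'): relaying via C beats IP routing
```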

References
[5-12-1] PlanetLab, http://www.planet-lab.org/
[5-12-2] Masayuki Murata, "Creating Highly Reliable Networks Using a Service Overlay," Journal of the IEICE, Special Section on the Configuration and Control of Advanced Information and Communication Networks for Disasters, Vol. 89, No. 3, pp. 792-795, September 2006 (in Japanese).
[5-12-3] D. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris, "Resilient Overlay Networks," Proc. of the 18th ACM Symposium on Operating Systems Principles (SOSP), September 2001.
[5-12-4] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris, "The Case for Resilient Overlay Networks," Proc. of the 8th Annual Workshop on Hot Topics in Operating Systems (HotOS-VIII), May 2001.
[5-12-5] H. Rahul, M. Kasbekar, R. Sitaraman, and A. Berger, "Towards Realizing the Performance and Availability Benefits of a Global Overlay Network," Proc. of the Passive and Active Measurement Conference, March 2006.
[5-12-6] N. Feamster, D. G. Andersen, H. Balakrishnan, and M. F. Kaashoek, "Measuring the Effects of Internet Path Faults on Reactive Routing," Proc. of ACM SIGMETRICS 2003, June 2003.


5.13. Layer Degeneracy [Ohta]

5.13.1. The End-to-End Principle and the Data Link Layer and Physical Layer
The end-to-end principle is a basic principle of the Internet which states that all communications protocol operations that can be performed by the terminals are performed within the terminals (in upper layers such as the transport layer or application layer) and are not performed in the network. This leads directly to the greatest possible simplification of the functions of the IP layer, which is the network layer of the Internet. As a result, IP headers contain only the minimum required information, and congestion control, which had been performed as an essential function in the conventional network layer, is performed on the Internet in the transport layer within the terminals. The KISS principle, typified by the end-to-end principle present in the original Internet design, should also be inherited by the new generation network architecture; by assuming a network architecture that contains a common layer with simplified functions, as in the original IP protocol, functions that are duplicated in various layers can be eliminated. A common network layer protocol simplified in this way places practically no requirements on the data link layer, and practically any data link layer can carry such a common network layer protocol. However, this sits uneasily with the requirement that diverse data link layers be usable in a new generation network. The end-to-end principle is what joins the upper layers within the terminals through thin network layers, as shown in Fig. 5.13.1.1 (a). The network layers connecting the upper layers are thin, as shown in Fig. 5.13.1.1 (b). However, if the lower layers (data link layer and physical layer) are added, as shown in Fig. 5.13.1.2, it is apparent that the lower layers of each link also intervene between the upper layers, not just the network layers, as shown in Fig. 5.13.1.2 (b). Even if the trouble is taken to simplify the network layer, if the lower layers are more complex, the original property sought for the new generation network ends up being lost.


[Figure: (a) the layered structure, in which terminals contain the application, transport, and network layers while routers in the network contain only the network layer; (b) the layers seen from end terminal to end terminal, in which only thin network layers intervene between the upper layers of the two terminals.]

Fig. 5.13.1.1. End-to-End Principle and Its Relation to Layers 3 and Above

Therefore, it is apparent that the end-to-end principle requires not just the simplification of the network layer but also the simplification of the lower layers. First, there certainly should not be any unnecessary complexity in the physical layer, but since information cannot be delivered if the physical layer disappears, it cannot be eliminated completely. The physical layer should have the minimum possible complexity for the physical medium in use. The data link layer, on the other hand, should be limited to providing only the minimum functions requested by the network layer or physical layer, and in some cases this layer may be eliminated. Let us consider the following two examples in which the thinnest possible lower layers are used: optical fiber, which has an ultra-wideband transmission bandwidth, and airborne radio waves, which are required for mobile communications.

[Fig. 5.13.1.2: the same structure with the data link and physical layers added. (a) The layered structure, in which terminals contain the application, transport, network, data link, and physical layers and routers contain the network, data link, and physical layers. (b) The layers seen from end terminal to end terminal, in which the data link and physical layers of every link intervene between the upper layers in addition to the network layers.]

5.13.2. Thin Lower Layer When Optical Fiber is the Transmission Medium
First, let us consider using optical fiber as the transmission medium. Optical fiber is simply a medium for ultra-wideband transmission over a point-to-point link, so its point-to-point simplicity should be leveraged in the physical layer. Consider optical circuit switching (OCS), a communication system that reduces physical layer functions as much as possible. Optical amplifiers eliminate limitations other than bandwidth in many cases, and the physical layer is format-free within the network. However, since the ultra-wideband property of the physical layer cannot be fully utilized over a single point-to-point link, the use of a multiplexing technology such as WDM is essential from a practical perspective. When the physical layer is format-free, the data link layer, including functions up to framing, can be completely eliminated from within the network in the data plane and will exist only in the terminals. If L2 OCS is removed and all OCS is L3 OCS, the data link layer is completely eliminated even in the control plane (excluding non-OCS-based control packet transmissions for signaling). Another method of making the lower layers thinner when optical fiber is the transmission medium is "optical packet multiplexing," which exposes the ultra-wideband property of the physical layer directly to the packet network layer. This enables the dynamic characteristics of packet multiplexing to be used to the fullest extent. Since ultra-wideband light cannot be directly modulated, a multiplexing technology such as WDM is required in the physical layer. However, the physical layer and network layer are directly connected without using WDM multiplexing for packet multiplexing, and the entire transmission bandwidth (for example, all wavelengths) can be provided directly to individual packets. By supporting packets directly in the physical layer and embedding L3 headers, the data link layer is eliminated. Although the payload other than the L3 header may generally be format-free within the network, this is not the case when the network layer is IP, since each router must read information from the payload when ICMP control messages are generated for packets.
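A toy encapsulation sketch contrasting the conventional stack with the thin lower layer described above is given below. The header strings are purely illustrative and stand in for real framing formats.

```python
# Toy encapsulation sketch for Section 5.13.2: in optical packet multiplexing the
# L3 header is embedded directly in the physical-layer transmission and no data
# link header is added inside the network.
def conventional_frame(payload: bytes, src_ip: str, dst_ip: str) -> bytes:
    l3 = f"IP {src_ip}>{dst_ip}|".encode() + payload
    l2 = b"DL|" + l3            # data link framing (e.g., Ethernet-like)
    return b"PHY|" + l2

def optical_packet(payload: bytes, src_ip: str, dst_ip: str) -> bytes:
    l3 = f"IP {src_ip}>{dst_ip}|".encode() + payload
    return b"PHY|" + l3          # data link layer eliminated within the network

print(conventional_frame(b"hello", "a", "b"))
print(optical_packet(b"hello", "a", "b"))
```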

5.13.3. Thin Lower Layer When Airborne Radio Waves Are the Transmission Medium
Another example of a transmission medium is airborne radio waves. In this case, since the physical layer utilizes a many-to-many broadcast model, which differs from optical fiber, terminal identification according to MAC addresses and the suppression of simultaneous transmissions as well as the packet multiplexing or packet duplexing techniques that accompany it are essential in the data link layer. On the other hand, terminal identification and techniques such as packet multiplexing or packet duplexing are not required in the physical layer. The PDMA concept eliminates TDD, TDMA, CDD, CDMA, etc. in the physical layer and shows all available radio bandwidth transmission speeds to the packet network layer.


Chapter 6. Testbed Requirements [Inoue]


The testbed for a new generation network should be able to test all new transmission or switching technologies, network protocols, and applications. To reach these goals, the following guidelines should be followed.

Use the Results of Research and Development


Research and development for the new generation network targets all layers from the physical layer to the application layer. Therefore, an existing testbed network that is built using existing transmission technologies or switching devices should be extended to support testing new technologies and protocols of the link layer and higher layers present in network nodes, user nodes, network control systems, and application platforms. Similarly, lower layer technologies that include the physical layer such as newly developed photonic network technologies or high-speed wireless technologies should also be incorporated and tested in new testbeds, which will also gradually provide a platform for testing higher layer technologies and protocols required by the new generation network architecture.

Guarantee Flexibility
Since the architecture of the new generation network or the set of protocols for its various layers have not been determined yet, and since diverse protocols, methods, and architectures may coexist without the set of protocols being collected together in several standards in a future network, the testbed must have a high degree of flexibility to support different protocols, methods, and architectures. In other words, the hardware and software that constitute the testbed must have a high degree of flexibility to support unknown methods and their combinations that may exist in the future. The programmable routers that are the subject of active research and development in the GENI project of the National Science Foundation (NSF) of the United States satisfy this objective. More specifically, although this is similar to the concept of software radio or cognitive radio, a hardware and software configuration that can support various transmission methods, network protocols, and applications is desirable.

Provide a Diverse Communications Environment


The main constituent of a conventional testbed network is a high-speed wired network. In contrast, the testbed for a new generation network should also contain other diverse networks such as wireless networks, home networks, and sensor networks. In addition, it would be helpful if an environment were provided for a wireless network that enables communication to be tested while a high-speed moving body is actually in motion. For a home network, the wireless network within the home, the consumer electronics (home appliance) network, and the access network up to the home should be unified and provided in a form close to the actual environment. The ability to test diverse application services when this unified network is linked together with a sensor network in the home is also desirable. For wireless mobile communications, providing the following environments or functions merits consideration:
o Space, such as iron poles or building rooftops, on which wireless base stations can be installed, and wired circuits for connecting the installed base stations to the network.
o Wireless routing nodes installed in moving objects (such as taxis, buses, or trains).
o Access to public wireless networks such as cellular, PHS, or wireless LAN hot spots.

Provide the Latest Existing Technologies


To enable realistic research for achieving incremental improvements, the latest existing technologies should be incorporated in the testbed so that they can be freely accessed and used for testing the basis of innovation.

Provide a Secure Research Environment


Since research essentially should proceed under conditions of absolute secrecy, a policy is required for testbed use such that research can be concluded without disclosing the contents of that research.

Enable the Usefulness or Effectiveness of New Ideas to be Proven


As mentioned earlier, transmission methods, network controls, and applications must be able to be tested. In addition, to show that these methods, controls, and applications are useful, an environment is required that enables them to be tested on links where various other kinds of traffic are flowing, or to be tested with other traffic excluded.

Enable Proof of Operability with Actual Services


For example, in an application system that uses the current Internet, operability must be guaranteed even if congestion occurs. Traffic must flow under various conditions to test routing nodes. To accomplish this, the testbed must support multiplexing various kinds of traffic generated by real applications or by traffic-generating devices.

Enable a Common Architecture to be Assembled and Shared


An overarching design like AKARI will be the foundation on which component technology development and protocol development proceed. It is important to maintain the independence of each project as well as the ability of each project to reference common functions of other projects. In addition, as a bottom-up approach, it is important to create an environment that enables common parts to be collected from different projects, or different projects to be combined, for testing the functionality of the overall architecture.


Chapter 7. Related Research [Nakauchi]

7.1. NewArch


The NewArch project (2000-2003) [7-1] was a DARPA-supported collaborative project of USC/ISI, MIT LCS, and ICSI that began in 2000. It was a systematic attempt to reconsider the Internet architecture from scratch without adhering to the existing IP technology or the end-to-end principle. NewArch, which defined a network architecture as "advanced design principles for making use of protocols and algorithms," aimed to build a "new Internet foundation" that includes diverse networks such as mobile phones and cable TV while making use of the advantages of the conventional Internet (interoperability, robustness, diversity, support for heterogeneous environments, distributed management, low cost, ease of connectivity, and accountability). To accomplish this, NewArch investigated basic functions such as address design, the layered structure of protocols, the creation of modules, and so on. The results of NewArch included proposals for a new addressing method that separates identifiers and locators (FARA), a dynamic protocol configuration method called role-based architecture (RBA), a new routing method that gives the user the right to select routes at the domain level (NIRA), and a router-supported transport method that targets networks with large bandwidth-delay products (XCP). However, NewArch took no active steps to standardize or spread these results, stopping instead at increasing opportunities for reconsidering the network architecture.

7.2. GENI/FIND
GENI (Global Environment for Network Innovations) [7-2, 7-3, 7-4] is a program of the National Science Foundation (NSF) of the United States (budget: approximately $300 million from MREFC; total budget: approximately $367 million; term: 5 to 7 years). It aims to develop a shared global facility (testbed) for promoting research and development of new Internet architectures and network services. The objectives of the project assert that a common network foundation enabling multiple network experiments to be conducted simultaneously and independently is required in order, first, to resolve problems of the existing Internet architecture concerning stability, security, QoS, and the like; second, to construct experimental environments on actual networks using new network technologies; and third, to incorporate innovative technologies such as optical, mobile, and sensor technologies. Like NewArch, GENI adopts a "clean slate" approach, designing the architecture from scratch, as its design policy. Five technical working groups (WG) have been established in GENI. These working groups are listed below along with their main topics of interest.
(1) Research Coordination WG (Co-chairs: David Clark, Scott Shenker): Making the scientific case for GENI, and ensuring that the requirements of the research community inform GENI's design.


(2) Facility Architecture WG (Co-chairs: Larry Peterson, John Wroclawski): The management framework that ties the physical resources into a coherent facility.
(3) Backbone Network WG (Chair: Jennifer Rexford): The underlying fiber plant, optical switches, customizable routers, tail circuits, and peering relationships.
(4) Distributed Services WG (Co-chairs: Tom Anderson, Amin Vahdat): Distributed services that collectively define GENI's functionality.
(5) Wireless Subnets WG (Co-chairs: Dipankar Raychaudhuri, Joe Evans): Various wireless subnet technologies and deployments.
GENI considers PlanetLab a prototype when asserting the validity of a shared global facility. However, GENI is more advanced than PlanetLab in several respects. First, whereas PlanetLab is limited to general-purpose PCs as the elements for configuring the overlay network, GENI supports more diverse nodes such as sensors or mobile terminals and low-speed links. In addition, the GENI facility is more user-friendly since it provides more diverse network services. FIND (Future Internet Design) [7-5, 7-6] is a long-term program of the NSF that started in 2006 (budget: approximately $40 million from NeTS; term: undetermined). It aims to establish the Internet architecture of the future. While GENI aims to construct a global facility, FIND focuses on comprehensive research into network architecture design. Since individual FIND projects can benefit from using the GENI facility, synergistic results are expected. Overall, FIND is an umbrella program that consists of many relatively small-scale projects, like conventional NSF-funded programs, rather than a program that aims to establish a single unified network architecture. In its initial fiscal year of 2006, FIND allocated a total of $12 million to 26 projects.

7.3. Euro-NGI/Euro-FGI
Euro-NGI (2003-2006) [7-7, 7-8] is a transnational project that belongs to the Information Society Technologies (IST) priority of the European Sixth Framework Program (FP6) (category: Network of Excellence; total budget: 5 million euros; participating organizations: 59). Its objectives include the exchange of information related to next generation networks, the unification of ideas and knowledge, and coordination between projects. Specifically, it focuses on areas such as core networks, fixed access, mobile access, IP networking, and service overlays to establish basic technologies for multi-network services that can support fixed mobile convergence (FMC), seamless mobility, and context awareness while accommodating diverse access networks such as sensor networks and personal area networks (PAN). This project is expected to continue in the Seventh Framework Program (FP7) as Euro-FGI (category: Network of Excellence; term: 2006-2008). Note that in FP7, "The Network of the Future" is given as one of the pillars of the information and communication technologies (ICT) field, and a total budget of 200 million euros is expected to be allotted. This will be broken down as 14 million euros for the Network of Excellence and, for research projects, at least 84 million euros for large-scale integrating projects (IP) and at least 42 million euros for small or medium scale focused research actions (STREP).

References
[7-1] http://www.isi.edu/newarch/
[7-2] http://www.nsf.gov/cise/cns/geni/
[7-3] http://www.geni.net/
[7-4] GENI Planning Group, "GENI: Conceptual Design, Project Execution Plan," GENI Design Document 06-07, January 2006. http://www.geni.net/GDD/GDD-06-07.pdf
[7-5] http://www.nets-find.net
[7-6] http://find.isi.edu
[7-7] http://eurongi.enst.fr/
[7-8] "A View on Future Communications," http://eurongi.enst.fr/archive/172/AviewonFutureCommunications.doc
[7-9] http://cordis.europa.eu/fp7/


Chapter 8. Conclusions
This conceptual design document is the first step towards the implementation of a new generation network architecture. It includes societal requirements, future basic technologies, design principles for designing a network architecture based on those requirements and technologies, and conceptual design examples of several key parts based on those design principles. Our approach is to concentrate our efforts on designing a new generation network while using testbeds to evaluate the quality of those designs experimentally. The most important goals of our efforts are design principles for an architecture that is comprehensively optimized and stabilized. However, until the final design is complete, even these design principles are not fixed, but can change according to feedback through the design and evaluation process. The AKARI architecture will be sustainable and evolutionary. The crisis confronting the current Internet must not be repeated. The information infrastructure that has become such an integral part of society can no longer be scrapped and rebuilt. It is imperative for the information infrastructure to provide surplus capacity in order to enhance the quality of society in the future. Sufficient capacity will enhance quality. However, the infrastructure must not be made more complex by uniting technologies. The role of an architecture is to select and integrate, that is, to guide towards the direction of more simplicity. A scheme for implementing a virtual space on the network solves societal problems stemming from security, which is a weakness of the Internet. Endowing the network core with robustness provides society with a sense of security that cannot be provided by superficial improvements. A sentiment included in the term "new generation" is to act based on free ideas that are not constrained by the limitations of existing technologies. That standpoint is both novel and neutral. To implement a new generation network architecture, the participation of many network architects who have knowledge of network technology in general is important. This conceptual design document just indicates the directions in which the process should advance. It also goes without saying that application fields and basic technological fields must be linked. New generation network research will help promote further advances towards detailed design in the future.


Appendix. Definition of Terms


Switching: A mechanism for communicating by sharing resources.
Circuit: An individual channel constituting a route for data.
Path: A set of circuits that becomes a route for data.
Packet: A tiny bundle of divided data that includes an address.
Routing: A function for determining the route on which data advances when it reaches a node having multiple links.

Addressing: A policy for assigning information for identifying a location on the network.
Circuit switching: A method of communicating after allocating a circuit before communication begins.
Packet switching: A method of communicating by dividing data into packets. Nodes (switches) perform communication processing in terms of individual packets without determining the route before communication begins.
Virtual circuit: A mechanism for virtually allocating the route on which data will flow before communication starts in order to perform packet-switched communications efficiently. This differs from packet switching in that tags are added to packets rather than addresses, and the tags are rewritten at each node. It differs from circuit switching in that a probabilistic loss of packets is permitted at nodes or on links of the communication route.
Scalability: The ability of a system to estimate the absolute maximum number of entities within the system and support them.
Reliability: The degree to which a system can recover even if faults (including congestion) occur or, conversely, the degree to which it does not break down.

Availability: The usable operating ratio of a network. For example, if the non-communicable time is 3.6 seconds per hour (3,600 seconds), then the availability is 99.9%.
Connectivity: A communicable state.
Interoperability: The ability of multiple entities that are implemented according to certain common rules to communicate with each other.
Next generation network (NGN): [ITU-T Y.2001 definition (entries in parentheses are supplements)] A packet-based network that uses multiple broadband QoS-enabled transport technologies to provide telecommunication services. Its service-related functions (service stratum) are independent from underlying transport-related technologies (transport stratum). It enables unfettered access for users to networks and to competing service providers. It supports generalized mobility, which will allow consistent and ubiquitous provision of services to users.
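For the Availability entry above, the arithmetic of the 3.6-seconds-per-hour example can be checked with a one-line calculation (shown in Python purely for illustration):

```python
def availability(downtime_s_per_hour: float) -> float:
    """Fraction of time the network is usable, per the Availability definition."""
    return 1.0 - downtime_s_per_hour / 3600.0

print(f"{availability(3.6):.3%}")  # 99.900%
```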


The Internet: A set of interconnected networks with global reachability, which is built using the IP protocol.

