2012 CABLExpress. All rights reserved. All trademarks are property of their respective owners.
Introduction
With the ratification of new industry standards and increased demands on data center throughput, 40/100G Ethernet will be an integral component of the next-generation data center. In fact, it is already an emerging influence on how organizations plan, build and operate their existing data center architecture. The market proves this: manufacturers are already responding to the increased demand for Ethernet hardware, including cabling products, switches and transceivers.

Among several other factors, an increase in global IP traffic is a major reason for the increase in data production. With the ever-increasing number and diversity of networked devices, broadband speed and computing power, global IP traffic will quadruple over the next four years [1].

Another factor contributing to the exponential growth of information is the advent of big data. A relatively new industry term, big data refers to sets of data so large that they are measured in terabytes, exabytes and even zettabytes. While these data sets provide for more accurate analysis of trends and statistics than traditional sets of data, storing all of this information can pose a challenge. The exponential growth in information means processing speeds have to increase as well, so as not to slow access to data. And in fact, they are: Butters' Law, a lesser-known parallel to Moore's Law, states that the data throughput of a single optical fiber doubles every nine months.
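The growth rates above can be made concrete with a little arithmetic. The sketch below uses the figures cited in this paper (traffic quadrupling over four years; fiber throughput doubling every nine months); the 48-month horizon framing is ours.

```python
# Compound growth implied by the figures above.

def doublings(months: float, doubling_period_months: float) -> float:
    """Number of doublings that occur over a time span."""
    return months / doubling_period_months

# Butters' Law: fiber throughput doubles every 9 months.
fiber_growth = 2 ** doublings(48, 9)      # roughly 40x over four years

# Global IP traffic quadruples over four years,
# i.e. it doubles every 24 months.
traffic_growth = 2 ** doublings(48, 24)   # 4x

print(f"Fiber throughput growth over 4 years: ~{fiber_growth:.0f}x")
print(f"IP traffic growth over 4 years: {traffic_growth:.0f}x")
```

The comparison illustrates why cabling, not just traffic, is the pressure point: per-fiber throughput is projected to grow an order of magnitude faster than traffic itself.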
The implementation of 40/100G Ethernet is dependent upon a variety of organizational factors, including existing infrastructure, budget, throughput demand and leadership priority. However, it is clear that the stage is set for the most dramatic change related to data center fiber optic infrastructures since their inception. In this paper, we will discuss the deep impact that this network speed transition has on data center cabling infrastructure, and the decisions that organizations will need to make to accommodate these changes.

The world's information is doubling every two years. In 2011 the world will create a staggering 1.8 zettabytes. By 2020 the world will generate 50 times the amount of information and 75 times the number of information containers, while the IT staff to manage it will grow less than 1.5 times.
High-performance cabling that can transfer data over 40/100G Ethernet will be a necessary addition to data centers looking to keep up with this digital data growth.

Virtualization: The Double-Edged Sword

Virtualization can help data centers save on capital expenses, improve operational efficiency and create a more agile infrastructure. There are many types of virtualization, from desktop to storage to server virtualization. We will discuss server virtualization in particular, because it calls for fewer, more efficient servers, which translates to fewer server connections. And because there are fewer connections, it is important that these connections work properly.
Unfortunately, most data centers do not contain cabling infrastructure designed to meet the high-performance capabilities that virtualization demands. This is particularly true for data centers built in the 1980s, before high-performance cabling even existed.

Decreasing Tolerance for Downtime

When data transactions are interrupted due to network downtime, it translates to a very real loss of revenue. In 2011, Amazon.com made $1,523.59 in revenue per second [2]. Considering how quickly lost revenue can add up, it makes sense that there is an extremely low tolerance for network downtime.
The effect of downtime on revenue is even greater when considering end-user experience. According to one source, network downtime, measured for user experience and business needs, costs an average of $5,600 per minute [3].
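To put these per-second and per-minute figures in perspective, here is a back-of-the-envelope cost calculation using the two rates cited above. The 30-minute outage duration is a hypothetical example, not a figure from this paper.

```python
# Downtime cost estimate from the figures cited above.
AMAZON_REV_PER_SEC = 1523.59   # USD/second, Amazon.com 2011 revenue [2]
AVG_COST_PER_MIN = 5600.0      # USD/minute, industry average [3]

def outage_cost(minutes: float, cost_per_minute: float) -> float:
    """Total cost of an outage at a given per-minute rate."""
    return minutes * cost_per_minute

outage_min = 30  # hypothetical half-hour outage

amazon_loss = outage_cost(outage_min, AMAZON_REV_PER_SEC * 60)
average_loss = outage_cost(outage_min, AVG_COST_PER_MIN)

print(f"Amazon revenue at risk: ${amazon_loss:,.2f}")
print(f"Average-business cost:  ${average_loss:,.2f}")
```

Even at the industry-average rate, a single half-hour outage costs well over $150,000, which is why the contingency planning discussed next matters.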
Network administrators should have a contingency plan in place in the event of network failure. However, one of the most effective ways to mitigate this issue is to make sure the existing network is able to meet the demands of increasing data throughput, including upgrading networks to be capable of handling 40/100G speeds.

Managing Capital Expenses

While migrating to 40/100G Ethernet creates an up-front capital expense, it saves data centers money in the long run by future-proofing infrastructure. Not only will data centers be prepared for the increasing demands on data throughput, but the high-performance cabling infrastructure required for 40/100G Ethernet can grow with future hardware upgrades, instead of having to be torn out and replaced.

A Change in Data Center Cabling Standards

In 2010, the Institute of Electrical and Electronics Engineers (IEEE) published the standard for migrating to 40/100G Ethernet. The standard, titled 802.3ba, sent a clear message to data center operators and manufacturers alike that the road to 100G was imminent, and the industry needed to start preparing for this migration as soon as possible.
The TIA-942 standard calls for a structured cabling system built around a central patching area known as the main distribution area, or MDA. All equipment links back to the MDA. Other terms used to define this area include: main cross-connect, main distribution frame (MDF), and central patching location (CPL).

The principle of a structured cabling system is to avoid running cables from active port to active port (often referred to as point-to-point). Instead, all active ports are connected to one area (the MDA), where the patching is done. This is also where moves, adds and changes (MACs) take place. TIA-942 calls for the use of interconnect points, which are typically in the form of patch panels (also referred to as fiber enclosures). Patch panels allow for patch cables (or jumpers) to be used in the front of the racks or cabinets where the equipment is housed. The patch cable would then connect to a fiber optic trunk, and then to another patch panel in the MDA.

There are several advantages to implementing a structured cabling system. First, using fiber optic trunks significantly reduces the amount of cabling bulk both underfloor and in overhead conveyance. Implementing a structured cabling system also reduces airflow congestion, which reduces power usage.
Another distinct advantage to a structured cabling system is that it allows for modularity: connector changes can be made without having to remove horizontal or distribution cabling. For example, suppose a chassis-based switch with 100BASE-FX ports is connected to a patch panel using SC fiber optic jumpers. To upgrade the chassis and install new blades with LC ports, you no longer have to replace the entire channel as you would with a point-to-point system. Instead, only the module within the patch panel is replaced; underfloor and overhead cabling remains undisturbed. However, it should be noted that this method does add insertion loss to the channel because it adds more mating points. To offset the insertion loss created by additional mating points, high-performance fiber optic cables should be used for implementation.
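The trade-off between modularity and insertion loss can be checked with a simple loss budget. The sketch below is illustrative only: the per-mating-pair and per-meter loss values are assumed typical figures, and the channel limits are the commonly cited IEEE 802.3ba maximums, none of which come from this paper. Substitute your vendors' specified values.

```python
# Minimal insertion-loss budget sketch (assumed typical values).
LOSS_PER_MATED_PAIR_DB = 0.5   # assumed max loss per connector mating
FIBER_LOSS_DB_PER_M = 0.0035   # assumed multi-mode attenuation (3.5 dB/km)

# Commonly cited IEEE 802.3ba maximum channel insertion loss:
BUDGET_40G_OM3_DB = 1.9        # 40GBASE-SR4 over 100 m of OM3
BUDGET_40G_OM4_DB = 1.5        # 40GBASE-SR4 over 150 m of OM4

def channel_loss(length_m: float, mated_pairs: int) -> float:
    """Estimated total insertion loss for a fiber channel."""
    return length_m * FIBER_LOSS_DB_PER_M + mated_pairs * LOSS_PER_MATED_PAIR_DB

# A structured channel (jumper -> panel -> trunk -> panel -> jumper)
# adds two mated pairs at the patch panels.
loss = channel_loss(length_m=75, mated_pairs=2)
print(f"Estimated channel loss: {loss:.2f} dB "
      f"(OM3 budget: {BUDGET_40G_OM3_DB} dB)")
```

The point of the exercise: with high-performance (low-loss) components, a structured channel with two panel mating points still fits comfortably inside the budget; with ordinary 0.75 dB connectors it may not.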
Connectivity Options
When migrating to 40/100G speeds, there are two connectivity options to consider when planning your cabling infrastructure.
[Figure: 40G and 100G Ethernet connectivity options in the computer room]
Option 1 uses long-haul (LX) transceivers with single-mode (SM) cabling. Data is transmitted via serial transmission: one fiber is dedicated to carrying transmitted data and another to carrying received data. These two fibers make up what is referred to as a channel. A channel is defined as the fiber, or group of fibers, used to complete a data circuit. Until recently, serial transmission has been used for Ethernet speeds up to 10G. This setup is typically not used in data centers because it is built for long distances. It is also very expensive, despite the abundance (and therefore low cost) of single-mode cabling: in order to work effectively over long distances, the lasers used in LX transceivers are extremely precise and expensive. This drastically increases the overall cost of an LX/SM connectivity solution.

Option 2 uses short-haul (SX) transceivers with multi-mode (MM) cabling. Data is transmitted via parallel optic transmission, which aggregates multiple fibers for transmission and reception. For 40G speeds, four fibers transmit at 10G each, while four fibers receive at 10G each. This means a total of eight strands of fiber are utilized for a 40G Ethernet channel. The same principle applies for 100G, except the number of fibers increases: ten fibers at 10G each transmit data, and ten fibers at 10G each receive. A total of twenty fibers make up a 100G Ethernet channel.

This connectivity setup is much more suitable for migrating to 40/100G Ethernet. First, it works well over the short distances found within a data center. Second, although multi-mode cabling costs more than single-mode, Option 2 is just one-quarter the cost of Option 1. This is because SX transceivers use a vertical-cavity surface-emitting laser, or VCSEL, which is much less expensive than its LX counterpart.
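The fiber counts above follow directly from the lane arithmetic: each 10G lane needs one fiber to transmit and one to receive. A small sketch of that calculation:

```python
# Fiber counts for the parallel-optic channels described above:
# each lane runs at 10G, with one fiber for Tx and one for Rx.

def fibers_per_channel(speed_gbps: int, lane_rate_gbps: int = 10) -> int:
    """Total fiber strands for a parallel-optic Ethernet channel."""
    lanes = speed_gbps // lane_rate_gbps   # lanes in each direction
    return lanes * 2                       # transmit + receive fibers

print(fibers_per_channel(40))    # 8 strands (4 Tx + 4 Rx)
print(fibers_per_channel(100))   # 20 strands (10 Tx + 10 Rx)
```

This is why the MPO-style multi-fiber connectors discussed next become unavoidable: a duplex LC connector simply cannot carry a 40G or 100G parallel-optic channel.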
Connectors
The LC fiber cable connector is the most widely accepted connector used in the data center, especially for high-density network applications. The LC connector has one fiber, and is typically mated in pairs for a duplex connection. Possibly the most drastic change data centers will undergo in migrating to 40/100G Ethernet is a change from the LC connector to the MPO-style connector. Developed by Nippon Telegraph and Telephone Corporation (NTT), MPO stands for multi-fiber push-on. (A popular brand of the MPO-style connector, US Conec's MTP, is often incorrectly used to refer to all MPO connectors, similar to using "Band-Aid" for any adhesive bandage.) An MPO-style connector can house up to 72 fibers in one connector. However, when dealing with such a high number of fibers, it can be difficult to terminate the assemblies while staying within optical loss budgets. This is why twelve remains the standard number of fibers in an MPO-style connector, while leading-edge manufacturers currently offer up to 24-fiber MPO-style connectors.
Fiber Types
If multi-mode cables are being used to migrate to 40/100G Ethernet, it is recommended they be OM3 or OM4 fiber, replacing any OM1 or OM2 fiber cables. OM4, the newest fiber type on the market, transmits the most bandwidth and is more effective over longer distances. However, it is also more expensive. Data centers must balance cost against bandwidth and distance needs when choosing between OM3 and OM4. As with other factors considered when migrating to 40/100G Ethernet, every data center has unique needs and business-driven objectives; there is no one-size-fits-all solution.
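One way to frame the OM3-versus-OM4 decision is by link length. The sketch below assumes the commonly cited 802.3ba parallel-optic reach limits (100 m over OM3, 150 m over OM4); those figures are not from this paper, and actual supported distances depend on the full channel loss budget.

```python
# Simple fiber-selection sketch based on assumed 802.3ba reach limits.
OM3_REACH_M = 100   # assumed 40/100G reach over OM3
OM4_REACH_M = 150   # assumed 40/100G reach over OM4

def recommend_fiber(link_length_m: float) -> str:
    """Pick the least expensive multi-mode fiber that covers the link."""
    if link_length_m <= OM3_REACH_M:
        return "OM3"          # cheaper, reach is sufficient
    if link_length_m <= OM4_REACH_M:
        return "OM4"          # longer reach at higher cost
    return "single-mode"      # beyond multi-mode parallel-optic reach

print(recommend_fiber(80))    # OM3
print(recommend_fiber(120))   # OM4
print(recommend_fiber(300))   # single-mode
```

In practice the decision also weighs future upgrades: OM4 purchased today may avoid a recabling cycle at the next speed transition, which is the future-proofing argument made earlier in this paper.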
References

[1] The 2011 IDC Digital Universe Study: http://www.emc.com/leadership/programs/digital-universe.htm
[2] YCharts: http://ycharts.com/financials/AMZN/income_statement/annual
[3] Data Center Knowledge: http://www.datacenterknowledge.com/archives/2011/08/10/true-costs-of-data-center-downtime/